Analysis of Deep Learning Techniques for Dental Informatics: A Systematic Literature Review
Abstract
1. Introduction
2. Research Methodology
2.1. Plan Review
2.1.1. Research Questions
2.1.2. Review Protocols
2.1.3. Searching Keywords
- Extracting the key terms from our research questions.
- Identifying alternative names and synonyms for those terms.
- Adding keywords from pertinent publications to our search terms.
2.1.4. Literature Resources
2.2. Conduct Review
2.2.1. Study Selection
2.2.2. Data Extraction
2.2.3. Information Synthesis
2.3. Report Review
3. Results
3.1. What Are the Existing DL Techniques Used in Dental Practice?
3.1.1. Artificial Neural Networks (ANNs)
3.1.2. Recurrent Neural Networks (RNNs)
3.1.3. Convolutional Neural Networks (CNNs)
3.1.4. Generative Adversarial Networks (GANs)
3.1.5. Graph Neural Networks (GNNs)
3.2. Which Categories of DI Used DL Techniques?
3.2.1. Computer Aided Design (CAD)/Computer Aided Manufacturing (CAM)
3.2.2. Three-Dimensional (3D) Printing
3.2.3. Electronic Dental Records (EDR)
3.2.4. Cone Beam Computed Tomography (CBCT)
3.2.5. Finite Element Analysis (FEA)
3.2.6. Virtual Reality (VR)/Augmented Reality (AR)/Mixed Reality (MR)
3.2.7. Teledentistry
3.3. Which Types of Images Are Used to Evaluate DL Techniques?
3.4. What Are the Performance Measurement Techniques Used to Measure DL Techniques?
4. Discussion
4.1. Contribution
4.2. Implications for Practice
5. Limitations and Future Recommendations
- The need to collect and annotate a dental image dataset. Compared with other imaging fields, dentistry finds it difficult to obtain the annotated data that DL applications require. Dental data annotation is costly, laborious, and time consuming because it demands a great deal of professionals' time, and it may not always be feasible for unusual cases.
- Advancement in DL methods. Most DL algorithms rely on annotated data for supervised learning. To address the unavailability of enormous amounts of annotated data, the field needs to shift from supervised toward unsupervised or semi-supervised learning. Two open questions follow: how useful can unsupervised and semi-supervised methods be in medicine, and how can we move from supervised learning to transfer learning without sacrificing accuracy, keeping in mind how sensitive healthcare systems are? (A minimal transfer-learning sketch is given after this list.)
- Implementation of AR or VR applications in various fields of dental medicine and education. By testing the capabilities of AR and VR virtually before applying them to patients, dental surgeons and trainees can gain knowledge and confidence in their ability to use these methods. Although several research studies have highlighted important limitations for users working with haptics, dental surgeons and trainees can perform common and difficult treatments quickly and efficiently by using real-time haptic feedback on virtual patients [170]. There are several potential uses for AR and VR in dentistry, including specialized dental fields, dental education and training, oral and maxillofacial surgery, and pediatric dentistry.
- Implementation of an automated tooth disease diagnosis system based on DL methods. The dental care industry has been severely affected by the COVID-19 pandemic [171]. There is therefore a need to develop an automated method for scanning teeth and for identifying, categorizing, and diagnosing dental diseases. (The transfer-learning sketch after this list illustrates one possible starting point.)
- Structuring electronic dental records through DL for a clinical decision support system. Extracting information from unstructured clinical text is a fundamental and difficult objective of medical informatics [172]. (A toy text-structuring sketch is given after this list.)
- GNN-based approach. GNNs are among the most widely used methods in the DL disciplines. They make it possible to generate artificial data that closely resembles real data and to model the links (i.e., visual relationships) between data points, which aids in the identification and classification of dental diseases. GNNs might therefore be a good option for handling data ambiguity. (A minimal graph-convolution sketch is given below.)
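To make the supervised-to-transfer-learning recommendation concrete, the following minimal sketch fine-tunes an ImageNet-pretrained CNN on a small set of labelled dental photographs, freezing the backbone and retraining only the classification head. The folder path `dental_images/train`, the two-class caries/healthy labelling, and all hyperparameters are illustrative assumptions, not taken from any study reviewed here.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision); the dataset path
# and the two-class labelling are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Freeze the ImageNet backbone and retrain only the classification head --
# the usual workaround when annotated dental images are scarce.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., caries vs. healthy

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("dental_images/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone keeps the number of trainable parameters small, which is what makes such models workable on the limited annotated datasets discussed above.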
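For the EDR-structuring recommendation, the sketch below maps free-text fragments of dental records to structured categories using a classical TF-IDF baseline rather than the deep NLP models of [172]; the example notes and the diagnosis/treatment labels are invented for illustration.

```python
# Toy sketch of structuring free-text dental records (scikit-learn);
# the sentences and labels are invented examples, not data from any study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "deep carious lesion on tooth 36, pulp exposed",
    "extraction of tooth 18 under local anaesthesia",
    "chronic periodontitis with generalised bone loss",
    "composite restoration placed on tooth 21",
]
labels = ["diagnosis", "treatment", "diagnosis", "treatment"]

# Map each note to a structured category; a DL model would replace this
# pipeline once enough annotated records are available.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(notes, labels)
print(clf.predict(["root canal treatment completed on tooth 46"]))
```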
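Finally, a minimal sketch of a single graph-convolution step in the spirit of Kipf and Welling's graph convolutional network; the four-node graph and random features are synthetic stand-ins (for example, for a triangulated tooth-mesh graph) rather than the architecture of any reviewed system.

```python
# Minimal one-layer graph convolution (pure PyTorch); the toy 4-node graph
# and random features are illustrative only.
import torch

def gcn_layer(adj, features, weight):
    """One propagation step: relu(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adj + torch.eye(adj.size(0))                # add self-loops
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5)) # degree normalization
    return torch.relu(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight)

adj = torch.tensor([[0., 1., 0., 0.],                   # toy adjacency matrix
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
features = torch.randn(4, 8)                            # 8-dim node features
weight = torch.randn(8, 2)                              # project to 2 classes
print(gcn_layer(adj, features, weight))                 # (4, 2) node scores
```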
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Full Name |
---|---|
SLR | Systematic Literature Review |
DL | Deep Learning |
DI | Dental Informatics |
MI | Medical Informatics |
IT | Information Technology |
CAD | Computer-Aided Design |
CAM | Computer-Aided Manufacturing |
CBCT | Cone Beam Computed Tomography |
CT | Computerized Tomography |
EDR | Electronic Dental Record |
EHR | Electronic Health Record |
FEA | Finite Element Analysis |
FEM | Finite Element Method |
VR | Virtual Reality |
AR | Augmented Reality |
ML | Machine Learning |
ANN | Artificial Neural Network |
CNN | Convolutional Neural Network |
GAN | Generative Adversarial Network |
ELM | Extreme Learning Machine |
GCN | Graph Convolutional Network |
RNN | Recurrent Neural Network |
RRC | Radiation-Related Caries |
NILT | Near-Infrared Light Transillumination |
PCT | Periodontally Compromised Teeth |
pBL | Periodontal Bone Loss |
SVM | Support Vector Machine |
DPR | Dental Panoramic Radiograph |
TI | Transillumination |
R-CNN | Region-Based Convolutional Neural Network |
GNN | Graph Neural Network |
FP | False Positive |
ROC | Receiver Operating Characteristic |
PRC | Precision-Recall Curve |
AUC | Area Under Curve |
mIOU | Mean Intersection over Union |
NPV | Negative Predictive Value |
mAP | Mean Average Precision |
IOU | Intersection over Union |
DSC | Dice Similarity Coefficient |
DA | Detection Accuracy |
FA | Identification Accuracy |
FPR | False-Positive Rate |
MI-DCNNE | Multi-Input Deep Convolutional Neural Network Ensemble |
ICDAS | International Caries Detection and Assessment System |
DNN | Deep Neural Network |
UCDA | Novel Caries Detection and Assessment |
TSGCNet | Two-Stream Graph Convolutional Network |
PGGAN | Progressive Growing of Generative Adversarial Networks |
SLFNs | Single-Hidden-Layer Feedforward Neural Networks |
LSTM | Long Short-Term Memory |
SSD | Single Shot MultiBox Detector |
BPNN | Back-Propagation Neural Network |
References
- Schleyer, T.; Mattsson, U.; Ni Riordain, R.; Brailo, V.; Glick, M.; Zain, R.; Jontell, M. Advancing oral medicine through informatics and information technology: A proposed framework and strategy. Oral Dis. 2011, 17, 85–94. [Google Scholar] [CrossRef]
- Islam, L.; Islam, M.R.; Akter, S.; Hasan, M.Z.; Moni, M.A.; Uddin, M.N. Identifying Heterogeneity of Diabetics Mellitus Based on the Demographical and Clinical Characteristics. Hum.-Cent. Intell. Syst. 2022, 2, 44–54. [Google Scholar] [CrossRef]
- Wyatt, J.C.; Liu, J.L. Basic concepts in medical informatics. J. Epidemiol. Community Health 2002, 56, 808–812. [Google Scholar] [CrossRef]
- Cimino, J.J.; Shortliffe, E.H. Biomedical Informatics: Computer Applications in Health Care and Biomedicine (Health Informatics); Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
- Reynolds, P.; Harper, J.; Dunne, S. Better informed in clinical practice—A brief overview of dental informatics. Br. Dent. J. 2008, 204, 313–317. [Google Scholar] [CrossRef] [PubMed]
- Schleyer, T. Dental informatics: An emerging biomedical informatics discipline. Adv. Dent. Res. 2003, 17, 4–8. [Google Scholar] [CrossRef]
- Salagare, S.; Prasad, R. An overview of internet of dental things: New frontier in advanced dentistry. Wirel. Pers. Commun. 2020, 110, 1345–1371. [Google Scholar] [CrossRef]
- Joda, T.; Zarone, F.; Ferrari, M. The complete digital workflow in fixed prosthodontics: A systematic review. BMC Oral Health 2017, 17, 124. [Google Scholar] [CrossRef] [PubMed]
- Pauwels, R.; Araki, K.; Siewerdsen, J.; Thongvigitmanee, S.S. Technical aspects of dental CBCT: State of the art. Dentomaxillofac. Radiol. 2015, 44, 20140224. [Google Scholar] [CrossRef] [PubMed]
- Revilla-León, M.; Özcan, M. Additive manufacturing technologies used for processing polymers: Current status and potential application in prosthetic dentistry. J. Prosthodont. 2019, 28, 146–158. [Google Scholar] [CrossRef] [PubMed]
- Colombo, M.; Mangano, C.; Mijiritsky, E.; Krebs, M.; Hauschild, U.; Fortin, T. Clinical applications and effectiveness of guided implant surgery: A critical review based on randomized controlled trials. BMC Oral Health 2017, 17, 150. [Google Scholar] [CrossRef]
- Zhou, W.; Liu, Z.; Song, L.; Kuo, C.l.; Shafer, D.M. Clinical factors affecting the accuracy of guided implant surgery—A systematic review and meta-analysis. J. Evid. Based Dent. Pract. 2018, 18, 28–40. [Google Scholar] [CrossRef] [PubMed]
- Islam, M.; Kabir, M.A.; Ahmed, A.; Kamal, A.R.M.; Wang, H.; Ulhaq, A. Depression detection from social network data using machine learning techniques. Health Inf. Sci. Syst. 2018, 6, 8. [Google Scholar] [CrossRef]
- Islam, M.R.; Kamal, A.R.M.; Sultana, N.; Islam, R.; Moni, M.A.; Ulhaq, A. Detecting depression using k-nearest neighbors (knn) classification technique. In Proceedings of the 2018 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2), Rajshahi, Bangladesh, 8–9 February 2018; pp. 1–4. [Google Scholar]
- Islam, M.R.; Miah, S.J.; Kamal, A.R.M.; Burmeister, O. A design construct of developing approaches to measure mental health conditions. Australas. J. Inf. Syst. 2019, 23. [Google Scholar] [CrossRef]
- Prajapati, S.A.; Nagaraj, R.; Mitra, S. Classification of dental diseases using CNN and transfer learning. In Proceedings of the 2017 5th International Symposium on Computational and Business Intelligence (ISCBI), Dubai, United Arab Emirates, 11–14 August 2017; pp. 70–74. [Google Scholar]
- Hwang, J.J.; Jung, Y.H.; Cho, B.H.; Heo, M.S. An overview of deep learning in the field of dentistry. Imaging Sci. Dent. 2019, 49, 1–7. [Google Scholar] [CrossRef] [PubMed]
- Zhang, X.; Liang, Y.; Li, W.; Liu, C.; Gu, D.; Sun, W.; Miao, L. Development and evaluation of deep learning for screening dental caries from oral photographs. Oral Dis. 2022, 28, 173–181. [Google Scholar] [CrossRef] [PubMed]
- Yauney, G.; Rana, A.; Wong, L.C.; Javia, P.; Muftu, A.; Shah, P. Automated process incorporating machine learning segmentation and correlation of oral diseases with systemic health. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 3387–3393. [Google Scholar]
- Khanna, S.S.; Dhaimade, P.A. Artificial intelligence: Transforming dentistry today. Indian J. Basic Appl. Med. Res. 2017, 6, 161–167. [Google Scholar]
- Park, W.J.; Park, J.B. History and application of artificial neural networks in dentistry. Eur. J. Dent. 2018, 12, 594–601. [Google Scholar] [CrossRef]
- Islam, M.R.; Liu, S.; Wang, X.; Xu, G. Deep learning for misinformation detection on online social networks: A survey and new perspectives. Soc. Netw. Anal. Min. 2020, 10, 82. [Google Scholar] [CrossRef]
- Dalveren, G.G.M.; Mishra, D. Software engineering in medical informatics: A systematic literature review. In Proceedings of the 9th International Conference on Information Communication and Management, Prague, Czech Republic, 23–26 August 2019; pp. 112–117. [Google Scholar]
- Katne, T.; Kanaparthi, A.; Gotoor, S.; Muppirala, S.; Devaraju, R.; Gantala, R. Artificial intelligence: Demystifying dentistry—The future and beyond. Int. J. Contemp. Med. Surg. Radiol. 2019, 4, D6–D9. [Google Scholar] [CrossRef]
- Tandon, D.; Rajawat, J.; Banerjee, M. Present and future of artificial intelligence in dentistry. J. Oral Biol. Craniofacial Res. 2020, 10, 391–396. [Google Scholar] [CrossRef]
- Bindushree, V.; Sameen, R.; Vasudevan, V.; Shrihari, T.; Devaraju, D.; Mathew, N.S. Artificial intelligence: In modern dentistry. J. Dent. Res. Rev. 2020, 7, 27. [Google Scholar]
- Schwendicke, F.A.; Samek, W.; Krois, J. Artificial intelligence in dentistry: Chances and challenges. J. Dent. Res. 2020, 99, 769–774. [Google Scholar] [CrossRef] [PubMed]
- Chitnis, G.; Bhanushali, V.; Ranade, A.; Khadase, T.; Pelagade, V.; Chavan, J. A review of machine learning methodologies for dental disease detection. In Proceedings of the 2020 IEEE India Council International Subsections Conference (INDISCON), Visakhapatnam, India, 3–4 October 2020; pp. 63–65. [Google Scholar]
- Ahmed, N.; Abbasi, M.S.; Zuberi, F.; Qamar, W.; Halim, M.S.B.; Maqsood, A.; Alam, M.K. Artificial Intelligence Techniques: Analysis, Application, and Outcome in Dentistry—A Systematic Review. Biomed Res. Int. 2021, 2021, 9751564. [Google Scholar] [CrossRef]
- Corbella, S.; Srinivas, S.; Cabitza, F. Applications of deep learning in dentistry. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2021, 132, 225–238. [Google Scholar] [CrossRef]
- Rodrigues, J.A.; Krois, J.; Schwendicke, F. Demystifying artificial intelligence and deep learning in dentistry. Braz. Oral Res. 2021, 35, e094. [Google Scholar] [CrossRef]
- Babu, A.; Onesimu, J.A.; Sagayam, K.M. Artificial Intelligence in dentistry: Concepts, Applications and Research Challenges. In Proceedings of the E3S Web of Conferences. EDP Sciences, Agadir, Morocco, 22–24 July 2021; Volume 297. [Google Scholar]
- Carrillo-Perez, F.; Pecho, O.E.; Morales, J.C.; Paravina, R.D.; Della Bona, A.; Ghinea, R.; Pulgar, R.; Pérez, M.d.M.; Herrera, L.J. Applications of artificial intelligence in dentistry: A comprehensive review. J. Esthet. Restor. Dent. 2022, 34, 259–280. [Google Scholar] [CrossRef]
- Schwendicke, F.; Krois, J. Data dentistry: How data are changing clinical care and research. J. Dent. Res. 2022, 101, 21–29. [Google Scholar] [CrossRef]
- Khanagar, S.B.; Al-Ehaideb, A.; Maganur, P.C.; Vishwanathaiah, S.; Patil, S.; Baeshen, H.A.; Sarode, S.C.; Bhandi, S. Developments, application, and performance of artificial intelligence in dentistry—A systematic review. J. Dent. Sci. 2021, 16, 508–522. [Google Scholar] [CrossRef] [PubMed]
- Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative adversarial networks: An overview. IEEE Signal Process. Mag. 2018, 35, 53–65. [Google Scholar] [CrossRef]
- Huang, G.; Huang, G.B.; Song, S.; You, K. Trends in extreme learning machines: A review. Neural Netw. 2015, 61, 32–48. [Google Scholar] [CrossRef] [PubMed]
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Islam, M.R.; Yin, X.; Ulhaq, A.; Zhang, Y.; Wang, H.; Anjum, N.; Kron, T. A survey of graph based complex brain network analysis using functional and diffusional MRI. Am. J. Appl. Sci. 2018, 14, 1186–1208. [Google Scholar] [CrossRef]
- Keele, S.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering. Technical Report EBSE 2007-001, Keele University and Durham University Joint Report, Version 2.3. 2007. Available online: https://www.elsevier.com/__data/promis_misc/525444systematicreviewsguide.pdf (accessed on 18 August 2022).
- De Araujo Faria, V.; Azimbagirad, M.; Viani Arruda, G.; Fernandes Pavoni, J.; Cezar Felipe, J.; dos Santos, E.M.C.M.F.; Murta, L.O., Jr. Prediction of radiation-related dental caries through pyradiomics features and artificial neural network on panoramic radiography. J. Digit. Imaging 2021, 34, 1237–1248. [Google Scholar] [CrossRef]
- Li, G.H.; Hsung, T.C.; Ling, W.K.; Lam, W.Y.H.; Pelekos, G.; McGrath, C. Automatic Site-Specific Multiple Level Gum Disease Detection Based on Deep Neural Network. In Proceedings of the 2021 15th International Symposium on Medical Information and Communication Technology (ISMICT), Xiamen, China, 14–16 April 2021; pp. 201–205. [Google Scholar]
- Geetha, V.; Aprameya, K.; Hinduja, D.M. Dental caries diagnosis in digital radiographs using back-propagation neural network. Health Inf. Sci. Syst. 2020, 8, 8. [Google Scholar] [CrossRef] [PubMed]
- Zanella-Calzada, L.A.; Galván-Tejada, C.E.; Chávez-Lamas, N.M.; Rivas-Gutierrez, J.; Magallanes-Quintanar, R.; Celaya-Padilla, J.M.; Galván-Tejada, J.I.; Gamboa-Rosales, H. Deep artificial neural networks for the diagnostic of caries using socioeconomic and nutritional features as determinants: Data from NHANES 2013–2014. Bioengineering 2018, 5, 47. [Google Scholar] [CrossRef]
- Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2011, 42, 513–529. [Google Scholar] [CrossRef]
- Rochman, E.S.; Rachmad, A.; Syakur, M.; Suzanti, I. Method extreme learning machine for forecasting number of patients’ visits in dental poli (A case study: Community Health Centers Kamal Madura Indonesia). J. Phys. Conf. Ser. 2018, 953, 012133. [Google Scholar] [CrossRef]
- Li, Z.; Guo, T.; Bao, F.; Payne, R. Teeth category classification via Hu moment invariant and extreme learning machine. In Proceedings of the 2018 International Conference on Computer Modeling, Simulation and Algorithm (CMSA 2018), Beijing, China, 22–23 April 2018; pp. 220–223. [Google Scholar]
- Lu, S.; Yang, J.; Wang, W.; Li, Z.; Lu, Z. Teeth classification based on extreme learning machine. In Proceedings of the 2018 Second World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), London, UK, 30–31 October 2018; pp. 198–202. [Google Scholar]
- Li, W.; Chen, Y.; Miao, L.; Brown, M.; Sun, W.; Zhang, X. Gingivitis identification via grey-level cooccurrence matrix and extreme learning machine. In Proceedings of the 8th International Conference on Education, Management, Information and Management Society (EMIM 2018), Shenyang, China, 28–30 June 2018; pp. 486–492. [Google Scholar]
- Djuričić, G.J.; Radulovic, M.; Sopta, J.P.; Nikitović, M.; Milošević, N.T. Fractal and gray level cooccurrence matrix computational analysis of primary osteosarcoma magnetic resonance images predicts the chemotherapy response. Front. Oncol. 2017, 7, 246. [Google Scholar] [CrossRef]
- Alarifi, A.; AlZubi, A.A. Memetic search optimization along with genetic scale recurrent neural network for predictive rate of implant treatment. J. Med. Syst. 2018, 42, 202. [Google Scholar] [CrossRef]
- Kumari, A.R.; Rao, S.N.; Reddy, P.R. Design of hybrid dental caries segmentation and caries detection with meta-heuristic-based ResneXt-RNN. Biomed. Signal Process. Control 2022, 78, 103961. [Google Scholar] [CrossRef]
- Singh, P.; Sehgal, P. GV Black dental caries classification and preparation technique using optimal CNN-LSTM classifier. Multimed. Tools Appl. 2021, 80, 5255–5272. [Google Scholar] [CrossRef]
- Schwendicke, F.; Golla, T.; Dreher, M.; Krois, J. Convolutional neural networks for dental image diagnostics: A scoping review. J. Dent. 2019, 91, 103226. [Google Scholar] [CrossRef] [PubMed]
- Bejnordi, B.E.; Litjens, G.; van der Laak, J.A. Machine learning compared with pathologist assessment—Reply. JAMA 2018, 319, 1726. [Google Scholar] [CrossRef] [PubMed]
- Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef]
- Vinayahalingam, S.; Kempers, S.; Limon, L.; Deibel, D.; Maal, T.; Hanisch, M.; Bergé, S.; Xi, T. Classification of caries in third molars on panoramic radiographs using deep learning. Sci. Rep. 2021, 11, 12609. [Google Scholar] [CrossRef] [PubMed]
- Chen, H.; Zhang, K.; Lyu, P.; Li, H.; Zhang, L.; Wu, J.; Lee, C.H. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci. Rep. 2019, 9, 3840. [Google Scholar] [CrossRef] [PubMed]
- Mahoor, M.H.; Abdel-Mottaleb, M. Classification and numbering of teeth in dental bitewing images. Pattern Recognit. 2005, 38, 577–586. [Google Scholar] [CrossRef]
- Nardi, C.; Calistri, L.; Grazzini, G.; Desideri, I.; Lorini, C.; Occhipinti, M.; Mungai, F.; Colagrande, S. Is panoramic radiography an accurate imaging technique for the detection of endodontically treated asymptomatic apical periodontitis? J. Endod. 2018, 44, 1500–1508. [Google Scholar] [CrossRef]
- Kühnisch, J.; Söchtig, F.; Pitchika, V.; Laubender, R.; Neuhaus, K.W.; Lussi, A.; Hickel, R. In vivo validation of near-infrared light transillumination for interproximal dentin caries detection. Clin. Oral Investig. 2016, 20, 821–829. [Google Scholar] [CrossRef]
- Simon, J.C.; Lucas, S.A.; Lee, R.C.; Darling, C.L.; Staninec, M.; Vaderhobli, R.; Pelzner, R.; Fried, D. Near-infrared imaging of secondary caries lesions around composite restorations at wavelengths from 1300–1700-nm. Dent. Mater. 2016, 32, 587–595. [Google Scholar] [CrossRef]
- Choi, J.; Eun, H.; Kim, C. Boosting proximal dental caries detection via combination of variational methods and convolutional neural network. J. Signal Process. Syst. 2018, 90, 87–97. [Google Scholar] [CrossRef]
- Cantu, A.G.; Gehrung, S.; Krois, J.; Chaurasia, A.; Rossi, J.G.; Gaudin, R.; Elhennawy, K.; Schwendicke, F. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J. Dent. 2020, 100, 103425. [Google Scholar] [CrossRef] [PubMed]
- Lee, S.; Oh, S.I.; Jo, J.; Kang, S.; Shin, Y.; Park, J.W. Deep learning for early dental caries detection in bitewing radiographs. Sci. Rep. 2021, 11, 16807. [Google Scholar] [CrossRef] [PubMed]
- Saini, D.; Jain, R.; Thakur, A. Dental caries early detection using convolutional neural network for tele dentistry. In Proceedings of the 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 19–20 March 2021; Volume 1, pp. 958–963. [Google Scholar]
- Yang, J.; Xie, Y.; Liu, L.; Xia, B.; Cao, Z.; Guo, C. Automated dental image analysis by deep learning on small dataset. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018; Volume 1, pp. 492–497. [Google Scholar]
- Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J. Periodontal Implant. Sci. 2018, 48, 114–123. [Google Scholar] [CrossRef] [PubMed]
- Al Kheraif, A.A.; Wahba, A.A.; Fouad, H. Detection of dental diseases from radiographic 2d dental image using hybrid graph-cut technique and convolutional neural network. Measurement 2019, 146, 333–342. [Google Scholar] [CrossRef]
- Murata, M.; Ariji, Y.; Ohashi, Y.; Kawai, T.; Fukuda, M.; Funakoshi, T.; Kise, Y.; Nozawa, M.; Katsumata, A.; Fujita, H.; et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol. 2019, 35, 301–307. [Google Scholar] [CrossRef] [PubMed]
- Ekert, T.; Krois, J.; Meinhold, L.; Elhennawy, K.; Emara, R.; Golla, T.; Schwendicke, F. Deep learning for the radiographic detection of apical lesions. J. Endod. 2019, 45, 917–922. [Google Scholar] [CrossRef] [PubMed]
- Krois, J.; Ekert, T.; Meinhold, L.; Golla, T.; Kharbot, B.; Wittemeier, A.; Dörfer, C.; Schwendicke, F. Deep learning for the radiographic detection of periodontal bone loss. Sci. Rep. 2019, 9, 8495. [Google Scholar] [CrossRef]
- Verma, D.; Puri, S.; Prabhu, S.; Smriti, K. Anomaly detection in panoramic dental X-rays using a hybrid Deep Learning and Machine Learning approach. In Proceedings of the 2020 IEEE Region 10 Conference (TENCON), Osaka, Japan, 16–19 November 2020; pp. 263–268. [Google Scholar]
- Muresan, M.P.; Barbura, A.R.; Nedevschi, S. Teeth detection and dental problem classification in panoramic X-ray images using deep learning and image processing techniques. In Proceedings of the 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 3–5 September 2020; pp. 457–463. [Google Scholar]
- Mahdi, F.P.; Yagi, N.; Kobashi, S. Automatic teeth recognition in dental X-ray images using transfer learning based faster R-CNN. In Proceedings of the 2020 IEEE 50th International Symposium on Multiple-Valued Logic (ISMVL), Miyazaki, Japan, 9–11 November 2020; pp. 16–21. [Google Scholar]
- Lakshmi, M.M.; Chitra, P. Tooth decay prediction and classification from X-ray images using deep CNN. In Proceedings of the 2020 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 28–30 July 2020; pp. 1349–1355. [Google Scholar]
- Zhao, Y.; Li, P.; Gao, C.; Liu, Y.; Chen, Q.; Yang, F.; Meng, D. TSASNet: Tooth segmentation on dental panoramic X-ray images by Two-Stage Attention Segmentation Network. Knowl.-Based Syst. 2020, 206, 106338. [Google Scholar] [CrossRef]
- Fariza, A.; Arifin, A.Z.; Astuti, E.R. Automatic Tooth and Background Segmentation in Dental X-ray Using U-Net Convolution Network. In Proceedings of the 2020 6th International Conference on Science in Information Technology (ICSITech), Palu, Indonesia, 21–22 October 2020; pp. 144–149. [Google Scholar]
- Lakshmi, M.M.; Chitra, P. Classification of Dental Cavities from X-ray Images Using Deep CNN Algorithm. In Proceedings of the 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI) (48184), Tirunelveli, India, 15–17 June 2020; pp. 774–779. [Google Scholar]
- Khan, H.A.; Haider, M.A.; Ansari, H.A.; Ishaq, H.; Kiyani, A.; Sohail, K.; Muhammad, M.; Khurram, S.A. Automated feature detection in dental periapical radiographs by using deep learning. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2021, 131, 711–720. [Google Scholar] [CrossRef]
- Moran, M.B.H.; Faria, M.; Giraldi, G.; Bastos, L.; da Silva Inacio, B.; Conci, A. On using convolutional neural networks to classify periodontal bone destruction in periapical radiographs. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Korea, 16–19 December 2020; pp. 2036–2039. [Google Scholar]
- Chen, H.; Li, H.; Zhao, Y.; Zhao, J.; Wang, Y. Dental disease detection on periapical radiographs based on deep convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 649–661. [Google Scholar] [CrossRef] [PubMed]
- Kabir, T.; Lee, C.T.; Nelson, J.; Sheng, S.; Meng, H.W.; Chen, L.; Walji, M.F.; Jiang, X.; Shams, S. An End-to-End Entangled Segmentation and Classification Convolutional Neural Network for Periodontitis Stage Grading from Periapical Radiographic Images. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Houston, TX, USA, 9–12 December 2021; pp. 1370–1375. [Google Scholar]
- Lin, S.Y.; Chang, H.Y. Tooth Numbering and Condition Recognition on Dental Panoramic Radiograph Images Using CNNs. IEEE Access 2021, 9, 166008–166026. [Google Scholar] [CrossRef]
- Zhang, K.; Chen, H.; Lyu, P.; Wu, J. A relation-based framework for effective teeth recognition on dental periapical X-rays. Comput. Med. Imaging Graph. 2022, 95, 102022. [Google Scholar] [CrossRef]
- Hossam, A.; Mohamed, K.; Tarek, R.; Elsayed, A.; Mostafa, H.; Selim, S. Automated Dental Diagnosis using Deep Learning. In Proceedings of the 2021 16th International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt, 15–16 December 2021; pp. 1–5. [Google Scholar]
- Imak, A.; Celebi, A.; Siddique, K.; Turkoglu, M.; Sengur, A.; Salam, I. Dental Caries Detection Using Score-Based Multi-Input Deep Convolutional Neural Network. IEEE Access 2022, 10, 18320–18329. [Google Scholar] [CrossRef]
- Casalegno, F.; Newton, T.; Daher, R.; Abdelaziz, M.; Lodi-Rizzini, A.; Schürmann, F.; Krejci, I.; Markram, H. Caries detection with near-infrared transillumination using deep learning. J. Dent. Res. 2019, 98, 1227–1233. [Google Scholar] [CrossRef]
- Schwendicke, F.; Elhennawy, K.; Paris, S.; Friebertshäuser, P.; Krois, J. Deep learning for caries lesion detection in near-infrared light transillumination images: A pilot study. J. Dent. 2020, 92, 103260. [Google Scholar] [CrossRef]
- Holtkamp, A.; Elhennawy, K.; Cejudo Grano de Oro, J.E.; Krois, J.; Paris, S.; Schwendicke, F. Generalizability of deep learning models for caries detection in near-infrared light transillumination images. J. Clin. Med. 2021, 10, 961. [Google Scholar] [CrossRef]
- Yu, H.; Lin, Z.; Liu, Y.; Su, J.; Chen, B.; Lu, G. A New Technique for Diagnosis of Dental Caries on the Children’s First Permanent Molar. IEEE Access 2020, 8, 185776–185785. [Google Scholar] [CrossRef]
- Rana, A.; Yauney, G.; Wong, L.C.; Gupta, O.; Muftu, A.; Shah, P. Automated segmentation of gingival diseases from oral images. In Proceedings of the 2017 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT), Bethesda, MD, USA, 6–8 November 2017; pp. 144–147. [Google Scholar]
- Moutselos, K.; Berdouses, E.; Oulis, C.; Maglogiannis, I. Recognizing occlusal caries in dental intraoral images using deep learning. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 1617–1620. [Google Scholar]
- Tanriver, G.; Soluk Tekkesin, M.; Ergen, O. Automated detection and classification of oral lesions using deep learning to detect oral potentially malignant disorders. Cancers 2021, 13, 2766. [Google Scholar] [CrossRef]
- Schlickenrieder, A.; Meyer, O.; Schönewolf, J.; Engels, P.; Hickel, R.; Gruhn, V.; Hesenius, M.; Kühnisch, J. Automatized detection and categorization of fissure sealants from intraoral digital photographs using artificial intelligence. Diagnostics 2021, 11, 1608. [Google Scholar] [CrossRef]
- Takahashi, T.; Nozaki, K.; Gonda, T.; Mameno, T.; Ikebe, K. Deep learning-based detection of dental prostheses and restorations. Sci. Rep. 2021, 11, 1960. [Google Scholar] [CrossRef] [PubMed]
- Askar, H.; Krois, J.; Rohrer, C.; Mertens, S.; Elhennawy, K.; Ottolenghi, L.; Mazur, M.; Paris, S.; Schwendicke, F. Detecting white spot lesions on dental photography using deep learning: A pilot study. J. Dent. 2021, 107, 103615. [Google Scholar] [CrossRef]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Commun. ACM 2014, 63, 139–144. [Google Scholar] [CrossRef]
- Kim, T.; Cho, Y.; Kim, D.; Chang, M.; Kim, Y.J. Tooth segmentation of 3D scan data using generative adversarial networks. Appl. Sci. 2020, 10, 490. [Google Scholar] [CrossRef]
- Kokomoto, K.; Okawa, R.; Nakano, K.; Nozaki, K. Intraoral image generation by progressive growing of generative adversarial network and evaluation of generated image quality by dentists. Sci. Rep. 2021, 11, 18517. [Google Scholar] [CrossRef] [PubMed]
- Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Philip, S.Y. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4–24. [Google Scholar] [CrossRef] [PubMed]
- Zhang, L.; Zhao, Y.; Meng, D.; Cui, Z.; Gao, C.; Gao, X.; Lian, C.; Shen, D. TSGCNet: Discriminative geometric feature learning with two-stream graph convolutional network for 3D dental model segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 6699–6708. [Google Scholar]
- Zheng, Y.; Chen, B.; Shen, Y.; Shen, K. TeethGNN: Semantic 3D Teeth Segmentation with Graph Neural Networks. IEEE Trans. Vis. Comput. Graph. 2022. [Google Scholar] [CrossRef]
- Li, X.; Zhou, Y.; Dvornek, N.C.; Zhang, M.; Zhuang, J.; Ventola, P.; Duncan, J.S. Pooling regularized graph neural network for fmri biomarker analysis. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 625–635. [Google Scholar]
- Zhang, X.; He, L.; Chen, K.; Luo, Y.; Zhou, J.; Wang, F. Multi-view graph convolutional network and its applications on neuroimage analysis for parkinson’s disease. In Proceedings of the AMIA Annual Symposium Proceedings. American Medical Informatics Association, San Francisco, CA, USA, 3–7 November 2018; Volume 2018, p. 1147. [Google Scholar]
- McDaniel, C.; Quinn, S. Developing a Graph Convolution-Based Analysis Pipeline for Multi-Modal Neuroimage Data: An Application to Parkinson’s Disease. In Proceedings of the Python in Science Conference, Austin, TX, USA, 8–14 July 2019; pp. 42–49. [Google Scholar]
- Wang, S.H.; Govindaraj, V.V.; Górriz, J.M.; Zhang, X.; Zhang, Y.D. Covid-19 classification by FGCNet with deep feature fusion from graph convolutional network and convolutional neural network. Inf. Fusion 2021, 67, 208–229. [Google Scholar] [CrossRef]
- Yu, X.; Lu, S.; Guo, L.; Wang, S.H.; Zhang, Y.D. ResGNet-C: A graph convolutional neural network for detection of COVID-19. Neurocomputing 2021, 452, 592–605. [Google Scholar] [CrossRef]
- Zhang, Y.D.; Satapathy, S.C.; Guttery, D.S.; Górriz, J.M.; Wang, S.H. Improved breast cancer classification through combining graph convolutional network and convolutional neural network. Inf. Process. Manag. 2021, 58, 102439. [Google Scholar] [CrossRef]
- Dwivedi, T.; Jakhanwal, I.; Anupama, T.; Singh Gill, G.; Narang, A.; Bhatheja, A. CAD CAM in Prosthetic Dentistry: A Comprehensive Review. Int. J. Commu. Health Med. Res. 2017, 3, 56–59. [Google Scholar]
- Adel, D.; Mounir, J.; El-Shafey, M.; Eldin, Y.A.; El Masry, N.; AbdelRaouf, A.; Abd Elhamid, I.S. Oral epithelial dysplasia computer aided diagnostic approach. In Proceedings of the 2018 13th International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt, 18–19 December 2018; pp. 313–318. [Google Scholar]
- Chatterjee, S.; Nawn, D.; Mandal, M.; Chatterjee, J.; Mitra, S.; Pal, M.; Paul, R.R. Augmentation of statistical features in cytopathology towards computer aided diagnosis of oral precancer/cancer. In Proceedings of the 2018 Fourth International Conference on Biosignals, Images and Instrumentation (ICBSII), Chennai, India, 22–24 March 2018; pp. 206–212. [Google Scholar]
- Yamaguchi, S.; Lee, C.; Karaer, O.; Ban, S.; Mine, A.; Imazato, S. Predicting the debonding of CAD/CAM composite resin crowns with AI. J. Dent. Res. 2019, 98, 1234–1238. [Google Scholar] [CrossRef] [PubMed]
- Xu, X.; Liu, C.; Zheng, Y. 3D tooth segmentation and labeling using deep convolutional neural networks. IEEE Trans. Vis. Comput. Graph. 2018, 25, 2336–2348. [Google Scholar] [CrossRef] [PubMed]
- Tian, S.; Dai, N.; Zhang, B.; Yuan, F.; Yu, Q.; Cheng, X. Automatic classification and segmentation of teeth on 3D dental model using hierarchical deep learning networks. IEEE Access 2019, 7, 84817–84828. [Google Scholar] [CrossRef]
- Oberoi, G.; Nitsch, S.; Edelmayer, M.; Janjić, K.; Müller, A.S.; Agis, H. 3D Printing—Encompassing the facets of dentistry. Front. Bioeng. Biotechnol. 2018, 6, 172. [Google Scholar] [CrossRef]
- Prechtel, A.; Reymus, M.; Edelhoff, D.; Hickel, R.; Stawarczyk, B. Comparison of various 3D printed and milled PAEK materials: Effect of printing direction and artificial aging on Martens parameters. Dent. Mater. 2020, 36, 197–209. [Google Scholar] [CrossRef]
- Tian, Y.; Chen, C.; Xu, X.; Wang, J.; Hou, X.; Li, K.; Lu, X.; Shi, H.; Lee, E.S.; Jiang, H.B. A review of 3D printing in dentistry: Technologies, affecting factors, and applications. Scanning 2021, 2021, 9950131. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Shang, X.; Shen, Z.; Hu, B.; Wang, Z.; Xiong, G. 3D Deep Learning for 3D Printing of Tooth Model. In Proceedings of the 2019 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI), Zhengzhou, China, 6–8 November 2019; pp. 274–279. [Google Scholar]
- Cui, Q.; Chen, Q.; Liu, P.; Liu, D.; Wen, Z. Clinical decision support model for tooth extraction therapy derived from electronic dental records. J. Prosthet. Dent. 2021, 126, 83–90. [Google Scholar] [CrossRef] [PubMed]
- Kang, I.A.; Ngnamsie Njimbouom, S.; Lee, K.O.; Kim, J.D. DCP: Prediction of Dental Caries Using Machine Learning in Personalized Medicine. Appl. Sci. 2022, 12, 3043. [Google Scholar] [CrossRef]
- Chen, Q.; Zhou, X.; Wu, J.; Zhou, Y. Structuring electronic dental records through deep learning for a clinical decision support system. Health Inform. J. 2021, 27, 1460458220980036. [Google Scholar] [CrossRef] [PubMed]
- Dutra, K.L.; Haas, L.; Porporatti, A.L.; Flores-Mir, C.; Santos, J.N.; Mezzomo, L.A.; Correa, M.; Canto, G.D.L. Diagnostic accuracy of cone-beam computed tomography and conventional radiography on apical periodontitis: A systematic review and meta-analysis. J. Endod. 2016, 42, 356–364. [Google Scholar] [CrossRef] [PubMed]
- Miki, Y.; Muramatsu, C.; Hayashi, T.; Zhou, X.; Hara, T.; Katsumata, A.; Fujita, H. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput. Biol. Med. 2017, 80, 24–29. [Google Scholar] [CrossRef] [PubMed]
- Sorkhabi, M.M.; Khajeh, M.S. Classification of alveolar bone density using 3-D deep convolutional neural network in the cone-beam CT images: A 6-month clinical study. Measurement 2019, 148, 106945. [Google Scholar] [CrossRef]
- Jaskari, J.; Sahlsten, J.; Järnstedt, J.; Mehtonen, H.; Karhu, K.; Sundqvist, O.; Hietanen, A.; Varjonen, V.; Mattila, V.; Kaski, K. Deep learning method for mandibular canal segmentation in dental cone beam computed tomography volumes. Sci. Rep. 2020, 10, 5842. [Google Scholar] [CrossRef] [PubMed]
- Kwak, G.H.; Kwak, E.J.; Song, J.M.; Park, H.R.; Jung, Y.H.; Cho, B.H.; Hui, P.; Hwang, J.J. Automatic mandibular canal detection using a deep convolutional neural network. Sci. Rep. 2020, 10, 5711. [Google Scholar] [CrossRef]
- Cipriano, M.; Allegretti, S.; Bolelli, F.; Di Bartolomeo, M.; Pollastri, F.; Pellacani, A.; Minafra, P.; Anesi, A.; Grana, C. Deep segmentation of the mandibular canal: A new 3d annotated dataset of CBCT volumes. IEEE Access 2022, 10, 11500–11510. [Google Scholar] [CrossRef]
- Kim, I.; Misra, D.; Rodriguez, L.; Gill, M.; Liberton, D.K.; Almpani, K.; Lee, J.S.; Antani, S. Malocclusion classification on 3D cone-beam CT craniofacial images using multi-channel deep learning models. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1294–1298. [Google Scholar]
- Orhan, K.; Bayrakdar, I.; Ezhov, M.; Kravtsov, A.; Özyürek, T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int. Endod. J. 2020, 53, 680–689. [Google Scholar] [CrossRef]
- Cui, Z.; Li, C.; Wang, W. ToothNet: Automatic tooth instance segmentation and identification from cone beam CT images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6368–6377. [Google Scholar]
- Chen, Y.; Du, H.; Yun, Z.; Yang, S.; Dai, Z.; Zhong, L.; Feng, Q.; Yang, W. Automatic segmentation of individual tooth in dental CBCT images from tooth surface map by a multi-task FCN. IEEE Access 2020, 8, 97296–97309. [Google Scholar] [CrossRef]
- Lee, S.; Woo, S.; Yu, J.; Seo, J.; Lee, J.; Lee, C. Automated CNN-Based tooth segmentation in cone-beam ct for dental implant planning. IEEE Access 2020, 8, 50507–50518. [Google Scholar] [CrossRef]
- Wang, H.; Minnema, J.; Batenburg, K.; Forouzanfar, T.; Hu, F.; Wu, G. Multiclass CBCT image segmentation for orthodontics with deep learning. J. Dent. Res. 2021, 100, 943–949. [Google Scholar] [CrossRef]
- Hiraiwa, T.; Ariji, Y.; Fukuda, M.; Kise, Y.; Nakata, K.; Katsumata, A.; Fujita, H.; Ariji, E. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac. Radiol. 2019, 48, 20180218. [Google Scholar] [CrossRef] [PubMed]
- Lee, J.H.; Kim, D.H.; Jeong, S.N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020, 26, 152–158. [Google Scholar] [CrossRef] [PubMed]
- Ezhov, M.; Gusarev, M.; Golitsyna, M.; Yates, J.M.; Kushnerev, E.; Tamimi, D.; Aksoy, S.; Shumilov, E.; Sanders, A.; Orhan, K. Clinically applicable artificial intelligence system for dental diagnosis with CBCT. Sci. Rep. 2021, 11, 15006. [Google Scholar] [CrossRef]
- Qiu, B.; van der Wel, H.; Kraeima, J.; Hendrik Glas, H.; Guo, J.; Borra, R.J.; Witjes, M.J.H.; van Ooijen, P. Robust and Accurate Mandible Segmentation on Dental CBCT Scans Affected by Metal Artifacts Using a Prior Shape Model. J. Pers. Med. 2021, 11, 364. [Google Scholar] [CrossRef] [PubMed]
- Sherwood, A.A.; Sherwood, A.I.; Setzer, F.C.; Shamili, J.V.; John, C.; Schwendicke, F. A Deep Learning Approach to Segment and Classify C-Shaped Canal Morphologies in Mandibular Second Molars Using Cone-beam Computed Tomography. J. Endod. 2021, 47, 1907–1916. [Google Scholar] [CrossRef] [PubMed]
- Roy, S.; Dey, S.; Khutia, N.; Chowdhury, A.R.; Datta, S. Design of patient specific dental implant using FE analysis and computational intelligence techniques. Appl. Soft Comput. 2018, 65, 272–279. [Google Scholar] [CrossRef]
- Lin, P.J.; Su, K.C. Biomechanical design application on the effect of different occlusion conditions on dental implants with different positions—A finite element analysis. Appl. Sci. 2020, 10, 5826. [Google Scholar] [CrossRef]
- Prati, C.; Tribst, J.P.M.; Dal Piva, A.M.d.O.; Borges, A.L.S.; Ventre, M.; Zamparini, F.; Ausiello, P. 3D finite element analysis of rotary instruments in root canal dentine with different elastic moduli. Appl. Sci. 2021, 11, 2547. [Google Scholar] [CrossRef]
- Phanijjiva, A.; Limjeerajarus, C.N.; Limjeerajarus, N. Study on Occlusion-induced Mechanical Force Distribution in Dental Pulp Using 3-D Modeling Based on Finite Element Analysis. In Proceedings of the 10th International Conference on Computer Modeling and Simulation, Sydney, Australia, 8–10 January 2018; pp. 290–293. [Google Scholar]
- Li, Y.; Ye, H.; Ye, F.; Liu, Y.; Lv, L.; Zhang, P.; Zhang, X.; Zhou, Y. The current situation and future prospects of simulators in dental education. J. Med. Internet Res. 2021, 23, e23635. [Google Scholar] [CrossRef] [PubMed]
- Gandedkar, N.H.; Wong, M.T.; Darendeliler, M.A. Role of Virtual Reality (VR), Augmented Reality (AR) and Artificial Intelligence (AI) in tertiary education and research of orthodontics: An insight. Semin. Orthod. 2021, 27, 69–77. [Google Scholar] [CrossRef]
- Dyulicheva, Y.Y.; Gaponov, D.A.; Mladenovic, R.; Kosova, Y.A. The virtual reality simulator development for dental students training: A pilot study. In Proceedings of the AREdu, CEUR Workshop Proceedings, Kryvyi Rih, Ukraine, 11 May 2021; Volume 2898, pp. 56–67. [Google Scholar]
- Dixon, J.; Towers, A.; Martin, N.; Field, J. Re-defining the virtual reality dental simulator: Demonstrating concurrent validity of clinically relevant assessment and feedback. Eur. J. Dent. Educ. 2021, 25, 108–116. [Google Scholar] [CrossRef] [PubMed]
- Huang, T.K.; Yang, C.H.; Hsieh, Y.H.; Wang, J.C.; Hung, C.C. Augmented reality (AR) and virtual reality (VR) applied in dentistry. Kaohsiung J. Med. Sci. 2018, 34, 243–248. [Google Scholar] [CrossRef] [PubMed]
- Rao, G.K.L.; Mokhtar, N.; Iskandar, Y.H.P.; Srinivasa, A.C. Learning orthodontic cephalometry through augmented reality: A conceptual machine learning validation approach. In Proceedings of the 2018 International Conference on Electrical Engineering and Informatics (ICELTICs), Banda Aceh, Indonesia, 19–20 September 2018; pp. 133–138. [Google Scholar]
- Touati, R.; Richert, R.; Millet, C.; Farges, J.C.; Sailer, I.; Ducret, M. Comparison of two innovative strategies using augmented reality for communication in aesthetic dentistry: A pilot study. J. Healthc. Eng. 2019, 2019, 7019046. [Google Scholar] [CrossRef] [PubMed]
- Monterubbianesi, R.; Tosco, V.; Vitiello, F.; Orilisi, G.; Fraccastoro, F.; Putignano, A.; Orsini, G. Augmented, Virtual and Mixed Reality in Dentistry: A Narrative Review on the Existing Platforms and Future Challenges. Appl. Sci. 2022, 12, 877. [Google Scholar] [CrossRef]
- Alabdullah, J.H.; Daniel, S.J. A systematic review on the validity of teledentistry. Telemed. e-Health 2018, 24, 639–648. [Google Scholar] [CrossRef]
- Estai, M.; Kanagasingam, Y.; Tennant, M.; Bunt, S. A systematic review of the research evidence for the benefits of teledentistry. J. Telemed. Telecare 2018, 24, 147–156. [Google Scholar] [CrossRef]
- Al-Khalifa, K.S.; AlSheikh, R. Teledentistry awareness among dental professionals in Saudi Arabia. PLoS ONE 2020, 15, e0240825. [Google Scholar] [CrossRef]
- Babar, M.; Tariq, M.U.; Alshehri, M.D.; Ullah, F.; Uddin, M.I. Smart teledentistry healthcare architecture for medical big data analysis using IoT-enabled environment. Sustain. Comput. Inform. Syst. 2022, 35, 100719. [Google Scholar] [CrossRef]
- Ghai, S. Teledentistry during COVID-19 pandemic. Diabetes Metab. Syndr. Clin. Res. Rev. 2020, 14, 933–935. [Google Scholar] [CrossRef]
- Vinayahalingam, S.; Goey, R.S.; Kempers, S.; Schoep, J.; Cherici, T.; Moin, D.A.; Hanisch, M. Automated chart filing on panoramic radiographs using deep learning. J. Dent. 2021, 115, 103864. [Google Scholar] [CrossRef]
- Welikala, R.A.; Remagnino, P.; Lim, J.H.; Chan, C.S.; Rajendran, S.; Kallarakkal, T.G.; Zain, R.B.; Jayasinghe, R.D.; Rimal, J.; Kerr, A.R.; et al. Automated detection and classification of oral lesions using deep learning for early detection of oral cancer. IEEE Access 2020, 8, 132677–132693. [Google Scholar] [CrossRef]
- Goswami, M.; Maheshwari, M.; Baruah, P.D.; Singh, A.; Gupta, R. Automated Detection of Oral Cancer and Dental Caries Using Convolutional Neural Network. In Proceedings of the 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 3–4 September 2021; pp. 1–5. [Google Scholar]
- Shang, W.; Li, Z.; Li, Y. Identification of Common Oral Disease Lesions Based on U-Net. In Proceedings of the 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), Greenville, SC, USA, 12–14 November 2021; pp. 194–200. [Google Scholar]
- Cui, Z.; Li, C.; Chen, N.; Wei, G.; Chen, R.; Zhou, Y.; Shen, D.; Wang, W. TSegNet: An efficient and accurate tooth segmentation network on 3D dental model. Med. Image Anal. 2021, 69, 101949. [Google Scholar] [CrossRef]
- Huang, Y.; Fan, F.; Syben, C.; Roser, P.; Mill, L.; Maier, A. Cephalogram synthesis and landmark detection in dental cone-beam CT systems. Med. Image Anal. 2021, 70, 102028. [Google Scholar] [CrossRef] [PubMed]
- Chung, M.; Lee, M.; Hong, J.; Park, S.; Lee, J.; Lee, J.; Yang, I.H.; Lee, J.; Shin, Y.G. Pose-aware instance segmentation framework from cone beam CT images for tooth segmentation. Comput. Biol. Med. 2020, 120, 103720. [Google Scholar] [CrossRef]
- Zheng, Z.; Yan, H.; Setzer, F.C.; Shi, K.J.; Mupparapu, M.; Li, J. Anatomically constrained deep learning for automating dental CBCT segmentation and lesion detection. IEEE Trans. Autom. Sci. Eng. 2020, 18, 603–614. [Google Scholar] [CrossRef]
- Kurt Bayrakdar, S.; Orhan, K.; Bayrakdar, I.S.; Bilgir, E.; Ezhov, M.; Gusarev, M.; Shumilov, E. A deep learning approach for dental implant planning in cone-beam computed tomography images. BMC Med. Imaging 2021, 21, 47. [Google Scholar] [CrossRef]
- Jang, T.J.; Kim, K.C.; Cho, H.C.; Seo, J.K. A fully automated method for 3D individual tooth identification and segmentation in dental CBCT. arXiv 2021, arXiv:2102.06060. [Google Scholar] [CrossRef]
- Alsomali, M.; Alghamdi, S.; Alotaibi, S.; Alfadda, S.; Altwaijry, N.; Alturaiki, I.; Al-Ekrish, A. Development of a deep learning model for automatic localization of radiographic markers of proposed dental implant site locations. Saudi Dent. J. 2022, 34, 220–225. [Google Scholar] [CrossRef] [PubMed]
- Shaheen, E.; Leite, A.; Alqahtani, K.A.; Smolders, A.; Van Gerven, A.; Willems, H.; Jacobs, R. A novel deep learning system for multi-class tooth segmentation and classification on cone beam computed tomography. A validation study. J. Dent. 2021, 115, 103865. [Google Scholar] [CrossRef] [PubMed]
- Liu, M.Q.; Xu, Z.N.; Mao, W.Y.; Li, Y.; Zhang, X.H.; Bai, H.L.; Ding, P.; Fu, K.Y. Deep learning-based evaluation of the relationship between mandibular third molar and mandibular canal on CBCT. Clin. Oral Investig. 2022, 26, 981–991. [Google Scholar] [CrossRef]
- Fahim, S.; Maqsood, A.; Das, G.; Ahmed, N.; Saquib, S.; Lal, A.; Khan, A.A.G.; Alam, M.K. Augmented Reality and Virtual Reality in Dentistry: Highlights from the Current Research. Appl. Sci. 2022, 12, 3719. [Google Scholar] [CrossRef]
- Tonkaboni, A.; Amirzade-Iranaq, M.H.; Ziaei, H.; Ather, A. Impact of COVID-19 on Dentistry. Adv. Exp. Med. Biol. 2021, 1318, 623–636. [Google Scholar] [PubMed]
- Kumar, G.; Basri, S.; Imam, A.; Khowaja, S.; Capretz, L.; Balogun, A. Data Harmonization for Heterogeneous Datasets: A Systematic Literature Review. Appl. Sci. 2021, 11, 8275. [Google Scholar] [CrossRef]
ID | Keywords |
---|---|
1 | (“Deep Learning” OR “DL”) AND (“Dental Informatics” OR “DI”) AND (“Image Data” OR “Dental Data”) |
2 | (“Deep Learning” OR “DL”) AND (“Dental Informatics” OR “DI” OR “Dentistry”) AND (“Image Data” OR “Dental Data”) |
3 | (“Deep Learning” OR “DL”) AND (“Dental Informatics” OR “DI” OR “Dental”) AND (“Image Data” OR “Dental Data”) |
4 | (“Deep Learning” OR “DL”) AND (“Dental Informatics” OR “DI” OR “Dentist”) AND (“Image Data” OR “Dental Data”) |
RQ Topic | Number of Studies |
---|---|
Deep learning techniques | 40 |
Dental informatics using deep learning | 39 |
Images to evaluate deep learning techniques | 73 |
Performance measurement techniques to evaluate the deep learning techniques | 56 |
Authors Name and Year | Methods | Results | Authors Suggestions/Conclusions |
---|---|---|---|
Faria et al., (2021) [41] | Custom-made ANN | Detection accuracy = 98.8%, prediction accuracy = 99.2%, AUC = 0.9886 and 0.9869 | This approach may be beneficial for detecting and predicting the development of RRC in other images.
Li et al., (2021) [42] | DeepLabv3+, Xception and MobileNetV2 | AUC = 0.7, precision = 0.606, recall = 0.415, mIOU = 0.650 | A small dataset was used, and data augmentation cannot overcome all biases present in a small dataset.
Geetha et al., (2020) [43] | Customized BPNN | Accuracy = 97.1%, false-positive (FP) rate = 2.8%, ROC area = 0.987, PRC area = 0.987 | High-quality datasets and improved algorithms can demonstrate good results in dental practice.
Zanella-Calzada et al., (2018) [44] | Customized ANN | Accuracy = 0.69, AUC values = 0.69 and 0.75 | This model can help dentists by providing an easy, free, and fast tool for the diagnosis of dental caries.
Rochman et al., (2018) [46] | ELM | Low error rate = 0.0426 | ELM is a powerful predictive tool.
Li et al., (2018) [47] | HMI and ELM | Sensitivities for incisors, canines, premolars, and molars were 78.25 ± 6.02%, 78.00 ± 5.99%, 79.25 ± 7.91%, and 78.75 ± 5.17% | Compared with the ANN approach, this method achieved greater classification performance.
Lu et al., (2018) [48] | PCA and ELM | Accuracy = 79.75% | The method is not able to detect the correct name for each landmark, especially for teeth with similar anatomy.
Li et al., (2018) [49] | GLCM and ELM | Sensitivity = 72%, specificity = 70%, accuracy = 71% | This method is more sensitive and accurate than the wavelet-energy and naïve Bayes classifiers.
Authors Name and Year | Methods | Results | Authors Suggestions/Conclusions |
---|---|---|---|
Alarifi and AlZubi, (2018) [51] | MSGSRNN | Accuracy = 99.25%, sensitivity = 97.63%, specificity = 98.28% | The outlined methodology analyzes patient characteristics and helps determine the failure and success rates of implant treatment.
Kumari et al., (2022) [52] | M–ResneXt–RNN, HSLnSSO algorithm | Accuracy = 93.67, sensitivity = 94.66, specificity = 92.73, precision = 92.44, FPR = 7.27, FNR = 5.34, NPV = 94.88, FDR = 7.56, F1-score = 93.54, MCC = 87.35 | The model finds it difficult to distinguish tiny objects and produces rather coarse features.
Singh and Sehgal, (2021) [53] | Customized CNN-LSTM | Accuracy = 96% | The model's performance degrades on large datasets.
Authors Name and Year | Methods | Results | Authors Suggestions/Conclusions |
---|---|---|---|
Prajapati et al., (2017) [16] | Transfer learning with VGG16 pre-trained model | Accuracy = 88.46% | Transfer learning with the VGG16 pre-trained model achieved better accuracy. |
Lee et al., (2018) [56] | Pre-trained GoogLeNet Inception v3 network | Accuracies of 89%, 88%, and 82% were observed in the premolar, molar, and combined premolar–molar regions. | Deep CNN algorithms are anticipated to be among the best and most productive techniques for diagnosing dental caries.
Vinayahalingam et al., (2021) [57] | CNN MobileNet V2 | Accuracy = 0.87, sensitivity = 0.86, specificity = 0.88, AUC = 0.90 | This method forms a promising foundation for the further development of automatic third molar removal assessment. |
Choi et al., (2018) [63] | Customized CNN | F1max = 0.74, FPs = 0.88 | This system can be used to detect proximal dental caries on several periapical images. |
Lee et al., (2021) [65] | Deep CNN (U-Net) | Precision = 63.29%, recall = 65.02%, F1-score = 64.14% | Clinicians should not wholly rely on AI-based dental caries detection results, but should instead use them only for reference. |
Yang et al., (2018) [67] | Customized CNN | F1 score = 0.749 | The method does not always work on images of molars.
Lee et al., (2018) [68] | Pre-trained deep CNN (VGG-19) and self-trained network | Premolars (accuracy = 82.8%), molars (accuracy = 73.4%) | Using a low-resolution dataset can reduce the accuracy of the diagnosis and prediction of PCT.
Al Kheraif et al., (2019) [69] | Hybrid graph-cut technique and CNN | Accuracy = 97.07% | The DL-based convolutional neural network system effectively recognizes dental diseases.
Murata et al., (2019) [70] | Customized AlexNet CNN | Accuracy = 87.5%, sensitivity = 86.7%, specificity = 88.3%, AUC = 0.875 | The AI model can be a supporting tool for inexperienced dentists. |
Krois et al., (2019) [72] | Custom-made CNN | Accuracy = 0.81, sensitivity = 0.81, specificity = 0.81 | ML-based models could minimize the diagnostic effort required.
Zhao et al., (2020) [77] | Customized two-stage attention segmentation network | Accuracy = 96.94%, dice = 92.72%, recall = 93.77% | The network can fail to properly divide the foreground into tooth regions owing to inaccurate pixel segmentation.
Fariza et al., (2020) [78] | U-Net convolution network | Accuracy = 97.61% | Segmentation with the proposed U-Net convolution network results in fast segmentation and smooth image edges. |
Lakshmi and Chitra, (2020) [79] | Sobel edge detection with deep CNN | Accuracy = 96.08% | Sobel edge detection with deep CNN is efficient for cavities prediction compared to other methods. |
Khan et al., (2021) [80] | U-Net + Densenet121 | mIoU = 0.501, Dice coefficient = 0.569 | DL can be a viable option for segmentation of caries, ABR, and IRR in dental radiographs. |
Moran et al., (2020) [81] | Pre-trained ResNet and an Inception model | Accuracy = 0.817, precision = 0.762, recall = 0.923, specificity = 0.711, negative predictive = 0.902 | Clinically, the examined CNN model can aid in the diagnosis of periodontal bone deterioration during periapical examinations. |
Chen et al., (2021) [82] | Customized Faster R-CNN | Precision = 0.5, recall = 0.6 | Disease lesions of very small size may not be detected by Faster R-CNN.
Lin and Chang, (2021) [84] | ResNet | Accuracy = 93.33% | In the second stage, endodontic therapy is the most vulnerable to incorrect labeling. |
Zhang et al., (2022) [85] | Customized multi-task CNN | Precision = 0.951, recall = 0.955, F-score = 0.953 | The method can provide reliable and comprehensive diagnostic support for dentists. |
Yu et al., (2020) [91] | Customized ResNet50-FPN | Accuracy = 95.25%, sensitivity = 89.83%, specificity = 96.10% | Caries detection is implemented only for the first permanent molar, not for all teeth.
Rana et al., (2017) [92] | Customized CNN | AUC = 0.746, precision = 0.347, recall = 0.621 | Dental professionals and patients can benefit from automated point-of-care early diagnosis of periodontal diseases provided. |
Tanriver et al., (2021) [94] | Multiple pre-trained NNs; EfficientNet-b4 architecture | Sensitivity = 89.3, precision = 86.2, F1 = 85.7 | The suggested model shows significant promise as a low-cost, noninvasive tool to aid in screening procedures and enhance OPMD identification.
Schlickenrieder et al., (2021) [95] | Pre-trained ResNeXt-101-32x8d | Accuracy = 98.7%, AUC = 0.996 | More training is needed for AI-based detection and classification of common and uncommon dental disorders and all types of restorations.
Takahashi et al., (2021) [96] | YOLO v3 and SSD | mAP = 0.80, mIoU = 0.76 | This method had limited accuracy in identifying tooth-colored prostheses.
Authors Name and Year | Methods | Results | Authors Suggestions/Conclusions |
---|---|---|---|
Kim et al., (2020) [99] | CNN, GLCIC, Edge Connect | Improvement of 0.004 mm in the tooth segmentation | The segmentation approach for complete arch intraoral scan data is efficient, time-saving, and as accurate as a manual segmentation method. |
Kokomoto et al., (2021) [100] | PGGAN | p value < 0.0001 | The quantity of training images has a significant impact on PGGAN’s ability to generate realistic images. |
Authors Name and Year | Methods | Results | Authors Suggestions/Conclusions |
---|---|---|---|
Zhang et al., (2021) [102] | PointNet, DGCNN, PointNet++, PointCNN, MeshSegNet | Accuracy = 95.25, mIoU = 88.99 | TSGCNet cannot robustly handle special cases with 12 teeth. |
Zheng et al., (2022) [103] | Modified Dynamic Graph CNN (DGCNN) | mIoU = 97.49, accuracy = 98.94 | The proposed teeth segmentation is robust to rotten, missing, crowded, and ectopic-tooth cases. |
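Both mesh-segmentation studies build on dynamic graph CNNs, which treat the intraoral scan as a point cloud and rebuild a k-nearest-neighbor graph in feature space at every layer. Below is a hedged sketch of the two core operations, k-NN graph construction and EdgeConv-style edge features; the function names and the choice of k = 8 are illustrative assumptions, not the reviewed authors' implementations.

```python
import torch

def knn_graph(x, k=8):
    """Indices of the k nearest neighbors of each point (self excluded).
    x: (N, F) point features; returns (N, k) neighbor indices."""
    dist = torch.cdist(x, x)                        # (N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]

def edge_features(x, idx):
    """EdgeConv-style features [x_i, x_j - x_i] for each edge.
    x: (N, F); idx: (N, k); returns (N, k, 2F)."""
    neighbors = x[idx]                              # (N, k, F) gathered neighbors
    center = x.unsqueeze(1).expand_as(neighbors)    # (N, k, F) repeated centers
    return torch.cat([center, neighbors - center], dim=-1)

# Example: 1000 mesh-cell centroids with 3-D coordinates as features
pts = torch.randn(1000, 3)
feats = edge_features(pts, knn_graph(pts))          # shape (1000, 8, 6)
```

Because the graph is recomputed from learned features rather than fixed mesh connectivity, such networks can relate spatially distant but semantically similar teeth, which is what makes them robust to missing or ectopic teeth.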
Authors Name and Year | Methods | Results | Authors Suggestions/Conclusions |
---|---|---|---|
Adel et al., (2018) [111] | SVM, ORB | Accuracy = 92.8% | Regarding the detection of oral epithelial dysplasia, this approach had the highest success rates. |
Chatterjee et al., (2018) [112] | SVM, k nearest neighbor, random forest. | Accuracy = 90% | Predictive classifiers are better able to distinguish between illness and control groups when statistical and cytomorphometric features are combined. |
Xu et al., (2018) [114] | Customized CNN | Accuracy = 99.06% | It directly satisfies industrial clinical treatment demands and is robust to possible foreign matter on the dental model surface. |
Tian et al., (2019) [115] | Sparse voxel octree and 3D CNN | Accuracy = 95.96% | The proposed method has great application potential in computer-assisted orthodontic treatment diagnosis. |
Authors Name and Year | Methods | Results | Authors Suggestions/Conclusions |
---|---|---|---|
Cui et al., (2021) [120] | Extreme Gradient Boost (XGBoost) algorithm | Accuracy = 96.2, precision = 86.5, recall = 83.0 | ML methods showed promise for forecasting multiclass outcomes, such as different therapies, based on EDRs. |
Kang et al., (2022) [121] | RF, ANN, CNN, GBDT, SVM, LR, LSTM | Accuracy = 92%, F1-score = 90%, precision = 94%, recall = 87% | ML is strongly recommended as a decision-making aid for dental practitioners in the early diagnosis and treatment of dental caries. |
Chen, (2021) [122] | NLP | F1-scores = 83% and 88% | The NLP workflow might be used as the initial stage for training data-based models on structured data. |
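Both tabular-EDR studies evaluate gradient-boosted decision trees (XGBoost/GBDT) alongside neural models. As a hedged illustration of that setup, the sketch below trains a boosted-tree classifier on synthetic stand-in features; the feature dimensionality, label semantics, and hyperparameters are invented for illustration and assume the xgboost Python package is installed.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hypothetical tabular EDR features (age, visit counts, caries history, ...)
X = np.random.rand(500, 8)
y = np.random.randint(0, 2, 500)   # e.g., extraction vs. restoration outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Tree ensembles remain a strong baseline for structured records because they need little preprocessing and expose per-feature importances, which eases clinical interpretation.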
Authors Name and Year | Methods | Results | Authors Suggestions/Conclusions |
---|---|---|---|
Miki et al., (2017) [124] | AlexNet network | Accuracy = 91.0% | Automated filing of dental charts for forensic identification can benefit from the suggested tooth categorization approach. |
Sorkhabi and Khajeh, (2019) [125] | Customized 3D CNN | Hexagonal prism (precision = 84.63%), cylindrical voxel shapes (precision = 95.20%) | This method may help the dentists in the implant treatment from diagnosis to surgery. |
Jaskari et al., (2020) [126] | Customized FCDNN | DSC = 0.57 (SD = 0.08) for the left canal and 0.58 (SD = 0.09) for the right canal | An automated DL neural network-based system applied to CBCT scans can produce high-quality segmentations of the mandibular canals. |
Kwak et al., (2020) [127] | 2D SegNet, 2D and 3D U-Nets | 2D U-Net (accuracy = 0.82), 2D SegNet (accuracy = 0.96), 3D U-Net (accuracy = 0.99) | With the help of DL, dentists will be able to use an automated canal-detection method, considerably improving the effectiveness of treatment planning and patient comfort. |
Kim et al., (2020) [129] | CNN-based DL models | Accuracy = 93% | This method aims to assist orthodontists in determining the best treatment path for the patient, be it orthodontic treatment, surgical treatment, or a combination of both. |
Orhan et al., (2020) [130] | U-Net | Accuracy = 92.8% | AI systems based on DL methods can be useful in detecting periapical pathosis in CBCT images for clinical application. |
Cui et al., (2019) [131] | Customized 3D CNN | DSC = 92.37%, DA = 99.55%, FA = 96.85% | Segmentation will fail when the CT image contains extreme grayscale values or when the tooth has the wrong orientation. |
Chen et al., (2020) [132] | Multi-task 3D FCN combined with MWT | Dice = 0.936 (±0.012), Jaccard index = 0.881 (±0.019) | The multi-task 3D FCN combined with MWT can segment individual tooth of various types in dental CBCT images. |
Lee et al., (2020) [133] | Fully automated CNN-based U-Net structure | Dice = 0.935, Recall = 0.956, Precision = 0.915 | Some portions of the wisdom teeth were usually undetected. |
Wang et al., (2021) [134] | Customized CNN | Dice similarity coefficient = 0.934 ± 0.019 | DL has the potential to accurately and simultaneously segment jaw and teeth in CBCT scans. |
Hiraiwa et al., (2019) [135] | AlexNet and GoogLeNet | Accuracy = 86.9% | The deep learning system showed high accuracy in the differential diagnosis of a single or extra root in the distal roots of mandibular first molars. |
Lee et al., (2020) [136] | GoogLeNet Inception-v3 architecture | Sensitivity = 96.1%, specificity = 77.1%, AUC = 0.91 | Deep CNN architecture trained with CBCT images achieved higher diagnostic performance than that trained with panoramic images. |
Ezhov et al., (2021) [137] | Customized CNN | Sensitivity values for the aided and unaided groups were 0.8537 and 0.7672, while specificity values were 0.9672 and 0.9616, respectively. | The proposed AI system significantly improved the diagnostic capabilities of dentists. |
Qiu et al., (2021) [138] | Customized CNN | Dice (%) = 95.29 | This model can be viewed as a training goal for a particular application. |
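Most CBCT rows above report volume-overlap measures (Dice/DSC and the Jaccard index) between a predicted and a reference segmentation. As a reference for how those two numbers relate, here is a small NumPy sketch; the smoothing constant is a common implementation convention, not a detail taken from any reviewed paper.

```python
import numpy as np

def dice_and_jaccard(pred, truth, eps=1e-7):
    """Overlap between two binary 3-D masks (e.g., CBCT tooth segmentations).
    Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B| = Dice / (2 - Dice)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = (2 * inter + eps) / (pred.sum() + truth.sum() + eps)
    jaccard = (inter + eps) / (np.logical_or(pred, truth).sum() + eps)
    return dice, jaccard

# Toy volumes standing in for a predicted and a ground-truth tooth mask
pred = np.zeros((64, 64, 64)); pred[20:40, 20:40, 20:40] = 1
truth = np.zeros((64, 64, 64)); truth[22:42, 20:40, 20:40] = 1
print(dice_and_jaccard(pred, truth))   # high overlap -> Dice = 0.9
```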
Image Type | No. of Studies | Studies References |
---|---|---|
Radiographic images | 28 | Faria et al., (2021) [41], Geetha et al., (2020) [43], Lee et al., (2018) [68], Prajapati et al., (2017) [16], Choi et al., (2018) [63], Lee et al., (2018) [56], Yang et al., (2018) [67], Al Kheraif, (2019) [69], Murata et al., (2019) [70], Krois et al., (2019) [72], Ekert et al., (2019) [71], Verma et al., (2020) [73], Zhao et al., (2020) [77], Mahdi et al., (2020) [75], Fariza et al., (2020) [78], Lakshmi and Chitra, (2020) [79], Moran et al., (2020) [81], Muresan et al., (2020) [74], Lakshmi and Chitra, (2020) [76], Cantu et al., (2020) [64], Khan et al., (2021) [80], Vinayahalingam et al., (2021) [157], Lee et al., (2021) [65], Chen et al., (2021) [82], Kabir et al., (2021) [83], Lin and Chang, (2021) [84], Zhang et al., (2022) [85], Imak et al., (2022) [87] |
NILT | 3 | Casalegno et al., (2019) [88], Schwendicke et al., (2020) [89], Holtkamp et al., (2021) [90] |
Intraoral images | 11 | Rana et al., (2017) [92], Moutselos et al., (2019) [93], Welikala et al., (2020) [158], Yu et al., (2020) [91], Schlickenrieder et al., (2021) [95], Hossam et al., (2021) [86], Saini et al., (2021) [66], Takahashi et al., (2021) [96], Askar et al., (2021) [97], Goswami et al., (2021) [159], Shang et al., (2021) [160] |
3D Model | 5 | Xu et al., (2018) [114], Tian et al., (2019) [115], Yamaguchi et al., (2019) [113], Cui et al., (2021) [161], Zhang et al., (2021) [102] |
CT/CBCT images | 26 | Miki et al., (2017) [124], Roy et al., (2018) [140], Cui et al., (2019) [131], Phanijjiva et al., (2018) [143], Huang et al., (2021) [162], Hiraiwa et al., (2019) [135], Lee et al., (2020) [136], Sorkhabi and Khajeh, (2019) [125], Jaskari et al., (2020) [126], Kim et al., (2020) [129], Kwak et al., (2020) [127], Orhan et al., (2020) [130], Chung et al., (2020) [163], Lee et al., (2020) [133], Wang et al., (2021) [134], Zheng et al., (2020) [164], Kurt Bayrakdar et al., (2021) [165], Ezhov et al., (2021) [137], Jang et al., (2021) [166], Qiu et al., (2021) [138], Sherwood et al., (2021) [139], Shaheen et al., (2021) [168], Alsomali et al., (2022) [167], Cipriano et al., (2022) [128], Liu et al., (2022) [169], Chen et al., (2020) [132] |
EDRs | 3 | Cui et al., (2021) [120], Kang et al., (2022) [121], Chen et al., (2021) [122] |
Study | Accuracy | Precision | Recall | F1 score | Sensitivity | Specificity | FP | AUC | ROC | PRC | mIoU | FPR | NPV | FNR | mAP | IoU | FDR | MCC | Dice | DSC | DA | FA | Jaccard
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
De Araujo Faria et al., (2021) [41] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Li et al., (2021) [42] | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Geetha et al., (2020) [43] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Zanella-Calzada et al., (2018) [44] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Li et al., (2018) [47] | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Lu et al., (2018) [48] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Li et al., (2018) [49] | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Alarifi and AlZubi, (2018) [51] | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Kumari et al., (2022) [52] | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
Singh and Sehgal, (2021) [53] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Prajapati et al., (2017) [16] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Lee et al., (2018) [56] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Vinayahalingam et al., (2021) [57] | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Choi et al., (2018) [63] | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Lee et al., (2021) [65] | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Yang et al., (2018) [67] | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Lee et al., (2018) [68] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Al Kheraif et al., (2019) [69] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Murata et al., (2019) [70] | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Krois et al., (2019) [72] | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Zhao et al., (2020) [77] | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
Fariza et al., (2020) [78] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Lakshmi and Chitra, (2020) [79] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Khan et al., (2021) [80] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
Moran et al., (2020) [81] | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Chen et al., (2021) [82] | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Lin and Chang, (2021) [84] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Zhang et al., (2022) [85] | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Yu et al., (2020) [91] | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Rana et al., (2017) [92] | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Tanriver et al., (2021) [94] | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Schlickenrieder et al., (2021) [95] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Takahashi et al., (2021) [96] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Zhang et al., (2021) [102] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Zheng et al., (2022) [103] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Adel et al., (2018) [111] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Chatterjee et al., (2018) [112] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Xu et al., (2018) [114] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Tian et al., (2019) [115] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Cui et al., (2021) [120] | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Kang et al., (2022) [121] | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Chen, (2021) [122] | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Miki et al., (2017) [124] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Sorkhabi and Khajeh, (2019) [125] | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Jaskari et al., (2020) [126] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
Kwak et al., (2020) [127] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Kim et al., (2020) [129] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Orhan et al., (2020) [130] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Cui et al., (2019) [131] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✗ |
Chen et al., (2020) [132] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ |
Lee et al., (2020) [133] | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
Wang et al., (2021) [134] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
Hiraiwa et al., (2019) [135] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Lee et al., (2020) [136] | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Ezhov et al., (2021) [137] | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Qiu et al., (2021) [138] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
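Nearly every threshold-based metric in the table above derives from the four confusion-matrix counts (TP, FP, TN, FN). As a generic reference implementation, not code from any reviewed study, the sketch below computes the most frequently reported ones.

```python
import math

def confusion_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),                 # a.k.a. PPV
        "recall":      tp / (tp + fn),                 # a.k.a. sensitivity
        "specificity": tn / (tn + fp),
        "f1":          2 * tp / (2 * tp + fp + fn),
        "npv":         tn / (tn + fn),
        "fpr":         fp / (fp + tn),
        "fnr":         fn / (fn + tp),
        "fdr":         fp / (fp + tp),
        "mcc":         (tp * tn - fp * fn) /
                       math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

# Example: 87 true positives, 12 false positives, 88 true negatives, 13 false negatives
print(confusion_metrics(87, 12, 88, 13))
```

Threshold-free measures such as AUC, ROC, PRC, and mAP cannot be obtained from a single confusion matrix; they require sweeping the decision threshold over the model's scores.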