Review

Artificial Intelligence in the Diagnosis of Oral Diseases: Applications and Pitfalls

1 Department of Maxillofacial Surgery and Diagnostic Sciences, Division of Oral Pathology, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
2 Department of Biotechnology, College of Science, Taif University, Taif 21944, Saudi Arabia
3 Department of Diagnostic Dental Sciences, Oral Pathology Division, Faculty of Dentistry, College of Dentistry, King Khalid University, Abha 61411, Saudi Arabia
4 Division of Oral Medicine & Radiology, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
5 Department of Preventive Dental Science, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
6 Department of Prosthetic Dental Sciences, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
7 Department of Restorative Dental Sciences, Division of Operative Dentistry, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
8 Multi-Omics and Drug Discovery Lab, Chettinad Academy of Research and Education, Chennai 600130, India
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(5), 1029; https://doi.org/10.3390/diagnostics12051029
Submission received: 29 March 2022 / Revised: 12 April 2022 / Accepted: 18 April 2022 / Published: 19 April 2022
(This article belongs to the Special Issue Current Concepts and Prospects of Diagnostics in Oral Diseases)

Abstract

Background: Machine learning (ML) is a key component of artificial intelligence (AI). The terms machine learning, artificial intelligence, and deep learning are often used interchangeably, as though they were a single monolithic entity, even though they are distinct. This technology offers immense possibilities to advance diagnostics in medicine and dentistry, which necessitates a clear understanding of AI and its essential components, such as machine learning (ML), artificial neural networks (ANN), and deep learning (DL). Aim: This review aims to acquaint clinicians with AI and its applications in the diagnosis of oral diseases, along with the prospects and challenges involved. Review results: AI has been used in the diagnosis of various oral diseases, such as dental caries, maxillary sinus diseases, periodontal diseases, salivary gland diseases, TMJ disorders, and oral cancer, through clinical data and diagnostic images. Larger data sets would enable AI to predict the occurrence of precancerous conditions. AI can aid in population-wide surveillance and guide referrals to specialists. It can efficiently detect microfeatures beyond the resolution of the human eye and augment predictive power in critical diagnoses. Conclusion: Although studies have recognized the benefits of AI, artificial intelligence and machine learning have not yet been integrated into routine dentistry; AI is still largely in the research phase. The coming decade will see immense changes in diagnosis and healthcare built on the back of this research. Clinical significance: This paper reviews the various applications of AI in dentistry, highlights the shortcomings encountered in AI research, and suggests ways to tackle them. Overcoming these pitfalls will aid in integrating AI seamlessly into dentistry.

1. Introduction

Artificial intelligence (AI) and machine learning (ML) are terms that are often used interchangeably in research even though they have different meanings. John McCarthy, called the father of artificial intelligence, coined the term ‘artificial intelligence’ to describe machines with the potential to perform actions considered intelligent without any human intervention [1]. Such machines are capable of solving problems based on the data they are given. Artificial intelligence has long been a mainstay of popular science fiction. The field originally stemmed from Alan Turing’s “Imitation game”, or the “Turing test” [2]. Logic Theorist, developed by Allen Newell and Herbert Simon in 1955, was the first-ever AI program [3].
Machine learning (ML) is a subset of artificial intelligence [4]. Arthur Samuel coined the term in 1959 [5]. ML predicts outcomes from the dataset provided to it using algorithms such as artificial neural networks (ANN). These networks mimic the human brain: they consist of interconnected artificial neurons that receive and analyze data signals. Warren McCulloch and Walter Pitts suggested this concept in a seminal paper published in 1943. Later, Marvin Minsky and Dean Edmonds developed the first ANN, the stochastic neural analog reinforcement calculator (SNARC), in 1951 [6].
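At its core, an artificial neuron computes a weighted sum of its inputs and passes it through an activation function. The following minimal sketch (illustrative only; the feature names and weights are hypothetical and not taken from any study cited here) shows that idea in Python with NumPy.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs passed through a sigmoid."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical, scaled patient features (e.g., age, plaque index, sugar intake).
x = np.array([0.6, 0.8, 0.3])
w = np.array([0.4, 0.9, -0.2])   # "learned" connection strengths (made up)
print(neuron(x, w, bias=0.1))    # output in (0, 1), read here as a risk score
```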
Deep learning (DL), commonly implemented with multi-layer architectures such as convolutional neural networks (CNN), is an approach within ML introduced in 2006 by Hinton et al. [7]. It uses neural networks with many layers to compute increasingly abstract representations of the data. Deep learning algorithms can learn patterns from the data and improve their outcomes over time. The development of the backpropagation algorithm in 1969 paved the way for deep learning systems [8]. Figure 1 depicts the important milestones in the advancement of AI through the years.
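As an illustration of the multi-layer idea and of backpropagation-based training, the sketch below defines a tiny CNN in PyTorch and performs one training step on random stand-in images. The layer sizes, the two-class setup, and the data are assumptions made for demonstration; none of this reproduces the architectures used in the studies reviewed below.

```python
import torch
import torch.nn as nn

# Tiny CNN for 64x64 grayscale patches, two classes (lesion / no lesion).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 1, 64, 64)     # stand-in batch of radiograph patches
labels = torch.tensor([0, 1, 1, 0])    # stand-in ground-truth labels

logits = model(images)                 # forward pass through the layers
loss = loss_fn(logits, labels)
loss.backward()                        # backpropagation of the error signal
optimizer.step()                       # weight update
```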
An abundant supply of data is crucial for implementing machine learning. Data can take a variety of forms: images such as clinical photographs and radiographs, text such as patient records and symptom descriptions, and audio such as voice, murmurs, bruits, or auscultation and percussion sounds. Figure 2 shows the working of AI in a schematic format. This adaptability to a variety of inputs gives artificial intelligence an added advantage in revolutionizing medical, dental, and healthcare delivery. Recently, artificial intelligence in dentistry has attracted immense attention in specialties such as orthodontics [9,10,11], endodontics [12,13], prosthodontics [14,15], restorative dentistry [16,17], periodontics [18,19,20], and oral and maxillofacial surgery [21,22,23]. Research reveals promising results, although most applications are still in the developmental phase. It is therefore necessary that dentists understand the foundational concepts and applications of AI in dentistry in order to adapt to a changing healthcare landscape [24].
Today, artificial intelligence (AI) has been suggested to be useful in diagnosing disease, predicting prognosis, and developing patient-specific treatment strategies [25]. In particular, AI can assist dentists in making time-sensitive critical decisions. It can remove the human element of error in decision-making, providing a superior and uniform quality of health care while reducing the stress load on dentists. This paper reviews selected literature pertaining to the research and development of AI in the diagnosis of various oral and maxillofacial diseases, such as dental caries, periodontal disease, maxillary sinus diseases, salivary gland diseases, temporomandibular joint disorders, osteoporosis, and oral cancer.

2. Search Strategy

We used PubMed, Google Scholar, and ScienceDirect to conduct a systematic search using a variety of key terms. For the model, these included “Convolutional Neural Network” OR “Deep Learning” OR “Natural Language Processing” OR “neural network” OR “Machine Learning” OR “unsupervised learning” OR “Artificial Intelligence” OR “supervised learning”. Similarly, for the disease, the terms included “dental caries” OR “periodontal disease” OR “maxillary sinus diseases” OR “salivary gland diseases” OR “temporomandibular joint disorders” OR “osteoporosis” OR “oral cancer”. We looked for articles published between January 2016 and December 2021. In addition, the reference lists of the selected articles were examined to identify further articles for this review.
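For readers who wish to run a comparable search programmatically, the sketch below assembles the boolean query described above and submits it to NCBI's public E-utilities endpoint. It only illustrates the search logic; it is not the exact query string or tooling used for this review.

```python
import requests

model_terms = ['"Convolutional Neural Network"', '"Deep Learning"',
               '"Natural Language Processing"', '"neural network"',
               '"Machine Learning"', '"unsupervised learning"',
               '"Artificial Intelligence"', '"supervised learning"']
disease_terms = ['"dental caries"', '"periodontal disease"',
                 '"maxillary sinus diseases"', '"salivary gland diseases"',
                 '"temporomandibular joint disorders"', '"osteoporosis"',
                 '"oral cancer"']
query = f'({" OR ".join(model_terms)}) AND ({" OR ".join(disease_terms)})'

# PubMed search restricted to the review's time window (publication date).
resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 200,
            "datetype": "pdat", "mindate": "2016/01/01", "maxdate": "2021/12/31"},
)
print(resp.json()["esearchresult"]["count"], "records found")
```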

3. Dental Caries

Dental caries is the most prevalent disease across the globe. Early diagnosis is key to decreasing caries-related morbidity. Caries diagnosis relies largely on visual cues and radiographic data, and this visual data can serve as the input dataset for machine learning (ML). Devito et al. (2008) evaluated the efficiency of a multi-layer perceptron neural network in diagnosing proximal caries on bitewing radiographs and reported a diagnostic improvement of 39.4% [26]. Lee et al. (2018) used 3000 periapical radiographs to evaluate the efficacy of deep convolutional neural networks in identifying dental caries; accuracies of 89%, 88%, and 82% were observed for the premolar, molar, and combined premolar-molar regions, respectively [27]. Hung et al. (2019) conducted a study with training and test sets comprising data obtained from the National Health and Nutrition Examination Survey. Supervised learning methods were used to classify the data based on the presence or absence of root caries. Among the various ML methods used in their study, the support vector machine (SVM) showed the best performance in identifying root caries [28].
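The supervised-learning workflow common to these studies (split the labeled records into training and test sets, fit a classifier such as an SVM, and evaluate on held-out data) can be sketched as follows. The data here are synthetic stand-ins, not the NHANES records used by Hung et al.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labeled survey records (label 1 = root caries present).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)   # support vector machine
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```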
Similarly, clinical imaging data from various sources have been used in AI models for diagnosing dental caries with excellent results. In 2019, a study examined the use of convolutional neural networks (CNN) to identify dental caries in near-infrared transillumination images; the CNN increased the speed and accuracy of caries detection [29]. Cantu et al. (2020) used bitewing radiographs to assess the performance of a deep learning (DL) network in detecting carious lesions. A total of 3686 radiographs were used, of which 3293 served for training and 252 as test data. The deep neural network showed higher accuracy than dentists and can be used to detect initial caries lesions on bitewing radiographs [30]. Park et al. (2021) tested ML prediction models for the detection of early childhood caries (ECC) against traditional regression models. Data from 4195 children (1–5 years) were obtained from the Korea National Health and Nutrition Examination Survey (2007–2018) and analyzed. The ML-based prediction models were able to detect ECC, predict high-risk groups, and suggest treatment, similar to the traditional prediction models [31].

4. Tooth Fracture

The third most common reason for tooth loss is traumatized or cracked teeth. Early detection and treatment can save a cracked tooth and help retain it. However, cracked teeth often present with intermittent symptoms, making their detection problematic, and conventional techniques, such as CBCT and intraoral radiographs, have low sensitivity and clarity. Paniagua et al. (2018) developed a novel method capable of detecting, quantifying, and localizing cracked teeth using high-resolution CBCT (hr-CBCT) scans with steerable wavelets and machine learning methods. The performance of the ML models was tested using hr-CBCT scans of healthy teeth with simulated cracks, and the models showed high specificity and sensitivity [32]. Fukuda et al. (2020) used a CNN to detect vertical root fractures in 300 panoramic radiographs containing 330 vertically fractured teeth with visible fracture lines. Of these, 80% of the data was used for training and 20% as the test data set. The results suggest that CNNs can be used as a diagnostic tool for the detection of vertical root fractures [33].

5. Periodontal Diseases

Periodontal disease affects more than a billion people globally, destroying alveolar bone and leading to tooth loss. Early diagnosis of periodontal disease using AI can improve the dental status of the patient as well as their overall health and quality of life. Ozden et al. (2015) examined the use of a support vector machine (SVM), decision tree (DT), and ANN to identify and classify periodontal disease. Data from a total of 150 patients were used, 100 as training data and 50 as test data, and the three systems classified the data into six types of periodontal conditions. SVM and DT were more accurate as diagnostic support tools than ANN [18]. Nakano et al. (2018) used deep learning (DL) to detect oral malodor from the oral microbiota. A total of 90 patients, 45 with weak or no malodor and 45 with marked malodor, were selected using organoleptic tests and gas chromatography, and gene analysis of amplified 16S rRNA from the patients' saliva was carried out. DL was used to classify the samples into malodor and healthy breath and showed a predictive accuracy of 97%, compared to 79% for SVM [19]. ANN has also been used to predict the occurrence of recurrent aphthous ulcers; gender, serum B12, hemoglobin, serum ferritin, folate levels, salivary candida count, tooth-brushing frequency, and the number of fruits and vegetables consumed daily were related to the occurrence of ulcers [20]. Danks et al. (2021) used a deep neural network (DNN) to measure periodontal bone loss on periapical radiographs. Periapical radiographs of single-, double-, and triple-rooted teeth obtained from 63 patients were used. The DNN was first trained to detect dental landmarks on the radiographs, and periodontal bone loss was then measured from these landmarks. The system achieved a total percentage of correct keypoints of 89.9% and showed promising results, which can be further improved through experimentation and cross-validation with extended data sets [34]. Similarly, a DL model was used to detect and measure periodontal bone loss on panoramic images, which was then used for staging periodontitis. The performance of the DL model was compared to that of three oral radiologists, and staging was done according to the new classification of periodontal and peri-implant diseases and conditions [35]. A total of 340 panoramic radiographs were used, of which 90% were used for training and 10% for testing, and data augmentation was carried out to increase the data by 64%. The DL model had high accuracy and excellent reliability, suggesting that it can be used for the automatic diagnosis of periodontal disease and as a routine surveillance tool [36].
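Data augmentation, as mentioned for the panoramic-radiograph study above, enlarges a training set by applying randomized, label-preserving transformations to each image. A hedged sketch with torchvision follows; the specific transformations are assumptions chosen for illustration, not those reported in that study.

```python
from PIL import Image
from torchvision import transforms

# Randomized, label-preserving transformations applied at training time.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=5),   # small rotations only
    transforms.ToTensor(),
])

img = Image.new("L", (512, 256))            # stand-in grayscale panoramic radiograph
tensor = augment(img)                       # a different randomized variant every epoch
print(tensor.shape)
```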

6. Maxillary Sinus Diseases

The maxillary sinuses are structures commonly visualized on extraoral radiographs. Automated identification of the sinuses and detection of any pathology within them by AI could lead to a manifold decrease in misdiagnoses, and AI can serve as a tool to assist inexperienced dentists. Murata et al. (2018) evaluated the performance of a DL system in diagnosing maxillary sinusitis on panoramic radiographs. The AI performance was compared to that of two radiologists and two residents: the diagnostic performance of the system was similar to that of the radiologists and superior to that of the dental residents [37]. Kim et al. (2019) used radiographs of the maxillary sinus in Waters' view to evaluate the diagnostic performance of a DL system; the AI showed statistically significantly better sensitivity and specificity than radiologists [38]. Mucosal thickening and mucosal retention cysts are often missed by radiologists. Kuwana et al. (2021) used panoramic radiographs to detect and classify lesions in the maxillary sinus using a DL object-detection technique. Detection of the normal and inflamed maxillary sinus showed 100% sensitivity, whereas the detection sensitivity for mucosal retention cysts was 98% and 89% in the two test data sets used. This DL model can be reliably used in a clinical setup [39]. A recent study proposed a CNN model to assist radiologists, capable of detecting and segmenting mucosal thickening and mucosal retention cysts of the maxillary sinus on CBCT images. A total of 890 maxillary sinuses from 445 patients were used in the study. Low-dose images were used for training and testing, while full-dose images served as an additional test data set. The CNN model performed effectively on images of both dose protocols, with no significant difference [40].

7. Salivary Gland Diseases

Salivary gland diseases pose a diagnostic challenge to inexperienced dentists because of their overlapping morphological features. AI can be a valuable tool in supporting the diagnosis of salivary gland diseases, and DL models can, in some instances, outperform radiologists. In an early Japanese study, researchers used DL to detect fatty degeneration of the salivary gland parenchyma on CT images, which is evident in Sjogren's syndrome. Of the 500 CT images, 400 (200 from the control group and 200 from Sjogren's syndrome patients) were used as the training dataset, while 100 were used as the test data set to analyze the performance of the system. The diagnostic performance of DL was equivalent to that of experienced radiologists and significantly superior to that of inexperienced radiologists [41]. The low incidence and overlapping morphologic features of salivary gland tumors make them challenging for clinicians to diagnose. ML has been used to detect malignant salivary gland tumors based on their cytologic appearance: a recursive partitioning algorithm was used to classify 115 malignant tumor samples using 12 morphologic variables, and the performance was compared to that of experienced clinicians. The decision tree system was effective in narrowing down the differential diagnoses, increasing the accuracy of pathological diagnosis [42]. AI also has the potential to be used as a tool to predict the recurrence of salivary gland malignancies [43]. Facial nerve injury after surgical treatment of a salivary gland tumor is a severe complication. Chiesa-Estomba et al. (2021) used clinical, radiological, histological, and cytological data to predict the occurrence of facial nerve palsy and reported that AI can be used as an assessment tool for predicting facial nerve injury, so that both surgeons and patients are aware of possible complications in advance [44].

8. Temporomandibular Joint Disorders

Diagnosing TMJ disorders is challenging for inexperienced dentists, and ANN systems can simplify and assist in this diagnosis. The performance of an ANN model was tested on the task of recognizing non-reducing disks in patients. Frontal-plane chewing data were obtained from 68 patients with normal disks or unilateral or bilateral non-reducing disks. Half the data was used to train the ANN system, while the other half was used for testing. The system showed an acceptable level of error and potential as a supporting diagnostic tool with an excellent cost/benefit ratio [45]. Bas et al. (2012) conducted a similar study using clinical symptoms. The clinical symptoms and diagnoses of 219 patients were obtained from experienced oral and maxillofacial surgeons; the data from the first 161 patients were used to train the ANN, while the rest were used to test it. The neural network showed acceptable results in diagnosing internal derangements of the TMJ, and additional patient data, clinical data, radiographs, and images could improve its diagnostic capacity [46]. Iwasaki (2015) applied Bayesian belief network analysis to MRI images to determine the progression of TMJ disorders. A total of 295 cases with 590 TMJ sides were analyzed with 11 algorithms. The results suggested that osteoarthritic changes progress from the condyle to the articular fossa and then to the mandibular bone contours; age, disk form, bony space, and condylar translation were elements that affected disk displacement and bony changes [47]. Choi et al. (2021) developed an AI model to detect osteoarthritis on panoramic (OPG) images; this model can be used in clinical setups where a CT facility or a maxillofacial radiologist is not readily available [48]. Orhan et al. (2021) used magnetic resonance images of TMJs in an AI model to detect TMJ pathologies, such as condylar osseous changes and disk derangements [49]. AI models can learn from a variety of input data: researchers have even used infrared thermography images, with the masseter and lateral pterygoid muscles as the areas of interest, to diagnose TMJ disorders with an AI model [50].
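A Bayesian belief network encodes how the probability of one finding should be updated when another is observed. The toy calculation below (all probabilities are hypothetical, not Iwasaki's fitted network) shows the underlying Bayes-rule update for the probability of disk displacement after an abnormal disk form is seen on MRI.

```python
# All probabilities below are hypothetical and chosen only to illustrate the update.
p_d = 0.30                     # prior P(disk displacement)
p_f_given_d = 0.85             # P(abnormal disk form | displacement)
p_f_given_not_d = 0.10         # P(abnormal disk form | no displacement)

p_f = p_f_given_d * p_d + p_f_given_not_d * (1 - p_d)    # total probability
p_d_given_f = p_f_given_d * p_d / p_f                    # Bayes' rule
print(f"P(displacement | abnormal disk form) = {p_d_given_f:.2f}")  # ~0.78
```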

9. Osteoporosis

Osteoporosis can be detected on panoramic radiographs, and various indices, such as the gonial index, mental index, mandibular cortical index, and panoramic mandibular index, have previously been used for this purpose [51,52,53,54]. AI could simplify the diagnosis of osteoporosis and buttress the work of radiologists. Lee et al. (2019) evaluated the performance of a deep convolutional neural network (DCNN)-based computer-aided diagnosis (CAD) system in diagnosing osteoporosis from panoramic images against radiologists with 10 years of experience. Of the 1268 images, 200 were used as test images. The DCNN-CAD results were highly concordant with the diagnoses of the radiologists, and such systems can help dentists with early diagnosis and referral to specialists [55]. Lee et al. (2020) compared different types of CNN models to assess which worked best for the diagnosis of osteoporosis and found that a CNN model with transfer learning and fine-tuning was best able to diagnose osteoporosis automatically [56].
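Transfer learning with fine-tuning, as described for the osteoporosis models, reuses a network pretrained on a large generic image set, replaces its classification head, and then retrains selected layers on the smaller radiographic data set. The sketch below illustrates this with a torchvision ResNet-18; the choice of backbone and of which layers to unfreeze are assumptions for illustration, not the configuration used by Lee et al.

```python
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet (older torchvision versions use pretrained=True).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():            # freeze the pretrained feature extractor
    param.requires_grad = False

# Replace the classification head: osteoporosis vs. normal (2 classes assumed here).
model.fc = nn.Linear(model.fc.in_features, 2)

# Fine-tuning: once the new head has converged, unfreeze the last block and
# continue training with a small learning rate.
for param in model.layer4.parameters():
    param.requires_grad = True
```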

10. Oral Cancer and Cervical Lymph Node Metastasis

Oral cancer is the sixth most common malignancy worldwide, and early detection can lead to a better prognosis and survival rate [57]. AI can aid in early diagnosis and decrease the mortality and morbidity associated with oral cancer. Nayak et al. (2005) used ANN to discriminate between normal, premalignant, and malignant tissues using laser-induced autofluorescence spectra and compared it to principal component analysis of the same tissues. The results showed an accuracy of 98.3%, specificity of 100%, and sensitivity of 96.5%, suggesting that this method can have efficient real-time applications [58]. Uthoff et al. (2018) used a CNN to detect precancerous and cancerous lesions from autofluorescence and white-light images. The CNN was more effective than specialists in diagnosing these lesions, and its performance can improve further with larger data sets [59]. Aubreville et al. (2017) used DL to identify oral cancer on confocal laser endomicroscopy (CLE) images; this method had an accuracy of 88.3% and a specificity of 90% [60]. Shams et al. (2017) conducted a comparative study to predict the development of oral cancer from oral potentially malignant lesions using deep neural networks (DNN). The DNN was compared against support vector machines, regularized least squares, and a multi-layer perceptron and achieved a higher accuracy of 96% [61]. These findings were corroborated by Jeyaraj et al. (2019), who used a CNN to distinguish between cancerous and non-cancerous tissues based on hyperspectral images; the results suggest that CNNs can be employed for image-based classification and diagnosis of oral cancer without expert supervision [62]. Recently, considerable research has taken place in this field, and many studies have successfully developed AI models capable of predicting the occurrence and recurrence of oral cancer [63,64,65,66,67].
Several studies have compared deep learning (DL) systems against experienced radiologists, with varied results. Ariji et al. (2019) assessed the performance of DL in identifying cervical lymph node metastasis on CT images. CT images of 137 histologically proven metastatic cervical lymph nodes and 314 histologically negative lymph nodes from 45 patients with oral squamous cell carcinoma were used, and the results of the DL approach were compared against two trained radiologists; the DL network was as accurate as the trained radiologists [68]. The same group also used DL to detect extra-nodal extension of cervical lymph node metastases. A total of 703 CT images from 51 patients with and without extra-nodal extension were collected, of which 80% were used as training data and 20% as test data. The performance of the DL system was significantly superior to that of the radiologists, suggesting that it can be used as a diagnostic tool for detecting extra-nodal extension [69].
Overall, this review of artificial intelligence for diagnosis points towards a positive trend with encouraging results. Neural networks and machine learning appear as effective as, or better than, trained radiologists and clinicians (Table 1) in detecting caries, sinusitis, periodontal disease, and TMJ disorders. Artificial intelligence models for cancer diagnosis can curate diverse data streams to render judgments, assess risk, and guide referral to specialists (Table 2). Studies on premalignant lesions, lymph nodes, salivary gland tumors, and squamous cell carcinoma show encouraging results for the diagnostic and prognostic value of artificial intelligence; these efforts may reduce mortality rates through early diagnosis and effective therapeutic interventions. These platforms will require large data sets and the resources to analyze them in order to provide precise and cost-effective diagnoses. To be securely integrated into daily clinical procedures, the models need to be refined to reach the highest possible accuracy, specificity, and sensitivity. Regulatory frameworks for the deployment of these models in clinical practice are also required.
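The accuracy, sensitivity, specificity, and AUC figures quoted throughout this review are computed from a model's predictions on a labeled test set, as in the short sketch below (the predictions shown are made up for illustration).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # 1 = disease present
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.35, 0.8, 0.6])  # model scores (made up)
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)    # proportion of diseased cases correctly flagged
specificity = tn / (tn + fp)    # proportion of healthy cases correctly cleared
print(sensitivity, specificity, roc_auc_score(y_true, y_prob))
```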

11. Prospects and Challenges

AI in dentistry is mostly in the nascent stages and has yet to enter the realm of day-to-day dentistry; numerous hurdles remain before it can be seamlessly integrated into diagnosis and healthcare. Machine learning requires large volumes of data, which are held by private dental setups and institutions. Data sharing and privacy are issues that need to be addressed through federated guidelines and laws; this could rectify a common drawback reported in most studies: a shortage of data sets. European and American legislative bodies have passed the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) to limit the risks of data sharing and protect consumer confidentiality [70,71]. Federated data systems similar to VANTAGE6, the Personal Health Train (PHT), and DataSHIELD need to be developed so that data can be shared without breaching data security policies [71,72,73]. AI can also convert widely heterogeneous data into curated, homogeneous data that are easy to use and interpret. Most of the studies in this review have been supervised, image-based studies for the identification of structures or associations, which provides only part of the information required for decision-making or treatment. AI capable of unsupervised diagnosis and prediction of diseases needs to be built to reduce subjective errors and provide standardized decisions. A shortage of manpower and resources is emblematic of rural communities, and AI-based healthcare initiatives can connect rural and far-flung places with quality health care, benefiting the local population. Prospective randomized controlled trials and cohort studies must be performed to evaluate the impact of AI on treatment and to test its outcomes and cost-effectiveness [74,75,76].
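Federated approaches such as those named above keep patient records inside each institution and share only model parameters. The sketch below is a conceptual illustration of federated averaging with a simple logistic-regression learner on synthetic data; it does not use the VANTAGE6, PHT, or DataSHIELD APIs.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01):
    """One gradient step of logistic regression on a clinic's private records."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
global_w = np.zeros(5)
# Three clinics, each holding its own (never shared) feature matrix and labels.
clinics = [(rng.normal(size=(40, 5)), rng.integers(0, 2, 40)) for _ in range(3)]

for _ in range(10):                                    # federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in clinics]
    global_w = np.mean(local_ws, axis=0)               # server averages weights only
print(global_w)
```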

12. Conclusions

The field of artificial intelligence (AI) is rapidly evolving to fill an ever-expanding niche in medicine and dentistry, yet most AI research is still in its nascent stage. Increased availability of patient data can accelerate research into artificial intelligence, machine learning, and neural networks. Today, few real-time AI applications are integrated into the internal operational processes of dental clinics. Research has shown that data-driven AI is reliable, transparent, and, in certain cases, better than humans at diagnosis. AI can replicate the human functions of reasoning, planning, and problem-solving. Its application can save time and storage, reduce manpower, and eliminate human errors in diagnosis. The rise of artificial intelligence in dental care will revolutionize dentistry and usher in wider access to dental health care with better patient outcomes.

Author Contributions

Conceptualization, S.P., S.A. and J.H.; methodology, S.S.S.J.A., M.A.K. and S.M.; software, H.N.A. and M.A.M.; validation, M.A.M., S.B. and S.P.; formal analysis, S.M. and S.A.; investigation, M.A.K. and H.N.A.; resources, M.A.M. and S.B.; data curation, S.A. and S.M.; writing—original draft preparation, S.P., S.A., J.H. and S.M.; writing—review and editing, S.S.S.J.A. and M.A.K.; visualization, H.N.A. and M.A.M.; supervision, S.B. and S.S.S.J.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to acknowledge the input provided by Ahmed Alamoudi and Bassam Zidane (King Abdulaziz University) in revising the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ensmenger, N.; Nilsson, N.J. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Xv + 562 pp., Index; Cambridge University Press: Cambridge, MA, USA; New York, NY, USA, 2010; Volume 102, pp. 588–589. [Google Scholar] [CrossRef]
  2. Turing, A.M. On Computable Numbers, with an Application to the Entscheidungsproblem; London Mathematical Society: London, UK, 1937; Volume s2–s42, pp. 230–265. [Google Scholar]
  3. Newell, A.; Simon, H.A. Computer Science as Empirical Inquiry. Commun. ACM 1976, 19, 113–126. [Google Scholar] [CrossRef] [Green Version]
  4. Khanagar, S.B.; Al-Ehaideb, A.; Maganur, P.C.; Vishwanathaiah, S.; Patil, S.; Baeshen, H.A.; Sarode, S.C.; Bhandi, S. Developments, Application, and Performance of Artificial Intelligence in Dentistry—A Systematic Review. J. Dent. Sci. 2021, 16, 508–522. [Google Scholar] [CrossRef] [PubMed]
  5. Bowling, M.; Fürnkranz, J.; Graepel, T.; Musick, R. Machine Learning and Games. Mach. Learn. 2006, 63, 211–215. [Google Scholar] [CrossRef] [Green Version]
  6. Park, W.J.; Park, J.B. History and Application of Artificial Neural Networks in Dentistry. Eur. J. Dent. 2018, 12, 594–601. [Google Scholar] [CrossRef]
  7. Hinton, G.E.; Osindero, S.; Teh, Y.W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
  8. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications, and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef] [PubMed]
  9. Jung, S.K.; Kim, T.W. New Approach for the Diagnosis of Extractions with Neural Network Machine Learning. Am. J. Orthod. Dentofac. Orthop. 2016, 149, 127–133. [Google Scholar] [CrossRef] [Green Version]
  10. Niño-Sandoval, T.C.; Perez, S.V.G.; González, F.A.; Jaque, R.A.; Infante-Contreras, C. An Automatic Method for Skeletal Patterns Classification Using Craniomaxillary Variables on a Colombian Population. Forensic Sci. Int. 2016, 261, 159.e1–159.e6. [Google Scholar] [CrossRef]
  11. Niño-Sandoval, T.C.; Pérez, S.V.G.; González, F.A.; Jaque, R.A.; Infante-Contreras, C. Use of Automated Learning Techniques for Predicting Mandibular Morphology in Skeletal Class I, II and III. Forensic Sci. Int. 2017, 281, 187.e1–187.e7. [Google Scholar] [CrossRef]
  12. Saghiri, M.A.; Garcia-Godoy, F.; Gutmann, J.L.; Lotfi, M.; Asgar, K. The Reliability of Artificial Neural Network in Locating Minor Apical Foramen: A Cadaver Study. J. Endod. 2012, 38, 1130–1134. [Google Scholar] [CrossRef]
  13. Saghiri, M.A.; Asgar, K.; Boukani, K.K.; Lotfi, M.; Aghili, H.; Delvarani, A.; Karamifar, K.; Saghiri, A.M.; Mehrvarzfar, P.; Garcia-Godoy, F. A New Approach for Locating the Minor Apical Foramen Using an Artificial Neural Network. Int. Endod. J. 2012, 45, 257–265. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Chen, Q.; Wu, J.; Li, S.; Lyu, P.; Wang, Y.; Li, M. An Ontology-Driven, Case-Based Clinical Decision Support Model for Removable Partial Denture Design. Sci. Rep. 2016, 6, 27855. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Li, H.; Lai, L.; Chen, L.; Lu, C.; Cai, Q. The Prediction in Computer Color Matching of Dentistry Based on GA+BP Neural Network. Comput. Math. Methods Med. 2015, 2015, 816719. [Google Scholar] [CrossRef] [PubMed]
  16. Aliaga, I.J.; Vera, V.; de Paz, J.F.; García, A.E.; Mohamad, M.S. Modelling the Longevity of Dental Restorations by Means of a CBR System. BioMed Res. Int. 2015, 2015, 540306. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Thanathornwong, B.; Suebnukarn, S.; Ouivirach, K. Decision Support System for Predicting Color Change after Tooth Whitening. Comput. Methods Programs Biomed. 2016, 125, 88–93. [Google Scholar] [CrossRef] [PubMed]
  18. Ozden, F.O.; Özgönenel, O.; Özden, B.; Aydogdu, A. Diagnosis of Periodontal Diseases Using Different Classification Algorithms: A Preliminary Study. Niger. J. Clin. Pract. 2015, 18, 416. [Google Scholar] [CrossRef] [Green Version]
  19. Nakano, Y.; Suzuki, N.; Kuwata, F. Predicting Oral Malodour Based on the Microbiota in Saliva Samples Using a Deep Learning Approach. BMC Oral Health 2018, 18, 128. [Google Scholar] [CrossRef]
  20. Dar-Odeh, N.S.; Alsmadi, O.M.; Bakri, F.; Abu-Hammour, Z.; Shehabi, A.A.; Al-Omiri, M.K.; Abu-Hammad, S.M.K.; Al-Mashni, H.; Saeed, M.B.; Muqbil, W.; et al. Predicting Recurrent Aphthous Ulceration Using Genetic Algorithms-Optimized Neural Networks. Adv. Appl. Bioinform. Chem. 2010, 3, 7. [Google Scholar] [CrossRef] [Green Version]
  21. Kositbowornchai, S.; Plermkamon, S.; Tangkosol, T. Performance of an Artificial Neural Network for Vertical Root Fracture Detection: An Ex Vivo Study. Dent. Traumatol. 2013, 29, 151–155. [Google Scholar] [CrossRef]
  22. de Bruijn, M.; ten Bosch, L.; Kuik, D.J.; Langendijk, J.A.; Leemans, C.R.; de Leeuw, I.V. Artificial Neural Network Analysis to Assess Hypernasality in Patients Treated for Oral or Oropharyngeal Cancer. Logop. Phoniatr. Vocol. 2011, 36, 168–174. [Google Scholar] [CrossRef]
  23. Chang, S.W.; Abdul-Kareem, S.; Merican, A.F.; Zain, R.B. Oral Cancer Prognosis Based on Clinicopathologic and Genomic Markers Using a Hybrid of Feature Selection and Machine Learning Methods. BMC Bioinform. 2013, 14, 170. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Pethani, F. Promises and Perils of Artificial Intelligence in Dentistry. Aust. Dent. J. 2021, 66, 124–135. [Google Scholar] [CrossRef] [PubMed]
  25. Shan, T.; Tay, F.R.; Gu, L. Application of Artificial Intelligence in Dentistry. J. Dent. Res. 2021, 100, 232–244. [Google Scholar] [CrossRef] [PubMed]
  26. Devito, K.L.; de Souza Barbosa, F.; Filho, W.N.F. An Artificial Multilayer Perceptron Neural Network for Diagnosis of Proximal Dental Caries. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endodontol. 2008, 106, 879–884. [Google Scholar] [CrossRef] [PubMed]
  27. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Detection and Diagnosis of Dental Caries Using a Deep Learning-Based Convolutional Neural Network Algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef] [PubMed]
  28. Hung, M.; Voss, M.W.; Rosales, M.N.; Li, W.; Su, W.; Xu, J.; Bounsanga, J.; Ruiz-Negrón, B.; Lauren, E.; Licari, F.W. Application of Machine Learning for Diagnostic Prediction of Root Caries. Gerodontology 2019, 36, 395–404. [Google Scholar] [CrossRef] [PubMed]
  29. Casalegno, F.; Newton, T.; Daher, R.; Abdelaziz, M.; Lodi-Rizzini, A.; Schürmann, F.; Krejci, I.; Markram, H. Caries Detection with Near-Infrared Transillumination Using Deep Learning. J. Dent. Res. 2019, 98, 1227–1233. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Cantu, A.G.; Gehrung, S.; Krois, J.; Chaurasia, A.; Rossi, J.G.; Gaudin, R.; Elhennawy, K.; Schwendicke, F. Detecting Caries Lesions of Different Radiographic Extension on Bitewings Using Deep Learning. J. Dent. 2020, 100, 103425. [Google Scholar] [CrossRef]
  31. Park, Y.H.; Kim, S.H.; Choi, Y.Y. Prediction Models of Early Childhood Caries Based on Machine Learning Algorithms. Int. J. Environ. Res. Public Health 2021, 18, 8613. [Google Scholar] [CrossRef] [PubMed]
  32. Paniagua, B.; Shah, H.; Hernandez-Cerdan, P.; Budin, F.; Chittajallu, D.; Walter, R.; Mol, A.; Khan, A.; Vimort, J.-B. Automatic Quantification Framework to Detect Cracks in Teeth. Proc. SPIE Int. Soc. Opt. Eng. 2018, 10578, 105781K. [Google Scholar] [CrossRef] [Green Version]
  33. Fukuda, M.; Inamoto, K.; Shibata, N.; Ariji, Y.; Yanashita, Y.; Kutsuna, S.; Nakata, K.; Katsumata, A.; Fujita, H.; Ariji, E. Evaluation of an Artificial Intelligence System for Detecting Vertical Root Fracture on Panoramic Radiography. Oral Radiol. 2019, 36, 337–343. [Google Scholar] [CrossRef] [PubMed]
  34. Danks, R.P.; Bano, S.; Orishko, A.; Tan, H.J.; Moreno Sancho, F.; D’Aiuto, F.; Stoyanov, D. Automating Periodontal Bone Loss Measurement via Dental Landmark Localisation. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1189–1199. [Google Scholar] [CrossRef] [PubMed]
  35. Tonetti, M.S.; Greenwell, H.; Kornman, K.S. Staging and Grading of Periodontitis: Framework and Proposal of a New Classification and Case Definition. J. Periodontol. 2018, 89 (Suppl. 1), S159–S172. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Chang, H.J.; Lee, S.J.; Yong, T.H.; Shin, N.Y.; Jang, B.G.; Kim, J.E.; Huh, K.H.; Lee, S.S.; Heo, M.S.; Choi, S.C.; et al. Deep Learning Hybrid Method to Automatically Diagnose Periodontal Bone Loss and Stage Periodontitis. Sci. Rep. 2020, 10, 7531. [Google Scholar] [CrossRef]
  37. Murata, M.; Ariji, Y.; Ohashi, Y.; Kawai, T.; Fukuda, M.; Funakoshi, T.; Kise, Y.; Nozawa, M.; Katsumata, A.; Fujita, H.; et al. Deep-Learning Classification Using Convolutional Neural Network for Evaluation of Maxillary Sinusitis on Panoramic Radiography. Oral Radiol. 2018, 35, 301–307. [Google Scholar] [CrossRef]
  38. Kim, Y.; Lee, K.J.; Sunwoo, L.; Choi, D.; Nam, C.M.; Cho, J.; Kim, J.; Bae, Y.J.; Yoo, R.E.; Choi, B.S.; et al. Deep Learning in Diagnosis of Maxillary Sinusitis Using Conventional Radiography. Investig. Radiol. 2019, 54, 7–15. [Google Scholar] [CrossRef]
  39. Kuwana, R.; Ariji, Y.; Fukuda, M.; Kise, Y.; Nozawa, M.; Kuwada, C.; Muramatsu, C.; Katsumata, A.; Fujita, H.; Ariji, E. Performance of Deep Learning Object Detection Technology in the Detection and Diagnosis of Maxillary Sinus Lesions on Panoramic Radiographs. Dentomaxillofacial Radiol. 2021, 50, 20200171. [Google Scholar] [CrossRef]
  40. Hung, K.F.; Ai, Q.Y.H.; King, A.D.; Bornstein, M.M.; Wong, L.M.; Leung, Y.Y. Automatic Detection and Segmentation of Morphological Changes of the Maxillary Sinus Mucosa on Cone-Beam Computed Tomography Images Using a Three-Dimensional Convolutional Neural Network. Clin. Oral Investig. 2022. online ahead of print. [Google Scholar] [CrossRef]
  41. Kise, Y.; Ikeda, H.; Fujii, T.; Fukuda, M.; Ariji, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Preliminary Study on the Application of Deep Learning System to Diagnosis of Sjögren’s Syndrome on CT Images. Dentomaxillofacial Radiol. 2019, 48, 48. [Google Scholar] [CrossRef]
  42. López-Janeiro, Á.; Cabañuz, C.; Blasco-Santana, L.; Ruiz-Bravo, E. A Tree-Based Machine Learning Model to Approach Morphologic Assessment of Malignant Salivary Gland Tumors. Ann. Diagn. Pathol. 2022, 56, 151869. [Google Scholar] [CrossRef]
  43. De Felice, F.; Valentini, V.; de Vincentiis, M.; di Gioia, C.R.T.; Musio, D.; Tummulo, A.A.; Ricci, L.I.; Converti, V.; Mezi, S.; Messineo, D.; et al. Prediction of Recurrence by Machine Learning in Salivary Gland Cancer Patients After Adjuvant (Chemo)Radiotherapy. Vivo 2021, 35, 3355–3360. [Google Scholar] [CrossRef] [PubMed]
  44. Chiesa-Estomba, C.M.; Echaniz, O.; Sistiaga Suarez, J.A.; González-García, J.A.; Larruscain, E.; Altuna, X.; Medela, A.; Graña, M. Machine Learning Models for Predicting Facial Nerve Palsy in Parotid Gland Surgery for Benign Tumors. J. Surg. Res. 2021, 262, 57–64. [Google Scholar] [CrossRef] [PubMed]
  45. Radke, J.C.; Ketcham, R.; Glassman, B.; Kull, R.S. Artificial Neural Network Learns to Differentiate Normal TMJs and Nonreducing Displaced Disks after Training on Incisor-Point Chewing Movements. Cranio J. Craniomandib. Pract. 2003, 21, 259–264. [Google Scholar] [CrossRef]
  46. Bas, B.; Ozgonenel, O.; Ozden, B.; Bekcioglu, B.; Bulut, E.; Kurt, M. Use of Artificial Neural Network in Differentiation of Subgroups of Temporomandibular Internal Derangements: A Preliminary Study. J. Oral Maxillofac. Surg. 2012, 70, 51–59. [Google Scholar] [CrossRef] [PubMed]
  47. Iwasaki, H. Bayesian Belief Network Analysis Applied to Determine the Progression of Temporomandibular Disorders Using MRI. Dentomaxillofacial Radiol. 2015, 44, 20140279. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Choi, E.; Kim, D.; Lee, J.Y.; Park, H.K. Artificial Intelligence in Detecting Temporomandibular Joint Osteoarthritis on Orthopantomogram. Sci. Rep. 2021, 11, 10246. [Google Scholar] [CrossRef] [PubMed]
  49. Orhan, K.; Driesen, L.; Shujaat, S.; Jacobs, R.; Chai, X. Development and Validation of a Magnetic Resonance Imaging-Based Machine Learning Model for TMJ Pathologies. BioMed Res. Int. 2021, 2021, 6656773. [Google Scholar] [CrossRef] [PubMed]
  50. de Lima, E.D.; Paulino, J.A.S.; de Farias Freitas, A.P.L.; Ferreira, J.E.V.; da Silva Barbosa, J.; Silva, D.F.B.; Bento, P.M.; Araújo Maia Amorim, A.M.; Melo, D.P. Artificial Intelligence and Infrared Thermography as Auxiliary Tools in the Diagnosis of Temporomandibular Disorder. Dentomaxillofacial Radiol. 2022, 51, 20210318. [Google Scholar] [CrossRef]
  51. Taguchi, A.; Ohtsuka, M.; Nakamoto, T.; Naito, K.; Tsuda, M.; Kudo, Y.; Motoyama, E.; Suei, Y.; Tanimoto, K. Identification of Post-Menopausal Women at Risk of Osteoporosis by Trained General Dental Practitioners Using Panoramic Radiographs. Dentomaxillofacial Radiol. 2014, 36, 149–154. [Google Scholar] [CrossRef] [Green Version]
  52. Okabe, S.; Morimoto, Y.; Ansai, T.; Yoshioka, I.; Tanaka, T.; Taguchi, A.; Kito, S.; Wakasugi-Sato, N.; Oda, M.; Kuroiwa, H.; et al. Assessment of the Relationship between the Mandibular Cortex on Panoramic Radiographs and the Risk of Bone Fracture and Vascular Disease in 80-Year-Olds. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endod. 2008, 106, 433–442. [Google Scholar] [CrossRef]
  53. Klemetti, E.; Kolmakov, S.; Kröger, H. Pantomography in Assessment of the Osteoporosis Risk Group. Eur. J. Oral Sci. 1994, 102, 68–72. [Google Scholar] [CrossRef] [PubMed]
  54. Taguchi, A.; Suei, Y.; Ohtsuka, M.; Otani, K.; Tanimoto, K.; Ohtaki, M. Usefulness of Panoramic Radiography in the Diagnosis of Postmenopausal Osteoporosis in Women. Width and Morphology of Inferior Cortex of the Mandible. Dentomaxillofacial Radiol. 2014, 25, 263–267. [Google Scholar] [CrossRef] [PubMed]
  55. Lee, J.-S.; Adhikari, S.; Liu, L.; Jeong, H.-G.; Kim, H.; Yoon, S.-J. Osteoporosis Detection in Panoramic Radiographs Using a Deep Convolutional Neural Network-Based Computer-Assisted Diagnosis System: A Preliminary Study. Dentomaxillofacial Radiol. 2019, 48, 20170344. [Google Scholar] [CrossRef] [PubMed]
  56. Lee, K.S.; Jung, S.K.; Ryu, J.J.; Shin, S.W.; Choi, J. Evaluation of Transfer Learning with Deep Convolutional Neural Networks for Screening Osteoporosis in Dental Panoramic Radiographs. J. Clin. Med. 2020, 9, 392. [Google Scholar] [CrossRef] [Green Version]
  57. Cancer. Available online: https://www.who.int/news-room/fact-sheets/detail/cancer (accessed on 31 January 2022).
  58. Nayak, G.S.; Kamath, S.; Pai, K.M.; Sarkar, A.; Ray, S.; Kurien, J.; D’Almeida, L.; Krishnanand, B.R.; Santhosh, C.; Kartha, V.B.; et al. Principal Component Analysis and Artificial Neural Network Analysis of Oral Tissue Fluorescence Spectra: Classification of Normal Premalignant and Malignant Pathological Conditions. Biopolymers 2006, 82, 152–166. [Google Scholar] [CrossRef]
  59. Uthoff, R.D.; Song, B.; Sunny, S.; Patrick, S.; Suresh, A.; Kolur, T.; Keerthi, G.; Spires, O.; Anbarani, A.; Wilder-Smith, P.; et al. Point-of-Care, Smartphone-Based, Dual-Modality, Dual-View, Oral Cancer Screening Device with Neural Network Classification for Low-Resource Communities. PLoS ONE 2018, 13, e0207493. [Google Scholar] [CrossRef]
  60. Aubreville, M.; Knipfer, C.; Oetter, N.; Jaremenko, C.; Rodner, E.; Denzler, J.; Bohr, C.; Neumann, H.; Stelzle, F.; Maier, A. Automatic Classification of Cancerous Tissue in Laserendomicroscopy Images of the Oral Cavity Using Deep Learning. Sci. Rep. 2017, 7, 11979. [Google Scholar] [CrossRef] [Green Version]
  61. Shams, W.K.; Htike, Z.Z. Oral Cancer Prediction Using Gene Expression Profiling and Machine Learning. Int. J. Appl. Eng. Res. 2017, 12, 4893–4898. [Google Scholar]
  62. Jeyaraj, P.R.; Samuel Nadar, E.R. Computer-Assisted Medical Image Classification for Early Diagnosis of Oral Cancer Employing Deep Learning Algorithm. J. Cancer Res. Clin. Oncol. 2019, 145, 829–837. [Google Scholar] [CrossRef]
  63. Kim, D.W.; Lee, S.; Kwon, S.; Nam, W.; Cha, I.H.; Kim, H.J. Deep Learning-Based Survival Prediction of Oral Cancer Patients. Sci. Rep. 2019, 9, 6994. [Google Scholar] [CrossRef] [Green Version]
  64. Alabi, R.O.; Elmusrati, M.; Sawazaki-Calone, I.; Kowalski, L.P.; Haglund, C.; Coletta, R.D.; Mäkitie, A.A.; Salo, T.; Almangush, A.; Leivo, I. Comparison of Supervised Machine Learning Classification Techniques in Prediction of Locoregional Recurrences in Early Oral Tongue Cancer. Int. J. Med. Inform. 2020, 136, 104068. [Google Scholar] [CrossRef] [PubMed]
  65. Alhazmi, A.; Alhazmi, Y.; Makrami, A.; Masmali, A.; Salawi, N.; Masmali, K.; Patil, S. Application of Artificial Intelligence and Machine Learning for Prediction of Oral Cancer Risk. J. Oral Pathol. Med. 2021, 50, 444–450. [Google Scholar] [CrossRef] [PubMed]
  66. Chu, C.S.; Lee, N.P.; Adeoye, J.; Thomson, P.; Choi, S.W. Machine Learning and Treatment Outcome Prediction for Oral Cancer. J. Oral Pathol. Med. 2020, 49, 977–985. [Google Scholar] [CrossRef] [PubMed]
  67. Kirubabai, M.P.; Arumugam, G. Deep Learning Classification Method to Detect and Diagnose the Cancer Regions in Oral MRI Images. Med. Leg. Update 2021, 21, 462–468. [Google Scholar] [CrossRef]
  68. Ariji, Y.; Fukuda, M.; Kise, Y.; Nozawa, M.; Yanashita, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Contrast-Enhanced Computed Tomography Image Assessment of Cervical Lymph Node Metastasis in Patients with Oral Cancer by Using a Deep Learning System of Artificial Intelligence. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2019, 127, 458–463. [Google Scholar] [CrossRef]
  69. Ariji, Y.; Sugita, Y.; Nagao, T.; Nakayama, A.; Fukuda, M.; Kise, Y.; Nozawa, M.; Nishiyama, M.; Katumata, A.; Ariji, E. CT Evaluation of Extranodal Extension of Cervical Lymph Node Metastases in Patients with Oral Squamous Cell Carcinoma Using Deep Learning Classification. Oral Radiol. 2019, 36, 148–155. [Google Scholar] [CrossRef]
  70. General Data Protection Regulation (GDPR)—Official Legal Text. Available online: https://gdpr-info.eu/ (accessed on 28 March 2022).
  71. Hulsen, T. Sharing Is Caring—Data Sharing Initiatives in Healthcare. Int. J. Environ. Res. Public Health 2020, 17, 3046. [Google Scholar] [CrossRef]
  72. Sun, C.; Ippel, L.; van Soest, J.; Wouters, B.; Malic, A.; Adekunle, O.; van den Berg, B.; Mussmann, O.; Koster, A.; van der Kallen, C.; et al. A Privacy-Preserving Infrastructure for Analyzing Personal Health Data in a Vertically Partitioned Scenario. Stud. Health Technol. Inform. 2019, 264, 373–377. [Google Scholar] [CrossRef]
  73. Gaye, A.; Marcon, Y.; Isaeva, J.; Laflamme, P.; Turner, A.; Jones, E.M.; Minion, J.; Boyd, A.W.; Newby, C.J.; Nuotio, M.L.; et al. DataSHIELD: Taking the Analysis to the Data, Not the Data to the Analysis. Int. J. Epidemiol. 2014, 43, 1929–1944. [Google Scholar] [CrossRef] [Green Version]
  74. Schwendicke, F.; Samek, W.; Krois, J. Artificial Intelligence in Dentistry: Chances and Challenges. J. Dent. Res. 2020, 99, 769–774. [Google Scholar] [CrossRef]
  75. Rodrigues, J.A.; Krois, J.; Schwendicke, F. Demystifying Artificial Intelligence and Deep Learning in Dentistry. Braz. Oral Res. 2021, 35, 1–7. [Google Scholar] [CrossRef] [PubMed]
  76. MacHoy, M.E.; Szyszka-Sommerfeld, L.; Vegh, A.; Gedrange, T.; Woźniak, K. The Ways of Using Machine Learning in Dentistry. Adv. Clin. Exp. Med. 2020, 29, 375–384. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Important milestones in the advancement of AI.
Figure 2. The working of AI in a schematic format.
Table 1. Summary of studies examining the use of artificial intelligence in dental diagnosis.

Study | Algorithm Used | Study Factor | Modality | Number of Input Data | Performance | Comparison | Outcome
Lee J et al. (2018) [27] | CNN | Dental caries | Periapical radiographs | 600 | Mean AUC—0.890 | 4 dentists | Deep CNN showed a considerably good performance in detecting dental caries in periapical radiographs.
Casalegno et al. (2019) [29] | CNN | Dental caries | Near-infrared transillumination imaging | 217 | ROC of 83.6% for occlusal caries; ROC of 84.6% for proximal caries | Dentists with clinical experience | CNN showed increased speed and accuracy in detecting dental caries.
Cantu et al. (2019) [30] | CNN | Dental caries | Bitewing radiographs | 141 | Accuracy 0.80; sensitivity 0.75; specificity 0.83 | 4 experienced dentists | The AI model was more accurate than dentists.
Radke et al. (2003) [45] | ANN | Disk displacement | Frontal-plane jaw recordings from chewing | 68 | Accuracy 86.8%; specificity 100%; sensitivity 91.8% | None | The proposed model has an acceptable level of error and an excellent cost/benefit ratio.
Park YH et al. (2021) [31] | ML | Early childhood caries | Demographic details, oral hygiene management details, maternal details | 4195 | AUROC between 0.774 and 0.785 | Traditional regression model | Both ML-based and traditional regression models showed favorable performance and can be used as a supporting tool.
Kuwana et al. (2021) [39] | CNN | Maxillary sinus lesions | Panoramic radiographs | 1174 | Diagnostic accuracy, sensitivity, and specificity were 90–91%, 81–85%, and 91–96% for maxillary sinusitis and 97–100%, 80–100%, and 100% for maxillary sinus cysts | None | The proposed deep learning model can be reliably used for detecting the maxillary sinuses and identifying lesions in them.
Murata et al. (2018) [37] | CNN | Maxillary sinusitis | Panoramic radiographs | 120 | Accuracy 87.5%; sensitivity 86.7%; specificity 88.3% | 2 experienced radiologists, 2 dental residents | The AI model can be a supporting tool for inexperienced dentists.
Kim et al. (2019) [38] | CNN | Maxillary sinusitis | Waters' view radiographs | 200 | AUC of 0.93 for the temporal and 0.88 for the geographic external test set | 5 radiologists | The AI-based model showed statistically higher performance than radiologists.
Hung KF et al. (2022) [40] | CNN | Maxillary sinusitis | Cone-beam computed tomography | 890 | AUC for detection of mucosal thickening and mucous retention cysts was 0.91 and 0.84 in low-dose and 0.89 and 0.93 in high-dose images | None | The proposed model can accurately detect mucosal thickening and mucous retention cysts in both low- and high-dose protocol CBCT scans.
Danks et al. (2021) [34] | DNN (symmetric hourglass architecture) | Periodontal bone loss | Periapical radiographs | 340 | Percentage of correct keypoints of 83.3% across all root morphologies | Asymmetric hourglass architecture, ResNet | The proposed system showed promising capability in localizing landmarks and periodontal bone loss and performed 1.7% better than the next best architecture.
Chang et al. (2020) [36] | CNN | Periodontal bone loss | Panoramic radiographs | 340 | Pixel accuracy of 0.93; Jaccard index of 0.92; Dice coefficient of 0.88 for localization of periodontal bone | None | The proposed model showed high accuracy and excellent reliability in the detection of periodontal bone loss and classification of periodontitis.
Ozden et al. (2015) [18] | ANN | Periodontal disease | Risk factors, periodontal data, and radiographic bone loss | 150 | Performance of SVM and DT was 98%; ANN was 46% | SVM and DT | SVM and DT showed good performance in the classification of periodontal disease, while ANN had the worst performance.
Devito et al. (2008) [26] | ANN | Proximal caries | Bitewing radiographs | 160 | ROC curve area of 0.884 | 25 examiners | ANN could improve the performance of diagnosing proximal caries.
Dar-Odeh et al. (2010) [20] | ANN | Recurrent aphthous ulcers | Predisposing factors and RAU status | 96 | Prediction accuracy for networks 3 and 8 was 90%; networks 4, 6, and 9, 80%; networks 1 and 7, 70%; networks 2 and 5, 60% | None | The ANN model seemed to use gender, hematologic and mycologic data, tooth brushing, and fruit and vegetable consumption for the prediction of RAU.
Hung M et al. (2019) [28] | CNN | Root caries | Data set | 5135 | Accuracy 97.1%; precision 95.1%; sensitivity 99.6%; specificity 94.3% | Trained medical personnel | Shows good performance and can be clinically implemented.
Iwasaki et al. (2015) [47] | BBN | Temporomandibular disorders | Magnetic resonance imaging | 590 | Of the 11 BBN algorithms used, path conditions with resubstitution validation and 10-fold cross-validation showed an accuracy of >99% | Necessary path condition, path condition, greedy search-and-score with Bayesian information criterion, Chow-Liu tree, Rebane-Pearl polytree, tree-augmented naïve Bayes model, maximum log-likelihood, Akaike information criterion, minimum description length, K2, and C4.5 | The proposed model can be used to predict the prognosis of TMDs.
Orhan et al. (2021) [49] | ML | Temporomandibular disorders | Magnetic resonance imaging | 214 | Prediction accuracy for condylar changes and disk displacement was 0.77 and 0.74 | Logistic regression (LR), random forest (RF), decision tree (DT), k-nearest neighbors (KNN), XGBoost, and support vector machine (SVM) | The proposed model using KNN and RF was found to be optimal for predicting TMJ pathologies.
Diniz de Lima et al. (2021) [50] | ML | Temporomandibular disorders | Infrared thermography | 74 | Semantic and radiomic-semantic associated ML feature extraction methods with an MLP classifier showed statistically good performance in detecting TMDs | KNN, SVM, MLP | An ML model associated with infrared thermography can be used for the detection of TMJ pathologies.
Bas B et al. (2012) [46] | ANN | TMJ internal derangements | Clinical symptoms and diagnoses | 219 | Sensitivity and specificity for unilateral anterior disk displacement with and without reduction were 80% & 95% and 69% & 91%; for bilateral anterior disk displacement with and without reduction, 37% & 100% and 100% & 89%, respectively | Experienced surgeon | The developed model can be used as a supportive diagnostic tool for the diagnosis of subtypes of TMJ internal derangements.
Choi et al. (2021) [48] | CNN | TMJ osteoarthritis | Panoramic radiographs | 1189 | Accuracy of 0.78, sensitivity of 0.73, and specificity of 0.82 | Oral and maxillofacial radiologist | The developed model showed performance equivalent to experts and can be used in general practices where OMFR experts or CT are not available.
Fukuda et al. (2019) [33] | CNN | Vertical root fracture | Panoramic radiographs | 60 | Precision of 0.93; recall of 0.75 | 2 radiologists and 1 endodontist | The CNN model was a promising supportive tool for the detection of vertical root fracture.
Table 2. Summary of studies examining the use of artificial intelligence in cancer diagnosis.

Study | Algorithm Used | Study Factor | Modality | Number of Input Data | Performance | Comparison | Outcome
Ariji et al. (2019) [69] | CNN | Extra-nodal extension of cervical lymph node metastases | CT images | 703 | Accuracy of 84% | 4 radiologists | The diagnostic performance of the DL model was significantly higher than that of the radiologists.
López-Janeiro et al. (2022) [42] | ML | Malignant salivary gland tumor | Primary tumor resection specimens | 115 | 84–89% of the samples were diagnosed correctly | None | The developed model can be used as a guide for the morphological approach to the diagnosis of malignant salivary gland tumors.
De Felice et al. (2021) [43] | Decision tree | Malignant salivary gland tumor | Age at diagnosis, gender, salivary gland type, histologic type, surgical margin, tumor stage, node stage, lymphovascular/perineural invasion, type of adjuvant treatment | 54 | 5-year disease-free survival was 62.1%. Important variables to predict recurrence were pathological tumor and node stage; the three partitioned groups (pN0, pT1-2 pN+, and pT3-4 pN+) showed 26%, 38%, and 75% recurrence and 73.7%, 57.1%, and 34.3% disease-free survival rates, respectively | None | The proposed model can be used to classify patients with salivary gland malignancy and predict the recurrence rate.
Ariji et al. (2019) [68] | CNN | Metastasis of cervical lymph nodes | CT images | 441 | Accuracy 78.2%; sensitivity 75.4%; specificity 81.1% | Not clear | The diagnostic performance of the CNN model is similar to that of radiologists.
Nayak et al. (2005) [58] | ANN | Normal, premalignant, and malignant conditions | Pulsed laser-induced autofluorescence spectroscopy | Not clear | Specificity and sensitivity were 100% and 96.5% | Principal component analysis | ANN showed better performance than PCA in the classification of normal, premalignant, and malignant conditions.
Shams et al. (2017) [61] | DNN | Oral cancer | Gene expression profiling | 86 | Accuracy of 96% | Support vector machine (SVM), regularized least squares (RLS), multi-layer perceptron (MLP) with backpropagation | The proposed system showed significantly higher performance and can be easily implemented.
Jeyaraj et al. (2019) [62] | CNN | Oral cancer | Hyperspectral images | 600 | Accuracy of 91.4% for benign tissue and 94.5% for normal tissue | Support vector machine and deep belief network | The proposed method can be deployed for the automatic classification of oral cancer.
Aubreville et al. (2017) [60] | CNN | Oral squamous cell carcinoma | Confocal laser endomicroscopy (CLE) images | 7894 | AUC 0.96; mean accuracy 88.3%; sensitivity 86.6%; specificity 90% | Not clear | This method seemed better than the state-of-the-art CLE recognition system.
Uthoff et al. (2018) [59] | CNN | Precancerous and cancerous lesions | Autofluorescence and white-light imaging | 170 | Sensitivity, specificity, and positive and negative predictive values ranging from 81.25% to 94.94% | None | The proposed model is a low-cost, portable, and easy-to-use system.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
