Current Applications, Opportunities, and Limitations of AI for 3D Imaging in Dental Research and Practice
Abstract
1. Introduction
2. Current Use of AI for 3D Imaging in DMFR
2.1. Automated Diagnosis of Dental and Maxillofacial Diseases
- Input image data;
- Image preprocessing;
- Selection of the region of interest (ROI);
- Segmentation of lesions;
- Extraction of selected texture features in the segmented lesions;
- Analysis of the extracted features;
- Output of the diagnosis or classification (a minimal code sketch of this workflow is shown below).
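The following is a minimal, illustrative sketch of such a texture-feature-based diagnosis pipeline. It is not the implementation of any of the cited studies; the file names, ROI coordinates, texture features, and classifier choice are placeholders chosen only to make the listed steps concrete.

```python
# Illustrative sketch of the pipeline listed above; all inputs are hypothetical.
import numpy as np
from skimage import io, exposure
from skimage.filters import threshold_otsu
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.svm import SVC

def extract_texture_features(slice_path, roi):
    """Preprocess one CBCT slice, roughly segment the lesion inside a given ROI,
    and return GLCM texture features of the segmented region."""
    img = io.imread(slice_path, as_gray=True)            # 1. input image data
    img = exposure.equalize_adapthist(img)               # 2. preprocessing (contrast enhancement)
    r0, r1, c0, c1 = roi
    roi_img = img[r0:r1, c0:c1]                          # 3. region of interest (manually chosen here)
    mask = roi_img > threshold_otsu(roi_img)             # 4. crude lesion segmentation
    lesion = np.where(mask, roi_img, 0)
    lesion_u8 = (lesion * 255).astype(np.uint8)
    glcm = graycomatrix(lesion_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # 5. selected texture features of the segmented lesion
    return [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

# 6.-7. analysis of the extracted features and diagnostic output
# (file names, ROI, and labels below are hypothetical)
paths = ["cyst_01.png", "cyst_02.png", "granuloma_01.png", "granuloma_02.png"]
labels = ["cyst", "cyst", "granuloma", "granuloma"]
features = [extract_texture_features(p, roi=(100, 228, 100, 228)) for p in paths]
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict([features[0]]))   # -> predicted diagnosis/class label
```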
2.2. Automated Localization of Anatomical Landmarks for Orthodontic and Orthognathic Treatment Planning
2.3. Automated Improvement of Image Quality
2.4. Other Applications
3. Current Use of AI for Intraoral 3D Imaging and Facial Scanning
4. Limitations of the Included Studies
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. Stone, P.; Brooks, R.; Brynjolfsson, E.; Calo, R.; Etzioni, O.; Hager, G.; Hirschberg, J.; Kalyanakrishnan, S.; Kamar, E.; Kraus, S.; et al. Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA. Available online: https://ai100.stanford.edu/2016-report (accessed on 12 March 2020).
2. Gandomi, A.; Haider, M. Beyond the hype: Big data concepts, methods, and analytics. Int. J. Inf. Manag. 2015, 35, 137–144.
3. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243.
4. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69, S36–S40.
5. Fazal, M.I.; Patel, M.E.; Tye, J.; Gupta, Y. The past, present and future role of artificial intelligence in imaging. Eur. J. Radiol. 2018, 105, 246–250.
6. Ferizi, U.; Besser, H.; Hysi, P.; Jacobs, J.; Rajapakse, C.S.; Chen, C.; Saha, P.K.; Honig, S.; Chang, G. Artificial intelligence applied to osteoporosis: A performance comparison of machine learning algorithms in predicting fragility fractures from MRI data. J. Magn. Reson. Imaging 2019, 49, 1029–1038.
7. Schuhbaeck, A.; Otaki, Y.; Achenbach, S.; Schneider, C.; Slomka, P.; Berman, D.S.; Dey, D. Coronary calcium scoring from contrast coronary CT angiography using a semiautomated standardized method. J. Cardiovasc. Comput. Tomogr. 2015, 9, 446–453.
8. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
9. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
10. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510.
11. Hung, K.; Montalvao, C.; Tanaka, R.; Kawai, T.; Bornstein, M.M. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: A systematic review. Dentomaxillofac. Radiol. 2020, 49, 20190107.
12. Leite, A.F.; Vasconcelos, K.F.; Willems, H.; Jacobs, R. Radiomics and machine learning in oral healthcare. Proteom. Clin. Appl. 2020, 14, e1900040.
13. Pauwels, R.; Araki, K.; Siewerdsen, J.H.; Thongvigitmanee, S.S. Technical aspects of dental CBCT: State of the art. Dentomaxillofac. Radiol. 2015, 44, 20140224.
14. Baysal, A.; Sahan, A.O.; Ozturk, M.A.; Uysal, T. Reproducibility and reliability of three-dimensional soft tissue landmark identification using three-dimensional stereophotogrammetry. Angle Orthod. 2016, 86, 1004–1009.
15. Hwang, J.J.; Jung, Y.H.; Cho, B.H.; Heo, M.S. An overview of deep learning in the field of dentistry. Imaging Sci. Dent. 2019, 49, 1–7.
16. Okada, K.; Rysavy, S.; Flores, A.; Linguraru, M.G. Noninvasive differential diagnosis of dental periapical lesions in cone-beam CT scans. Med. Phys. 2015, 42, 1653–1665.
17. Abdolali, F.; Zoroofi, R.A.; Otake, Y.; Sato, Y. Automated classification of maxillofacial cysts in cone beam CT images using contourlet transformation and Spherical Harmonics. Comput. Methods Programs Biomed. 2017, 139, 197–207.
18. Yilmaz, E.; Kayikcioglu, T.; Kayipmaz, S. Computer-aided diagnosis of periapical cyst and keratocystic odontogenic tumor on cone beam computed tomography. Comput. Methods Programs Biomed. 2017, 146, 91–100.
19. Lee, J.H.; Kim, D.H.; Jeong, S.N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020, 26, 152–158.
20. Ariji, Y.; Fukuda, M.; Kise, Y.; Nozawa, M.; Yanashita, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Contrast-enhanced computed tomography image assessment of cervical lymph node metastasis in patients with oral cancer by using a deep learning system of artificial intelligence. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2019, 127, 458–463.
21. Montufar, J.; Romero, M.; Scougall-Vilchis, R.J. Automatic 3-dimensional cephalometric landmarking based on active shape models in related projections. Am. J. Orthod. Dentofac. Orthop. 2018, 153, 449–458.
22. Montufar, J.; Romero, M.; Scougall-Vilchis, R.J. Hybrid approach for automatic cephalometric landmark annotation on cone-beam computed tomography volumes. Am. J. Orthod. Dentofac. Orthop. 2018, 154, 140–150.
23. Minnema, J.; van Eijnatten, M.; Hendriksen, A.A.; Liberton, N.; Pelt, D.M.; Batenburg, K.J.; Forouzanfar, T.; Wolff, J. Segmentation of dental cone-beam CT scans affected by metal artifacts using a mixed-scale dense convolutional neural network. Med. Phys. 2019, 46, 5027–5035.
24. Ghazvinian Zanjani, F.; Anssari Moin, D.; Verheij, B.; Claessen, F.; Cherici, T.; Tan, T.; de With, P.H.N. Deep learning approach to semantic segmentation in 3D point cloud intra-oral scans of teeth. MIDL 2019, 102, 557–571.
25. Lian, C.; Wang, L.; Wu, T.H.; Wang, F.; Yap, P.T.; Ko, C.C.; Shen, D. Deep multi-scale mesh feature learning for automated labeling of raw dental surfaces from 3D intraoral scanners. IEEE Trans. Med. Imaging 2020, in press.
26. Knoops, P.G.M.; Papaioannou, A.; Borghi, A.; Breakey, R.W.F.; Wilson, A.T.; Jeelani, O.; Zafeiriou, S.; Steinbacher, D.; Padwa, B.L.; Dunaway, D.J.; et al. A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Sci. Rep. 2019, 9, 13597.
27. Liu, W.; Li, M.; Yi, L. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework. Autism Res. 2016, 9, 888–998.
28. Orhan, K.; Bayrakdar, I.S.; Ezhov, M.; Kravtsov, A.; Ozyurek, T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int. Endod. J. 2020, 53, 680–689.
29. Abdolali, F.; Zoroofi, R.A.; Otake, Y.; Sato, Y. A novel image-based retrieval system for characterization of maxillofacial lesions in cone beam CT images. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 785–796.
30. Johari, M.; Esmaeili, F.; Andalib, A.; Garjani, S.; Saberkari, H. Detection of vertical root fractures in intact and endodontically treated premolar teeth by designing a probabilistic neural network: An ex vivo study. Dentomaxillofac. Radiol. 2017, 46, 20160107.
31. Kann, B.H.; Aneja, S.; Loganadane, G.V.; Kelly, J.R.; Smith, S.M.; Decker, R.H.; Yu, J.B.; Park, H.S.; Yarbrough, W.G.; Malhotra, A.; et al. Pretreatment identification of head and neck cancer nodal metastasis and extranodal extension using deep learning neural networks. Sci. Rep. 2018, 8, 14036.
32. Kise, Y.; Ikeda, H.; Fujii, T.; Fukuda, M.; Ariji, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Preliminary study on the application of deep learning system to diagnosis of Sjögren's syndrome on CT images. Dentomaxillofac. Radiol. 2019, 48, 20190019.
33. Cheng, E.; Chen, J.; Yang, J.; Deng, H.; Wu, Y.; Megalooikonomou, V.; Gable, B.; Ling, H. Automatic Dent-landmark detection in 3-D CBCT dental volumes. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2011, 2011, 6204–6207.
34. Shahidi, S.; Bahrampour, E.; Soltanimehr, E.; Zamani, A.; Oshagh, M.; Moattari, M.; Mehdizadeh, A. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images. BMC Med. Imaging 2014, 14, 32.
35. Torosdagli, N.; Liberton, D.K.; Verma, P.; Sincan, M.; Lee, J.S.; Bagci, U. Deep geodesic learning for segmentation and anatomical landmarking. IEEE Trans. Med. Imaging 2019, 38, 919–931.
36. Park, J.; Hwang, D.; Kim, K.Y.; Kang, S.K.; Kim, Y.K.; Lee, J.S. Computed tomography super-resolution using deep convolutional neural network. Phys. Med. Biol. 2018, 63, 145011.
37. ter Haar Romeny, B.M. A deeper understanding of deep learning. In Artificial Intelligence in Medical Imaging: Opportunities, Applications and Risks, 1st ed.; Ranschaert, E.R., Morozov, S., Algra, P.R., Eds.; Springer: Berlin, Germany, 2019; pp. 25–38.
38. Miki, Y.; Muramatsu, C.; Hayashi, T.; Zhou, X.; Hara, T.; Katsumata, A.; Fujita, H. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput. Biol. Med. 2017, 80, 24–29.
39. Abdolali, F.; Zoroofi, R.A.; Otake, Y.; Sato, Y. Automatic segmentation of maxillofacial cysts in cone beam CT images. Comput. Biol. Med. 2016, 72, 108–119.
40. Scarfe, W.C.; Azevedo, B.; Toghyani, S.; Farman, A.G. Cone beam computed tomographic imaging in orthodontics. Aust. Dent. J. 2017, 62, 33–50.
41. Bornstein, M.M.; Yeung, W.K.A.; Montalvao, C.; Colsoul, N.; Parker, Q.A.; Jacobs, R. Facts and Fallacies of Radiation Risk in Dental Radiology. Available online: http://facdent.hku.hk/docs/ke/2019_Radiology_KE_booklet_en.pdf (accessed on 12 March 2020).
42. Yeung, A.W.K.; Jacobs, R.; Bornstein, M.M. Novel low-dose protocols using cone beam computed tomography in dental medicine: A review focusing on indications, limitations, and future possibilities. Clin. Oral Investig. 2019, 23, 2573–2581.
43. Tuzoff, D.V.; Tuzova, L.N.; Bornstein, M.M.; Krasnov, A.S.; Kharchenko, M.A.; Nikolenko, S.I.; Sveshnikov, M.M.; Bednenko, G.B. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac. Radiol. 2019, 48, 20180051.
44. Tomita, Y.; Uechi, J.; Konno, M.; Sasamoto, S.; Iijima, M.; Mizoguchi, I. Accuracy of digital models generated by conventional impression/plaster-model methods and intraoral scanning. Dent. Mater. J. 2018, 37, 628–633.
45. Kim, T.; Cho, Y.; Kim, D.; Chang, M.; Kim, Y.J. Tooth segmentation of 3D scan data using generative adversarial networks. Appl. Sci. 2020, 10, 490.
46. Morey, J.M.; Haney, N.M.; Kim, W. Applications of AI beyond image interpretation. In Artificial Intelligence in Medical Imaging: Opportunities, Applications and Risks, 1st ed.; Ranschaert, E.R., Morozov, S., Algra, P.R., Eds.; Springer: Berlin, Germany, 2019; pp. 129–144.
47. Chen, Y.W.; Stanley, K.; Att, W. Artificial intelligence in dentistry: Current applications and future perspectives. Quintessence Int. 2020, 51, 248–257.
Author (Year) | Application | Imaging Modality | AI Technique | Image Data Set Used to Develop the AI Model | Independent Testing Image Data Set / Validation Technique | Performance |
---|---|---|---|---|---|---|
Diagnosis of Dental and Maxillofacial Diseases | ||||||
Okada [16] (2015) | Diagnosis of periapical cysts and granulomas | CBCT | LDA | 28 scans from patients with periapical cysts or granulomas | 7-fold CV | 94.1% (accuracy) |
Abdolali [17] (2017) | Diagnosis of radicular cysts, dentigerous cysts, and keratocysts | CBCT | SVM; SDA | 96 scans from patients with radicular cysts, dentigerous cysts, or keratocysts | 3-fold CV | 94.29–96.48% (accuracy) |
Yilmaz [18] (2017) | Diagnosis of periapical cysts and keratocysts | CBCT | k-NN; Naïve Bayes; Decision tree; Random forest; NN; SVM | 50 scans from patients with cysts or tumors; 25 scans from patients with cysts or tumors | 10-fold CV/LOOCV; 25 scans from patients with cysts or tumors | 94–100% (accuracy) |
Lee [19] (2020) | Diagnosis of periapical cysts, dentigerous cysts, and keratocysts | Panoramic radiography and CBCT | CNN | 912 panoramic images and 789 CBCT scans | 228 panoramic images and 197 CBCT scans | Panoramic radiography: 0.847 (AUC); 88.2% (sensitivity); 77.0% (specificity). CBCT: 0.914 (AUC); 96.1% (sensitivity); 77.1% (specificity) |
Orhan [28] (2020) | Diagnosis of periapical pathology | CBCT | CNN | 3900 scans acquired using multiple FOVs from 2800 patients with periapical lesions and 1100 subjects without periapical lesions | 109 scans acquired using multiple FOVs from 153 patients with periapical lesions | 92.8% (accuracy) |
Abdolali [29] (2019) | Diagnosis of radiolucent lesions, maxillary sinus perforation, unerupted teeth, and root fractures | CBCT | Symmetry-based analysis model | 686 scans acquired using a large FOV (12 × 15 × 15 cm³), collected from several dental imaging centers in Iran | 459 scans acquired using a large FOV (12 × 15 × 15 cm³), collected from several dental imaging centers in Iran | 0.85–0.92 (DSC) |
Johari [30] (2017) | Detection of vertical root fractures | Periapical radiography and CBCT | CNN | 180 periapical radiographs and 180 CBCT scans of the extracted teeth | 60 periapical radiographs and 60 CBCT scans of the extracted teeth | Periapical radiography: 70.0% (accuracy); 97.8% (sensitivity); 67.6% (specificity). CBCT: 96.6% (accuracy); 93.3% (sensitivity); 100% (specificity) |
Kise [32] (2019) | Diagnosis of Sjögren’s syndrome | CT | CNN | 400 scans (200 from 20 SjS patients and 200 from 20 control subjects) acquired using a large FOV | 100 scans (50 from 5 SjS patients and 50 from 5 control subjects) acquired using a large FOV | 96.0% (accuracy); 100% (sensitivity); 92.0% (specificity) |
Kann [31] (2018) | Detection of lymph node metastasis and extranodal extension in patients with head and neck cancer | Contrast-enhanced CT | CNN | Images of 2875 CT-segmented lymph node samples with correlating pathology labels | Images of 131 lymph nodes (76 negative and 55 positive) | 0.91 (AUC) |
Ariji [20] (2019) | Detection of lymph node metastasis in patients with oral cancer | Contrast-enhanced CT | CNN | Images of 441 lymph nodes (314 negative and 127 positive) from 45 patients | 5-fold CV | 78.2% (accuracy); 75.4% (sensitivity); 81.0% (specificity); 0.80 (AUC) |
Localization of Anatomical Landmarks for Orthodontic and Orthognathic Treatment Planning | ||||||
Cheng [33] (2011) | Localization of the odontoid process of the second vertebra | CBCT | Random forest | 50 scans | 23 scans | 3.15 mm (mean deviation) |
Shahidi [34] (2014) | Localization of 14 anatomical landmarks | CBCT | Feature-based and voxel similarity-based algorithms | 8 scans acquired using a large FOV from subjects aged 10–45 years | 20 scans acquired using a large FOV from subjects aged 10–45 years | 3.40 mm (mean deviation) |
Montufar [21] (2018) | Localization of 18 anatomical landmarks | CBCT | Active shape model | 24 scans acquired using a large FOV | LOOCV | 3.64 mm (mean deviation) |
Montufar [22] (2018) | Localization of 18 anatomical landmarks | CBCT | Active shape model | 24 scans acquired using a large FOV | LOOCV | 2.51 mm (mean deviation) |
Torosdagli [35] (2019) | Localization of 9 anatomical landmarks | CBCT | CNN | 50 scans | 48 scans | 0.9382 (DSC); 93.42% (sensitivity); 99.97% (specificity) |
Improvement of Image Quality | ||||||
Park [36] (2018) | Improvement of image resolution | CT | CNN | 52 scans | 13 scans | The CNN can generate high-resolution images from low-resolution input images |
Minnema [23] (2019) | Segmentation of CBCT scans affected by metal artifacts | CBCT | CNN | 20 scans | Leave-2-out CV | The CNN can accurately segment bony structures in CBCT scans affected by metal artifacts |
Other | ||||||
Miki [38] (2017) | Tooth classification | CBCT | CNN | 42 scans with FOV diameters ranging from 5.1 to 20 cm | 10 scans with FOV diameters ranging from 5.1 to 20 cm | 88.8% (accuracy) |
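The performance column above reports metrics such as accuracy, sensitivity, specificity, area under the ROC curve (AUC), and the Dice similarity coefficient (DSC). As a minimal sketch, the snippet below shows how these commonly reported metrics are typically computed; the toy labels, scores, and masks are hypothetical and are not data from any cited study.

```python
# Sketch of standard evaluation metrics used in the table; toy data only.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])                     # ground truth (lesion = 1)
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])     # model probabilities
y_pred  = (y_score >= 0.5).astype(int)                           # thresholded predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy    :", accuracy_score(y_true, y_pred))
print("sensitivity :", tp / (tp + fn))                            # true-positive rate
print("specificity :", tn / (tn + fp))                            # true-negative rate
print("AUC         :", roc_auc_score(y_true, y_score))

def dice(seg, gt):
    """Dice similarity coefficient (DSC) between two binary segmentation masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

seg_mask = np.array([[1, 1, 0], [0, 1, 0]])
gt_mask  = np.array([[1, 0, 0], [0, 1, 1]])
print("DSC         :", dice(seg_mask, gt_mask))                  # 2*2 / (3 + 3) ≈ 0.667
```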
Author (Year) | Application | Imaging Modality | AI Technique | Image Data Set Used to Develop the AI Model | Independent Testing Image Data Set/Validation Technique | Performance |
---|---|---|---|---|---|---|
Ghazvinian Zanjani [24] (2019) | Tooth segmentation | Intraoral scanning | CNN | 120 scans, comprising 60 upper jaws and 60 lower jaws. | 5-fold CV | 0.94 (intersection over union score) |
Kim [45] (2020) | Tooth segmentation | Intraoral scanning | Generative adversarial network | 10,000 cropped images | Approximately 350 cropped images | An average improvement of 0.004 mm in tooth segmentation |
Lian [25] (2020) | Tooth labelling | Intraoral scanning | CNN | 30 scans of upper jaws | 5-fold CV | 0.894 to 0.970 (DSC) |
Liu [27] (2016) | Identification of Autism Spectrum Disorder | Facial scanning | SVM | 87 scans from children with and without Autism Spectrum Disorder | LOOCV | 88.51% (accuracy) |
Knoops [26] (2019) | Diagnosis and planning in plastic and reconstructive surgery | Facial scanning | Machine-learning-based 3D morphable model | 4261 scans from healthy subjects and orthognathic patients | LOOCV | Diagnosis: 95.5% (sensitivity); 95.2% (specificity). Surgical simulation: 1.1 ± 0.3 mm (accuracy) |
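Several studies in both tables validate their models with k-fold cross-validation (CV) or leave-one-out cross-validation (LOOCV) rather than an independent test set. The sketch below illustrates how these two schemes are typically run; the feature matrix, labels, and classifier are hypothetical placeholders, not the validation code of any cited study.

```python
# Illustrative k-fold CV and LOOCV on a hypothetical feature matrix.
import numpy as np
from sklearn.model_selection import cross_val_score, KFold, LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))        # 50 hypothetical scans, 8 features each
y = rng.integers(0, 2, size=50)     # hypothetical binary diagnostic labels

clf = SVC()
acc_10fold = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
acc_loocv  = cross_val_score(clf, X, y, cv=LeaveOneOut())

print("10-fold CV accuracy:", acc_10fold.mean())   # mean over the 10 held-out folds
print("LOOCV accuracy     :", acc_loocv.mean())    # mean over 50 single-sample folds
```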
Author (Year) | Conclusion | Limitations (Risk of Bias *) |
---|---|---|
Okada [16] (2015) | The proposed model may assist clinicians to accurately differentiate periapical lesions. | |
Abdolali [17] (2017) | The proposed model can improve the accuracy of the diagnosis of dentigerous cysts, radicular cysts, and keratocysts, and may have a significant impact on future AI diagnostic systems. | |
Yilmaz [18] (2017) | Periapical cysts and keratocysts can be classified with high accuracy with the proposed model. It can also contribute to the field of automated diagnosis of periapical lesions. | |
Lee [19] (2020) | Periapical cysts, dentigerous cysts, and keratocysts can be effectively detected and diagnosed with the proposed deep CNN algorithm, but the diagnosis of these lesions using radiological data alone, without histological examination, is still challenging. | |
Orhan [28] (2020) | The proposed deep learning system can be useful for detection and volumetric measurement of periapical lesions. The diagnostic performance was comparable to that of an oral and maxillofacial radiologist. | |
Abdolali [29] (2019) | The proposed system is effective and can automatically diagnose various maxillofacial lesions/conditions. It can facilitate the introduction of content-based image retrieval in clinical CBCT applications. | |
Johari [30] (2017) | The proposed deep learning model can be used for the diagnosis of vertical root fractures on CBCT images of both endodontically treated and vital teeth. With the aid of the model, CBCT images are more effective than periapical radiographs. | |
Kise [32] (2019) | The deep learning model showed high diagnostic accuracy for SjS, comparable to that of experienced radiologists. It is suggested that the model could be used to assist the diagnosis of SjS, especially for inexperienced radiologists. | |
Kann [31] (2018) | The proposed deep learning model has the potential for use as a clinical decision-making tool to help guide head and neck cancer patient management. | |
Ariji [20] (2019) | The proposed deep learning model yielded diagnostic results comparable to those of radiologists, which suggests that the model may be valuable for diagnostic support. | |
Cheng [33] (2011) | The proposed model can efficiently assist clinicians in locating the odontoid process of the second vertebra. | |
Shahidi [34] (2014) | The localization performance of the proposed model was acceptable, with a mean deviation of 3.40 mm for all automatically identified landmarks. | |
Montufar [21] (2018) | The proposed algorithm for automatically locating landmarks on CBCT volumes seems to be useful for 3D cephalometric analysis. | |
Montufar [22] (2018) | The proposed hybrid algorithm for automatic landmarking on CBCT volumes seems to be potentially useful for 3D cephalometric analysis. | |
Torosdagli [35] (2019) | The proposed deep learning algorithm allows for orthodontic analysis in patients with craniofacial deformities and exhibits excellent performance. | |
Park [36] (2018) | The proposed deep learning algorithm is useful for super-resolution and de-noising. | |
Minnema [23] (2019) | The proposed deep learning algorithm allows metal artifacts to be accurately classified as background noise, and teeth and bony structures to be segmented. | |
Miki [38] (2017) | The proposed deep learning algorithm for classifying tooth types on CBCT scans yielded high performance. It can be effectively used for automated preparation of dental charts and might be useful in forensic identification. | |
Ghazvinian Zanjani [24] (2019) | The proposed end-to-end deep learning framework for the segmentation of individual teeth and the gingiva from intraoral scans outperforms state-of-the-art networks. | |
Kim [45] (2020) | The proposed automated segmentation method for full-arch intraoral scan data is as accurate as a manual segmentation method. This tool could efficiently facilitate the digital setup process in orthodontic treatment. | |
Lian [25] (2020) | The proposed end-to-end deep neural network to automatically label individual teeth on raw dental surfaces acquired by 3D intraoral scanners outperforms the state-of-the-art methods for 3D shape segmentation. | |
Liu [27] (2016) | The proposed machine learning algorithm based on face scanning patterns could support current clinical practice of the screening and diagnosis of ASD. | |
Knoops [26] (2019) | The proposed model can automatically analyze facial shape features and provide patient-specific treatment plans from a 3D facial scan. This may benefit the clinical decision-making process and improve clinical understanding of face shape as a marker for plastic and reconstructive surgery. | |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).