Article

Precision and Accuracy Assessment of Cephalometric Analyses Performed by Deep Learning Artificial Intelligence with and without Human Augmentation

Sumer Panesar, Alyssa Zhao, Eric Hollensbe, Ariel Wong, Surya Sruthi Bhamidipalli, George Eckert, Vinicius Dutra and Hakan Turkkahraman
1 Department of Orthodontics and Oral Facial Genetics, Indiana University School of Dentistry, Indiana University Purdue University at Indianapolis, Indianapolis, IN 46202, USA
2 Independent Researcher, 3415 S Lafountain St #I, Kokomo, IN 46902, USA
3 Department of Biostatistics and Health Data Science, Indiana University School of Medicine, Indianapolis, IN 46202, USA
4 Department of Oral Pathology, Medicine and Radiology, Indiana University School of Dentistry, Indiana University Purdue University at Indianapolis, Indianapolis, IN 46202, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(12), 6921; https://doi.org/10.3390/app13126921
Submission received: 4 May 2023 / Revised: 31 May 2023 / Accepted: 6 June 2023 / Published: 8 June 2023
(This article belongs to the Special Issue Present and Future of Orthodontics)

Abstract

The aim was to assess the precision and accuracy of cephalometric analyses performed by artificial intelligence (AI) with and without human augmentation. Four dental professionals with varying experience levels identified 31 landmarks on 30 cephalometric radiographs twice. These landmarks were re-identified by all examiners with the aid of AI. Precision and accuracy were assessed by using intraclass correlation coefficients (ICCs) and mean absolute errors (MAEs). AI revealed the highest precision, with a mean ICC of 0.97, while the dental student had the lowest (mean ICC: 0.77). The AI/human augmentation method significantly improved the precision of the orthodontist, resident, dentist, and dental student by 3.26%, 2.17%, 19.75%, and 23.38%, respectively. The orthodontist demonstrated the highest accuracy with an MAE of 1.57 mm/°. The AI/human augmentation method improved the accuracy of the orthodontist, resident, dentist, and dental student by 12.74%, 19.10%, 35.69%, and 33.96%, respectively. AI demonstrated excellent precision and good accuracy in automated cephalometric analysis. The precision and accuracy of the examiners with the aid of AI improved by 10.47% and 27.27%, respectively. The AI/human augmentation method significantly improved the precision and accuracy of less experienced dental professionals to the level of an experienced orthodontist.

1. Introduction

Since the development of cephalometric radiography by Broadbent and Hofrath in 1931 [1], a multitude of lateral cephalometric analyses have been developed for the purpose of orthodontic diagnosis and treatment planning. These analyses allow orthodontists to quantify and analyze relationships in the craniofacial region between the skeleton, dentition, and soft tissue. However, an accurate orthodontic diagnosis derived from a cephalometric radiograph requires accurate identification of key landmarks. Current methods of landmark identification include hand tracing with acetate paper or the use of computer software to manually mark digitized cephalometric radiographs, both of which have been shown to be equally reliable [2].
Regardless of the method of cephalometric tracing, manual identification of anatomical structures presents inherent challenges. Factors such as differences in training and radiograph quality can cause variations in intra-examiner and inter-examiner reliability [3]. Additionally, reliability varies depending on the landmark being identified. The landmarks condylion, gnathion, orbitale, and anterior nasal spine tended to have the lowest inter-examiner reliability among experienced dental radiologists and orthodontists [4]. Advances in computing technology have created the potential to automate this process through artificial intelligence (AI).
AI refers to the development of computer systems that can perform tasks that would typically require human intelligence. These tasks include, but are not limited to, learning, reasoning, problem-solving, perception, and language understanding. AI systems often employ algorithms and models to process and interpret data, enabling them to make informed decisions or predictions. In dentistry, AI has been used in a variety of applications, such as assisting in the diagnosis of vertical root fractures, the diagnosis of proximal caries, and the localization of root canals [5]. One area in which AI has been used in orthodontics is automatic lateral cephalometric landmark identification. Pioneered by Cohen in 1984, preliminary attempts to locate landmarks using AI were limited to easily identifiable landmarks such as sella and menton [6]. Since Cohen’s attempt to automate landmark identification, several AI methods have been developed to increase accuracy and precision. Success rates of these methods, defined as the percentage of landmarks for which the AI-marked position deviates by less than 2 mm from the expert-marked position, have varied from 35% to 84.7% depending on the methodology [5]. Convolutional neural networks (CNNs), a type of deep-learning algorithm, have demonstrated some of the highest accuracies in automatic lateral cephalometric landmark identification [7]. The CNN You-Only-Look-Once version 3 (YOLOv3) is a frontrunner in this respect [8,9].
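The 2 mm success criterion used throughout these studies can be written out explicitly. The short sketch below is purely illustrative; the coordinates and the variable names are assumptions and do not come from any of the cited implementations.

```python
import numpy as np

# Hypothetical AI-predicted and expert-marked landmark coordinates (x, y) in mm.
ai_points = np.array([[10.2, 35.1], [48.7, 60.3], [75.0, 20.4]])
expert_points = np.array([[10.0, 34.0], [50.1, 59.8], [72.1, 21.9]])

# Radial (Euclidean) error for each landmark.
radial_errors = np.linalg.norm(ai_points - expert_points, axis=1)

# A landmark is counted as a "success" if its radial error is below 2 mm.
success_rate = np.mean(radial_errors < 2.0) * 100
print(f"Radial errors (mm): {np.round(radial_errors, 2)}")
print(f"Success rate: {success_rate:.1f}%")
```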
More recent studies have attempted to improve upon the precision and accuracy of the YOLOv3 algorithm, albeit with limited success. Yao developed a customized CNN in PyTorch (Facebook Artificial Intelligence Research; Facebook, Menlo Park, CA, USA) and found a mean radial error of about 1 mm, averaged among 37 landmarks [10]. However, the mean radial errors varied significantly, ranging from 0.5 mm for pronasale to 2 mm for pogonion [10]. Jeon utilized the CNN-based CephX software and found significant differences between AI and manual measurements for the saddle angle, U1-NA (mm), and L1-NB (mm) measurements [11]. Manual adjustment of landmarks identified by AI was recommended for further improvement in accuracy [11]. A systematic review conducted to evaluate the accuracy of automatic cephalometric landmark identification concluded that most CNNs demonstrated accuracy that matched that of an experienced orthodontist for many measurements [12].
In addition to two-dimensional landmark identification, studies have been conducted to evaluate the accuracy of AI in landmark identification in three dimensions. Bao evaluated the accuracy of cephalometric landmark identification with AI on lateral cephalometric radiographs generated from 3D cone-beam computed tomography (CBCT) images. It was discovered that 15 of 23 cephalometric measurements were within the clinically acceptable range, or within 2 mm or degrees from the gold standard [13]. A meta-analysis by Serafin evaluated the accuracy of 3D landmark identification on CBCT images using only recent studies (from 2020 to 2022). As technology advanced, the mean error improved by over 2 mm over the 2-year period [14]. Despite the improvements, AI has fared better in 2D landmark identification compared to 3D, likely due to smaller sample sizes and increased computational complexity in CBCT studies [12].
Although cephalometric landmark identification with AI has improved in the last few decades, a few limitations still exist. Most studies have used far fewer than the recommended 2300 training images needed for the AI to achieve the same level of accuracy as human examiners [15]. Additionally, few studies have used AI trained on images from multiple types of X-ray machines, which limits the real-world clinical application of the software [16]. A study by Silva et al. addressed these limitations by training a CNN named CEFBOT (RadioMemory Ltd., Belo Horizonte, Brazil) with over 250,000 images from various X-ray sources to automatically perform Arnett’s analysis measurements [17].
AI-based fully automated decision-making processes put machines in the driver’s seat and aim to replace human decisions. However, can machines and humans work together in making better decisions? This new concept is defined as AI/human augmentation, where the goal is to improve human decision-making with machine insights and recommendations. Although previous studies show a promising future for AI-based automatic cephalometric landmark identification and analysis, machines have not yet shown the ability to outperform humans. On average, 80% of landmarks are accurately identified by AI alone within 2 mm of the ideal location [18]. The errors in the remaining 20% of landmarks can drastically affect cephalometric angular and linear measurements to the point that the results are not of diagnostic value. The ability to manually alter the location of the inaccurate landmarks could result in significantly improved accuracy levels. However, there is a lack of studies in the literature that specifically evaluate the extent to which clinicians with different levels of experience can benefit from this collaborative approach. To the best of our knowledge, this is the first study to assess the impact of AI on the precision and accuracy of cephalometric analyses conducted by examiners with varying levels of experience. The objective of this study was to evaluate the precision and accuracy of cephalometric analyses performed using AI alone and in combination with human augmentation, and to determine whether this collaboration leads to varying levels of improvement among examiners with different levels of experience. Our hypothesis was that the least experienced clinicians would derive the greatest benefit from AI/human augmentation in cephalometric analyses.

2. Materials and Methods

This study was reviewed and marked as IRB exempt by Indiana University Institutional Review Board (Protocol:14334) on 14 March 2022.

2.1. Sample Size Justification

Prior to the start of the study, a power analysis was conducted to determine the sample size. With a sample size of 30 images, the width of the 95% confidence interval for the intraclass correlation coefficients (ICCs) ranged from 0.08 to 0.28 around the estimated ICC, assuming the ICC was between 0.8 and 0.95. In addition, the width of the 95% confidence interval for the percentage of measurements within 1.5 mm of the ideal ranged between 20% and 30% around the estimated percentage, assuming the percentage was between 75% and 95%.

2.2. Data Collection

Thirty lateral cephalograms were randomly selected from a pool of adult (age 18 or older) orthodontic patients treated at the Indiana University School of Dentistry (IUSD). Inclusion criteria were a class I skeletal relationship (defined as an ANB angle between 0.5° and 5.0°) and an Angle class I dental relationship. Patients were excluded if they had craniofacial anomalies, missing or unerupted permanent incisors, missing or unerupted permanent molars, or poor-quality initial cephalometric radiographs. The cephalometric radiographs were saved as JPEG files at a resolution of 200 dpi.
A calibration session led by an orthodontist with 25 years of experience was conducted to clarify the cephalometric landmark identification process and to define each of the landmarks. Four examiners, including an orthodontist with two years of experience, a second-year orthodontics resident, a fourth-year dental student, and a general dentist with one year of experience, each separately and manually identified 31 cephalometric landmarks on the 30 radiographs (Figure 1). The same examiners retraced all radiographs after a period of one week in a randomized order. After completion of the manual tracings, the examiners traced the same cephalometric radiographs with the aid of a deep-learning AI (AI/human augmentation method). For these tracings, AI first located all of the landmarks, and the examiners later adjusted points as needed. The AI/human augmentation method tracings were completed again after one week in a randomized order. Additionally, AI alone identified each of the landmarks without any adjustments and repeated the process one week later.
The gold standard for landmark positions was established through the collaboration of an orthodontist and a dental radiologist, each with 25 years of experience in their respective fields. Any disagreement was resolved by locating the point at the midpoint, both vertically and horizontally, between the two experts’ selections.
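As a concrete illustration of this consensus rule, a disagreement is resolved by averaging the two experts' picks in both coordinates; the values below are hypothetical.

```python
# Hypothetical picks (x, y) in mm for the same landmark by the two experts.
orthodontist_pick = (42.3, 57.8)
radiologist_pick = (43.1, 56.2)

# Consensus rule used in this study: take the midpoint horizontally and vertically.
gold_standard = tuple((a + b) / 2 for a, b in zip(orthodontist_pick, radiologist_pick))
print(gold_standard)  # (42.7, 57.0)
```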

2.3. Software

RadioCef software v.3 (RadioMemory Ltd., Belo Horizonte, Brazil) with CEFBOT module was used for automated cephalometric landmark identification. This software uses four subsystems to locate cephalometric landmarks. Subsystem 1 is a convolutional neural network (CNN) that processes radiographs into probability maps of cephalometric landmark positions. Subsystem 2 is another CNN that partitions radiographs by vectorizing skeletal and soft tissue borders and estimates the positions of cranial landmarks. Subsystem 3 combines the two CNN subsystems to estimate the position of landmarks, whereas subsystem 4 acts as a quality check to validate the previous results [17]. The CEFBOT was trained with substantial cephalometric data comprising over 250,000 cephalometric images from multiple X-ray sources and providers [17].
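The subsystem description above amounts to a coarse pipeline: CNNs produce per-landmark probability maps, their coordinate estimates are fused, and a final check validates the output. The following sketch is only a schematic illustration of that idea, not CEFBOT's actual implementation; every function name, the argmax decoding, the simple averaging fusion, and the crude validity test are assumptions made for illustration.

```python
import numpy as np

def heatmap_to_coords(prob_maps: np.ndarray) -> np.ndarray:
    """Convert per-landmark probability maps of shape (L, H, W) into (L, 2)
    (x, y) pixel coordinates by taking each map's maximum response."""
    coords = []
    for pmap in prob_maps:
        y, x = np.unravel_index(np.argmax(pmap), pmap.shape)
        coords.append((x, y))
    return np.array(coords, dtype=float)

def fuse(coords_a: np.ndarray, coords_b: np.ndarray) -> np.ndarray:
    """Naive stand-in for the fusion step: average the two candidate estimates."""
    return (coords_a + coords_b) / 2.0

def quality_check(coords: np.ndarray, image_shape=(64, 64)) -> bool:
    """Crude stand-in for the validation step: all landmarks finite and inside the image."""
    h, w = image_shape
    return bool(np.all(np.isfinite(coords)) and
                np.all((coords[:, 0] < w) & (coords[:, 1] < h)))

# Random arrays standing in for the outputs of the two CNN subsystems (31 landmarks, 64x64 maps).
rng = np.random.default_rng(0)
estimate = fuse(heatmap_to_coords(rng.random((31, 64, 64))),
                heatmap_to_coords(rng.random((31, 64, 64))))
print("Estimate passes quality check:", quality_check(estimate))
```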

2.4. Data Analyses

Using the traced landmarks, 26 cephalometric measurements were calculated for each traced radiograph (Supplementary Table S1). Precision was evaluated by comparing the initial tracing to the final tracing completed after a period of one week for each examiner, with and without the aid of AI. The accuracy of each measurement was evaluated by comparing each examiner’s values, with and without the aid of AI, to the gold standard values through the mean absolute error (MAE). MAEs were expressed in millimeters or degrees depending on whether the measurement was linear or angular.
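For a given measurement and examiner, the MAE is the average absolute difference from the gold standard across the traced radiographs. The following minimal sketch illustrates the calculation with hypothetical SNA values rather than study data.

```python
import numpy as np

# Hypothetical SNA angles (degrees) for 5 radiographs: one examiner vs. the gold standard.
examiner_sna = np.array([82.1, 79.4, 84.0, 81.2, 77.9])
gold_sna = np.array([81.0, 80.1, 82.5, 81.0, 79.3])

mae = np.mean(np.abs(examiner_sna - gold_sna))
print(f"MAE for SNA: {mae:.2f} degrees")
```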

2.5. Statistical Analyses

For both the precision and accuracy analyses, MAEs with 95% confidence intervals were provided for the differences between the gold standard and each measurement. For the precision analysis, ICCs and Bland–Altman plots were used to measure the agreement between the initial and final measurements. One-way analysis of variance (ANOVA) with a random effect was used to test for differences between the 9 examiner groups. All analyses were performed using SAS 9.4 software (SAS Institute, Cary, NC, USA).
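For readers who wish to reproduce the agreement statistic, the sketch below computes a two-way, single-measure consistency ICC (Shrout-Fleiss ICC(3,1)) between two tracing sessions. The specific ICC form and the toy values are illustrative assumptions and may differ from the exact SAS model used in this study.

```python
import numpy as np

def icc_consistency(x: np.ndarray) -> float:
    """Two-way, single-measure, consistency ICC (Shrout-Fleiss ICC(3,1)).
    x has shape (n_subjects, k_sessions)."""
    n, k = x.shape
    grand = x.mean()
    ss_subjects = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_sessions = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_sessions
    bms = ss_subjects / (n - 1)            # between-subjects mean square
    ems = ss_error / ((n - 1) * (k - 1))   # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)

# Toy data: SNB (degrees) for 6 radiographs traced in two sessions by one examiner.
sessions = np.array([
    [78.2, 78.5],
    [80.1, 79.8],
    [75.4, 75.9],
    [82.3, 82.0],
    [77.0, 77.4],
    [79.6, 79.5],
])
print(f"ICC(3,1): {icc_consistency(sessions):.3f}")
```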

3. Results

3.1. Precision Analysis

Table 1 shows the ICCs of the cephalometric measurements for each group. Mean ICC values were also calculated for all groups of examiners by averaging the individual cephalometric measurement ICC values. According to the results, AI showed excellent precision (ICCs > 0.90). The precision of the examiners without the aid of AI generally followed their level of experience and exposure to the cephalometric analyses. The orthodontist, resident, dentist, and dental student had a mean ICC of 0.92, 0.92, 0.81, and 0.77, respectively. With the aid of AI, the precision of the orthodontist, resident, dentist, and dental student improved by 3.26%, 2.17%, 19.75%, and 23.38%, respectively (Figure 2). Overall, the precision of the examiners improved by 10.47% with the aid of AI. As we expected, the least experienced clinicians benefitted the most from the help of AI.
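The improvement percentages reported above correspond to the relative change in the mean ICCs, (ICC with AI - ICC without AI) / ICC without AI; a quick check with the rounded group means from Table 1 reproduces them.

```python
mean_icc = {  # (without AI, with AI), rounded group means from Table 1
    "orthodontist": (0.92, 0.95),
    "resident":     (0.92, 0.94),
    "dentist":      (0.81, 0.97),
    "student":      (0.77, 0.95),
    "all examiners": (0.86, 0.95),
}
for group, (without_ai, with_ai) in mean_icc.items():
    gain = (with_ai - without_ai) / without_ai * 100
    print(f"{group}: {gain:.2f}% improvement in precision")
```

The printed values match the percentages quoted above and shown in Figure 2.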
Without the aid of AI, the cephalometric measurements that yielded moderate precision were maxillary length, U1-NA (mm), SNA, ANB, and convexity angles. With the AI/human augmentation method, 23 out of 26 measurements showed excellent precision, while U1-NA (mm), ANB, and convexity angles improved to good precision (Table 1).
Table 2 shows the results of the one-way ANOVA analysis, with a random effect used to test for the differences in precision between the methods. A statistically significant difference was found in 13 out of 26 measurements (p < 0.05). In general, the dentist and student, both without the aid of AI, had more significant differences in precision compared to the rest of the examiner groups.

3.2. Accuracy Analysis

To compare each examiner’s overall accuracy with and without the aid of AI, an average of the MAEs was calculated for each group (Table 3). The average MAE of CEFBOT was 1.83 (in mm/°), which was within the clinically acceptable range (MAE < 2 mm/°). The examiners’ accuracy without AI generally followed their level of experience and exposure to cephalometric analyses. The orthodontist and resident both had MAEs within the clinically acceptable range (1.57 and 1.99 mm/°, respectively), while the accuracies of the dentist and dental student were outside the range, with MAEs of 2.55 and 2.68 mm/°, respectively. With the aid of AI, the accuracy of the orthodontist, resident, dentist, and dental student improved by 12.74%, 19.10%, 35.69%, and 33.96%, respectively (Figure 3). The AI/human augmentation method revealed clinically acceptable accuracies for all of the examiner groups. Although the examiners’ accuracy improved by 27.27% overall with the aid of AI, the least experienced clinicians benefitted the most from the AI/human augmentation method of cephalometric landmark identification.
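Analogously, the accuracy improvements correspond to the relative reduction in the mean MAEs from Table 3, and the 2 mm/° threshold determines clinical acceptability.

```python
mean_mae = {  # (without AI, with AI) in mm or degrees, rounded group means from Table 3
    "orthodontist": (1.57, 1.37),
    "resident":     (1.99, 1.61),
    "dentist":      (2.55, 1.64),
    "student":      (2.68, 1.77),
    "all examiners": (2.20, 1.60),
}
CLINICAL_LIMIT = 2.0  # mm or degrees

for group, (without_ai, with_ai) in mean_mae.items():
    reduction = (without_ai - with_ai) / without_ai * 100
    status_alone = "outside" if without_ai >= CLINICAL_LIMIT else "within"
    status_ai = "outside" if with_ai >= CLINICAL_LIMIT else "within"
    print(f"{group}: {reduction:.2f}% improvement; "
          f"{status_alone} the limit alone, {status_ai} the limit with AI")
```

The printed values match Figure 3 and confirm that every examiner group falls within the clinically acceptable range once AI assistance is added.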
Without the aid of AI, 13 out of 26 cephalometric measurements (maxillary and mandibular lengths, posterior face height, SNA, FMA, cranial base, U1-SN, L1-MP, L1-NB, interincisal, convexity, and facial angles) yielded MAEs out of the clinically acceptable range (Table 3). With the AI/human augmentation method, five of these measurements (SNA, maxillary and mandibular lengths, convexity, and facial angles) improved to be in the clinically acceptable range, while others remained out of the range although they showed some improvement in their accuracies (Table 3).
Table 4 shows the results of the one-way ANOVA analysis with a random effect used to test for the differences in accuracies between the methods. A statistically significant difference was found in 23 out of 26 measurements (p < 0.0001). No significant differences between groups were found in the ANB, L1-NB (°), and NA-Apo measurements (p > 0.05).

4. Discussion

Accurate identification of cephalometric landmarks is crucial in orthodontic diagnosis and treatment planning. Approximately 20% of landmarks identified by AI deviate by more than 2 mm from the ideal location [18]. These landmarks are used for angular and linear cephalometric measurements. A slight deviation in the position of a single landmark can significantly alter the resulting measurement and reduce the diagnostic quality of the traced cephalometric radiograph. Manual adjustment of the misidentified landmarks has been suggested to improve accuracy and allow orthodontists to use the radiograph in a diagnostic manner [11]. Currently, there is no literature that compares the precision and accuracy of AI/human modified landmarks to AI alone. Additionally, no studies have evaluated how less experienced examiners, such as dentists and dental students, fare in cephalometric landmark identification, with or without the aid of AI. As the proportion of orthodontic cases treated by general dentists compared to orthodontists increases, it is important to evaluate the accuracy of these less experienced clinicians.
The precision of AI alone obtained in this study was similar to the levels of precision reported in other studies [9,17]. Silva and his colleagues found that the ICC of AI ranged from 0.768 to 0.997 for the 10 Arnett’s measurements. Excluding the lowest ICC measurement, Frankfort plane to true horizontal line (ICC = 0.768), the remaining ICC values were greater than 0.94, indicating excellent precision [17]. This is similar to our study, in that the ICC values of the AI-traced cephalometric measurements ranged from 0.890 to 1.00 across the 26 cephalometric measurements and demonstrated the greatest precision of all examiners. Furthermore, Hwang found that the YOLOv3 deep-learning software had excellent precision, marking each landmark at exactly the same position on repeated runs, compared with a human inter-examiner variability of 0.97 mm [9]. The accuracy of the AI-derived cephalometric measurements differed from Silva’s study, which found no statistically significant differences in accuracy for nine of the 10 Arnett’s measurements used, while AI was unable to measure the 10th measurement [17]. In this study, 10 out of 26 AI-derived measurements revealed accuracies outside the clinically acceptable range. Interestingly, four of these measurements involved the gonion landmark (posterior face height, FMA, SN-MP, and L1-MP). Posterior face height and L1-MP were significantly underestimated, whereas SN-MP and FMA were overestimated. These errors are consistent with the AI software identifying gonion too superiorly relative to the gold standard, which is a likely source contributing to the mean error. This error was not limited to our study, as Hwang found that the mean error of YOLOv3 was 2.9 mm in the identification of the gonion landmark [9]. Additionally, a meta-analysis conducted by Schwendicke showed that porion, subspinale, gonion, articulare, and anterior nasal spine were the least accurate landmarks identified by AI, although this varied depending on the specific AI software being tested [18].
Of the five cephalometric measurements that demonstrated moderate precision without the aid of AI, four (U1-NA distance, SNA, ANB, and convexity angles) involved the “Point A” landmark. This suggests that “Point A” may pose significant challenges in terms of reproducibility. Possible explanations include the proximity of this point to the maxillary incisor root, superimposition of nearby structures, blurriness of the curvature of the anterior maxilla, and proximity of the canine fossa. Subspinale, also known as Point A, has shown good reliability in the x-axis but significant variation in the y-axis [19]. Jeon also reported a significant reduction in accuracy for U1-NA (mm) compared to other measurements and theorized that it may be due to superimposition of surrounding anatomy [11].
In terms of accuracy averaged among all the examiners, 13 of the 26 measurements demonstrated MAEs greater than 2 mm or degrees without the use of AI. The increased MAEs of FMA and posterior face height may be due to the difficulty of identification of the gonion landmark. Gonion is a constructed point that relies on accurate identification of other landmarks, such as articulare and menton, to create tangent lines to the inferior body of the mandible and posterior border of the ramus. Slight inaccuracies in the tangent lines can dramatically affect the location of gonion. Among the 16 landmarks evaluated by Baumrind and Frantz, gonion was found to be the least reliable landmark in both the x- and y-axes in lateral cephalometric films [20]. Additionally, in the case of FMA, it is possible that the orbitale landmark was not accurately identified by all examiners. Orbitale has been shown to have significant variability in lateral cephalometric identification, especially in the y-axis [21]. Inaccuracy of the maxillary length measurement may be due to difficulty in identifying the posterior nasal spine landmark, as it is often superimposed with the third molar germ. The condylion and basion landmarks are also challenging to identify due to the superimposition of other craniofacial structures, which may explain the higher MAEs of mandibular length and cranial base angle. Basion has shown significant variation in identification in both the x- and y-coordinates [21]. The incisor angulation measurements (U1-SN, U1-NA, L1-MP, L1-NB, and interincisal angle) also demonstrated increased MAEs. This could be due to difficulty in the identification of the upper and lower incisor apices. Stabrun and Danielsen found that upper and lower incisor apices were not reproducibly identified in 75% of cases [22]. Misidentification of the incisor root apices alters the long axis of the upper and lower incisors, which affects the accuracy of the U1-SN, U1-NA, L1-MP, L1-NB, and interincisal angles.
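Because gonion is constructed from tangent lines rather than marked directly, a small geometric example illustrates its sensitivity: a slight rotation of either tangent shifts the intersection point appreciably. The coordinates, the two-point tangents, and the 2° perturbation below are hypothetical and serve only to illustrate this effect.

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4."""
    a1, b1 = p2[1] - p1[1], p1[0] - p2[0]
    c1 = a1 * p1[0] + b1 * p1[1]
    a2, b2 = p4[1] - p3[1], p3[0] - p4[0]
    c2 = a2 * p3[0] + b2 * p3[1]
    det = a1 * b2 - a2 * b1
    return np.array([(b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det])

# Hypothetical points (mm): a mandibular-plane tangent through menton and a lower-border
# point, and a ramus tangent through articulare and a posterior-border point.
menton, lower_border = np.array([95.0, 150.0]), np.array([60.0, 145.0])
articulare, ramus_point = np.array([40.0, 95.0]), np.array([45.0, 120.0])

gonion = line_intersection(menton, lower_border, articulare, ramus_point)

# A 2-degree rotation of the ramus tangent about articulare noticeably moves gonion.
theta = np.deg2rad(2.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
ramus_point_rotated = articulare + rot @ (ramus_point - articulare)
gonion_shifted = line_intersection(menton, lower_border, articulare, ramus_point_rotated)

print("Gonion shift (mm):", np.linalg.norm(gonion_shifted - gonion).round(2))
```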
The AI/human augmentation method was a novel aspect of this study. This method demonstrated an increase in the precision and accuracy of all examiner groups compared to the manual landmark identification technique, especially for the less experienced student and dentist examiners. This is likely due to the inherent ability of the AI to predictably locate each landmark in a nearly identical position [9]. The less experienced examiners, lacking skill in tracing lateral cephalometric radiographs, may not have changed the position of landmarks unless there was a significant discrepancy. Most measurements whose errors remained clinically significant overlapped between AI alone and the AI/human augmentation method (posterior face height, FMA, L1-MP, L1-NB, and interincisal angles). This reinforces the idea that the locations of difficult-to-locate landmarks, such as gonion and the incisor apices, may not have been modified by the examiners.
Overall, the results from this study show that the human/AI augmentation method demonstrates greater accuracy and precision for all examiner groups compared to manual landmark identification. Additionally, the mean accuracy of every examiner with the aid of AI was greater than the accuracy of AI alone. Therefore, this method improves the diagnostic quality of lateral cephalometric radiographs compared to manual or AI identification alone.
One limitation of this study is that a single examiner represented an entire dental professional group. Ideally, each experience level would comprise multiple examiners. Additionally, we were unable to compare the accuracy or precision of individual landmarks. Future studies with multiple examiners and evaluation of the precision and accuracy of individual landmarks are recommended. Despite significant improvements in computing technology and AI in automatic lateral cephalometric landmark identification, AI alone cannot fully replace experienced clinicians. However, AI can serve as an educational tool and aid in landmark identification through the human/AI augmentation method, especially for less experienced clinicians.

5. Conclusions

Deep learning AI demonstrated excellent precision and good accuracy in automated cephalometric analysis. Overall, the precision and accuracy of the examiners improved by 10.47% and 27.27%, respectively, with the aid of AI. The AI/human augmentation method significantly improved the precision and accuracy of the less experienced dental professionals and students, bringing them to the level of an experienced orthodontist; these groups benefited the most from the approach.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app13126921/s1, Table S1: Cephalometric measurements and their definitions.

Author Contributions

Conceptualization, S.P. and H.T.; data curation, S.P., A.Z., E.H., A.W., V.D. and H.T.; formal analysis, S.P., S.S.B., G.E., V.D. and H.T.; investigation, S.P., V.D. and H.T.; methodology, S.P., A.Z., E.H., A.W. and V.D.; project administration, H.T.; software, H.T.; supervision, H.T.; validation, S.P.; writing—original draft, S.P. and H.T.; writing—review & editing, S.P., A.W., V.D. and H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was reviewed and marked as IRB exempt by Indiana University Institutional Review Board (Protocol:14334) on 14 March 2022.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data underlying this article are available in the article.

Acknowledgments

We would like to thank Craig Eberhardt, EHR systems, for help in filtering patient data.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Leonardi, R.; Giordano, D.; Maiorana, F.; Spampinato, C. Automatic cephalometric analysis. Angle Orthod. 2008, 78, 145–151.
2. Albarakati, S.F.; Kula, K.S.; Ghoneima, A.A. The reliability and reproducibility of cephalometric measurements: A comparison of conventional and digital methods. Dentomaxillofac. Radiol. 2012, 41, 11–17.
3. Liu, J.K.; Chen, Y.T.; Cheng, K.S. Accuracy of computerized automatic identification of cephalometric landmarks. Am. J. Orthod. Dentofac. Orthop. 2000, 118, 535–540.
4. Durão, A.P.; Morosolli, A.; Pittayapat, P.; Bolstad, N.; Ferreira, A.P.; Jacobs, R. Cephalometric landmark variability among orthodontists and dentomaxillofacial radiologists: A comparative study. Imaging Sci. Dent. 2015, 45, 213–220.
5. Hung, K.; Montalvao, C.; Tanaka, R.; Kawai, T.; Bornstein, M.M. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: A systematic review. Dentomaxillofac. Radiol. 2020, 49, 20190107.
6. Cohen, A.M.; Ip, H.H.; Linney, A.D. A preliminary study of computer recognition and identification of skeletal landmarks as a new method of cephalometric analysis. Br. J. Orthod. 1984, 11, 143–154.
7. Ren, R.; Luo, H.; Su, C.; Yao, Y.; Liao, W. Machine learning in dental, oral and craniofacial imaging: A review of recent progress. PeerJ 2021, 9, e11451.
8. Park, J.H.; Hwang, H.W.; Moon, J.H.; Yu, Y.; Kim, H.; Her, S.B.; Srinivasan, G.; Aljanabi, M.N.A.; Donatelli, R.E.; Lee, S.J. Automated identification of cephalometric landmarks: Part 1-Comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod. 2019, 89, 903–909.
9. Hwang, H.-W.; Park, J.-H.; Moon, J.-H.; Yu, Y.; Kim, H.; Her, S.-B.; Srinivasan, G.; Aljanabi, M.N.A.; Donatelli, R.E.; Lee, S.-J. Automated Identification of Cephalometric Landmarks: Part 2-Might It Be Better Than human? Angle Orthod. 2019, 90, 69–76.
10. Yao, J.; Zeng, W.; He, T.; Zhou, S.; Zhang, Y.; Guo, J.; Tang, W. Automatic localization of cephalometric landmarks based on convolutional neural network. Am. J. Orthod. Dentofac. Orthop. 2022, 161, e250–e259.
11. Jeon, S.; Lee, K.C. Comparison of cephalometric measurements between conventional and automatic cephalometric analysis using convolutional neural network. Prog. Orthod. 2021, 22, 14.
12. Junaid, N.; Khan, N.; Ahmed, N.; Abbasi, M.S.; Das, G.; Maqsood, A.; Ahmed, A.R.; Marya, A.; Alam, M.K.; Heboyan, A. Development, Application, and Performance of Artificial Intelligence in Cephalometric Landmark Identification and Diagnosis: A Systematic Review. Healthcare 2022, 10, 2454.
13. Bao, H.; Zhang, K.; Yu, C.; Li, H.; Cao, D.; Shu, H.; Liu, L.; Yan, B. Evaluating the accuracy of automated cephalometric analysis based on artificial intelligence. BMC Oral Health 2023, 23, 191.
14. Serafin, M.; Baldini, B.; Cabitza, F.; Carrafiello, G.; Baselli, G.; Del Fabbro, M.; Sforza, C.; Caprioglio, A.; Tartaglia, G.M. Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: Systematic review and meta-analysis. Radiol. Med. 2023, 128, 544–555.
15. Moon, J.H.; Hwang, H.W.; Yu, Y.; Kim, M.G.; Donatelli, R.E.; Lee, S.J. How much deep learning is enough for automatic identification to be reliable? Angle Orthod. 2020, 90, 823–830.
16. Kim, J.; Kim, I.; Kim, Y.-J.; Kim, M.; Cho, J.-H.; Hong, M.; Kang, K.-H.; Lim, S.-H.; Kim, S.-J.; Kim, Y.H.; et al. Accuracy of automated identification of lateral cephalometric landmarks using cascade convolutional neural networks on lateral cephalograms from nationwide multi-centres. Orthod. Craniofac. Res. 2021, 24, 59–67.
17. Silva, T.P.; Hughes, M.M.; Menezes, L.D.S.; de Melo, M.F.B.; Takeshita, W.M.; Freitas, P.H.L. Artificial intelligence-based cephalometric landmark annotation and measurements according to Arnett’s analysis: Can we trust a bot to do that? Dentomaxillofac. Radiol. 2022, 51, 20200548.
18. Schwendicke, F.; Chaurasia, A.; Arsiwala, L.; Lee, J.H.; Elhennawy, K.; Jost-Brinkmann, P.G.; Demarco, F.; Krois, J. Deep learning for cephalometric landmark detection: Systematic review and meta-analysis. Clin. Oral Investig. 2021, 25, 4299–4309.
19. Broch, J.; Slagsvold, O.; Røsler, M. Error in landmark identification in lateral radiographic headplates. Eur. J. Orthod. 1981, 3, 9–13.
20. Baumrind, S.; Frantz, R.C. The reliability of head film measurements. 1. Landmark identification. Am. J. Orthod. 1971, 60, 111–127.
21. Trpkova, B.; Major, P.; Prasad, N.; Nebbe, B. Cephalometric landmarks identification and reproducibility: A meta analysis. Am. J. Orthod. Dentofac. Orthop. 1997, 112, 165–170.
22. Stabrun, A.E.; Danielsen, K. Precision in cephalometric landmark identification. Eur. J. Orthod. 1982, 4, 185–196.
Figure 1. Cephalometric landmarks used in this study. 1: Nasion, 2: soft tissue nasion, 3: orbitale, 4: sella, 5: porion, 6: basion, 7: condylion, 8: articulare, 9: gonion, 10: menton, 11: pogonion, 12: gnathion, 13: B point, 14: A point, 15: anterior nasal spine, 16: posterior nasal spine, 17: distal of upper first molar, 18: mesial of upper first molar, 19: mesial of lower first molar, 20: lower incisor root apex, 21: lower incisor incisal edge, 22: upper incisor incisal edge, 23: upper incisor root apex, 24: soft tissue pogonion, 25: soft tissue B point, 26: lower lip, 27: upper lip, 28: soft tissue A point, 29: subnasale, 30: pronasale, 31: soft tissue menton.
Figure 2. Mean intraclass correlation coefficient for each examiner group and percentages of improvement in precision.
Figure 3. Average mean absolute errors for each examiner group and the percentages of improvement in accuracy.
Table 1. Precision of the examiners with and without the aid of AI given by the intraclass correlation coefficients (ICCs).
Measurements | Orthodontist | Orthodontist/AI | Resident | Resident/AI | Dentist | Dentist/AI | Student | Student/AI | AI | Mean without AI | Mean with AI
SNA (°) | 0.92 | 0.93 | 0.91 | 0.93 | 0.71 | 0.98 | 0.39 | 0.94 | 0.97 | 0.73 | 0.95
SN-Palatal Plane (°) | 0.93 | 0.93 | 0.89 | 0.98 | 0.62 | 0.96 | 0.74 | 0.96 | 0.92 | 0.80 | 0.96
SNB (°) | 0.96 | 0.94 | 0.92 | 0.98 | 0.77 | 0.98 | 0.92 | 0.96 | 0.99 | 0.89 | 0.97
FMA (°) | 0.95 | 0.93 | 0.94 | 0.98 | 0.88 | 0.97 | 0.88 | 0.98 | 0.99 | 0.91 | 0.97
SN-MP (°) | 0.99 | 0.95 | 0.95 | 0.99 | 0.88 | 0.98 | 0.95 | 0.99 | 1.00 | 0.94 | 0.98
Y-Axis (°) | 0.97 | 0.96 | 0.92 | 0.98 | 0.79 | 0.98 | 0.93 | 0.97 | 0.99 | 0.90 | 0.97
ANB (°) | 0.81 | 0.91 | 0.84 | 0.73 | 0.60 | 1.00 | 0.13 | 0.89 | 0.91 | 0.60 | 0.88
ANS-PNS (mm) | 0.78 | 0.92 | 0.86 | 0.91 | 0.58 | 0.98 | 0.65 | 0.93 | 0.99 | 0.72 | 0.94
Co-Gn (mm) | 0.95 | 0.96 | 0.96 | 0.99 | 0.82 | 0.98 | 0.88 | 0.97 | 1.00 | 0.90 | 0.98
Ba-S-N (°) | 0.97 | 0.96 | 0.94 | 0.98 | 0.87 | 0.94 | 0.87 | 0.96 | 1.00 | 0.91 | 0.96
U1-SN (°) | 0.95 | 0.96 | 0.93 | 0.95 | 0.91 | 0.93 | 0.90 | 0.95 | 0.98 | 0.92 | 0.95
U1-NA (°) | 0.93 | 0.96 | 0.96 | 0.94 | 0.92 | 0.93 | 0.90 | 0.95 | 0.97 | 0.93 | 0.95
U1-NA (mm) | 0.88 | 0.92 | 0.90 | 0.81 | 0.71 | 0.96 | 0.22 | 0.91 | 0.91 | 0.68 | 0.90
L1-MP (°) | 0.94 | 0.96 | 0.95 | 0.96 | 0.96 | 0.98 | 0.90 | 0.93 | 0.99 | 0.94 | 0.96
L1-NB (°) | 0.92 | 0.98 | 0.94 | 0.94 | 0.93 | 0.97 | 0.84 | 0.88 | 0.99 | 0.91 | 0.94
L1-NB (mm) | 0.94 | 0.96 | 0.96 | 0.98 | 0.94 | 1.00 | 0.92 | 0.97 | 0.99 | 0.94 | 0.98
Interincisal Angle (°) | 0.96 | 0.98 | 0.96 | 0.96 | 0.96 | 0.96 | 0.90 | 0.94 | 0.99 | 0.95 | 0.96
Upper Lip to E-Plane (mm) | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.98 | 0.96 | 0.98 | 1.00 | 0.98 | 0.99
Lower Lip to E-Plane (mm) | 1.00 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.97 | 0.99 | 1.00 | 0.99 | 0.99
N′-Sn′ (mm) | 0.75 | 0.93 | 0.85 | 0.97 | 0.67 | 0.98 | 0.85 | 0.97 | 0.99 | 0.78 | 0.96
Sn′-Me′ (mm) | 0.95 | 0.97 | 0.97 | 0.96 | 0.95 | 0.82 | 0.85 | 0.94 | 1.00 | 0.93 | 0.92
N-ANS (mm) | 0.94 | 0.92 | 0.75 | 0.98 | 0.56 | 0.99 | 0.81 | 0.94 | 0.89 | 0.77 | 0.96
ANS-Me (mm) | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 | 1.00 | 0.98 | 0.99 | 0.98 | 0.99 | 0.99
Ar-Go (mm) | 0.96 | 0.92 | 0.93 | 0.99 | 0.81 | 0.97 | 0.91 | 0.99 | 0.98 | 0.90 | 0.97
NA-Apo (°) | 0.83 | 0.87 | 0.84 | 0.70 | 0.59 | 1.00 | 0.13 | 0.90 | 0.91 | 0.60 | 0.87
FH-Npo (°) | 0.81 | 0.95 | 0.88 | 0.95 | 0.71 | 0.94 | 0.65 | 0.94 | 0.96 | 0.76 | 0.95
Mean | 0.92 | 0.95 | 0.92 | 0.94 | 0.81 | 0.97 | 0.77 | 0.95 | 0.97 | 0.86 | 0.95
Table 2. The results of the one-way ANOVA analysis with a random effect used to test for the differences in precision between the methods.
Measurements | F-Value | p-Value | Pairwise Comparison
SNA (°) | 1.31 | 0.24
SN-Palatal Plane (°) | 1.97 | 0.05
SNB (°) | 3.98 | <0.01 | 1 < 5, 2 < 5, 3 < 5, 4 < 5, 7 < 5, 8 < 5, 5 > 6, 5 > 9
FMA (°) | 9.18 | <0.01 | 1 > 7, 2 > 7, 3 > 7, 4 > 7, 7 < 8, 7 < 5, 7 < 6, 7 < 9
SN-MP (°) | 5.51 | <0.01 | 1 > 3, 1 > 5, 2 < 5, 3 < 4, 3 < 8, 3 > 5, 3 < 6, 3 < 9, 4 > 5, 7 > 5, 8 > 5, 5 < 6, 5 < 9
Y-Axis (°) | 6.83 | <0.01 | 1 > 5, 2 > 5, 3 > 5, 4 > 5, 7 > 5, 8 > 5, 5 < 6, 5 < 9
ANB (°) | 2348.9 | <0.01 | 1 < 8, 2 < 7, 2 < 8, 3 < 8, 4 < 8, 7 < 8, 8 > 5, 8 > 6, 8 > 9
ANS-PNS (mm) | 2.71 | 0.01 | 1 < 2, 2 > 3, 2 > 7, 2 > 8, 2 > 5, 3 > 7, 4 > 7, 7 < 5, 7 < 6, 7 < 9
Co-Gn (mm) | 2.4 | 0.02 | 1 < 2, 1 < 5, 2 > 7, 3 < 5, 4 < 5, 7 < 5, 5 > 6, 5 > 9
Ba-S-N (°) | 3.04 | <0.01 | 1 > 5, 2 > 5, 3 < 7, 4 > 5, 7 > 5, 8 > 5, 5 < 6, 5 < 9
U1-SN (°) | 1.25 | 0.27
U1-NA (°) | 1.59 | 0.13
U1-NA (mm) | 0.9 | 0.52
L1-MP (°) | 2.12 | 0.03 | 1 < 3, 1 < 7, 3 > 4, 3 > 5, 4 < 7, 7 > 5, 5 < 6, 5 < 9
L1-NB (°) | 0.65 | 0.74
L1-NB (mm) | 1.14 | 0.34
Interincisal Angle (°) | 0.42 | 0.9101
Upper Lip to E-Plane (mm) | 1.05 | 0.4
Lower Lip to E-Plane (mm) | 1.68 | 0.1
N′-Sn′ (mm) | 7.27 | <0.01 | 1 < 2, 1 < 3, 1 < 4, 1 < 7, 1 < 8, 1 > 5, 1 < 6, 1 < 9, 2 > 5, 3 > 5, 4 > 5, 7 > 5, 8 > 5, 5 < 6, 5 < 9
Sn′-Me′ (mm) | 0.89 | 0.53
N-ANS (mm) | 6.21 | <0.01 | 1 > 5, 2 > 5, 3 > 5, 4 > 5, 7 > 5, 8 > 5, 5 < 6, 5 < 9
ANS-Me (mm) | 3.87 | 0.0003 | 1 > 3, 1 > 7, 1 > 5, 1 > 9, 2 > 3, 2 > 7, 2 > 5, 2 > 9, 3 < 4, 3 < 8, 4 > 7, 4 > 5, 4 > 9, 7 < 8, 7 < 6
Ar-Go (mm) | 1.18 | 0.31
NA-Apo (°) | 1.3 | 0.24
FH-Npo (°) | 1.98 | 0.05 | 2 < 7, 3 < 7, 4 < 7, 7 > 6, 7 > 9, 5 > 6
1: Orthodontist, 2: orthodontist/AI, 3: resident, 4: resident/AI, 5: dentist, 6: dentist/AI, 7: student, 8: student/AI, 9: AI.
Table 3. Accuracy of the examiners with and without the aid of AI given by the mean absolute errors (MAEs) from the gold standard.
Measurements | Orthodontist | Orthodontist/AI | Resident | Resident/AI | Dentist | Dentist/AI | Student | Student/AI | AI | Mean without AI | Mean with AI
SNA (°) | 1.67 | 1.20 | 1.69 | 1.66 | 3.24 | 1.61 | 3.57 | 1.77 | 1.86 | 2.54 | 1.56
SN-Palatal Plane (°) | 1.50 | 1.18 | 1.66 | 1.10 | 2.87 | 1.51 | 1.76 | 1.47 | 1.58 | 1.95 | 1.32
SNB (°) | 1.17 | 0.86 | 1.38 | 1.22 | 2.93 | 1.31 | 1.42 | 1.24 | 1.49 | 1.73 | 1.16
FMA (°) | 1.41 | 1.65 | 3.00 | 2.91 | 1.81 | 1.52 | 3.05 | 2.21 | 3.18 | 2.32 | 2.07
SN-MP (°) | 1.20 | 1.36 | 1.44 | 2.51 | 2.98 | 1.33 | 2.00 | 1.62 | 2.99 | 1.91 | 1.71
Y-Axis (°) | 0.83 | 0.77 | 1.26 | 0.92 | 3.29 | 1.12 | 1.36 | 0.94 | 1.09 | 1.69 | 0.94
ANB (°) | 0.97 | 0.71 | 1.07 | 0.66 | 1.21 | 0.78 | 2.90 | 0.91 | 0.70 | 1.54 | 0.77
ANS-PNS (mm) | 2.21 | 1.78 | 3.44 | 2.36 | 3.35 | 1.88 | 2.56 | 1.58 | 2.06 | 2.89 | 1.90
Co-Gn (mm) | 2.49 | 1.66 | 1.68 | 1.30 | 2.99 | 1.38 | 2.44 | 1.48 | 1.37 | 2.40 | 1.46
Ba-S-N (°) | 3.03 | 2.76 | 3.36 | 2.38 | 5.74 | 3.39 | 3.89 | 3.17 | 1.99 | 4.01 | 2.93
U1-SN (°) | 2.19 | 1.84 | 3.25 | 1.94 | 5.78 | 2.99 | 4.69 | 3.48 | 2.39 | 3.98 | 2.56
U1-NA (°) | 2.62 | 1.75 | 3.24 | 2.42 | 4.24 | 3.17 | 5.36 | 3.28 | 2.52 | 3.87 | 2.66
U1-NA (mm) | 1.05 | 0.84 | 1.35 | 0.85 | 1.53 | 1.04 | 3.78 | 1.04 | 0.80 | 1.93 | 0.94
L1-MP (°) | 1.96 | 2.19 | 2.88 | 2.60 | 2.38 | 2.04 | 3.18 | 2.61 | 3.08 | 2.60 | 2.36
L1-NB (°) | 1.87 | 1.86 | 2.70 | 1.98 | 2.28 | 2.21 | 4.37 | 2.56 | 2.16 | 2.81 | 2.15
L1-NB (mm) | 0.55 | 0.46 | 0.56 | 0.49 | 0.52 | 0.56 | 0.84 | 0.58 | 0.58 | 0.62 | 0.52
Interincisal Angle (°) | 2.60 | 1.96 | 4.16 | 2.86 | 4.21 | 3.82 | 7.12 | 4.47 | 2.73 | 4.52 | 3.28
Upper Lip to E-Plane (mm) | 0.33 | 0.40 | 0.43 | 0.35 | 0.37 | 0.34 | 0.30 | 0.57 | 0.85 | 0.36 | 0.42
Lower Lip to E-Plane (mm) | 0.47 | 0.47 | 0.44 | 0.39 | 0.45 | 0.47 | 0.38 | 0.49 | 0.83 | 0.44 | 0.46
N′-Sn′ (mm) | 1.92 | 1.38 | 1.48 | 1.38 | 2.33 | 1.75 | 1.39 | 1.44 | 2.60 | 1.78 | 1.49
Sn′-Me′ (mm) | 1.72 | 1.74 | 1.81 | 1.59 | 1.15 | 1.64 | 1.84 | 1.63 | 1.81 | 1.63 | 1.65
N-ANS (mm) | 0.94 | 1.21 | 1.73 | 1.08 | 2.80 | 1.45 | 1.11 | 1.10 | 1.49 | 1.65 | 1.21
ANS-Me (mm) | 1.16 | 1.32 | 1.38 | 1.07 | 1.60 | 1.13 | 1.25 | 0.82 | 1.49 | 1.35 | 1.09
Ar-Go (mm) | 1.76 | 1.63 | 1.91 | 3.10 | 1.97 | 1.65 | 2.77 | 2.60 | 3.11 | 2.10 | 2.25
NA-Apo (°) | 1.63 | 1.31 | 1.82 | 1.17 | 2.45 | 1.38 | 4.27 | 1.42 | 1.26 | 2.54 | 1.32
FH-Npo (°) | 1.60 | 1.31 | 2.51 | 1.60 | 1.83 | 1.20 | 2.20 | 1.47 | 1.66 | 2.04 | 1.40
Mean | 1.57 | 1.37 | 1.99 | 1.61 | 2.55 | 1.64 | 2.68 | 1.77 | 1.83 | 2.20 | 1.60
Table 4. The results of the one-way ANOVA analysis with a random effect used to test for the differences in accuracies between the methods.
Measurements | F-Value | p-Value | Pairwise Comparison
SNA (°) | 4.4 | <0.0001 | 1 < 7, 1 < 5, 2 < 7, 2 < 5, 3 < 5, 4 < 7, 4 < 5, 7 > 6, 7 > 9, 8 < 5, 8 > 9, 5 > 6, 5 > 9
SN-Palatal Plane (°) | 6.82 | <0.0001 | 1 > 5, 2 > 5, 3 < 4, 3 > 5, 4 > 7, 4 > 8, 4 > 5, 4 > 6, 7 > 5, 7 < 9, 8 > 5, 8 < 9, 5 < 6, 5 < 9, 6 < 9
SNB (°) | 22.21 | <0.0001 | 1 < 3, 1 < 7, 1 < 5, 1 > 9, 2 < 3, 2 < 7, 2 < 5, 2 > 9, 3 > 4, 3 < 5, 3 > 6, 3 > 9, 4 < 7, 4 < 8, 4 < 5, 7 < 5, 7 > 6, 7 > 9, 8 < 5, 8 > 6, 8 > 9, 5 > 6, 5 > 9, 6 > 9
FMA (°) | 11.92 | <0.0001 | 1 < 3, 1 < 4, 1 > 7, 1 < 9, 2 < 3, 2 < 4, 2 > 7, 2 < 8, 2 < 9, 3 > 7, 3 > 5, 3 > 6, 4 > 7, 4 > 5, 4 > 6, 7 < 8, 7 < 5, 7 < 6, 7 < 9, 8 > 5, 5 < 9, 6 < 9
SN-MP (°) | 27 | <0.0001 | 1 < 4, 1 < 8, 1 > 5, 1 < 9, 2 < 4, 2 < 8, 2 > 5, 2 < 9, 3 < 4, 3 < 8, 3 > 5, 3 < 9, 4 > 7, 4 > 8, 4 > 5, 4 > 6, 7 > 5, 7 < 9, 8 > 5, 8 < 9, 5 < 6, 5 < 9, 6 < 9
Y-Axis (°) | 23.89 | <0.0001 | 1 > 7, 1 > 5, 1 < 9, 2 > 7, 2 > 5, 2 < 9, 3 < 4, 3 > 5, 3 < 9, 4 > 7, 4 > 8, 4 > 5, 4 > 6, 7 < 8, 7 > 5, 7 < 6, 7 < 9, 8 > 5, 8 < 9, 5 < 6, 5 < 9, 6 < 9
ANB (°) | 0.4 | 0.9209
ANS-PNS (mm) | 12.8 | <0.0001 | 1 > 3, 1 > 4, 1 > 5, 2 > 3, 2 > 4, 2 > 5, 3 < 4, 3 < 7, 3 < 8, 3 < 6, 3 < 9, 4 < 7, 4 < 8, 4 > 5, 4 < 6, 4 < 9, 7 > 5, 8 > 5, 5 < 6, 5 < 9
Co-Gn (mm) | 16.56 | <0.0001 | 1 > 2, 1 > 3, 1 > 4, 1 > 7, 1 > 8, 1 > 6, 1 > 9, 2 > 4, 2 > 7, 2 > 8, 2 < 5, 2 > 6, 2 > 9, 3 > 7, 3 < 5, 4 > 7, 4 < 5, 7 < 5, 8 < 5, 5 > 6, 5 > 9
Ba-S-N (°) | 11.69 | <0.0001 | 1 > 5, 1 < 9, 2 > 5, 2 < 9, 3 < 4, 3 > 5, 3 < 9, 4 > 5, 4 > 6, 4 < 9, 7 > 5, 7 < 9, 8 > 5, 8 < 9, 5 < 6, 5 < 9, 6 < 9
U1-SN (°) | 30.85 | <0.0001 | 1 < 3, 1 < 7, 1 < 8, 1 < 5, 1 < 6, 1 > 9, 2 < 3, 2 < 7, 2 < 8, 2 < 5, 2 < 6, 2 > 9, 3 > 4, 3 < 7, 3 < 5, 3 > 9, 4 < 7, 4 < 8, 4 < 5, 4 < 6, 4 > 9, 7 > 8, 7 < 5, 7 > 6, 7 > 9, 8 < 5, 8 > 6, 8 > 9, 5 > 6, 5 > 9, 6 > 9
U1-NA (°) | 12.01 | <0.0001 | 1 < 7, 1 < 5, 1 > 9, 2 < 3, 2 < 7, 2 < 8, 2 < 5, 2 < 6, 3 < 7, 3 > 9, 4 < 7, 4 < 8, 4 < 5, 4 > 9, 7 > 8, 7 > 5, 7 > 6, 7 > 9, 8 > 9, 5 > 9, 6 > 9
U1-NA (mm) | 0.43 | 0.9016
L1-MP (°) | 7.38 | <0.0001 | 1 < 7, 1 > 9, 2 < 7, 2 > 9, 3 > 4, 3 > 8, 3 > 9, 4 < 7, 4 > 9, 7 > 8, 7 > 5, 7 > 6, 7 > 9, 8 > 9, 5 > 9, 6 > 9
L1-NB (°) | 1.51 | 0.156
L1-NB (mm) | 4.83 | <0.0001 | 1 > 5, 1 < 6, 2 > 7, 2 > 5, 3 > 5, 3 < 6, 3 < 9, 4 > 7, 4 > 5, 7 < 6, 7 < 9, 8 > 5, 8 < 6, 8 < 9, 5 < 6, 5 < 9
Interincisal Angle (°) | 7.95 | <0.0001 | 1 > 3, 1 > 7, 1 > 8, 1 > 5, 2 > 3, 2 > 7, 2 > 8, 2 > 5, 2 > 6, 3 > 7, 3 < 9, 4 > 7, 4 > 5, 4 < 9, 7 < 8, 7 < 6, 7 < 9, 7 < 9, 5 < 6, 5 < 9, 6 < 9
Upper Lip to E-Plane (mm) | 6.28 | <0.0001 | 1 > 3, 1 > 8, 1 > 9, 2 > 8, 2 > 9, 3 < 7, 3 < 5, 3 < 6, 3 > 9, 4 > 8, 4 > 9, 7 > 8, 7 > 9, 8 < 5, 8 < 6, 5 > 9, 6 > 9
Lower Lip to E-Plane (mm) | 4.31 | <0.0001 | 1 < 7, 1 < 6, 1 > 9, 2 < 6, 2 > 9, 3 > 9, 4 < 6, 4 > 9, 7 > 9, 8 < 6, 8 > 9, 5 > 9, 6 > 9
N′-Sn′ (mm) | 20.29 | <0.0001 | 1 < 2, 1 < 3, 1 < 4, 1 < 7, 1 < 8, 1 > 5, 1 < 6, 1 < 9, 2 > 5, 2 < 9, 3 > 5, 3 < 6, 3 < 9, 4 > 5, 4 < 9, 7 > 5, 7 < 6, 7 < 9, 8 > 5, 8 < 9, 5 < 6, 5 < 9, 6 < 9
Sn′-Me′ (mm) | 7.62 | <0.0001 | 1 > 5, 1 > 6, 2 > 5, 2 > 6, 2 > 9, 3 > 5, 3 > 6, 4 > 5, 4 > 6, 7 > 5, 7 > 6, 8 > 5, 8 > 6, 5 < 9, 6 < 9
N-ANS (mm) | 12 | <0.0001 | 1 > 3, 1 > 5, 2 > 3, 2 > 5, 3 < 4, 3 > 5, 3 < 6, 3 < 9, 4 > 5, 7 > 5, 7 < 9, 8 > 5, 5 < 6, 5 < 9
ANS-Me (mm) | 4.76 | <0.0001 | 1 < 8, 1 > 5, 2 < 8, 3 < 8, 4 > 5, 7 < 8, 7 > 5, 8 > 5, 8 > 9, 5 < 6, 6 > 9
Ar-Go (mm) | 22.38 | <0.0001 | 1 > 3, 1 > 4, 1 > 7, 1 > 8, 1 > 5, 1 > 6, 1 > 9, 2 > 3, 2 > 4, 2 > 7, 2 > 8, 2 > 5, 2 > 6, 2 > 9, 3 > 4, 3 > 7, 3 > 8, 3 > 9, 4 < 5, 4 < 6, 7 < 5, 7 < 6, 8 < 5, 8 < 6, 5 > 9, 6 > 9
NA-APo (°) | 0.8 | 0.6021
FH-NPo (°) | 15.05 | <0.0001 | 1 < 3, 1 < 7, 1 > 5, 2 < 3, 2 < 4, 2 < 7, 2 > 5, 2 < 9, 3 > 4, 3 > 8, 3 > 5, 3 > 6, 3 > 9, 4 > 8, 4 > 5, 4 > 6, 7 > 8, 7 > 5, 7 > 6, 8 > 5, 5 < 6, 5 < 9, 6 < 9
1: Orthodontist, 2: orthodontist/AI, 3: resident, 4: resident/AI, 5: dentist, 6: dentist/AI, 7: student, 8: student/AI, 9: AI.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

