Article

Automated Adenoid Hypertrophy Assessment with Lateral Cephalometry in Children Based on Artificial Intelligence

1 Department of Orthodontics, Hubei-MOST KLOS & KLOBM, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
2 School of Physics and Technology, Wuhan University, Wuhan 430079, China
3 Electronic Information School, Wuhan University, Wuhan 430079, China
4 Center for Evidence-Based Stomatology, Hubei-MOST KLOS & KLOBM, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
5 Division of Dentistry, School of Medical Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester Academic Health Science Centre, Manchester M13 9PL, UK
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Diagnostics 2021, 11(8), 1386; https://doi.org/10.3390/diagnostics11081386
Submission received: 2 July 2021 / Revised: 25 July 2021 / Accepted: 30 July 2021 / Published: 31 July 2021
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)

Abstract
Adenoid hypertrophy (AH) may lead to pediatric obstructive sleep apnea and mouth breathing. Routine screening for AH in dental practice is helpful for preventing the relevant craniofacial and systemic consequences. The purpose of this study was to develop an automated AH assessment tool based on artificial intelligence. A clinical dataset containing 581 lateral cephalograms was used to train the convolutional neural network (CNN). Following Fujioka’s method for AH assessment, the regions of interest were defined with four keypoint landmarks, and the adenoid ratio derived from these landmarks was used for the assessment. Another dataset, consisting of 160 patients’ lateral cephalograms, was used to evaluate the performance of the network, and diagnostic performance was assessed with statistical analysis. The developed system exhibited high sensitivity (0.906, 95% confidence interval [CI]: 0.750–0.980), specificity (0.938, 95% CI: 0.881–0.973), and accuracy (0.919, 95% CI: 0.877–0.961) for AH assessment. The area under the receiver operating characteristic curve was 0.987 (95% CI: 0.974–1.000). The CNN-incorporated system showed high accuracy and stability in detecting AH from children’s lateral cephalograms, implying the feasibility of automated AH screening using a deep neural network model.

1. Introduction

Located in the posterior and superior wall of the nasopharynx, the adenoids, or pharyngeal tonsils, are part of the pharyngeal lymphoid ring. With a particular pattern of growth, they increase in size during childhood to twice their final adult size. Under physiological conditions, the adenoids typically begin to shrink around the age of 6 and involute by the age of 10. However, frequent upper airway infections can lead to pathological hypertrophy of the adenoids. The prevalence of adenoid hypertrophy (AH) in children and adolescents ranges from 42% to 70% [1]. AH is one of the most prevalent causes of upper airway obstruction and obstructive sleep apnea (OSA) in children [2].
Mouth breathing resulting from upper airway obstruction may lead to abnormal dentofacial development. Many previous studies have focused on the association between mouth breathing and dentofacial development, according to which mouth breathing can lead to a narrow upper arch, longer facial height, steeper mandibular plane angle, and a more retrognathic mandible [3,4]. In addition, failure to thrive, neurobehavioral problems, and depressive symptoms are also believed to be associated with pediatric OSA [5,6,7,8].
Children with AH usually present to the orthodontics department with malocclusion; thus, routine screening for AH in dental practice is helpful for preventing the relevant craniofacial and systemic consequences [9]. Nasal endoscopy is the current gold standard for diagnosing AH [10]. However, nasal endoscopy is painful, and some young children cannot cooperate adequately. Numerous studies have therefore sought other reliable diagnostic tools for detecting hypertrophic adenoids. In orthodontic practice, the lateral cephalogram is a simple, economical, and routine examination, and many studies have shown that lateral cephalograms have high reliability in detecting AH [11,12]. Recently, a systematic review suggested that, despite a relatively high false-positive rate, the lateral cephalogram has good diagnostic accuracy (area under the receiver operating characteristic curve = 0.86) for AH [13].
One of the most notable cephalogram-based AH assessment methods is Fujioka’s adenoid–nasopharyngeal (AN) ratio [14]. In Fujioka’s method [14], four relevant landmarks are manually marked on the cephalogram to measure the AN ratio, a process similar to cephalometric analysis. However, the entire assessment, including landmark identification, is time-consuming and repetitive. Moreover, the accuracy of landmark identification depends largely on the examiner’s clinical experience, and inaccurate identification of cephalometric landmarks may lead to incorrect assessment results. Therefore, it is necessary to develop an accurate and efficient algorithm to automatically classify AH on lateral cephalograms.
Artificial intelligence (AI) refers to intelligence demonstrated by machines that imitate human knowledge and behavior. Deep learning is a subtype of machine learning that uses multi-layer mathematical operations to automatically learn from and draw inferences over complex data, such as images [15]. Deep learning architectures, such as convolutional neural networks (CNNs), have been widely used for automatic image classification [16]. In dentistry, images play an important role in screening, diagnosis, and treatment planning, and the application of deep learning algorithms to cephalometric analysis and skeletal classification has shown good performance [17,18,19,20]. However, research on deep-learning-based methods for radiographic AH assessment remains limited [21].
Therefore, the purpose of this study was to propose a deep learning method for automated AH assessment based on lateral cephalograms.

2. Materials and Methods

This study was approved by the Ethics Committee of the School and Hospital of Stomatology, Wuhan University (No. 2020-B55).

2.1. Samples and Identification of Landmarks

The pre-treatment digital lateral cephalograms of all outpatients (aged 6 to 12 years, n = 937) attending the Department of Orthodontics, Hospital of Stomatology, Wuhan University between April and August 2019 were collected. As determined a priori, 36 images with poor quality, including those with an unclear occipital slope, were excluded, resulting in a sample of 901 cephalograms (normal: 651, moderate hypertrophy: 197, severe hypertrophy: 53). The method used for AH assessment was based on Fujioka’s A/N ratio [14]. As shown in Figure 1a, line segment L is drawn along the straight part of the anterior margin of the basiocciput; A’ is the point of maximal convexity along the inferior margin of the adenoid; PNS is the posterior superior edge of the hard palate; line segment A indicates the size of the adenoid, and line segment N indicates the size of the nasopharyngeal space. A child is suspected of having AH if the A/N ratio is greater than 60%.
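For illustration, the A/N measurement described above can be sketched in a few lines. This is a minimal reading that approximates line L by the Ba–Ar line and takes A as the perpendicular distance from A’ to that line and N as the distance from PNS to the foot of the perpendicular; the exact construction in Figure 1a may differ.

```python
import numpy as np

def an_ratio(a_prime, pns, ba, ar):
    """Estimate the A/N ratio from four keypoints given as (x, y) pixels.

    Hypothetical reading: A = perpendicular distance from A' to the Ba-Ar
    reference line; N = distance from PNS to the foot of that perpendicular.
    """
    a_prime, pns, ba, ar = map(np.asarray, (a_prime, pns, ba, ar))
    d = (ar - ba) / np.linalg.norm(ar - ba)   # unit direction of the line
    v = a_prime - ba
    foot = ba + np.dot(v, d) * d              # foot of the perpendicular
    A = np.linalg.norm(a_prime - foot)        # adenoid thickness
    N = np.linalg.norm(pns - foot)            # nasopharyngeal depth
    return A / N
```

A ratio above 0.6 (60%) would then flag suspected AH.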
Among the 901 lateral cephalograms, 581 were randomly selected for training and 160 for validation, while the remaining 160 were used for testing. As shown in Figure 1b, four landmarks (Ba, Ar, A’, PNS) were identified in the training set (n = 581) by two well-trained orthodontists (T.Z. and H.H.) working jointly and by consensus. Ba is the most inferior-posterior point on the margin of the foramen magnum; Ar is the intersection of the inferior cranial base surface and the averaged posterior surfaces of the mandibular condyles.
Given that the original dataset was relatively small, we augmented the training dataset to improve the performance and generalization ability of the neural network [22]. The original images were rotated from −20 to 20 degrees around the image center. In addition, these images were shifted by 10 pixels in the up, down, left, and right directions, and by 20 pixels in the diagonal directions. The rotation and translation were carried out such that the region of interest (ROI) always remained within the image to avoid information loss. After this step, the training dataset grew from 581 to 9877 images.
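The rotation-and-shift augmentation above can be sketched for the landmark coordinates as follows. This is a simplified sketch: the angle range and shift set come from the description above, the 256-pixel centre is an assumption, and the image itself would be transformed with the matching affine (e.g. `scipy.ndimage.rotate`).

```python
import numpy as np

def rotate_points(pts, angle_deg, center):
    """Rotate landmark coordinates about the image centre."""
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return (np.asarray(pts, float) - center) @ rot.T + center

def augment_once(pts, rng, center=(128.0, 128.0)):
    """One augmented copy: a random rotation in [-20, 20] degrees plus one
    of the eight shifts (10 px axial, 20 px diagonal) described above."""
    shifts = np.array([(10, 0), (-10, 0), (0, 10), (0, -10),
                       (20, 20), (20, -20), (-20, 20), (-20, -20)], float)
    angle = rng.uniform(-20, 20)
    shift = shifts[rng.integers(len(shifts))]
    return rotate_points(pts, angle, np.asarray(center)) + shift
```

In practice one would also reject draws that push any landmark outside the image, matching the ROI-preservation constraint above.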

2.2. Model Architecture and Losses

Figure 2 and Table 1 show the overall architecture of our model, named HeadNet. It consists of convolutional layers, attention residual modules [23,24], hourglass modules [25], and an integral regression layer [26]. The hourglass module, with its top-down and bottom-up design built from regular residual modules (Supplementary Figure S1), is advantageous for integrating multiscale information for subsequent detection. The attention residual module (Supplementary Figure S2) evolved from a regular residual module by serially placing a channel attention part (Supplementary Figure S3a) and a spatial attention part (Supplementary Figure S3b) before the output, as this combination has been reported to achieve better results [23].
For efficiency, all images (format: JPEG) were resized from 2300 × 2300 to 256 × 256 without unduly compromising accuracy. An integral regression layer was applied over the feature heatmaps generated by the hourglass module to convert them into continuous coordinates [26]. Backpropagation was performed with different losses. The basic loss term was obtained by comparing the detection with the ground truth using the L1 loss, as it performs better than the L2 loss [26].
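The integral regression step can be illustrated with a soft-argmax over a single heatmap, together with the basic L1 comparison. This is a NumPy sketch of the idea, not the paper’s implementation:

```python
import numpy as np

def soft_argmax(heatmap):
    """Integral regression over one keypoint heatmap: softmax-normalise the
    scores, then take the probability-weighted average of pixel coordinates,
    yielding continuous (x, y) estimates that remain differentiable."""
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())   # stable softmax
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return (p * xs).sum(), (p * ys).sum()

def l1_loss(pred, target):
    """Basic keypoint loss: mean absolute coordinate error (L1)."""
    return np.abs(np.asarray(pred) - np.asarray(target)).mean()
```

Because the coordinates are an expectation rather than a hard argmax, gradients flow through the whole heatmap during backpropagation.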
By incorporating prior knowledge into the neural network, the model can achieve higher performance [27]. The rotation case (Supplementary Figure S4a) affects the vertical intersection between A’ and the Ar–Ba line. The translation case (ideal case: Supplementary Figure S4b), which usually comes with rotation (Supplementary Figure S4c,d), affects A and N. The distance from the detected Ar to the ground-truth line (formed by Ar_gt and Ba_gt) is denoted D_a; the distance from the detected Ba to the ground-truth line is denoted D_b. Intermediate supervision was adopted, since it improves classification accuracy [22,28].
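One plausible reading of the geometric terms D_a and D_b is sketched below: each is the perpendicular distance from a detected point to the ground-truth Ar–Ba line, so the penalty vanishes when the detected reference line coincides with the ground-truth one. The summation of the two distances is an assumption; the paper does not specify the exact weighting.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    a, b, p = map(np.asarray, (a, b, p))
    d = (b - a) / np.linalg.norm(b - a)
    v = p - a
    return float(np.linalg.norm(v - np.dot(v, d) * d))

def rotation_loss(ar_dt, ba_dt, ar_gt, ba_gt):
    """Hypothetical geometric prior: D_a + D_b, the distances of the
    detected Ar and Ba to the ground-truth Ar-Ba line."""
    d_a = point_line_distance(ar_dt, ar_gt, ba_gt)
    d_b = point_line_distance(ba_dt, ar_gt, ba_gt)
    return d_a + d_b
```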
To evaluate the effect of our proposed losses, ablation experiments were performed: HeadNet was trained with and without the rotation loss, the translation loss, and the attention residual module.

2.3. Training Details

We trained HeadNet with a batch size of 10 using the SGD optimizer (momentum 0.9, weight decay 2 × 10^-5); all parameters of the convolutional layers were initialized randomly. The training process started with a warm-up (initial learning rate: 0.001), followed by an annealing strategy in which the learning rate was updated every 5 epochs.
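The schedule can be sketched as a small helper. The linear warm-up length and the decay factor are assumptions; the paper specifies only the initial rate of 0.001 and an update every 5 epochs.

```python
def learning_rate(epoch, base_lr=0.001, warmup_epochs=5, step=5, gamma=0.5):
    """Hypothetical schedule: linear warm-up to base_lr over warmup_epochs,
    then a step decay by gamma every `step` epochs (annealing)."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    n = (epoch - warmup_epochs) // step
    return base_lr * gamma ** n
```

The returned rate would be assigned to the SGD optimizer at the start of every epoch.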

2.4. Statistical Analysis and Evaluation

The absolute distance between the ground truth and the predicted point, the average precision (AP), and the average recall (AR) were the evaluation metrics for keypoint detection. The AN ratio error, the key indicator, was the absolute error between the predicted and actual AN ratios. Diagnostic accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curves, and the area under the curve (AUC), with 95% CIs, were used to test the system’s performance.
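The diagnostic metrics reported in Section 3 follow directly from confusion-matrix counts; a minimal helper (with illustrative counts in the test, not the study’s actual ones) is:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                       # sensitivity (recall)
    spec = tn / (tn + fp)                       # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "LR+": sens / (1 - spec),               # positive likelihood ratio
        "LR-": (1 - sens) / spec,               # negative likelihood ratio
    }
```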

3. Results

The system showed high performance in AH assessment. The sensitivity, specificity, and accuracy were 0.906 (95% CI: 0.750–0.980), 0.938 (95% CI: 0.881–0.973), and 0.919 (95% CI: 0.877–0.961), respectively. The positive likelihood ratio was 10, and the negative likelihood ratio was 0.067. The ROC curve is provided in Figure 3, and the AUC was 0.987 (95% CI: 0.974–1.000). These results indicate the accuracy of the proposed assessment system.
The evaluation of the 160 test images by this diagnostic system took approximately 11 s on a GTX 1070 graphics card. Figure 4 shows changes in the AN ratio error over 200 epochs of training, while Figure 5 shows the absolute distance between the ground truth and predicted points (in pixels). As Figure 5 shows, although the average localization error was small, the localization error of A’ was considerably larger, which might be due to unclear adenoid areas in the validation images. Figure 6 and Figure 7 show changes in validation AP and AR over 200 epochs, respectively. These curves suggest that the HeadNet model learned quickly, finding the keypoint locations during the first 50 epochs. Thereafter, as the model began to converge, the validation error gradually decreased, while the validation accuracy increased slowly.
Table 2 presents the performance details of HeadNet. HeadNet * indicates that the attention residual module was applied. The rotation (r) loss and translation (t) loss were applied in both HeadNet (r, t) and HeadNet * (r, t). HeadNet * (r, t) achieved the best performance among all models, with an F1-score of 0.936 and an AN ratio error of 0.025. Table 3 shows the absolute localization errors over the keypoints for these models on the test dataset; as the table shows, HeadNet * (r, t) performed better than the other models. Figure 8 shows that the keypoints predicted by HeadNet * (r, t) are located close to the manually landmarked ones.

4. Discussion

In children, AH is the most common etiology of partial or complete upper airway obstruction, which can further lead to mouth breathing. Increasing evidence indicates that AH is associated with dentofacial anomalies [29,30]. In mouth-breathing patients, the physiological stimulus for maxillary growth, and the subsequent lowering of the palatal vault, can be suppressed by the reduction of continuous airflow through the nasal passage [31]. Children with AH tend to have narrow dental arches, a deep palatal vault, an increased mandibular angle, a retrognathic mandible, and a convex profile [29,30]. These facial features are collectively called “adenoid facies”.
Both the upper airway and dentofacial structures can be observed on lateral cephalograms, and lateral cephalometry is therefore considered a useful screening tool for assessing upper airway structures [32,33]. Children with AH usually present to orthodontic clinics with a chief complaint of malocclusion or dissatisfaction with their profile. Moreover, the prevalence of pediatric sleep breathing disorder in the general orthodontic population is more than twice that reported in a healthy pediatric population [34]. As cephalometry is routinely performed in orthodontic practice, orthodontists are strongly recommended to screen their patients for sleep breathing disorders and AH in clinical practice [35]. Children with suspected AH based on lateral cephalograms could be referred by orthodontists to the ENT department for diagnosis and treatment [9].
In the present study, we developed an AI method that can assess children’s AH using their lateral cephalograms. The model was trained with lateral cephalograms of pediatric patients and was able to locate the keypoints for the AN ratio. If the AN ratio is greater than 0.6, a diagnosis of AH is made. Over the 160 test samples, the average keypoint localization error was 1.651 pixels, while the average precision, recall, F1-score, and AN ratio error were 0.919, 0.954, 0.936, and 0.025, respectively. The diagnostic accuracy, sensitivity, and specificity were 0.919, 0.906, and 0.938, respectively. Moreover, the AUC was 0.99, far exceeding 0.9. These results indicate that the model was accurate and stable. To our knowledge, only two studies so far have applied AI techniques to AH diagnosis. One proposed the VGG-Lite model for the automated evaluation of AH but eliminated the process of landmark identification [36]; the other [21] explored the use of AI in AH diagnosis based on magnetic resonance imaging (MRI), which is not routinely used in orthodontic practice. In contrast, the present study was based on lateral cephalometry, a routine examination conducted by orthodontists. In addition, our AI model was adapted to lateral cephalograms and the AN ratio calculation: the attention residual modules used in this study markedly improved keypoint detection performance and reduced the final AN ratio error.
The significance of this study is that our work can assist clinicians and dentists in screening for AH by eliminating possible human errors and greatly reducing time consumption. Many experienced orthodontists and radiologists can estimate whether the adenoids are hypertrophic within seconds of interpreting an image, without measuring the AN ratio. However, manually evaluating the adenoids of a large sample would be time-consuming and error-prone. Therefore, this automated assessment tool can be used for relevant clinical and epidemiological studies, as well as for health examinations at the community or population level.
However, this study has several limitations. Firstly, to simplify the labeling and learning process, we used the line connecting points Ar and Ba in place of the line tangent to the occipital slope; this is similar to the standard AN ratio measurement but may produce slightly different results in some borderline cases. Secondly, despite the advantages of being a routine diagnostic tool in orthodontic practice, cephalograms cannot provide three-dimensional information about either the adenoids or the upper airway. A previous study using CBCT showed that an AN ratio > 0.6 correlates with a lower nasopharyngeal airway volume but not with the upper airway in general [37]. Thirdly, as in other dental studies based on cephalograms, we had to manually mark the relevant landmarks on the cephalograms to construct the reference test [38]. The maximal convexity or deepest concavity on a contour is difficult to identify, which might explain why the localization deviation of A’ was relatively large [39].

5. Conclusions

The CNN-incorporated system in this study has high accuracy and stability in the detection of AH. AI can be used in the screening of AH among children in dental practice.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/diagnostics11081386/s1, Figure S1: Residual Module as Building Block used in Hourglass, Figure S2: Residual Block with Channel Attention Module and Spatial Attention Module, Figure S3: The Channel Attention Module (a); Spatial Attention Module (b), Figure S4: Ideal Rotation between Ground Truth (gt) and Detection (dt) (a); Ideal Translation between Ground Truth and Detection (b); Real Rotation and Translation Case (c,d).

Author Contributions

Conceptualization, T.Z., J.Z. and F.H.; methodology, T.Z., J.Z., Y.C., F.H. and H.H.; software, J.Z. and Y.C.; validation, T.Z., J.Z., J.Y., L.C., Y.C., F.H. and H.H.; formal analysis, T.Z., J.Z., J.Y., L.C., Y.C., F.H. and H.H.; investigation, T.Z., J.Z., J.Y., L.C., Y.C., F.H. and H.H.; resources, T.Z., J.Z., J.Y., L.C., Y.C., F.H. and H.H.; data curation, T.Z., J.Z., J.Y., L.C., Y.C., F.H. and H.H.; writing—original draft preparation, T.Z. and J.Z.; writing—review and editing, J.Y., L.C., Y.C., F.H. and H.H.; visualization, T.Z., J.Z., J.Y., L.C.; supervision, Y.C., F.H. and H.H.; project administration, F.H. and H.H.; funding acquisition, T.Z., F.H. and H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fundamental Research Funds for the Central Universities (No. 2042021kf0182, Wuhan University), the Undergraduate Education Quality Construction and Comprehensive Reform Project, Wuhan University (No. 2021ZG328), and the Wuhan Young and Middle-aged Medical Talents Training Program (No. [2019]87).

Institutional Review Board Statement

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent Statement

As a retrospective study using routinely collected data from healthcare activities, this study was approved by the Ethics Committee of School & Hospital of Stomatology, Wuhan University (No. 2020-B55) to be conducted without patients’ informed consent.

Data Availability Statement

The data underlying this article will be shared on reasonable request to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pereira, L.; Monyror, J.; Almeida, F.; Almeida, F.R.; Guerra, E.N.S.; Flores-Mir, C.; Pachêco-Pereira, C. Prevalence of adenoid hypertrophy: A systematic review and meta-analysis. Sleep Med. Rev. 2018, 38, 101–112. [Google Scholar] [CrossRef]
  2. Marcus, C.L.; Brooks, L.J.; Draper, K.A.; Gozal, D.; Halbower, A.C.; Jones, J.; Schechter, M.S.; Sheldon, S.H.; Spruyt, K.; Ward, S.D.; et al. Diagnosis and Management of Childhood Obstructive Sleep Apnea Syndrome. Pediatrics 2012, 130, 576–584. [Google Scholar] [CrossRef] [Green Version]
  3. Macari, A.T.; Haddad, R.V. The case for environmental etiology of malocclusion in modern civilizations—Airway morphology and facial growth. Semin. Orthod. 2016, 22, 223–233. [Google Scholar] [CrossRef]
  4. Zhao, T.; Ngan, P.; Hua, F.; Zheng, J.; Zhou, S.; Zhang, M.; Xiong, H.; He, H. Impact of pediatric obstructive sleep apnea on the development of Class II hyperdivergent patients receiving orthodontic treatment. Angle Orthod. 2018, 88, 560–566. [Google Scholar] [CrossRef] [Green Version]
  5. Farber, J.M.; Schechter, M.S.; Marcus, C.L. Clinical practice guideline: Diagnosis and management of childhood obstructive sleep apnea syndrome. Pediatrics 2002, 110, 1255–1257. [Google Scholar] [CrossRef]
  6. Hodges, E.; Marcus, C.L.; Kim, J.Y.; Xanthopoulos, M.; Shults, J.; Giordani, B.; Beebe, D.W.; Rosen, C.L.; Chervin, R.D.; Mitchell, R.B.; et al. Depressive symptomatology in school-aged children with obstructive sleep apnea syndrome: Incidence, demographic factors, and changes following a randomized controlled trial of adenotonsillectomy. Sleep 2018, 41, 1–8. [Google Scholar] [CrossRef]
  7. Esteller, E.; Villatoro, J.C.; Agüero, A.; Lopez, R.; Matiñó, E.; Argemi, J.; Girabent-Farrés, M. Obstructive sleep apnea syndrome and growth failure. Int. J. Pediatric Otorhinolaryngol. 2018, 108, 214–218. [Google Scholar] [CrossRef] [PubMed]
  8. Horiuchi, F.; Oka, Y.; Komori, K.; Tokui, Y.; Matsumoto, T.; Kawabe, K.; Ueno, S.-I. Effects of Adenotonsillectomy on Neurocognitive Function in Pediatric Obstructive Sleep Apnea Syndrome. Case Rep. Psychiatry 2014, 2014, 520215. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Fagundes, N.C.F.; Flores-Mir, C. Pediatric obstructive sleep apnea—Dental professionals can play a crucial role. Pediatr. Pulmonol. 2021. online ahead of print. [Google Scholar] [CrossRef]
  10. Brambilla, I.; Pusateri, A.; Pagella, F.; Caimmi, D.; Caimmi, S.; Licari, A.; Barberi, S.; Castellazzi, A.; Marseglia, G.L. Adenoids in children: Advances in immunology, diagnosis, and surgery. Clin. Anat. 2014, 27, 346–352. [Google Scholar] [CrossRef]
  11. Moideen, S.P.; Mytheenkunju, R.; Nair, A.G.; Mogarnad, M.; Afroze, M.K.H. Role of Adenoid-Nasopharyngeal Ratio in Assessing Adenoid Hypertrophy. Indian J. Otolaryngol. Head Neck Surg. 2019, 71, 469–473. [Google Scholar] [CrossRef] [PubMed]
  12. Soldatova, L.; Otero, H.J.; Saul, D.A.; Barrera, C.; Elden, L. Lateral Neck Radiography in Preoperative Evaluation of Adenoid Hypertrophy. Ann. Otol. Rhinol. Laryngol. 2019, 129, 482–488. [Google Scholar] [CrossRef] [PubMed]
  13. Duan, H.; Xia, L.; He, W.; Lin, Y.; Lu, Z.; Lan, Q. Accuracy of lateral cephalogram for diagnosis of adenoid hypertrophy and posterior upper airway obstruction: A meta-analysis. Int. J. Pediatr. Otorhinolaryngol. 2019, 119, 1–9. [Google Scholar] [CrossRef]
  14. Fujioka, M.; Young, L.W.; Girdany, B.R. Radiographic evaluation of adenoidal size in children: Adenoidal-nasopharyngeal ratio. Am. J. Roentgenol. 1979, 133, 401–404. [Google Scholar] [CrossRef] [PubMed]
  15. Schwendicke, F.; Samek, W.; Krois, J. Artificial Intelligence in Dentistry: Chances and Challenges. J. Dent. Res. 2020, 99, 769–774. [Google Scholar] [CrossRef]
  16. Yim, J.; Ju, J.; Jung, H.; Kim, J. Image Classification Using Convolutional Neural Networks With Multi-stage Feature. Adv. Intell. Syst. Comput. 2015, 587–594. [Google Scholar] [CrossRef]
  17. Torosdagli, N.; Liberton, D.K.; Verma, P.; Sincan, M.; Lee, J.S.; Bagci, U. Deep Geodesic Learning for Segmentation and Anatomical Landmarking. IEEE Trans. Med. Imaging 2018, 38, 919–931. [Google Scholar] [CrossRef] [PubMed]
  18. Yu, H.; Cho, S.; Kim, M.; Kim, W.; Kim, J.; Choi, J. Automated Skeletal Classification with Lateral Cephalometry Based on Artificial Intelligence. J. Dent. Res. 2020, 99, 249–256. [Google Scholar] [CrossRef]
  19. Kunz, F.; Stellzig-Eisenhauer, A.; Zeman, F.; Boldt, J. Artificial intelligence in orthodontics. J. Orofac. Orthop. Fortschr. Kieferorthopädie 2019, 81, 52–68. [Google Scholar] [CrossRef]
  20. Hwang, H.-W.; Park, J.-H.; Moon, J.-H.; Yu, Y.; Kim, H.; Her, S.-B.; Srinivasan, G.; Aljanabi, M.N.A.; Donatelli, R.E.; Lee, S.-J. Automated Identification of Cephalometric Landmarks: Part 2- Might It Be Better Than human? Angle Orthod. 2020, 90, 69–76. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Shen, Y.; Li, X.; Liang, X.; Xu, H.; Li, C.; Yu, Y.; Qiu, B. A deep-learning-based approach for adenoid hypertrophy diagnosis. Med. Phys. 2020, 47, 2171–2181. [Google Scholar] [CrossRef] [PubMed]
  22. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  23. Woo, S.; Park, J.; Lee, J.-Y. CBAM: Convolutional Block Attention Module. In Proceedings of the Computer Vision, Munich, Germany, 8–14 September 2018; Springer: Cham, Switzerland; pp. 3–19. [Google Scholar] [CrossRef] [Green Version]
  24. Ling, H.; Wu, J.; Huang, J.; Chen, J.; Li, P. Attention-based convolutional neural network for deep face recognition. Multimed. Tools Appl. 2019, 79, 5595–5616. [Google Scholar] [CrossRef]
  25. Newell, A.; Yang, K.; Deng, J. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; Volume 9912, pp. 483–499. [Google Scholar] [CrossRef] [Green Version]
  26. Sun, X.; Xiao, B.; Wei, F.; Liang, S.; Wei, Y. Integral Human Pose Regression. In Proceedings of the Computer Vision, Munich, Germany, 8–14 September 2018; Springer: Cham, Switzerland; pp. 536–553. [Google Scholar] [CrossRef] [Green Version]
  27. Gülçehre, Ç.; Bengio, Y. Knowledge matters: Importance of prior information for optimization. J. Mach. Learn. Res. 2013, 17, 226–257. [Google Scholar]
  28. Sun, D.; Yao, A.; Zhou, A.; Zhao, H. Deeply-Supervised Knowledge Synergy. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 6990–6999. [Google Scholar]
  29. Katyal, V.; Pamula, Y.; Martin, A.J.; Daynes, C.N.; Kennedy, J.D.; Sampson, W.J. Craniofacial and upper airway morphology in pediatric sleep-disordered breathing: Systematic review and meta-analysis. Am. J. Orthod. Dentofac. Orthop. 2013, 143, 20–30.e3. [Google Scholar] [CrossRef]
  30. Flores-Mir, C.; Korayem, M.; Heo, G.; Witmans, M.; Major, M.P.; Major, P.W. Craniofacial morphological characteristics in children with obstructive sleep apnea syndrome. J. Am. Dent. Assoc. 2013, 144, 269–277. [Google Scholar] [CrossRef] [PubMed]
  31. Gungor, A.Y.; Turkkahraman, H. Effects of Airway Problems on Maxillary Growth: A Review. Eur. J. Dent. 2009, 3, 250–254. [Google Scholar] [CrossRef] [Green Version]
  32. Pirilä-Parkkinen, K.; Löppönen, H.; Nieminen, P.; Tolonen, U.; Pirttiniemi, P. Cephalometric evaluation of children with nocturnal sleep-disordered breathing. Eur. J. Orthod. 2010, 32, 662–671. [Google Scholar] [CrossRef] [Green Version]
  33. Major, M.P.; Flores-Mir, C.; Major, P.W. Assessment of lateral cephalometric diagnosis of adenoid hypertrophy and posterior upper airway obstruction: A systematic review. Am. J. Orthod. Dentofac. Orthop. 2006, 130, 700–708. [Google Scholar] [CrossRef]
  34. Abtahi, S.; Witmans, M.; Alsufyani, N.A.; Major, M.P.; Major, P.W. Pediatric sleep-disordered breathing in the orthodontic population: Prevalence of positive risk and associations. Am. J. Orthod. Dentofac. Orthop. 2020, 157, 466–473.e1. [Google Scholar] [CrossRef]
  35. Behrents, R.G.; Shelgikar, A.V.; Conley, R.S.; Flores-Mir, C.; Hans, M.; Levine, M.; McNamara, J.A.; Palomo, J.M.; Pliska, B.; Stockstill, J.W.; et al. Obstructive sleep apnea and orthodontics: An American Association of Orthodontists White Paper. Am. J. Orthod. Dentofac. Orthop. 2019, 156, 13–28.e1. [Google Scholar] [CrossRef] [Green Version]
  36. Liu, J.; Li, S.; Cai, Y.; Lan, D.; Lu, Y.; Liao, W.; Ying, S.; Zhao, Z. Automated Radiographic Evaluation of Adenoid Hypertrophy Based on VGG-Lite. J. Dent. Res. 2021, 29. [Google Scholar] [CrossRef]
  37. Feng, X.; Li, G.; Qu, Z.; Liu, L.; Nässtrom, K.; Shi, X.-Q. Comparative analysis of upper airway volume with lateral cephalograms and cone-beam computed tomography. Am. J. Orthod. Dentofac. Orthop. 2015, 147, 197–204. [Google Scholar] [CrossRef] [PubMed]
  38. Schwendicke, F.; Singh, T.; Lee, J.-H.; Gaudin, R.; Chaurasia, A.; Wiegand, T.; Uribe, S.; Krois, J. Artificial intelligence in dental research: Checklist for authors, reviewers, readers. J. Dent. 2021, 107, 103610. [Google Scholar] [CrossRef]
  39. Grogger, P.; Sacher, C.; Weber, S.; Millesi, G.; Seemann, R. Identification of ‘Point A’ as the prevalent source of error in cephalometric analysis of lateral radiographs. Int. J. Oral Maxillofac. Surg. 2018, 47, 1322–1329. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The A/N ratio measurement (a); annotated images with four keypoints landmarked (b). (A’ is the point of maximal convexity along the inferior margin of adenoid shadow; PNS is the posterior superior edge of the hard palate; Ba is the most inferior-posterior point on the margin of the foramen magnum; Ar is the intersection of the inferior cranial base surface and the averaged posterior surfaces of the mandibular condyles; line segment L is drawn along the straight part of the anterior margin of the basiocciput; line segment A indicates the size of the adenoid; line segment N indicates the size of the nasopharyngeal space).
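The A/N ratio measurement described in the Figure 1 caption can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes line L passes through Ba and Ar, that A is the perpendicular distance from A’ to L, and that N is the distance from PNS to Ba; the resulting ratio would then be compared against a clinical cutoff to classify adenoid hypertrophy.

```python
import numpy as np

def an_ratio(a_prime, pns, ba, ar):
    """Sketch of the A/N ratio from the four keypoints.

    Assumed geometry (illustrative only): line L runs through Ba and Ar
    along the anterior margin of the basiocciput; A is the perpendicular
    distance from A' to L; N is the distance from PNS to Ba.
    """
    a_prime, pns, ba, ar = (np.asarray(p, dtype=float)
                            for p in (a_prime, pns, ba, ar))
    v = ar - ba                     # direction vector of line L
    w = a_prime - ba
    # 2-D cross product magnitude / line length = point-to-line distance
    a = abs(v[0] * w[1] - v[1] * w[0]) / np.hypot(v[0], v[1])
    n = np.linalg.norm(pns - ba)    # size of the nasopharyngeal space
    return a / n
```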
Figure 2. Model architecture: the yellow rectangle represents a 2-D convolutional layer; the red rectangles represent attention residual modules; the blue rectangles, arranged in the hourglass style, represent normal residual modules; the green rectangle represents the integral regression layer, which converts heatmaps into keypoint coordinates. Each convolutional layer is followed by a ReLU operation.
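The integral regression layer in Figure 2 converts each heatmap into a coordinate. A common formulation (soft-argmax, shown here as a generic sketch rather than the paper's exact layer) takes the probability-weighted mean of pixel coordinates, which keeps the heatmap-to-keypoint step differentiable:

```python
import numpy as np

def soft_argmax(heatmap):
    """Integral regression sketch: return (x, y) as the expectation of
    pixel coordinates under a softmax over the whole heatmap."""
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())
    p /= p.sum()                     # softmax over all pixels
    ys, xs = np.mgrid[0:h, 0:w]      # per-pixel coordinate grids
    return float((p * xs).sum()), float((p * ys).sum())
```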
Figure 3. Receiver operating characteristic (ROC) curve. The area under the curve (AUC) far exceeded 0.9, indicating that the proposed system was able to assess adenoid hypertrophy accurately.
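The AUC summarized in Figure 3 has a useful probabilistic reading: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney statistic). A minimal sketch of that computation, independent of the paper's evaluation code:

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney statistic: P(score_pos > score_neg),
    with ties counted as one half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)
```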
Figure 4. Changes in A/N ratio error during model training: the A/N ratio error decreased as the number of epochs increased.
Figure 5. Changes in the absolute distance between ground-truth and predicted points (in pixels).
Figure 6. Changes in validation AP of HeadNet.
Figure 7. Changes in validation AR of HeadNet.
Figure 8. Validation dataset image comparison (green points represent the ground truth; red points represent detections). A normal case (a); a moderately hypertrophic case (b); a severely hypertrophic case (c).
Table 1. Parameters of convolutional layers in HeadNet.
Name             C1     C2     C3     C4     C5     C6     C7
Output channels  64     4      256    128    128    16     1
Kernel size      7 × 7  1 × 1  1 × 1  1 × 1  3 × 3  1 × 1  7 × 7
Stride           2      1      1      1      1      1      1
C: Convolutional layer; C1, C2, and C3 were used in HeadNet model; C3, C4, and C5 were used in both residual module and attention residual module; C6 and C7 were used in attention residual module.
Table 2. Performance of HeadNet on test dataset.
Method            AP     F1-Score  A/N Error
HeadNet           0.876  0.896     0.031
HeadNet (r, t)    0.910  0.928     0.027
HeadNet *         0.904  0.923     0.027
HeadNet * (r, t)  0.919  0.936     0.025
r: rotation is applied; t: translation is applied; *: the attention residual module is applied in HeadNet.
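The F1-score in Table 2 is the harmonic mean of precision and recall. A minimal sketch from detection counts (the tp/fp/fn values here are illustrative, not taken from the paper's evaluation):

```python
def f1_from_counts(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```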
Table 3. Absolute localization error (pixel) over keypoints between different models in the test dataset.
Method            Ar     Ba     PNS    A’     Average
HeadNet           1.723  1.961  1.326  2.570  1.895
HeadNet (r, t)    1.285  1.899  1.275  2.575  1.758
HeadNet *         1.276  1.813  1.307  2.416  1.703
HeadNet * (r, t)  1.188  1.744  1.275  2.372  1.651
r: rotation is applied; t: translation is applied; *: the attention residual module is applied in HeadNet.
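The per-landmark errors in Table 3 are absolute localization errors in pixels; assuming the standard definition, each entry is the mean Euclidean distance between predicted and ground-truth keypoint positions over the test set, as sketched here:

```python
import numpy as np

def mean_localization_error(pred, gt):
    """Mean Euclidean distance (pixels) between predicted and
    ground-truth keypoints of shape (n, 2)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```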
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhao, T.; Zhou, J.; Yan, J.; Cao, L.; Cao, Y.; Hua, F.; He, H. Automated Adenoid Hypertrophy Assessment with Lateral Cephalometry in Children Based on Artificial Intelligence. Diagnostics 2021, 11, 1386. https://doi.org/10.3390/diagnostics11081386