Article

Automated Prediction of Extraction Difficulty and Inferior Alveolar Nerve Injury for Mandibular Third Molar Using a Deep Neural Network

1 School of Integrated Technology (SIT), Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Korea
2 Department of Oral and Maxillofacial Surgery, College of Dentistry, Chosun University, Gwangju 61452, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(1), 475; https://doi.org/10.3390/app12010475
Submission received: 13 December 2021 / Revised: 27 December 2021 / Accepted: 29 December 2021 / Published: 4 January 2022
(This article belongs to the Section Applied Dentistry and Oral Sciences)

Abstract

Extraction of mandibular third molars is a common procedure in oral and maxillofacial surgery. However, no previous study has simultaneously predicted the extraction difficulty of a mandibular third molar and the complications that may occur. Thus, we propose a method of automatically detecting mandibular third molars in panoramic radiographic images and predicting the extraction difficulty and the likelihood of inferior alveolar nerve (IAN) injury. Our dataset consists of 4903 panoramic radiographic images acquired from various dental hospitals. Seven dentists annotated detection and classification labels. The detection model locates the mandibular third molar in the panoramic radiographic image. A region of interest (ROI) containing the detected mandibular third molar, the adjacent teeth, and the IAN is then cropped from the image. The classification models take the ROI as input to predict the extraction difficulty and the likelihood of IAN injury. The detection model achieved 99.0% mAP at an intersection over union (IoU) threshold of 0.5. In addition, we achieved 83.5% accuracy for the prediction of extraction difficulty and 81.1% accuracy for the prediction of the likelihood of IAN injury. These results demonstrate that a deep learning method can support diagnosis for the extraction of mandibular third molars.

1. Introduction

Recently, deep learning has been applied in various fields with the goal of automation, and the field is rapidly growing [1,2,3]. In particular, deep learning is frequently applied in the medical field and shows high performance [4,5,6]. Deep learning is widely used to predict and diagnose diseases from image data such as MRI and CT images and from signal data such as EEGs [7,8,9]. It can also be applied in the field of dentistry for the automatic diagnosis of various diseases [10,11,12,13]. Examples include classifying cystic lesions in cone beam computed tomography (CBCT) images and estimating a person's age from their teeth [14,15].
Most people are born with mandibular third molars and have them removed for various reasons [16,17]. Therefore, the extraction of mandibular third molars is a frequent operation in oral and maxillofacial surgery. Depending on the impaction type of the third molar, symptoms occur in 30–68% of cases after extraction [18,19]. Mandibular third molars grow in various positions and directions, so they present various impaction patterns [20,21,22,23]. It is therefore important to determine the impaction pattern of the mandibular third molar before extraction so that the appropriate surgical method can be applied. There are several criteria for defining the impaction pattern of mandibular third molars, and depending on the impaction pattern, various complications may arise after extraction [24,25,26,27]. Several studies report that inferior alveolar nerve (IAN) injury is the most common complication after removing the mandibular third molar [28,29,30]. An IAN injury can cause paresthesia in the mandible, and because the reported results on the efficacy of proposed surgical therapeutic approaches are contradictory, predicting potential complications in advance is of paramount importance [31]. The likelihood of injury can be predicted from the relationship between the mandibular third molar and the IAN; the likelihood of IAN injury is high when the mandibular third molar and the IAN are in contact [32,33].
Several previous studies have applied deep learning models to research related to the third molar. Vinayahalingam et al. proposed a method to segment third molars and nerves using U-Net [34]. Yoo et al. proposed a method to predict the extraction difficulty of the mandibular third molar using a convolutional neural network (CNN) [35]. Utilizing panoramic radiographic images, we propose a model that uses a deep neural network to predict the extraction difficulty of mandibular third molars and the likelihood of IAN injury. First, we use a detection model to find mandibular third molars in panoramic radiographic images. Second, the area around each detected mandibular third molar is cropped to a specific size as a region of interest (ROI). Finally, we use the ROI as input to the classification models to predict the extraction difficulty and the likelihood of IAN injury (Figure 1); a sketch of this pipeline follows below.
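The three-stage pipeline can be summarized as follows. This is a minimal sketch, not the authors' implementation; all function names are hypothetical placeholders for the models described in Section 2.

```python
# Minimal sketch of the proposed three-stage pipeline. All function names
# are hypothetical placeholders; the actual models (a RetinaNet detector and
# two R50+ViT-L/32 classifiers) are described in Section 2.

def predict_third_molars(panoramic_image):
    results = []
    # Stage 1: detect every mandibular third molar in the panoramic image.
    for box in detect_third_molars(panoramic_image):
        # Stage 2: crop a fixed-size region of interest (ROI) around the
        # detected molar, including the adjacent teeth and the IAN.
        roi = crop_roi(panoramic_image, box, size=(700, 700))
        # Stage 3: two separate classifiers operate on the same ROI.
        difficulty = classify_extraction_difficulty(roi)  # VE / STI / PBI / CBI
        ian_risk = classify_ian_injury_likelihood(roi)    # N.1 / N.2 / N.3
        results.append((box, difficulty, ian_risk))
    return results
```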
The automated prediction of extraction difficulty and IAN injury for mandibular third molars reduces the labor burden on dentists and shortens diagnosis time. Patients can readily understand their diagnosis and recognize the risk of IAN injury before complications occur. The main contributions of this study are summarized as follows:
  • For the first time, we propose a method that can predict both the extraction difficulty of mandibular third molars and the likelihood of IAN injury following extraction.
  • The proposed deep neural network provides consistent performance because it was trained on, to our knowledge, the largest dental panoramic radiographic image dataset.
  • We achieved high classification performance: 83.5% accuracy and 92.79% AUROC for extraction difficulty, and 81.1% accuracy and 90.02% AUROC for the likelihood of IAN injury.

2. Materials and Methods

2.1. Dataset

This study was approved by the Institutional Review Board (IRB) of Chosun University Dental Hospital (CUDHIRB 2005008) and the Gwangju Institute of Science and Technology (20210217-HR-59-01-02). A total of 5397 panoramic radiographic images were collected from patients treated for the extraction of mandibular third molars at Chosun University Dental Hospital. The images range in size from 2000 × 1000 to 3000 × 1500 pixels and have varied brightness. The final dataset consists of 4903 panoramic radiographic images containing 8720 mandibular third molars. These 8720 molar instances are split into train:val:test = 6073:896:1751.

2.2. Ground Truth

Ground truth labels are required to train the deep learning models. For this study, ground truths were created for detecting mandibular third molars and for predicting extraction difficulty and IAN injury. Seven dentists used polygons to annotate the mandibular third molars in the panoramic radiographic images. In addition, the corresponding impaction pattern and likelihood of IAN injury for each mandibular third molar were annotated for the extraction difficulty and IAN injury classification models. The annotations were finalized after review and consensus.

2.2.1. Mandibular Third Molar Detection

Mandibular third molar detection is the classification and localization of mandibular third molars in the panoramic radiographic image. The detection model requires labels containing each mandibular third molar's bounding box and class information. All mandibular third molars in each panoramic radiographic image were annotated.

2.2.2. Extraction Difficulty Classification

The extraction difficulty of mandibular third molars was determined to train the extraction difficulty classification model. The impaction pattern of the mandibular third molar is determined based on the Pell and Gregory classification and Winter's classification. The extraction difficulty was divided into four categories according to the surgical method required for extraction.
  • Vertical eruption: Simple extraction without gum incision or bone fracture;
  • Soft tissue impaction: Extraction after a gum incision;
  • Partial bony impaction: Tooth segmentation is required for extraction;
  • Complete bony impaction: Where more than two-thirds of the crown is impacted, this requires tooth segmentation and bone fracture.
The surgical extraction method is determined by the combination of the impaction patterns of the mandibular third molar as determined by both the Pell and Gregory and Winter's classification criteria [28]. Based on the Pell and Gregory classification, impacted mandibular third molars are classified as Classes A, B, and C according to the depth of the impacted tooth, and as Classes I, II, and III according to the distance between the mandibular third molar and the ascending mandibular ramus. Figure 2 shows an example of each class.
Depth of the impacted mandibular third molar:
  • Class A: The highest point of the impacted mandibular third molar's occlusal surface is at the same height as the occlusal surface of the adjacent tooth.
  • Class B: The highest point of the impacted mandibular third molar's occlusal surface is between the occlusal surface of the adjacent tooth and the cervical line.
  • Class C: The highest point of the impacted mandibular third molar is below the cervical line of the adjacent tooth.
Distance between the mandibular third molar and ascending mandibular ramus:
  • Class I: The distance between the distal surface of the mandibular second molar and the anterior border of the ascending mandibular ramus is wider than the width of the impacted mandibular third molar's occlusal surface.
  • Class II: The distance between the distal surface of the mandibular second molar and the anterior border of the ascending mandibular ramus is narrower than the width of the impacted mandibular third molar's occlusal surface but wider than half of it.
  • Class III: The distance between the distal surface of the mandibular second molar and the anterior border of the ascending mandibular ramus is narrower than half the width of the impacted mandibular third molar's occlusal surface.
The impaction patterns according to the depth of the impacted third molar and the distance between the third molar and the ascending mandibular ramus follow these criteria [36,37,38]. Based on Winter's classification, there are six types according to the angle of the impacted mandibular third molar: vertical (−10° to 10°), mesioangular (11° to 79°), horizontal (80° to 100°), distoangular (−11° to −79°), transverse (buccolingual), and inverted (greater than 111° or less than −80°). Figure 3 shows these six types. Various studies have determined extraction difficulty through a combination of impaction types [39,40,41,42]. Likewise, we determined the extraction difficulty of the mandibular third molar by combining the impaction patterns from the Pell and Gregory and Winter's classifications. As shown in Figure 4, each combination of impaction patterns maps to one of four difficulties: vertical eruption (VE), soft tissue impaction (STI), partial bony impaction (PBI), or complete bony impaction (CBI).
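As an illustration, the stated angle ranges can be encoded directly. This is a minimal sketch based only on the thresholds given above; the transverse (buccolingual) type cannot be determined from this angle alone, and the full mapping from impaction-pattern combinations to extraction difficulty is the lookup given in Figure 4, not this function.

```python
def winter_type(angle_deg: float) -> str:
    """Map an impaction angle in degrees to a Winter's classification type.

    A sketch using only the ranges stated in the text; the transverse
    (buccolingual) type depends on buccolingual orientation, not this angle.
    """
    if -10 <= angle_deg <= 10:
        return "vertical"
    if 11 <= angle_deg <= 79:
        return "mesioangular"
    if 80 <= angle_deg <= 100:
        return "horizontal"
    if -79 <= angle_deg <= -11:
        return "distoangular"
    if angle_deg >= 111 or angle_deg <= -80:
        return "inverted"
    return "unclassified"  # 101-110 degrees is not covered by the stated ranges
```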

2.2.3. Classification of Likelihood of IAN Injury

An IAN injury is a complication that can occur after the extraction of a mandibular third molar. On panoramic radiographic images, various factors can predict IAN injury prior to extraction. The most important factor is interruption of the IAN canal, in which the white line of the canal is obscured in the panoramic radiographic image. The likelihood of IAN injury is classified into three levels according to the degree of interruption of the IAN canal. Figure 5 provides an example of each class.
  • N.1 (low): The mandibular third molar does not reach the IAN canal in the panoramic radiographic image.
  • N.2 (medium): The mandibular third molar interrupts one line of the IAN canal in the panoramic radiographic image.
  • N.3 (high): The mandibular third molar interrupts two lines of the IAN canal in the panoramic radiographic image.

2.3. Mandibular Third Molar Detection Model

This model extracts the region of interest (ROI) from a panoramic radiographic image; the ROI is used as input for the classification of extraction difficulty and the likelihood of IAN injury. Preprocessing was used to improve the detection model's training efficiency and performance. First, the large panoramic radiographic images were resized to 1056 × 512 pixels and used as input to the model. Second, contrast-limited adaptive histogram equalization (CLAHE) was applied to the panoramic radiographic images to improve image contrast [43]. CLAHE makes it easier for the deep learning model to extract features.
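A minimal preprocessing sketch using OpenCV is shown below. The 1056 × 512 target resolution comes from the text; the CLAHE clip limit and tile grid size are illustrative assumptions, as the paper does not specify them.

```python
import cv2

def preprocess_panoramic(path: str):
    """Resize a panoramic radiograph to 1056 x 512 and apply CLAHE.

    clipLimit and tileGridSize are assumed values for illustration; the
    paper specifies only the 1056 x 512 input resolution.
    """
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image = cv2.resize(image, (1056, 512))  # cv2 takes (width, height)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(image)
```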
RetinaNet was used for mandibular third molar detection [44]. The detection model's backbone was a ResNet-152 pre-trained on the ImageNet dataset [45,46]. The mandibular third molar's bounding box was obtained from the detection model, and a 700 × 700 ROI was cropped from the panoramic radiographic image so as to include the mandibular third molar, the teeth adjacent to it, and the IAN. According to the position of the bounding box, each generated ROI is assigned a tooth number (#38 or #48). These ROIs are used as inputs to the classification models for predicting the extraction difficulty and the likelihood of IAN injury. The network was trained using the Adam optimizer [47] with a learning rate of 1 × 10⁻⁴, a batch size of 2, and the focal loss. The model was trained for 250 epochs, and each epoch was evaluated.
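The ROI generation step might look like the sketch below, under two assumptions not detailed in the paper: the 700 × 700 crop is taken from the original-resolution image centered on the detected box (scaled back from the 1056 × 512 detector input), and the FDI tooth number is assigned by which half of the image the box falls in.

```python
import numpy as np

def extract_roi(image: np.ndarray, box, size: int = 700):
    """Crop a size x size ROI centered on a detected third-molar box.

    `box` is (x1, y1, x2, y2) in original image coordinates. Assigning the
    tooth number by image half is an assumption for illustration; the paper
    states only that ROIs are set to tooth #38 or #48 by bounding box.
    """
    h, w = image.shape[:2]
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    # Clamp the crop so it stays inside the image.
    x1 = int(min(max(cx - size / 2, 0), w - size))
    y1 = int(min(max(cy - size / 2, 0), h - size))
    roi = image[y1:y1 + size, x1:x1 + size]
    # The patient's left side (#38) appears on the right of a panoramic image.
    tooth_number = 38 if cx > w / 2 else 48
    return roi, tooth_number
```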

2.4. Extraction Difficulty and Likelihood of IAN Injury Classification Model

Using the ROI image as input, the classification models predict the extraction difficulty and the likelihood of IAN injury. To prevent overfitting and improve performance, data augmentations such as horizontal flips, rotations, and left–right and up–down shifts were used. We used a Vision Transformer, a Transformer-based image classification model that outperforms the CNNs widely used in existing image classification tasks [48]. Among the various Vision Transformer models, we used R50+ViT-L/32, a hybrid Vision Transformer that combines a ResNet-50 with ViT-Large. The input image resolution is 384 × 384 because the Vision Transformer benefits from higher resolutions. The classification models were pre-trained on the ImageNet dataset and trained using the Adam optimizer with a learning rate of 1 × 10⁻⁴, a batch size of 8, and the cross-entropy loss function. Each model was trained for 250 epochs, and each epoch was evaluated. The best-performing model was selected using the validation set, and the final results were calculated using the test set.
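A minimal PyTorch training sketch under stated assumptions is shown below: the timm identifier `vit_large_r50_s32_384` corresponds to the R50+ViT-L/32 hybrid at 384 × 384 input, the augmentation magnitudes are illustrative, and the grayscale ROIs are assumed to be converted to 3-channel images to match the pretrained weights.

```python
import torch
import timm
from torchvision import transforms

# Augmentations mirroring those described above; the magnitudes are assumed.
train_tf = transforms.Compose([
    transforms.Resize((384, 384)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),  # rotation + shifts
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # expects 3-channel input
])

# R50+ViT-L/32 hybrid, ImageNet-pretrained; num_classes=4 for VE/STI/PBI/CBI
# (use num_classes=3 for the N.1/N.2/N.3 IAN-injury model).
model = timm.create_model("vit_large_r50_s32_384", pretrained=True, num_classes=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_one_epoch(loader, device="cuda"):
    model.to(device).train()
    for images, labels in loader:  # batch size 8, as in the paper
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()
```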

2.5. Metrics

Object detection performance is generally evaluated with mean average precision (mAP). mAP [0.5] counts a prediction as correct when its IoU with the ground truth exceeds 0.5; mAP [0.75] counts a prediction as correct when the IoU exceeds 0.75; and mAP [0.5:0.95] averages the performance over IoU thresholds from 0.5 to 0.95 in steps of 0.05. We used accuracy, F1-score, and AUROC as criteria to assess the classification models' performance. Accuracy is the number of correctly predicted samples divided by the total number of samples. The F1-score is calculated from precision and recall as F1-score = 2 × (Precision × Recall) / (Precision + Recall). The area under the receiver operating characteristic curve (AUROC) measures the overall performance across all possible classification thresholds.
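The classification metrics can be computed with scikit-learn as in the sketch below, where `y_true` holds the ground-truth class indices and `y_prob` the per-class softmax probabilities.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def classification_metrics(y_true, y_prob):
    """Accuracy, macro-averaged F1, and one-vs-rest macro AUROC.

    y_true: (N,) integer class labels; y_prob: (N, C) softmax probabilities.
    """
    y_pred = np.argmax(y_prob, axis=1)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "auroc": roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"),
    }
```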

3. Results

3.1. Mandibular Third Molar Detection

The detection performance for the mandibular third molar in the panoramic radiographic image is reported as mAP [0.5], mAP [0.75], and mAP [0.5:0.95]. According to Table 1, the detection model achieved 99.0% for mAP [0.5], 97.7% for mAP [0.75], and 85.3% for mAP [0.5:0.95], showing high results across all performance indices. Figure 6 shows the results of extracting the ROI, an area that includes the mandibular third molar, the adjacent teeth, and the IAN, from the panoramic radiographic image through the detection model.

3.2. Extraction Difficulty Classification

The classification performance for the extraction difficulty of the mandibular third molar was evaluated using the following criteria: accuracy, the macro average of the F1-score (F1-score), and the area under the receiver operating characteristic curve (AUROC). Table 2 summarizes each model's classification performance for each criterion. We conducted experiments comparing the CNNs used in previous dental studies to the Vision Transformer (R50+ViT-L/32) [15,35]. In these experiments, the Vision Transformer (R50+ViT-L/32) showed the highest classification performance across all criteria, with an accuracy of 83.5%, an F1-score of 66.35%, and an AUROC of 92.79%. As shown in Figure 7a, about 20% of PBI cases were misclassified as CBI, and about 9.7% of CBI cases were misclassified as PBI. Figure 7b shows the classification performance for extraction difficulty using the ROC curve and AUC scores. We achieved high AUC scores of 91.0–98.0% for every class.

3.3. Classification of Likelihood of IAN Injury

Table 3 shows the evaluation results for the likelihood of IAN injury using the criteria of accuracy, F1-score, and AUROC. As shown in Table 3, the Vision Transformer (R50+ViT-L/32) had the highest classification performance. Figure 8a depicts the confusion matrix for the classification of the likelihood of IAN injury using the Vision Transformer (R50+ViT-L/32). According to the confusion matrix, about 27% of N.1 cases were misclassified as N.2, and about 25% of N.3 cases were misclassified as N.2. Figure 8b shows the classification performance for the likelihood of IAN injury using the ROC curve and AUC scores. We achieved high AUC scores of 86.0–94.0% for every class. These results indicate that the extraction difficulty of the mandibular third molar and the likelihood of IAN injury can be predicted with high performance from panoramic radiographic images using a deep neural network.

4. Discussion

Before third molar extraction, the clinician should plan a surgical method for the patient and consider potential complications such as IAN injury. Predicting the extraction difficulty and the likelihood of IAN injury in advance is essential for planning a surgical method that minimizes IAN injury, a complication that may follow extractions requiring gum incision, tooth segmentation, or bone fracture, depending on the extraction difficulty. As the population grows, the number of patients who need third molar extraction continues to increase. As a result, dentists' fatigue and burden from diagnosing and treating patients may increase, potentially decreasing diagnostic accuracy. Therefore, the demand for automatic diagnosis and prediction is increasing in oral and maxillofacial surgery, as in various other medical fields. Many studies have reported methods, implemented with deep neural networks, that can assist in clinical practice by reducing workload and fatigue and by providing guidance to inexperienced dentists. Deep neural networks perform classification, detection, and segmentation with good performance in fields such as industry, medicine, and national defense. They can therefore likewise improve the efficiency and accuracy of diagnosis for third molar extraction.
Various studies in dentistry have applied deep neural networks. Kim et al. estimated a person's age using a CNN whose input was a manually cropped image of the first molar in the panoramic radiographic image [15]. Vinayahalingam et al. used a CNN to classify third molar caries from a cropped image of the third molar in a panoramic radiographic image [49]. Moreover, there have been studies on third molar extraction using deep neural networks. Vinayahalingam et al. proposed a deep neural network that detects mandibular third molars and the IAN in panoramic radiographic images, noting that IAN injury can occur during extraction depending on the positional relationship between the mandibular third molar and the IAN [34]. Yoo et al. detected mandibular third molars in panoramic radiographic images using SSD, an object detection model, and predicted extraction difficulty based on the Pederson difficulty score (PDS), using the cropped image of the detected mandibular third molar as input [35]. They also noted the importance of the positional and distance relationship between the IAN and the mandibular third molar for extraction difficulty. Several previous studies have emphasized the significance of IAN injury [28,29,30]. However, previous studies are limited in that they do not cover the overall process of predicting both extraction difficulty and complications [35,50]. Thus, we focused on the need for a technique that predicts the entire process from diagnosis to prognosis by predicting both the extraction difficulty and possible complications.
Medical data originate from patients; therefore, such data cannot be generated arbitrarily, and limitations exist in class distribution and quantity. Our dataset has a class imbalance that detrimentally affects classification performance. Thus, we used an imbalanced dataset sampler that balances the classes by resampling, increasing the proportion of minor classes and decreasing the proportion of major classes; applying it improved performance. We used the Vision Transformer, one of the best-performing classification models, and confirmed its performance through comparative experiments with CNNs. Our dataset contains many panoramic radiographic images created using various CBCT techniques, and our deep neural networks were trained and validated on this dataset. Therefore, we expect consistent performance across varied panoramic radiographic images.
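The resampling step could be realized with PyTorch's WeightedRandomSampler, as sketched below. Inverse-frequency per-sample weights are one common choice; the exact sampler configuration used by the authors is not specified.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(dataset, labels, batch_size=8):
    """Resample so that each class is drawn with roughly equal probability.

    Inverse-frequency weighting is an assumed configuration; the paper says
    only that an imbalanced dataset sampler was used.
    """
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels)
    weights = 1.0 / class_counts[labels].float()  # per-sample weight
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```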
Although the proposed method predicts the extraction difficulty and the likelihood of IAN injury automatically, its performance is limited by the dataset. A classification model performs better when it is trained on a large amount of balanced data; however, in our dataset the number of samples differs by more than 50 times between the largest and smallest classes. We therefore anticipate performance increases when a larger, more balanced dataset is used.
Extraction of the maxillary third molar is also a common procedure in oral and maxillofacial surgery. Several studies have reported classification criteria for the extraction difficulty of the maxillary third molar and possible complications such as sinus perforation [51,52,53]. Predicting the extraction difficulty of the maxillary third molar and its possible complications is necessary future work.

5. Conclusions

This study demonstrates that a deep neural network can predict both the extraction difficulty of mandibular third molars and the likelihood of IAN injury following extraction from panoramic radiographic images. Our model was trained and validated on a large amount of data; therefore, it provides consistent performance on panoramic radiographic images from various patients. We achieved good results on performance criteria such as accuracy, F1-score, and AUROC. These results suggest that automatic prediction via deep learning can assist clinicians and reduce diagnostic time and effort.

Author Contributions

Conceptualization, J.L., S.Y.M. and K.L.; data, S.Y.M.; methodology, J.L. and K.L.; validation, J.L. and J.P.; formal analysis, J.L., J.P., S.Y.M. and K.L.; investigation, J.L., J.P., S.Y.M. and K.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L., J.P., S.Y.M. and K.L.; supervision, K.L.; project administration, K.L.; funding acquisition, S.Y.M. and K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00857, Development of cloud robot intelligence augmentation, sharing, and framework technology to integrate and enhance the intelligence of multiple robots).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Boards of Chosun University Dental Hospital (CUDHIRB 2005008) and the Gwangju Institute of Science and Technology (20210217-HR-59-01-02).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request, subject to the permission of the Institutional Review Boards of the participating institutions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shin, S.; Lee, Y.; Kim, S.; Choi, S.; Kim, J.G.; Lee, K. Rapid and non-destructive spectroscopic method for classifying beef freshness using a deep spectral network fused with myoglobin information. Food Chem. 2021, 352, 129329.
  2. Maqueda, A.I.; Loquercio, A.; Gallego, G.; García, N.; Scaramuzza, D. Event-based vision meets deep learning on steering prediction for self-driving cars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5419–5427.
  3. Mahler, J.; Matl, M.; Liu, X.; Li, A.; Gealy, D.; Goldberg, K. Dex-net 3.0: Computing robust vacuum suction grasp targets in point clouds using a new analytic model and deep learning. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 5620–5627.
  4. Back, S.; Lee, S.; Shin, S.; Yu, Y.; Yuk, T.; Jong, S.; Ryu, S.; Lee, K. Robust Skin Disease Classification by Distilling Deep Neural Network Ensemble for the Mobile Diagnosis of Herpes Zoster. IEEE Access 2021, 9, 20156–20169.
  5. Poplin, R.; Varadarajan, A.V.; Blumer, K.; Liu, Y.; McConnell, M.V.; Corrado, G.S.; Peng, L.; Webster, D.R. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2018, 2, 158–164.
  6. Kanavati, F.; Toyokawa, G.; Momosaki, S.; Rambeau, M.; Kozuma, Y.; Shoji, F.; Yamazaki, K.; Takeo, S.; Iizuka, O.; Tsuneki, M. Weakly-supervised learning for lung carcinoma classification using deep learning. Sci. Rep. 2020, 10, 9297.
  7. Vaidyanathan, A.; van der Lubbe, M.F.; Leijenaar, R.T.; van Hoof, M.; Zerka, F.; Miraglio, B.; Primakov, S.; Postma, A.A.; Bruintjes, T.D.; Bilderbeek, M.A.; et al. Deep learning for the fully automated segmentation of the inner ear on MRI. Sci. Rep. 2021, 11, 2885.
  8. Tang, H.; Chen, X.; Liu, Y.; Lu, Z.; You, J.; Yang, M.; Yao, S.; Zhao, G.; Xu, Y.; Chen, T.; et al. Clinically applicable deep learning framework for organs at risk delineation in CT images. Nat. Mach. Intell. 2019, 1, 480–491.
  9. Seo, H.; Back, S.; Lee, S.; Park, D.; Kim, T.; Lee, K. Intra- and inter-epoch temporal context network (IITNet) using sub-epoch features for automatic sleep scoring on raw single-channel EEG. Biomed. Signal Process. Control 2020, 61, 102037.
  10. Kühnisch, J.; Meyer, O.; Hesenius, M.; Hickel, R.; Gruhn, V. Caries Detection on Intraoral Images Using Artificial Intelligence. J. Dent. Res. 2021, 00220345211032524.
  11. Wang, H.; Minnema, J.; Batenburg, K.; Forouzanfar, T.; Hu, F.; Wu, G. Multiclass CBCT Image Segmentation for Orthodontics with Deep Learning. J. Dent. Res. 2021, 100, 943–949.
  12. Casalegno, F.; Newton, T.; Daher, R.; Abdelaziz, M.; Lodi-Rizzini, A.; Schürmann, F.; Krejci, I.; Markram, H. Caries detection with near-infrared transillumination using deep learning. J. Dent. Res. 2019, 98, 1227–1233.
  13. Majanga, V.; Viriri, S. Automatic blob detection for dental caries. Appl. Sci. 2021, 11, 9232.
  14. Lee, J.H.; Kim, D.H.; Jeong, S.N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020, 26, 152–158.
  15. Kim, S.; Lee, Y.H.; Noh, Y.K.; Park, F.C.; Auh, Q.S. Age-group determination of living individuals using first molar images based on artificial intelligence. Sci. Rep. 2021, 11, 1073.
  16. Kruger, E.; Thomson, W.M.; Konthasinghe, P. Third molar outcomes from age 18 to 26: Findings from a population-based New Zealand longitudinal study. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endodontol. 2001, 92, 150–155.
  17. Alfadil, L.; Almajed, E. Prevalence of impacted third molars and the reason for extraction in Saudi Arabia. Saudi Dent. J. 2020, 32, 262–268.
  18. Yilmaz, S.; Adisen, M.Z.; Misirlioglu, M.; Yorubulut, S. Assessment of third molar impaction pattern and associated clinical symptoms in a central Anatolian Turkish population. Med. Princ. Pract. 2016, 25, 169–175.
  19. Kumar, V.R.; Yadav, P.; Kahsu, E.; Girkar, F.; Chakraborty, R. Prevalence and Pattern of Mandibular Third Molar Impaction in Eritrean Population: A Retrospective Study. J. Contemp. Dent. Pract. 2017, 18, 100–106.
  20. Obiechina, A.; Arotiba, J.; Fasola, A. Third molar impaction: Evaluation of the symptoms and pattern of impaction of mandibular third molar teeth in Nigerians. Trop. Dent. J. 2001, 22–25.
  21. Eshghpour, M.; Nezadi, A.; Moradi, A.; Shamsabadi, R.M.; Rezaer, N.; Nejat, A. Pattern of mandibular third molar impaction: A cross-sectional study in northeast of Iran. Niger. J. Clin. Pract. 2014, 17, 673–677.
  22. Quek, S.; Tay, C.; Tay, K.; Toh, S.; Lim, K. Pattern of third molar impaction in a Singapore Chinese population: A retrospective radiographic survey. Int. J. Oral Maxillofac. Surg. 2003, 32, 548–552.
  23. Jaroń, A.; Trybek, G. The Pattern of Mandibular Third Molar Impaction and Assessment of Surgery Difficulty: A Retrospective Study of Radiographs in East Baltic Population. Int. J. Environ. Res. Public Health 2021, 18, 6016.
  24. Osborn, T.P.; Frederickson, G., Jr.; Small, I.A.; Torgerson, T.S. A prospective study of complications related to mandibular third molar surgery. J. Oral Maxillofac. Surg. 1985, 43, 767–769.
  25. Blondeau, F.; Daniel, N.G. Extraction of impacted mandibular third molars: Postoperative complications and their risk factors. J. Can. Dent. Assoc. 2007, 73, 325.
  26. Van Gool, A.; Ten Bosch, J.; Boering, G. Clinical consequences of complaints and complications after removal of the mandibular third molar. Int. J. Oral Surg. 1977, 6, 29–37.
  27. Benediktsdóttir, I.S.; Wenzel, A.; Petersen, J.K.; Hintze, H. Mandibular third molar removal: Risk indicators for extended operation time, postoperative pain, and complications. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endodontol. 2004, 97, 438–446.
  28. Kim, H.J.; Jo, Y.J.; Choi, J.S.; Kim, H.J.; Kim, J.; Moon, S.Y. Anatomical Risk Factors of Inferior Alveolar Nerve Injury Association with Surgical Extraction of Mandibular Third Molar in Korean Population. Appl. Sci. 2021, 11, 816.
  29. Valmaseda-Castellón, E.; Berini-Aytés, L.; Gay-Escoda, C. Inferior alveolar nerve damage after lower third molar surgical extraction: A prospective study of 1117 surgical extractions. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endodontol. 2001, 92, 377–383.
  30. Smith, A.C.; Barry, S.E.; Chiong, A.Y.; Hadzakis, D.; Kha, S.L.; Mok, S.C.; Sable, D.L. Inferior alveolar nerve damage following removal of mandibular third molar teeth. A prospective study using panoramic radiography. Aust. Dent. J. 1997, 42, 149–152.
  31. Roccuzzo, A.; Molinero-Mourelle, P.; Ferrillo, M.; Cobo-Vázquez, C.; Sanchez-Labrador, L.; Ammendolia, A.; Migliario, M.; de Sire, A. Type I Collagen-Based Devices to Treat Nerve Injuries after Oral Surgery Procedures. A Systematic Review. Appl. Sci. 2021, 11, 3927.
  32. Miloro, M.; DaBell, J. Radiographic proximity of the mandibular third molar to the inferior alveolar canal. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endodontol. 2005, 100, 545–549.
  33. Kang, F.; Sah, M.; Fei, G. Determining the risk relationship associated with inferior alveolar nerve injury following removal of mandibular third molar teeth: A systematic review. J. Stomatol. Oral Maxillofac. Surg. 2020, 121, 63–69.
  34. Vinayahalingam, S.; Xi, T.; Bergé, S.; Maal, T.; de Jong, G. Automated detection of third molars and mandibular nerve by deep learning. Sci. Rep. 2019, 9, 1–7.
  35. Yoo, J.H.; Yeom, H.G.; Shin, W.; Yun, J.P.; Lee, J.H.; Jeong, S.H.; Lim, H.J.; Lee, J.; Kim, B.C. Deep learning based prediction of extraction difficulty for mandibular third molars. Sci. Rep. 2021, 11, 1954.
  36. Nakagawa, Y.; Ishii, H.; Nomura, Y.; Watanabe, N.Y.; Hoshiba, D.; Kobayashi, K.; Ishibashi, K. Third molar position: Reliability of panoramic radiography. J. Oral Maxillofac. Surg. 2007, 65, 1303–1308.
  37. Barone, R.; Clauser, C.; Testori, T.; Del Fabbro, M. Self-assessed neurological disturbances after surgical removal of impacted lower third molar: A pragmatic prospective study on 423 surgical extractions in 247 consecutive patients. Clin. Oral Investig. 2019, 23, 3257–3265.
  38. Rood, J.; Shehab, B.N. The radiological prediction of inferior alveolar nerve injury during third molar surgery. Br. J. Oral Maxillofac. Surg. 1990, 28, 20–25.
  39. Yuasa, H.; Kawai, T.; Sugiura, M. Classification of surgical difficulty in extracting impacted third molars. Br. J. Oral Maxillofac. Surg. 2002, 40, 26–31.
  40. Kim, J.Y.; Yong, H.S.; Park, K.H.; Huh, J.K. Modified difficult index adding extremely difficult for fully impacted mandibular third molar extraction. J. Korean Assoc. Oral Maxillofac. Surg. 2019, 45, 309–315.
  41. Lima, C.J.; Silva, L.C.; Melo, M.R.; Santos, J.A.; Santos, T.S. Evaluation of the agreement by examiners according to classifications of third molars. Med. Oral Patol. Oral y Cir. Bucal 2012, 17, e281.
  42. Iizuka, T.; Tanner, S.; Berthold, H. Mandibular fractures following third molar extraction: A retrospective clinical and radiological study. Int. J. Oral Maxillofac. Surg. 1997, 26, 338–343.
  43. Reza, A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004, 38, 35–44.
  44. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
  45. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  46. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
  47. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
  48. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
  49. Vinayahalingam, S.; Kempers, S.; Limon, L.; Deibel, D.; Maal, T.; Hanisch, M.; Bergé, S.; Xi, T. Classification of caries in third molars on panoramic radiographs using deep learning. Sci. Rep. 2021, 11, 12609.
  50. Kim, B.S.; Yeom, H.G.; Lee, J.H.; Shin, W.S.; Yun, J.P.; Jeong, S.H.; Kang, J.H.; Kim, S.W.; Kim, B.C. Deep Learning-Based Prediction of Paresthesia after Third Molar Extraction: A Preliminary Study. Diagnostics 2021, 11, 1572.
  51. Demirtas, O.; Harorli, A. Evaluation of the maxillary third molar position and its relationship with the maxillary sinus: A CBCT study. Oral Radiol. 2016, 32, 173–179.
  52. Lim, A.A.T.; Wong, C.W.; Allen, J.C., Jr. Maxillary third molar: Patterns of impaction and their relation to oroantral perforation. J. Oral Maxillofac. Surg. 2012, 70, 1035–1039.
  53. Hasegawa, T.; Tachibana, A.; Takeda, D.; Iwata, E.; Arimoto, S.; Sakakibara, A.; Akashi, M.; Komori, T. Risk factors associated with oroantral perforation during surgical removal of maxillary third molar teeth. Oral Maxillofac. Surg. 2016, 20, 369–375.
Figure 1. Overall workflow.
Figure 2. Pell and Gregory classification. The red teeth are the mandibular third molars. The first row shows the depth of the impacted mandibular third molar (Classes A, B, and C from the left). The second row shows the distance between the mandibular third molar and the ascending mandibular ramus (Classes I, II, and III from the left).
Figure 3. Winter's classification. The red teeth are the mandibular third molars. The figure shows the six types of impacted mandibular third molar according to Winter's classification: vertical, mesioangular, horizontal, distoangular, transverse, and inverted.
Figure 4. Classification table of extraction difficulty according to the combination of impaction patterns. The gray columns represent the combination of impaction patterns, and the white column represents the extraction difficulty. The gray columns combine the impaction depth (Classes A, B, and C), the distance between the third molar and the ascending mandibular ramus (Classes I, II, and III), and the angle of the impacted mandibular third molar (vertical, mesioangular, horizontal, distoangular, transverse, and inverted).
Figure 5. Relationship between the inferior alveolar nerve and the mandibular third molar.
Figure 6. Entire prediction process including the detection of the mandibular third molar, classification of extraction difficulty, and likelihood of IAN injury.
Figure 7. (a) The confusion matrix for the extraction difficulty classification; (b) the ROC curve for the extraction difficulty classification.
Figure 8. (a) The confusion matrix for the likelihood of IAN injury classification; (b) the ROC curve for the likelihood of IAN injury classification.
Table 1. Mandibular third molar detection result.

Model | AP[0.5] | AP[0.75] | AP[0.5:0.95]
RetinaNet-152 | 99.0% | 97.7% | 85.3%
Table 2. Extraction difficulty classification result.

Model | Accuracy (%) | F1-Score (%) | AUROC (%)
ResNet-34 | 80.07 | 63.28 | 91.43
ResNet-152 | 82.18 | 63.23 | 91.45
R50+ViT-L/32 | 83.5 | 66.35 | 92.79
Table 3. Likelihood of IAN injury classification result.

Model | Accuracy (%) | F1-Score (%) | AUROC (%)
ResNet-34 | 77.27 | 70.99 | 86.02
ResNet-152 | 80.07 | 72.62 | 88.19
R50+ViT-L/32 | 81.1 | 75.55 | 90.02
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
