Article

Evaluation of the Alveolar Crest and Cemento-Enamel Junction in Periodontitis Using Object Detection on Periapical Radiographs

1 Department of Periodontics, Division of Dentistry, Taoyuan Chang Gung Memorial Hospital, Taoyuan City 333423, Taiwan
2 Department of Operative Dentistry, Taoyuan Chang Gung Memorial Hospital, Taoyuan City 333423, Taiwan
3 Program on Semiconductor Manufacturing Technology, Academy of Innovative Semiconductor and Sustainable Manufacturing, National Cheng Kung University, Tainan City 701401, Taiwan
4 Department of Electronic Engineering, Chung Yuan Christian University, Taoyuan City 320234, Taiwan
5 Department of Electronic Engineering, Feng Chia University, Taichung City 407301, Taiwan
6 Department of Electrical Engineering, Ming Chi University of Technology, New Taipei City 243303, Taiwan
7 Department of Information Management, Chung Yuan Christian University, Taoyuan City 320317, Taiwan
8 Ateneo Laboratory for Intelligent Visual Environments, Department of Information Systems and Computer Science, Ateneo de Manila University, Quezon City 1108, Philippines
* Authors to whom correspondence should be addressed.
Diagnostics 2024, 14(15), 1687; https://doi.org/10.3390/diagnostics14151687
Submission received: 2 July 2024 / Revised: 30 July 2024 / Accepted: 2 August 2024 / Published: 4 August 2024
(This article belongs to the Special Issue Artificial Intelligence in the Diagnostics of Dental Disease)

Abstract
The severity of periodontitis can be analyzed by measuring the alveolar crest (ALC) level and the bone loss between the alveolar bone and the cemento-enamel junction (CEJ). However, dentists need to manually mark these landmarks on periapical radiographs (PAs) to assess bone loss, a process that is both time-consuming and prone to error. This study proposes a new method that supports disease evaluation and reduces such errors. First, innovative periodontitis image enhancement methods are employed to improve PA image quality. Next, single teeth are accurately extracted from PA images by object detection with a maximum accuracy of 97.01%. An instance segmentation model developed in this study then extracts the regions of interest, generating masks for the tooth and bone with accuracies of 93.48% and 96.95%, respectively. Finally, a novel detection algorithm is proposed to automatically mark the CEJ and ALC of symptomatic teeth, enabling dentists to assess the severity of bone loss faster and more accurately. The PA image database used in this study was provided by Chang Gung Medical Center, Taiwan (IRB number 02002030B0). The techniques developed in this research significantly reduce the time required for dental diagnosis and enhance healthcare quality.

1. Introduction

Periodontal disease is a common chronic inflammatory condition that often goes unnoticed by patients, causing them to miss the optimal treatment window [1,2,3]. When the alveolar bone is damaged, gingival recession and bone loss can expose the tooth roots or create tooth sensitivity, compromising the stability of the teeth and potentially leading to tooth loss. If not treated promptly, accumulating plaque further damages the alveolar bone [4,5]. Eventually, the atrophy causes periodontal pockets to expand, resulting in loose teeth. Early detection and treatment are therefore crucial to prevent the condition from worsening and to maintain oral health.
Historically, dentists treating alveolar bone damage typically needed to perform alveolar bone surgery, whether to remove the damaged bone or to reshape it to its physiological form. This process required an assessment of alveolar bone loss, primarily relying on the judgment of the extent of erosion. The characteristics of this condition include the gradual destruction of the ALC [6], leading to periodontal pocket formation and gingival recession. The CEJ [7] refers to the anatomical structure where the enamel, which covers the dental crown, meets the cementum that coats the root [8]. It is a critical reference point in clinical dentistry, as it is generally the site where gingival fibers attach to a healthy tooth [9]. One of the primary parameters for evaluating periodontal destruction is the loss of connective tissue attachment to the tooth root surface. Consequently, the CEJ serves as a stable landmark for measuring clinical attachment loss and assessing periodontal damage [10]. Both PAs and bitewing radiographs have served as the standard for CEJ location assessment [11]. To assist dentists in more accurately diagnosing these conditions, this study utilized tooth recognition and segmentation techniques [12,13] to automate the identification of key locations such as the CEJ and the ALC. In current clinical practice, manual periapical detection faces several major challenges. Firstly, due to the low visibility of tooth gaps, the detection process is easily influenced by adjacent teeth, leading to erroneous judgments. Secondly, the varying angles of each tooth’s apex further increase the difficulty of detection, affecting the accuracy and reliability of the assessments. Due to the inconsistent angles of PA images compared to bitewing or panoramic radiographs, it is challenging to annotate these images conveniently [14]. Moreover, the variability in the quality of X-ray images further complicates the identification of the CEJ and ALC, necessitating the preprocessing of PA images [15] to enhance their quality.
As technology advances, artificial intelligence (AI) has been applied increasingly widely across various fields, such as endoscopic examination [16], pancreatic cancer treatment [17], and lung nodule detection [18]. Significant progress has been made in the application of AI in dentistry, surpassing traditional methods that relied solely on visual inspection and professional expertise. Dentists can use image masking techniques to enhance the contrast of X-rays [19] for better determination of furcation involvement [20]. The use of convolutional neural network (CNN) models for dental detection is increasing, for example, for detecting maxillary sinusitis [21], identifying caries and restorations [22,23], accurately detecting implant positions and assessing the damage caused by peri-implantitis [24], and for tooth recognition and boundary regression with dual-supervised network (DSN) models [25]. In addition to these techniques, many studies have proposed improvements based on object detection models for medical image analysis [26], such as FLM-RCNN [27], the Levenberg–Marquardt backpropagation training algorithm [28], YOLO-DENTAL [29], and feedforward neural networks for classifying lesions [30]. Various techniques have also been applied to dental panoramic radiograph (DPR) images for detecting apical lesions [31] and inferior alveolar nerve injuries [32].
As the above shows, AI technology can effectively reduce dentists' workload during consultations. This study therefore uses both YOLO (You Only Look Once) and Mask R-CNN models, combined with PA image preprocessing, to enhance the accuracy of YOLO in detecting individual teeth. The Mask R-CNN model is then applied to the segmented images to identify and extract the CEJ and ALC positions.

2. Materials and Methods

This section describes the materials and methods used in this study and is divided into the following parts.

2.1. Study Design

To assist dentists in more accurately diagnosing these conditions, this study utilized tooth detection and instance segmentation techniques to automatically identify the locations of the CEJ and the ALC. Initially, PA images are preprocessed to enhance image quality, making the contours of teeth and alveolar bone clearer. Then, the YOLOv8 model is used to predict the position of individual teeth and, based on the localization results, the individual teeth are segmented from the PA images. Data augmentation techniques such as rotation are employed to increase the sample size of the database and prevent model overfitting. The segmented tooth images are classified, and the YOLOv8 classification model [33] excludes teeth that cannot be assessed, such as those with partially obscured CEJ levels or with implants and crowns. Subsequently, using the Detectron2 framework with Mask R-CNN, masks for the tooth, bone, and crown are extracted. These masks are used to determine the ALC level and CEJ level. The overall flowchart of this study is illustrated in Figure 1. The contributions of this study are as follows:
  • YOLOv8 achieved a sensitivity of 94.3% in extracting single teeth from original images. With CLAHE enhancement, the sensitivity improved to 97.5%.
  • Mask R-CNN was utilized to extract three types of masks, with the DSC and Jaccard index both exceeding 90%. Additionally, the image augmentation method developed in this study yielded an improvement of 1-3%.
  • A localization algorithm is proposed for the CEJ and ALC; the RMSE is lower than 0.09, and visualization techniques are provided to aid dentists in diagnosis.

2.2. Data Collection

The database used in this study was provided by Chang Gung Memorial Hospital in Taoyuan, Taiwan. It was compiled by five dental practitioners, each with over five years of experience. The study received approval from the Institutional Review Board (IRB: 202301730B0), ensuring that ethical standards were met in the research process. PA imaging is particularly well suited to observing lesions near the alveolar bone: on larger-scale images such as panoramic radiographs, localized lesions are difficult to observe clearly, which makes PA imaging a preferred choice in clinical practice. The smaller field of view of the PA image allows for more detailed local observation, thereby enhancing diagnostic accuracy. Image sizes such as 825 × 1200 and 1200 × 825 were selected because they provide more suitable proportions when sectioning teeth.

2.3. Statistical Analysis

The CEJ is an anatomical landmark on the tooth that separates the crown from the root structure, where the formation of enamel (the outermost layer of the crown) stops and the formation of cementum (the outermost layer of the root) begins [34]. This study references an established technique [35] for lesion marking as the basis for manual annotation. The dentists' manually annotated results are exported to an Excel file containing the key point coordinates marked on the images. These data serve as a baseline for comparison with the automated annotation results. The automated annotation system detects the coordinates of the CEJ and ALC and compares them with the coordinates marked by dentists. To evaluate the accuracy of the automated annotation system, this study calculates the directional offset and further computes the root mean square error (RMSE). The RMSE quantifies the difference between the predicted and actual values, providing an assessment of the accuracy of the automated annotation relative to the manual annotation. This comparison helps characterize the performance of the automated annotation method and its feasibility in practical applications. A minimal sketch of this comparison is shown below.
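The following sketch illustrates how such a comparison could be computed, assuming the predicted and dentist-annotated landmarks are available as normalized (x, y) coordinate arrays; the coordinate values and the function name are illustrative, not taken from the study.

```python
import numpy as np

def rmse(predicted: np.ndarray, annotated: np.ndarray) -> float:
    """Root mean square error between predicted and dentist-annotated points.

    Both arrays have shape (n, 2), holding (x, y) coordinates normalized
    to the image dimensions so the RMSE is scale-independent.
    """
    offsets = predicted - annotated  # per-point directional offset
    return float(np.sqrt(np.mean(np.sum(offsets ** 2, axis=1))))

# Hypothetical example: four landmarks (CEJ left/right, ALC left/right).
pred = np.array([[0.31, 0.42], [0.68, 0.41], [0.30, 0.55], [0.69, 0.54]])
gt   = np.array([[0.30, 0.40], [0.67, 0.43], [0.32, 0.54], [0.70, 0.55]])
print(f"RMSE = {rmse(pred, gt):.4f}")
```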

2.4. Tooth Segmentation

The steps involved in tooth segmentation in this study are illustrated in Figure 2. First, PA images are preprocessed to enhance model detection accuracy and facilitate annotation. These preprocessing steps include image enhancement, contrast adjustment, and noise reduction, ensuring that the contours of the teeth and alveolar bone are clearly visible. Next, the preprocessed PA images are annotated, specifically marking the locations of individual teeth. These annotated image data are used to train the YOLOv8 model, enabling it to accurately detect the positions of individual teeth. Once training is completed, the model is used to predict and automatically segment individual teeth from new images using the segmentation algorithm.

2.4.1. PA Image Preprocessing

The quality of the original image can affect the physician’s diagnosis. To clarify the contours of the teeth, this study preprocesses PA images to improve the detection accuracy of YOLOv8 and provide more precise annotations. First, the original image, Figure 3a, undergoes denoising using a median filter to remove extraneous noise, resulting in Figure 3b. This allows the noise to blend into the surrounding pixels, which is crucial because any remaining noise might be amplified during the subsequent CLAHE processing, thereby degrading image quality. Next, CLAHE is applied for preprocessing. This step enhances the contours of various objects in the image, such as the teeth, alveolar bone, and roots, making them more distinct, as shown in Figure 3c. A notable advantage of using CLAHE is that it enhances contrast based on local regions of the image rather than the global contrast, preventing issues where some parts of the image become too bright while others become too dark. These preprocessing steps are highly beneficial for the model training in this study and significantly improve accuracy.
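A minimal OpenCV sketch of this two-step preprocessing is given below; the median kernel size and CLAHE parameters are assumptions for illustration, as the study does not report the exact values.

```python
import cv2

def preprocess_pa(path: str):
    # Read the periapical radiograph as a single-channel grayscale image.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Median filter first: blends impulse noise into neighboring pixels so
    # it is not amplified by the subsequent local contrast enhancement.
    denoised = cv2.medianBlur(img, 5)
    # CLAHE enhances contrast per local tile rather than globally, avoiding
    # regions that become too bright while others become too dark.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)

enhanced = preprocess_pa("pa_image.png")
cv2.imwrite("pa_image_clahe.png", enhanced)
```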

2.4.2. PA Image Annotation and Dataset Augmentation

The Roboflow annotation tool was used to perform polygonal annotation on the processed teeth, as shown in Figure 4, and the annotation results were exported in TXT format. This step constructs the labeled dataset required for training the model. Data augmentations were employed to enhance training effectiveness, including flipping the images and rotating them 90° clockwise and counterclockwise; the resulting YOLOv8 dataset is detailed in Table 1. YOLOv8, like its predecessors, is designed for real-time object detection. It includes innovations in the backbone, neck, and head of the network to enhance feature extraction and object detection accuracy. The model is trained using a loss function that combines classification loss, localization loss (bounding box regression), and confidence loss. Hyperparameters such as the learning rate, batch size, and number of epochs need to be set. During the training phase, the performance of the model is evaluated at each epoch using the validation set. Based on the validation results, adjustments can be made to the model architecture, hyperparameters, or training process to improve performance. These data augmentation methods increase the diversity of the dataset, helping the model capture the features of the teeth precisely; a sketch of the augmentations follows. By integrating various preprocessing and data augmentation methods, the accuracy and generalization ability of the model in the tooth segmentation task can be significantly improved.
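A sketch of the three augmentations, assuming the images are transformed with OpenCV; the flip direction is an assumption, and note that the polygon labels must be transformed identically to the pixels (annotation platforms such as Roboflow handle this automatically).

```python
import cv2

img = cv2.imread("tooth_annotated.png")

# The augmentations described above: a flip plus 90-degree rotations in
# both directions, multiplying the diversity of the training set.
augmented = {
    "flip":    cv2.flip(img, 1),  # horizontal flip (direction assumed)
    "rot_cw":  cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE),
    "rot_ccw": cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE),
}
for name, aug in augmented.items():
    cv2.imwrite(f"tooth_{name}.png", aug)
```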

2.4.3. YOLOv8 Detection

YOLO is a deep learning technique specifically designed for object detection; it uses a single forward pass to directly predict the positions and categories of all objects in an image. This allows YOLO to perform object detection efficiently in real time while maintaining a high level of accuracy. This study uses the YOLOv8 model to predict the teeth's positions and then segments individual teeth based on the model's localization. Additionally, it compares the performance of YOLOv5, YOLOv7, and YOLOv8 on the tooth segmentation task. Initially, data preprocessing and augmentation are conducted to ensure that the training data are consistent and diverse, thereby enhancing the model's generalization ability. The processed data are divided into training, validation, and test sets, as shown in Table 1, to facilitate model training and evaluation. Subsequently, the prepared datasets are input into the YOLOv5, YOLOv7, and YOLOv8 models for training, allowing the observation of their learning effectiveness and convergence rates; the evaluation indicators are calculated using Equations (1)–(3). Table 2 provides a detailed overview of the hardware and software configurations used in the system. The hardware components include an AMD Ryzen 7 3700X CPU, an NVIDIA GeForce RTX 3070 GPU with 8 GB of memory, and 32 GB of DRAM. The software stack includes Python version 3.11.9, PyTorch version 2.4.0, and CUDA version 12.1.
$$\mathrm{Precision} = \frac{T_p}{T_p + F_p} \quad (1)$$

$$\mathrm{Recall\ (Sensitivity)} = \frac{T_p}{T_p + F_n} \quad (2)$$

$$\mathrm{mAP} = \frac{1}{Q} \sum_{q=1}^{Q} \mathrm{AP}(q) \quad (3)$$
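A training sketch with the Ultralytics YOLOv8 API is shown below; the checkpoint choice, dataset YAML name, and hyperparameters are assumptions for illustration (the study reports a 100-epoch schedule in Table 6 but does not publish its full configuration).

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint and fine-tune on the tooth dataset.
model = YOLO("yolov8n.pt")

model.train(
    data="teeth.yaml",  # dataset config listing the train/val/test splits (Table 1)
    epochs=100,         # matches the 100-epoch schedule in Table 6
    imgsz=640,
    batch=16,
)

metrics = model.val()  # precision, recall, and mAP on the validation split
results = model.predict("pa_image_clahe.png", conf=0.5)  # per-tooth boxes
```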

2.5. Mask R-CNN

Each single-tooth image extracted in the previous steps is input into the Mask R-CNN model, as shown in the workflow diagram in Figure 5. In this process, all teeth in the image are annotated, regardless of whether they are intact. This step generates three types of masks, specifically for the tooth, bone, and crown, serving as the intermediate masked images for subsequent steps. These masked images are used for further analysis and diagnosis, aiding in the precise localization and identification of dental structures and enhancing the accuracy and reliability of the diagnosis.

2.5.1. Single Tooth Image Augmentation

To improve accuracy and training effectiveness, this study applies vertical flipping to the single-tooth images obtained after CLAHE enhancement. This augmentation increases the number of single-tooth images, preventing overfitting of the Mask R-CNN model and providing more training data to enhance accuracy. Consequently, the training and validation datasets for this study have been doubled in size, as shown in Table 3, ensuring a more robust and comprehensive dataset for model training.

2.5.2. Single-Tooth Annotation Mask

To locate the ALC level and assist dentists in preliminarily identifying critical treatment areas, this study annotates the tooth bone and trains the Mask R-CNN model to extract the Bone Mask. For locating the CEJ level, the study annotates the tooth crown to extract the Crown Mask. The methods and processes for these annotations are illustrated in Figure 6. These steps are designed to accurately pinpoint the key anatomical structures of the teeth, enhancing the precision of dental diagnostics.

2.5.3. Mask R-CNN

This study utilized the Detectron2 framework for mask extraction from dental images, opting for the Mask R-CNN model to achieve precise pixel-level segmentation. Mask R-CNN extends Faster R-CNN's object detection to pixel-level classification; its main components are the backbone, the head, and ROIAlign. ResNet-50 with a Feature Pyramid Network (FPN) serves as the backbone, extracting multi-level features from the images, from which candidate regions are generated. The head classifies the candidate regions and performs bounding box regression and mask prediction, producing the final detection results and pixel-level masks. Additionally, the ROIAlign component precisely aligns the features of each candidate region on the feature map using bilinear interpolation, overcoming the quantization errors that can arise from traditional ROI pooling. This study employed Mask R-CNN to extract the three distinct categories of masks required for the subsequent stages of the process. Individual teeth were segmented first, followed by mask extraction with Mask R-CNN. This strategy narrows the detection scope of the model, which not only reduces the training time but also improves the overall training process, leading to superior mask extraction results. During this phase, the data were divided approximately in a 7:3 ratio, with 140 images used for training and 54 for validation. Additionally, the dataset was augmented by vertically flipping the tooth images, effectively increasing the dataset size to 280 images for training and 108 for validation. A configuration sketch follows.
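A minimal Detectron2 configuration sketch under these settings is shown below; the dataset names, learning rate, and batch size are assumptions for illustration, and the train/validation sets are assumed to have been registered beforehand (e.g., with register_coco_instances).

```python
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
# Mask R-CNN with a ResNet-50 + FPN backbone, as described above.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("teeth_train",)  # 280 augmented single-tooth images
cfg.DATASETS.TEST = ("teeth_val",)     # 108 validation images
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3    # tooth, bone, and crown masks
cfg.SOLVER.MAX_ITER = 1000             # matches the schedule in Table 10
cfg.SOLVER.IMS_PER_BATCH = 2           # assumed batch size
cfg.SOLVER.BASE_LR = 0.00025           # assumed learning rate

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```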

2.6. ALC Level Localization

The largest area of the Tooth Mask is retained and overlaid with the Bone Mask to form a composite mask. Subsequently, the custom localization algorithm developed in this study is applied to identify the ALC level for each symptomatic tooth in the PA image. This approach allows for the accurate identification of critical symptomatic regions, providing dentists with precise diagnostic information.

2.6.1. Retaining the Largest Mask

Three different mask categories are obtained from the Mask R-CNN prediction results: Tooth Mask, Bone Mask, and Crown Mask. After acquiring these masks, only fully segmented teeth are analyzed, so the incomplete neighboring teeth within the Tooth Mask need to be addressed. Since YOLOv8 was used earlier to extract individual teeth, each image contains one complete tooth covering most of the area, allowing the masks of any incomplete teeth to be removed, as shown in Figure 7a,b.
Instance segmentation not only requires detecting each object's class and location but also involves pixel-level segmentation: each object is segmented into a unique region, even when objects belong to the same class. The study identifies all pixel values within the Tooth Mask and saves the masks predicted by Mask R-CNN, representing the pixels of the same object with the same value. The number of pixels for each distinct value is then counted, and the pixel value covering the largest area is recorded. Objects with other pixel values are removed, as shown in Figure 7c. This method ensures that only the complete Tooth Mask is used for analysis, thereby improving diagnostic accuracy and efficiency; a sketch of this step follows.
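A NumPy sketch of this largest-area selection, assuming the instance mask encodes each object with a unique positive pixel value (the function name is illustrative):

```python
import numpy as np

def keep_largest_instance(mask: np.ndarray) -> np.ndarray:
    """Keep only the instance covering the largest area in a labeled mask.

    `mask` is a 2D array where 0 is background and each instance has a
    unique positive pixel value, as produced by the instance segmentation.
    """
    values, counts = np.unique(mask[mask > 0], return_counts=True)
    if values.size == 0:
        return mask                        # nothing detected, return as-is
    largest = values[np.argmax(counts)]    # instance with the most pixels
    return np.where(mask == largest, mask, 0)
```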

2.6.2. Overlay and ALC Localization

To locate the ALC level, the Bone Mask and Tooth Mask predicted by Mask R-CNN are overlaid. By analyzing the overlapping regions of these two masks, the position of the ALC level can be determined. The overlay process is illustrated in Figure 8. Since the teeth and bone might share the same pixel values, the pixel value differences cannot be used for localization. Therefore, this study assigns new pixel values to the Tooth Mask, as shown in Figure 8b, and uses these new pixel value differences to locate the ALC level. Next, a 5 × 5 kernel is iteratively applied across the entire image. If the kernel in a given iteration contains the pixel values from Figure 8b and the overlapping pixel values from Figure 8c, the kernel at that iteration is identified as the ALC level, as shown in Figure 8e.
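A sketch of this window scan is shown below, written as a direct transcription of the described iterative 5 × 5 scan; the specific pixel values assigned to the tooth and overlap regions are illustrative assumptions.

```python
import numpy as np

TOOTH_VALUE = 100    # new value assigned to the Tooth Mask (as in Figure 8b)
OVERLAP_VALUE = 200  # value of the tooth/bone overlap region (as in Figure 8c)

def locate_alc(composite: np.ndarray, k: int = 5):
    """Slide a k x k window over the composite mask; any window containing
    both the re-valued tooth pixels and the overlap pixels marks the ALC."""
    hits = []
    h, w = composite.shape
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            window = composite[y:y + k, x:x + k]
            if (window == TOOTH_VALUE).any() and (window == OVERLAP_VALUE).any():
                hits.append((y + k // 2, x + k // 2))  # window center
    return hits
```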

2.7. CEJ Level Localization

The largest area of the Tooth Mask is retained, and the Crown Mask is subjected to dilation. The processed Tooth Mask and Crown Mask are then overlaid to form a composite mask. The developed localization algorithm is used to identify the CEJ position for each symptomatic tooth in the PA image. This method ensures precise localization by accurately identifying and processing key areas, improving diagnostic accuracy.

2.7.1. Crown Mask Dilation

The CEJ refers to the junction between the crown and the root. It can be analyzed by overlaying the Tooth Mask and the Crown Mask. Since it is not feasible to predict a Crown Mask that perfectly aligns with the crown position in the Tooth Mask, discrepancies in pixel locations arise and reduce the accuracy of localization. Hence, the Crown Mask is dilated using Equation (4), ensuring that it completely envelops the crown portion of the Tooth Mask, so that pixel discrepancies remain only at the crown-root junction.
$$\mathrm{Dilation}(A, B) = A \oplus B \quad (4)$$
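A short OpenCV sketch of this morphological dilation; the structuring element size is an assumption, as the study does not report it.

```python
import cv2
import numpy as np

crown_mask = cv2.imread("crown_mask.png", cv2.IMREAD_GRAYSCALE)

# Dilation (Equation (4)): each foreground pixel grows by the structuring
# element B, so the Crown Mask fully envelops the crown region of the
# Tooth Mask before the two masks are overlaid.
kernel = np.ones((7, 7), np.uint8)  # structuring element size assumed
dilated = cv2.dilate(crown_mask, kernel, iterations=1)
```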

2.7.2. Overlay and CEJ Localization

Because the gingiva attaches just above the CEJ, at the junction between the crown and the root, this junction often becomes an error-prone area in symptom diagnosis. To help dentists make more precise and faster judgments and treatment decisions, this study proposes a new method to identify the CEJ level. The technique overlays the Tooth Mask with the dilated Crown Mask, allowing the latter to encompass the crown portion of the Tooth Mask. This method significantly enhances the accuracy of CEJ localization, as demonstrated in Figure 9d.
Figure 10 compares the effectiveness of the CEJ level localization algorithm with and without the dilation process in this study. Initially, the analysis of the un-dilated Crown Mask revealed that the top of the crown often had many overlapping masks, which hindered the accurate localization of the CEJ level. Therefore, after applying the dilation process to the Crown Mask, the localization results improved significantly, ensuring that the mask intersection occurred only at the junction between the crown and the root. This approach markedly enhances the accuracy of CEJ level localization, thereby better aiding in the diagnosis and treatment of symptoms.

3. Results

This section analyzes the experimental results and compares them with previous research. It is divided into the tooth detection results, the evaluation of the different masks, and the CEJ/ALC localization results.

3.1. Tooth Detection Result

The YOLO detection training results, presented in Table 4, highlight the superior performance of YOLOv8 compared to YOLOv5 and YOLOv7. Training with image enhancement significantly improved YOLOv8’s precision from 0.943 to 0.978, recall from 0.943 to 0.975, mAP50 from 0.941 to 0.989, and mAP50-90 from 0.805 to 0.855. Comparatively, YOLOv5 achieved a precision of 0.930 on original images and 0.959 on enhanced images, while YOLOv7 reached 0.939 and 0.969, respectively. These results demonstrate that YOLOv8, particularly with image enhancement, excels in detecting individual teeth, confirming the effectiveness of data augmentation in improving performance.
After training the YOLOv8 detection model, this study used an untrained test dataset for accuracy prediction and evaluated accuracy through the confusion matrix, as shown in Equation (5). The validation results are presented in Table 5, where the accuracy of the YOLOv8 detection model with CLAHE enhancement reached 97.01%, significantly higher than YOLOv7’s 95.37% and YOLOv5’s 90.59%. This indicates a notable improvement of the YOLOv8 model over its predecessors.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN} \quad (5)$$
This study implemented additional preprocessing on the images after thorough consideration, resulting in a significant improvement in accuracy and further validating the reliability of the model. According to the training process outlined in Table 6, each set of 20 epochs took approximately one and a half minutes. The model’s loss rate demonstrated an exponential decline, indicating the overall effectiveness of the training. This decline in loss rate can serve as a key indicator of the model’s training status.
The YOLOv8 model was used to predict the unmarked and untrained original test dataset, with the prediction results shown in Table 7. The evaluation was conducted using the Precision–Recall curve, as shown in Figure 11. The Precision–Recall curve illustrates the trade-off between the model's precision and recall; in Figure 11a, with an IOU threshold of 0.5, the mAP reaches 0.989. The closer the curve is to the upper right corner of the plot, the better the model performs at correctly identifying samples. The F1–Confidence curve shows the variation of the F1 score at different confidence thresholds, with a peak F1 score of 0.97. The Precision–Recall curve of this study is very close to the upper right corner, indicating that the model has both high precision and high recall.
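For reference, the F1 score reported in Figure 11b is the harmonic mean of precision and recall:

$$F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$

With the enhanced YOLOv8 precision of 0.978 and recall of 0.975 from Table 4, this gives F1 ≈ 0.976, consistent with the peak F1 of 0.97 read from the curve.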

3.2. Tooth, Crown, and Bone Mask Result

Table 8 shows the comparison of the Dice Similarity Coefficient (DSC) and Jaccard index for the Tooth Mask, Bone Mask, and CEJ line with and without image enhancement. When classifying the Tooth Mask using Mask R-CNN, the DSC for the original image was 0.9425, which improved to 0.9478 after enhancement; the Jaccard index increased from 0.8920 to 0.9015. For Bone Mask classification, the DSC for the original image was 0.9273, improving to 0.9352 with enhancement; the Jaccard index rose from 0.8688 to 0.8992. Notably, in the CEJ line classification, the DSC for the original image was 0.9500, which increased to 0.9550 after enhancement; the Jaccard index improved from 0.9071 to 0.9156. The Mask R-CNN model in this study thus showed higher accuracy in classifying the Tooth Mask, Bone Mask, and CEJ line after image enhancement, with a particularly notable contribution to CEJ line classification. This improvement is crucial for the detection and diagnosis of teeth and surrounding tissues.
The DSC and Jaccard index formulae are shown in Equations (6) and (7); both are commonly used metrics for evaluating the similarity between two sets in mask-based models. In these models, the annotated masks and the predicted results are treated as two sets, with pixel locations as the set elements. The DSC focuses on the overlapping region of the masks, assigning double weight to the overlap as indicated by its formula, which makes it more sensitive to the overlapping area. Its values range from 0 to 1, with higher values indicating greater overlap between the masks and thus better model performance. In contrast, the Jaccard index evaluates the similarity of the overall sets by measuring the ratio of the intersection to the union of the masks. It also ranges from 0 to 1, with higher values reflecting greater similarity of the sets. These two metrics offer different perspectives on mask similarity: the DSC emphasizes the overlapping areas, while the Jaccard index considers the ratio over the entire sets.
$$\mathrm{Dice}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|} \quad (6)$$

$$\mathrm{Jaccard}(A, B) = \frac{|A \cap B|}{|A \cup B|} \quad (7)$$
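Both metrics reduce to a few lines of NumPy on binary masks; a minimal sketch, assuming non-empty annotated and predicted masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (Equation (6))."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())  # assumes at least one mask is non-empty

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index: intersection over union of two binary masks (Equation (7))."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union
```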
The prediction results of the three masks (Tooth, Bone, and Crown) using the aforementioned steps are shown in Figure 12, with different AP metrics evaluated using Equation (8). Table 9 presents the training results of Mask R-CNN for extracting these masks from original and enhanced images, assessed through AP, AP50, and AP75 metrics. For the Tooth Mask, image enhancement improved the AP from 66.73% to 69.65%, AP50 from 88.32% to 89.54%, and AP75 from 74.65% to 81.66%. For the Bone Mask, the AP increased from 73.32% to 76.66%, AP50 from 98.15% to 99.86%, and AP75 from 90.17% to 92.26%. For the Crown Mask, the AP improved from 79.14% to 81.55%, AP50 remained at 99.99%, and AP75 slightly decreased from 98.02% to 96.29%. These results indicate that whether using the bounding box or segmentation evaluation methods, the training outcomes for all three masks improved after image enhancement. This demonstrates the significant effectiveness of image enhancement techniques in increasing the detection accuracy of the Mask R-CNN model. The training process is shown in Table 10. The accuracy of Faster R-CNN for bounding box prediction and Mask R-CNN for mask segmentation prediction has significantly increased to more than 95% during the training process.
$$AP = \int_{0}^{1} P(R)\, dR \quad (8)$$

3.3. CEJ and ALC Position Result

Table 11 presents the accuracy of the Mask R-CNN model for the three categories, comparing the results from original images with those from enhanced images. The results show that image enhancement had the most significant improvement in the accuracy of the Tooth Mask, increasing from 92.63% to 93.48%. For the Bone Mask, the accuracy improved from 95.50% to 96.95%. However, for the Crown Mask, the accuracy slightly decreased from 96.79% to 96.21% after image enhancement. This indicates that while image enhancement generally has a positive impact on the model's accuracy, its effects can vary across mask categories. Table 12 provides a comparative analysis between the developed positioning algorithm and the annotations supplied by the dentist. The upper portion shows the CEJ and ALC levels determined by the algorithm, juxtaposed with the dentist's marked points on both the left and right sides; specifically, these points comprise CEJ (left) and CEJ (right) for the CEJ level and ALC (left) and ALC (right) for the ALC level. The lower portion shows a segmented image comparing the positions identified by the algorithm with those identified by the dentist. The blue and purple circles represent the annotations made by the dentist, and the red and green circles represent the annotation results of this study. Furthermore, the table reports the precision for every CEJ and ALC point on both sides, along with the root mean square error (RMSE) between the four points identified by the algorithm and the dentist's annotations; the RMSE between corresponding points is lower than 0.09, with a minimum of 0.0209 for CEJ (right). The RMSE formula is shown in Equation (9).
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2} \quad (9)$$

4. Discussion

This study evaluated the efficacy of using advanced image processing and deep learning techniques for periodontal diagnosis. The primary objective was to accurately detect and segment individual teeth and key anatomical landmarks, such as the CEJ and ALC, using PA imaging. Initially, the PA images underwent preprocessing with CLAHE to enhance image quality, as shown in Figure 3. This preprocessing significantly clarified the contours of teeth and alveolar bone, facilitating more precise annotations and model training. This study utilized the YOLOv8 and Mask R-CNN models to predict and segment dental images and applied image enhancement techniques to improve the accuracy of these models. For tooth detection, the YOLOv8 model was employed. The model achieved a sensitivity of 94.3% on the original images, which improved to 97.5% after applying the CLAHE enhancement. YOLOv8 demonstrated superior performance compared to its predecessors, YOLOv5 and YOLOv7, particularly in terms of precision, recall, mAP, and detection accuracy. The enhanced YOLOv8 model reached an impressive accuracy of 97.01% on the test dataset, as shown in Table 13. The study also conducted a detailed comparison with the methods in [36,37]. The approach in [36] uses a matrix to calculate the inter-proximal space for tooth segmentation, relying heavily on matrix operations to locate these spaces, which works well when teeth are closely aligned. However, its accuracy may diminish with irregular or overlapping teeth. Meanwhile, the algorithm proposed by Nomir and Abdel-Mottaleb in [37] achieves tooth segmentation by separating teeth from the background using horizontal and vertical projections. This method performs well with images where teeth and background contrast sharply, but its accuracy can decrease in complex or low-contrast backgrounds.
According to the training results shown in Table 9, image enhancement significantly improved the performance of the Mask R-CNN model. For instance, for the Tooth Mask, the AP increased from 66.73% to 69.65%, AP50 from 88.32% to 89.54%, and AP75 from 74.65% to 81.66% after image augmentation. For the Bone Mask, the AP rose from 73.32% to 76.66%, AP50 from 98.15% to 99.86%, and AP75 from 90.17% to 92.26%. Regarding the Crown Mask, the AP increased from 79.14% to 81.55%, AP50 remained at 99.99%, and AP75 slightly decreased from 98.02% to 96.29%. These results indicate that the training outcomes for all three mask categories improved after image enhancement, demonstrating the significant effectiveness of image enhancement techniques in increasing the detection accuracy of the Mask R-CNN model. Additionally, Table 11 shows the accuracy comparison of the Mask R-CNN model for the three mask categories. The results indicate that image enhancement had the most significant improvement in the accuracy of the Tooth Mask, increasing from 92.63% to 93.48%. For the Bone Mask, the accuracy improved from 95.50% to 96.95%. However, the accuracy of the Crown Mask slightly decreased from 96.79% to 96.21% after image enhancement. This suggests that while image enhancement generally positively impacts the model's accuracy, its effects can vary across different mask categories. In contrast, the Unet model yielded lower DSC values across all classifications [6]; the comparison is shown in Table 14. This highlights the significant advantage of the Mask R-CNN model in handling CEJ line classification. Another significant contribution of this study is the new method proposed for locating the CEJ and ALC levels, shown in Table 12. Using Mask R-CNN and image processing to evaluate the ALC and CEJ positions, the lowest RMSE achieved is 0.0209, at CEJ (right). These localization techniques assist dentists in faster and more accurate symptom evaluation, significantly enhancing diagnostic efficiency and precision.
Specifically, the image enhancement and mask segmentation techniques developed in this study significantly improved the classification accuracy of the CEJ line, providing robust support for the early diagnosis and treatment of periodontitis. This study has several limitations. First, variations in image quality could impact the model's accuracy, especially in clinical applications where low-quality or noisy images are more common. Additionally, the reliance on manual annotations introduces potential biases; even experienced dentists may have inconsistencies in their annotations, which can influence the training outcomes of the model. While YOLOv8 and Mask R-CNN performed well in this study, these models may face challenges in handling more complex cases, such as overlapping or missing teeth and severe periodontal diseases. The practical implementation of these AI technologies in clinical settings also requires seamless integration into existing diagnostic workflows and ensuring that practitioners can effectively utilize these tools. Future research will focus on exploring various periodontal conditions, including calculating bone loss to assist clinicians in quickly assessing patients' conditions. The study will emphasize optimizing the CEJ and ALC localization algorithms to enhance the system's efficiency and accuracy. Additionally, different object detection and semantic segmentation models will be compared to identify the most suitable model for PA images, which will be integrated into future systems.

5. Conclusions

The primary objective of this study was to accurately locate the CEJ and ALC to improve the accuracy and efficiency of periodontal diagnosis. By employing CLAHE technology for image preprocessing and utilizing the YOLOv8 and Mask R-CNN models for tooth detection and region segmentation, we significantly enhanced the accuracy of automated assessments. The study results demonstrate that these innovative techniques effectively reduce the time required for diagnosis while increasing accuracy, which holds significant implications for dental practitioners. Specifically, this research provides a rapid and reliable method for marking and assessing symptoms in dental X-rays, aiding dentists in better diagnosing and treating periodontal diseases. This study not only highlights the potential of AI technology in dentistry but also offers a practical tool for clinical practice, significantly improving diagnostic efficiency and accuracy.

Author Contributions

Conceptualization, T.-J.L. and Y.-C.M.; Data curation, T.-J.L. and Y.-C.M.; Formal analysis, T.-Y.C.; Funding acquisition, T.-Y.C., S.-L.C., C.-A.C. and K.-C.L.; Methodology, C.-H.L., Y.-Q.H., Y.-C.H. and S.-L.C.; Resources, S.-L.C.; Software, Y.-J.L., C.-H.L., Y.-Q.H., Y.-C.H., S.-L.C. and C.-A.C.; Validation, Y.-J.L. and S.-L.C.; Visualization, Y.-J.L., C.-H.L., Y.-Q.H. and Y.-C.H.; Writing—original draft, Y.-J.L.; Writing—review and editing, T.-Y.C., C.-A.C., K.-C.L. and P.A.R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Technology (MOST), Taiwan, under grant numbers of MOST-109-2410-H-197-002-MY3, MOST-107-2218-E-131-002, MOST-107-2221-E-033-057, MOST-107-2622-E-131-007-CC3, MOST-106-2622-E-033-014-CC2, MOST-106-2221-E-033-072, MOST-106-2119-M-033-001, MOST 107-2112-M-131-001, and MOST-112-2410-H-033-014 and the National Chip Implementation Center, Taiwan.

Institutional Review Board Statement

Chang Gung Medical Foundation Institutional Review Board; IRB number: 202301730B0; date of approval: 1 December 2020; protocol title: A Convolutional Neural Network Approach for Dental Bite-Wing, Panoramic and Periapical Radiographs Classification; executing institution: Chang Gung Medical Foundation, Taoyuan Chang Gung Memorial Hospital. The IRB reviewed the study and determined that it qualifies for expedited review, as it involves case research or cases treated or diagnosed by clinical routines. However, this does not include HIV-positive cases.

Informed Consent Statement

The IRB approves the waiver of the participants’ consent.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors are grateful to the Applied Electrodynamics Laboratory (Department of Physics, National Taiwan University) for their support with the microwave calibration kit and microwave components.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Oong, E.M.; An, G.K. Treatment Planning Considerations in Older Adults. Dent. Clin. N. Am. 2014, 58, 739–755. [Google Scholar] [CrossRef] [PubMed]
  2. Aizenbud, I.; Wilensky, A.; Almoznino, G. Periodontal Disease and Its Association with Metabolic Syndrome—A Comprehensive Review. Int. J. Mol. Sci. 2023, 24, 13011. [Google Scholar] [CrossRef] [PubMed]
  3. Kitamura, M.; Mochizuki, Y.; Miyata, Y.; Obata, Y.; Mitsunari, K.; Matsuo, T.; Ohba, K.; Mukae, H.; Yoshimura, A.; Nishino, T.; et al. Pathological Characteristics of Periodontal Disease in Patients with Chronic Kidney Disease and Kidney Transplantation. Int. J. Mol. Sci. 2019, 20, 3413. [Google Scholar] [CrossRef] [PubMed]
  4. Lertpimonchai, A.; Rattanasiri, S.; Arj-Ong Vallibhakara, S.; Attia, J.; Thakkinstian, A. The Association between Oral Hygiene and Periodontitis: A Systematic Review and Meta-Analysis. Int. Dent. J. 2020, 67, 332–343. [Google Scholar] [CrossRef] [PubMed]
  5. Grassi, R.; Nardi, G.M.; Mazur, M.; Di Giorgio, R.; Ottolenghi, L.; Guerra, F. The Dental-BIOfilm Detection TECHnique (D-BioTECH): A Proof of Concept of a Patient-Based Oral Hygiene. Medicina 2022, 58, 537. [Google Scholar] [CrossRef] [PubMed]
  6. Lee, C.-T.; Kabir, T.; Nelson, J.; Sheng, S.; Meng, H.-W.; Van Dyke, T.E.; Walji, M.F.; Jiang, X.; Shams, S. Use of the Deep Learning Approach to Measure Alveolar Bone Level. J. Clin. Periodontol. 2022, 49, 260–269. [Google Scholar] [CrossRef] [PubMed]
  7. Chen, C.-C.; Wu, Y.-F.; Aung, L.M.; Lin, J.C.-Y.; Ngo, S.T.; Su, J.-N.; Lin, Y.-M.; Chang, W.-J. Automatic Recognition of Teeth and Periodontal Bone Loss Measurement in Digital Radiographs Using Deep-Learning Artificial Intelligence. J. Dent. Sci. 2023, 18, 1301–1309. [Google Scholar] [CrossRef] [PubMed]
  8. Vandana, K.L.; Haneet, R.K. Cementoenamel junction: An insight. J. Indian Soc. Periodontol. 2014, 18, 549–554. [Google Scholar] [CrossRef] [PubMed]
  9. Arambawatta, K.; Peiris, R.; Nanayakkara, D. Morphology of the cemento-enamel junction in premolar teeth. J. Oral. Sci. 2009, 51, 623–627. [Google Scholar] [CrossRef]
  10. Preshaw, P.M.; Kupp, L.; Hefti, A.F.; Mariotti, A. Measurement of clinical attachment levels using a constant-force periodontal probe modified to detect the cemento-enamel junction. J. Clin. Periodontol. 1999, 26, 434–440. [Google Scholar] [CrossRef]
  11. Mallya, S.; Tetradis, S.; Takei, H. Radiographic Aids in the Diagnosis of Periodontal Disease; Elsevier: Amsterdam, The Netherlands, 2012; pp. 378–390. [Google Scholar] [CrossRef]
  12. Abraham, T.S.; Subramani, S.; Jeyakumar, V.; Sundaram, P. A Comprehensive Preprocessing Approaches for Tooth Labeling and Classification using Dental Panoramic X-Ray Images. In Proceedings of the 2022 IEEE Delhi Section Conference (DELCON), New Delhi, India, 11–13 February 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7. [Google Scholar] [CrossRef]
  13. Chandrashekar, G.; AlQarni, S.; Bumann, E.E.; Lee, Y. Collaborative Deep Learning Model for Tooth Segmentation and Identification Using Panoramic Radiographs. Comput. Biol. Med. 2022, 148, 105829. [Google Scholar] [CrossRef]
  14. Ali, M.A.; Fujita, D.; Kobashi, S. Teeth and Prostheses Detection in Dental Panoramic X-Rays Using CNN-Based Object Detector and a Priori Knowledge-Based Algorithm. Sci. Rep. 2023, 13, 16542. [Google Scholar] [CrossRef] [PubMed]
  15. Wang, T.; Kim, G.T.; Kim, M.; Jang, J. Contrast Enhancement-Based Preprocessing Process to Improve Deep Learning Object Task Performance and Results. Appl. Sci. 2023, 13, 10760. [Google Scholar] [CrossRef]
  16. Young, E.; Edwards, L.; Singh, R. The Role of Artificial Intelligence in Colorectal Cancer Screening: Lesion Detection and Lesion Characterization. Cancers 2023, 15, 5126. [Google Scholar] [CrossRef]
  17. Zhao, G.; Chen, X.; Zhu, M.; Liu, Y.; Wang, Y. Exploring the Application and Future Outlook of Artificial Intelligence in Pancreatic Cancer. Front. Oncol. 2024, 14, 1345810. [Google Scholar] [CrossRef]
  18. Mori, Y.; Kudo, S.; Misawa, M.; Saito, Y.; Ikematsu, H.; Hotta, K.; Ohtsuka, K.; Urushibara, F.; Kataoka, S.; Ogawa, Y.; et al. Real-Time Use of Artificial Intelligence in Identification of Diminutive Polyps During Colonoscopy: A Prospective Study. Ann. Intern. Med. 2018, 169, 357. [Google Scholar] [CrossRef] [PubMed]
  19. Chen, S.-L.; Chou, H.-S.; Chuo, Y.; Lin, Y.-J.; Tsai, T.-H.; Peng, C.-H.; Tseng, A.-Y.; Li, K.-C.; Chen, C.-A.; Chen, T.-Y. Classification of the Relative Position between the Third Molar and the Inferior Alveolar Nerve Using a Convolutional Neural Network Based on Transfer Learning. Electronics 2024, 13, 4. [Google Scholar] [CrossRef]
  20. Mao, Y.-C.; Huang, Y.-C.; Chen, T.-Y.; Li, K.-C.; Lin, Y.-J.; Liu, Y.-L.; Yan, H.-R.; Yang, Y.-J.; Chen, C.-A.; Chen, S.-L.; et al. Deep Learning for Dental Diagnosis: A Novel Approach to Furcation Involvement Detection on Periapical Radiographs. Bioengineering 2023, 10, 802. [Google Scholar] [CrossRef]
  21. Serindere, G.; Bilgili, E.; Yesil, C.; Ozveren, N. Evaluation of maxillary sinusitis from panoramic radiographs and cone-beam computed tomographic images using a convolutional neural network. Imaging Sci. Dent. 2022, 52, 187–195. [Google Scholar] [CrossRef]
  22. Chen, S.-L.; Chen, T.Y.; Mao, Y.C.; Lin, S.Y.; Huang, Y.Y.; Chen, C.A.; Lin, Y.J.; Chuang, M.H.; Abu, P.A.R. Detection of Various Dental Conditions on Dental Panoramic Radiography Using Faster R-CNN. IEEE Access 2023, 11, 127388–127401. [Google Scholar] [CrossRef]
  23. Mao, Y.-C.; Chen, T.-Y.; Chou, H.-S.; Lin, S.-Y.; Liu, S.-Y.; Chen, Y.-A.; Liu, Y.-L.; Chen, C.-A.; Huang, Y.-C.; Chen, S.-L.; et al. Caries and Restoration Detection Using Bitewing Film Based on Transfer Learning with CNNs. Sensors 2021, 21, 4613. [Google Scholar] [CrossRef]
  24. Chen, Y.-C.; Chen, M.-Y.; Chen, T.-Y.; Chan, M.-L.; Huang, Y.-Y.; Liu, Y.-L.; Lee, P.-T.; Lin, G.-J.; Li, T.-F.; Chen, C.-A.; et al. Improving Dental Implant Outcomes: CNN-Based System Accurately Measures Degree of Peri-Implantitis Damage on Periapical Film. Bioengineering 2023, 10, 640. [Google Scholar] [CrossRef]
  25. Wang, S.; Liang, S.; Chang, Q.; Zhang, L.; Gong, B.; Bai, Y.; Zuo, F.; Wang, Y.; Xie, X.; Gu, Y. STSN-Net: Simultaneous Tooth Segmentation and Numbering Method in Crowded Environments with Deep Learning. Diagnostics 2024, 14, 497. [Google Scholar] [CrossRef] [PubMed]
  26. Lin, T.-J.; Lin, Y.-T.; Lin, Y.-J.; Tseng, A.-Y.; Lin, C.-Y.; Lo, L.-T.; Chen, T.-Y.; Chen, S.-L.; Chen, C.-A.; Li, K.-C.; et al. Auxiliary Diagnosis of Dental Calculus Based on Deep Learning and Image Enhancement by Bitewing Radiographs. Bioengineering 2024, 11, 7. [Google Scholar] [CrossRef]
  27. Devi, M.S.; Umanandhini, D.; Rajesh, G.; Laharee, C.; Kumar, D.A. Five Layered Mask-RCNN based Dental Disease Detection from Children Panoramic Radiographs. In Proceedings of the 2023 Global Conference on Information Technologies and Communications (GCITC), Bengaluru, India, 1–3 December 2023; pp. 1–5. [Google Scholar] [CrossRef]
  28. Rahmat, R.F.; Silviani, S.; Nababan, E.B.; Sitompul, O.S.; Anugrahwaty, R.; Silmi, S. Identification of molar and premolar teeth in dental panoramic radiograph image. In Proceedings of the 2017 Second International Conference on Informatics and Computing (ICIC), Jayapura, Indonesia, 1–3 November 2017; pp. 1–6. [Google Scholar] [CrossRef]
  29. Gao, L.; Xu, T.; Liu, M.; Jin, J.; Peng, L.; Zhao, X.; Li, J.; Yang, M.; Li, S.; Liang, S. Ai-aided diagnosis of oral X-ray images of periapical films based on deep learning. Displays 2024, 82, 102649. [Google Scholar] [CrossRef]
  30. Mahmoud, Y.E.; Labib, S.S.; Mokhtar, H.M.O. Teeth periapical lesion prediction using machine learning techniques. In Proceedings of the 2016 SAI Computing Conference (SAI), London, UK, 13–15 July 2016; pp. 129–134. [Google Scholar] [CrossRef]
  31. Endres, M.G.; Hillen, F.; Salloumis, M.; Sedaghat, A.R.; Niehues, S.M.; Quatela, O.; Hanken, H.; Smeets, R.; Beck-Broichsitter, B.; Rendenbach, C.; et al. Development of a Deep Learning Algorithm for Periapical Disease Detection in Dental Radiographs. Diagnostics 2020, 10, 430. [Google Scholar] [CrossRef] [PubMed]
  32. Lee, J.; Park, J.; Moon, S.Y.; Lee, K. Automated Prediction of Extraction Difficulty and Inferior Alveolar Nerve Injury for Mandibular Third Molar Using a Deep Neural Network. Appl. Sci. 2022, 12, 1. [Google Scholar] [CrossRef]
  33. Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 4. [Google Scholar] [CrossRef]
  34. Nguyen, K.-C.T.; Le, L.H.; Kaipatur, N.R.; Major, P.W. Imaging the Cemento-Enamel Junction Using a 20-MHz Ultrasonic Transducer. Ultrasound Med. Biol. 2016, 42, 333–338. [Google Scholar] [CrossRef]
  35. Brezniak, N.; Goren, S.; Zoizner, R.; Shochat, T.; Dinbar, A.; Wasserstein, A.; Heller, M. The accuracy of the cementoenamel junction identification on periapical films. Angle Orthod. 2004, 74, 496–500. [Google Scholar] [CrossRef]
  36. Al-sherif, N.; Guo, G.; Ammar, H.H. A New Approach to Teeth Segmentation. In Proceedings of the 2012 IEEE International Symposium on Multimedia, Irvine, CA, USA, 10–12 December 2012; pp. 145–148. [Google Scholar] [CrossRef]
  37. Nomir, O.; Abdel-Mottaleb, M. A system for human identification from X-ray dental radiographs. Pattern Recognit. 2005, 38, 1295–1305. [Google Scholar] [CrossRef]
Figure 1. The overall flowchart in this research.
Figure 2. Tooth segmentation flowchart.
Figure 3. Image preprocessing in tooth segmentation steps: (a) original image, (b) median blur process, (c) CLAHE process.
Figure 4. Image annotation example in YOLO object detection.
Figure 5. Mask R-CNN flowchart.
Figure 6. The three types of masks used in this study are (a) CLAHE tooth segmentation, (b) tooth annotation mask, (c) crown annotation mask, and (d) bone annotation mask.
Figure 7. Mask R-CNN predictions for single tooth and Tooth Mask processing. (a) Mask R-CNN prediction. (b) The mask prediction result. (c) Removing incomplete Tooth Mask.
Figure 8. Mask processing in the ALC level localization algorithm. (a) Tooth Mask without new value. (b) Tooth Mask with new value. (c) Bone Mask. (d) Overlay. (e) ALC level localization (red line).
Figure 9. Mask processing in the CEJ level localization algorithm. (a) Tooth Mask. (b) Crown Mask without dilation. (c) Overlay with dilation. (d) CEJ level localization.
Figure 10. Magnified comparison of Crown Mask (a) without dilation and (b) with dilation.
Figure 11. Model performance and training metrics: (a) Precision–Recall curve and (b) F1–Confidence curve.
Figure 12. Mask R-CNN prediction result: (a) single tooth (CLAHE), (b) Tooth Mask, (c) Bone Mask, and (d) Crown Mask.
Table 1. YOLOv8 detection dataset.

                       Train   Validation   Test
Original                 140           57     84
Dataset Augmentation     420          171      -
Table 2. The hardware and software platform versions.

Hardware Platform   Version                        Manufacturer
CPU                 AMD Ryzen 7 3700X              AMD, California, United States
GPU                 NVIDIA GeForce RTX 3070 8 GB   NVIDIA, California, United States
DRAM                32 GB                          ADATA, New Taipei City, Taiwan

Software Platform   Version    Software Platform   Version
Python              3.11.9     Anaconda            24.1.2
PyTorch             2.4.0      CUDA                12.1
Table 3. Mask R-CNN training and validation dataset.

                       Train   Validation
Original                 140           54
Image Augmentation       280          108
Table 4. YOLO detection results.

                      Precision   Recall   mAP50   mAP50-90
YOLOv5   Original         0.930    0.950   0.942      0.787
         CLAHE            0.959    0.977   0.979      0.847
YOLOv7   Original         0.939    0.947   0.948      0.802
         CLAHE            0.969    0.960   0.983      0.828
YOLOv8   Original         0.943    0.943   0.941      0.805
         CLAHE            0.978    0.975   0.989      0.855
Table 5. Tooth detection training accuracy.

Method               Accuracy
YOLOv5   Original      84.50%
         CLAHE         90.59%
YOLOv7   Original      93.23%
         CLAHE         95.37%
YOLOv8   Original      95.44%
         CLAHE         97.01%
Table 6. YOLOv8 segmentation training process at every 20 epochs.

Epoch   Time Elapsed   Segmentation Loss
    1       00:00:05                2.52
   21       00:01:33                1.38
   41       00:03:03                1.23
   61       00:04:14                1.13
   81       00:05:34                1.00
  100       00:06:57                0.77
Table 7. Single tooth segmentation validation.

[Five example single-tooth segmentation images, panels (a)-(e)]

Panel      (a)      (b)      (c)      (d)      (e)
Accuracy   96.51%   97.12%   96.46%   93.77%
Table 8. Comparison of DSC and Jaccard index for Tooth Mask and Bone Mask.

                                    DSC                    Jaccard Index
             Mask Category   Original   Augmentation   Original   Augmentation
This study   Tooth Mask        0.9425         0.9478     0.8920         0.9015
             Bone Mask         0.9273         0.9352     0.8688         0.8992
Table 9. Training results of Mask R-CNN for extracting tooth, bone, and crown (AP, AP50, AP75) with original and image augmentation.

                                         AP                    AP50                   AP75
Category     Evaluation Method   Original   Augm.       Original   Augm.       Original   Augm.
Tooth Mask   Bounding Box           66.73   69.65          88.32   89.54          74.65   81.66
             Segmentation           65.79   67.71          89.31   90.77          77.34   79.90
Bone Mask    Bounding Box           73.32   76.66          98.15   99.86          90.17   92.26
             Segmentation           70.55   72.19          98.15   99.86          80.12   83.15
Crown Mask   Bounding Box           79.14   81.55          99.99   99.99          98.02   96.29
             Segmentation           83.86   85.57          99.99   99.99          98.02   98.00
Table 10. Training process with Crown Mask and image augmentation over 1000 iterations.

Iteration   Fast R-CNN Accuracy   Mask R-CNN Accuracy   Total Loss
       20                 7.42%                62.29%         2.69
      200                92.57%                80.65%         0.85
      400                96.87%                92.93%         0.50
      600                98.24%                95.89%         0.28
      800                98.73%                96.56%         0.22
     1000                98.73%                96.21%         0.21
Table 11. Comparison between different mask categories in the Mask R-CNN model.

           Mask Category   Original   Image Augmentation
Accuracy   Tooth Mask        92.63%               93.48%
           Bone Mask         95.50%               96.95%
           Crown Mask        96.79%               96.21%
Table 12. Comparison of the CEJ and ALC positioning algorithm results with dentist annotations.

[Ground-truth (dentist) and predicted (this study) landmark images for five example teeth, No. 1 to No. 5]

                                 Precision                          Average RMSE
              No. 1    No. 2    No. 3    No. 4    No. 5
CEJ (left)   90.56%   95.39%   85.52%   90.35%   81.18%                   0.0420
CEJ (right)  96.05%   99.10%   92.37%   92.39%   97.14%                   0.0209
ALC (left)   85.65%   92.83%   81.48%   89.40%   99.11%                   0.0843
ALC (right)  92.12%   96.94%   92.82%   97.52%   93.67%                   0.0552
Table 13. Comparison with different object detection methods.

Method                  Accuracy
YOLOv8   Original         95.44%
         CLAHE            97.01%
Compared with [36]        79.60%
Compared with [37]        82.50%
Table 14. Comparison with different methods for Tooth Mask and Bone Mask.

Method                         Mask Category   DSC      Jaccard Index
This study                     Tooth Mask      0.9478   0.9015
                               Bone Mask       0.9352   0.8992
Ref. [6] (Unet)                Tooth Mask      0.8994   0.8225
                               Bone Mask       0.8621   0.9319
Ref. [6] (ResNet-34 Encoder)   Tooth Mask      0.9170   0.9143
                               Bone Mask       0.9235   0.9343


