Article

Improving Dental Implant Outcomes: CNN-Based System Accurately Measures Degree of Peri-Implantitis Damage on Periapical Film

1 Department of General Dentistry, Keelung Chang Gung Memorial Hospital, Keelung City 204201, Taiwan
2 Department of General Dentistry, Chang Gung Memorial Hospital, Taoyuan City 33305, Taiwan
3 Department of Electronic Engineering, Chung Yuan Christian University, Taoyuan City 32023, Taiwan
4 School of Physical Education, Jiaying University, Meizhou 514000, China
5 Department of Electrical Engineering, Ming Chi University of Technology, New Taipei City 243303, Taiwan
6 Department of Information Management, Chung Yuan Christian University, Taoyuan City 320317, Taiwan
7 Ateneo Laboratory for Intelligent Visual Environments, Department of Information Systems and Computer Science, Ateneo de Manila University, Quezon City 1108, Philippines
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Bioengineering 2023, 10(6), 640; https://doi.org/10.3390/bioengineering10060640
Submission received: 13 April 2023 / Revised: 9 May 2023 / Accepted: 19 May 2023 / Published: 25 May 2023

Abstract

As the popularity of dental implants continues to grow at a rate of about 14% per year, so do the risks associated with the procedure. Complications such as sinusitis and nerve damage are not uncommon, and inadequate cleaning can lead to peri-implantitis around the implant, jeopardizing its stability and potentially necessitating retreatment. To address this issue, this research proposes a new system for evaluating the degree of periodontal damage around implants using periapical film (PA). The system utilizes two Convolutional Neural Network (CNN) models to accurately detect the location of the implant and assess the extent of damage caused by peri-implantitis. One of the CNN models is designed to determine the location of the implant in the PA with an accuracy of up to 89.31%, while the other model is responsible for assessing the degree of peri-implantitis damage around the implant, achieving an accuracy of 90.45%. The system combines image cropping based on position information obtained from the first CNN with image enhancement techniques such as Histogram Equalization and Adaptive Histogram Equalization (AHE) to improve the visibility of the implant and gums. The result is a more accurate assessment of whether peri-implantitis has eroded to the first thread, a critical indicator of implant stability. To ensure the ethical and regulatory standards of our research, this proposal has been certified by the Institutional Review Board (IRB) under number 202102023B0C503. With no existing technology to evaluate peri-implantitis damage around dental implants, this CNN-based system has the potential to revolutionize implant dentistry and improve patient outcomes.

1. Introduction

In recent decades, dental implant technology has gained popularity, with artificial dental implant surgery boasting a success rate of over 90% [1]. The human mouth contains 32 permanent teeth, each with an interlocking function. A missing tooth triggers a cascade of oral health issues and can cause significant long-term damage to the adjacent natural teeth [2]. Failure to address a missing tooth can lead to tooth decay and peri-implantitis, impairing the original function of the mouth. In more severe cases, adjacent teeth can shift, bone shrinkage can occur, and bite problems and temporomandibular joint disorder (TMD) can develop [3,4]. Symptoms associated with TMD include temporomandibular joint (TMJ) pain, chewing pain, pain around the ear, and facial asymmetry due to uneven force application [5,6]. According to the American Dental Association, around 5 million dental implants are placed annually in the U.S., and the worldwide market for dental implants was projected to reach USD 4.6 billion by 2022 [7]. Today, dental implantation is a common dental procedure involving the surgical placement of a titanium root into the alveolar bone where a tooth is missing [8]. After sterile treatment and once a secure bond forms between the root and the surrounding tissue, an artificial crown is placed to replace the missing tooth [9]. The structure of the implant is similar to that of a natural tooth and does not cause any foreign body sensation when biting [10].
The use of artificial intelligence (AI) has become prevalent across various fields due to technological advancements. In recent years, the integration of AI and medicine has emerged in areas such as cardiology [11], pulmonary medicine [12], and neurology [13]. Artificial intelligence can help doctors consolidate data and support diagnosis, and it can bring medical resources to rural areas, improving the quality of medical care around the world; this shows that artificial intelligence is extremely helpful to society [14]. The combination of Convolutional Neural Networks (CNNs) and dentistry has produced a wealth of research. AI studies have shown promising results on the three common X-ray film types used in routine dental exams: panoramic radiographs, periapical films, and bite-wing films. In dental radiology research, two primary areas of focus are tooth localization and identification of disease symptoms. Image enhancement techniques have been proposed to increase the accuracy of cutting and positioning individual teeth. For instance, some studies have utilized a polynomial function to connect gap chains into a smooth curve, resulting in a 4% improvement and a 93.28% accuracy rate [15]. Additionally, Gaussian filtering and edge detection have been proposed to enhance the visibility of tooth gaps and facilitate the cutting and positioning of individual teeth [16]. Filters have also been helpful in reducing the impact of noise on cropping and recognition in PA images [17]. Furthermore, adaptive thresholds have been suggested to improve the application of cropping technology in dental radiology research [18]. Regarding the identification of disease symptoms, a backpropagation neural network has been used to diagnose dental caries with an accuracy rate of 94.1% [19]. Tooth detection and classification have been carried out on panoramic radiographs by training and classifying tooth types into four groups using quadruple cross-validation with 93.2% accuracy, and dental status has also been classified into three groups with an accuracy of 98.0% [20]. These findings demonstrate the enormous potential of AI in the dental field, with the ability to provide accurate diagnosis and improve patient care.
Dental implant surgery carries potential complications, such as sinus perforation or jaw paralysis, because the implant site lies in nerve-rich gum tissue [21], making focus and attention crucial to avoiding medical disputes. Current research in this area focuses on two stages, inspection and preoperative analysis, thus reducing clinic time for dentists and enabling them to focus on treatment and technique. For example, CNN technology has been used for whole oral cavity analysis and inspection of periapical radiographs during the inspection stage [22,23]. Other studies have proposed an automatic synchronous detection system for two-dimensional grayscale cone beam computed tomography (CBCT) images of the alveolar bone (AB) and the mandibular canal (MC) for preoperative treatment planning [24]. Additionally, pain and discomfort during the operation can affect how smoothly it proceeds, and some research has proposed evaluating and predicting pain [25]. However, there is a limited amount of research on postoperative analysis. Insufficient cleaning by the patient may result in peri-implantitis [26,27], wherein bacteria gradually erode the tissues surrounding the implant, leading to bone and soft-tissue loss. As a result, the implant may lose support and become loose or dislodged. In view of this, the aim of this study is to assess the extent of periodontal damage surrounding implants and provide accurate and objective evaluation results for postoperative follow-up examinations. The study aims to decrease the workload of dentists, protect the rights and interests of patients, and prevent potential medical disputes. This proposal provides three contributions and innovations:
  • The YOLOv2 model is trained using the manually created ROI database provided by the dentist to detect the implant position and return data for individual implant thread cropping;
  • Histogram equalization, overlapping techniques, and adaptive histogram equalization are employed to enhance the boundary lines. Additionally, the gingival area is colored orange, while the threaded area is green, thereby improving subsequent CNN judgment;
  • The study trains preprocessed data in a CNN model to detect damages, utilizing the AlexNet algorithm, achieving a final accuracy rate of 90.4%. Additionally, this research presents the first medical assistance system for automated thread analysis of implants.
The structure of this paper is as follows: Section 2 introduces the use of deep learning models for implant location labeling, cropping, and preprocessing, and finally the use of a CNN to build a model that determines whether damage is present; Section 3 integrates the methods used and the research results; Section 4 discusses the experimental results; and Section 5 describes the conclusions and future prospects.

2. Materials and Methods

The database used in this research was collected from relevant cases diagnosed by professional dentists. The workflow can be roughly divided into three parts: implant cropping, image preprocessing, and implant classification. Damage to a dental implant is assessed on the mesial (M) and distal (D) sides. Therefore, the implant cropping step cuts single implants out of PA images that may contain one or several implants. This part uses a deep learning model to label the implant position and then separates each implant into M and D halves using a linear regression algorithm. Although both implant cropping and implant classification require machine learning, the training methods are very different: not only are different models used, but different types of databases are also introduced. The implant cropping model is trained using a manually selected ROI database, while the implant classification model is trained using preprocessed images. The major problem encountered in this research is that the validation set cannot converge when the cropped implant images are fed directly into the model for training, which leads to overfitting. To solve this problem, the research colors different parts of the implant image, adds reference lines, and adjusts the parameters of the CNN model. The flowchart of this research is shown in Figure 1.

2.1. Image Cropping

To enable the CNN model to focus specifically on identifying damage to dental implants on the mesial and distal sides, the PA image needs to be cropped to a single implant. Manual cropping is a time-consuming process, so this study utilizes YOLOv2 to detect the position of the implant. Using the position information returned by YOLOv2, the implant can be cropped efficiently. Next, a linear regression algorithm is used to find the central cutting line of the implant. The output images are then named and classified by comparing them with the diagnosis results provided by the hospital, creating a database for the CNN model. To prepare the data for further analysis, image preprocessing is then performed.

2.1.1. Label Dental Implant

The key issue in this step is determining the Region of Interest (ROI) for training the object detection model. If the ROI encompasses the entire implant, the damage features of the screw thread may not be classified accurately, because the area above the screw thread occupies most of the picture, as depicted in Figure 2a; this can also make subsequent cropping steps challenging. Labeling only the screw thread, on the other hand, does not affect the determination process. Hence, this research sets the ROI to the thread instead of the implant body, as shown in Figure 2b, to preserve the damage features of the screw thread as much as possible. Additionally, damage detection also requires the gingival features surrounding the screw thread. Therefore, in the subsequent step of cropping the screw thread, the returned ROI position is expanded horizontally by several pixels to preserve these features for the next step.

2.1.2. YOLOv2 Model

The main purpose of training the object detection model in this study is to improve operational efficiency and reduce the time required for manual image cropping. Therefore, this study uses specific instruments to achieve the best training effect, including hardware equipment, as listed in Table 1; software, i.e., YOLOv2 layer structure model, as listed in Table 2; and training parameter settings, as listed in Table 3.
To train the YOLOv2 model to label the position of an implant, this research manually labels a total of 211 photos, with 147 used for training and 64 for testing. The remaining images are labeled directly by the YOLOv2 model, as indicated in Table 4. Ultimately, the position of each implant is exported to the next step, while the confusion matrix is calculated using the results of the following step. Manually labeling a significant portion of the images allowed the performance of the model to be evaluated and any necessary adjustments to be made, and the ability of the trained YOLOv2 model to identify the location of the implant quickly and accurately enables efficient and accurate cropping of the image.
In conclusion, training the object detection model is critical to reducing the time required for manual image cropping in this research. The hardware and software configurations were optimized for this purpose and, together with the manually labeled images, allow the YOLOv2 model to accurately identify implant positions, therefore achieving efficient and accurate image cropping.

Optimizer

Optimizers play a crucial role in machine learning by helping to minimize the loss function. The choice of optimizer depends on the specific network and the problem at hand. In MATLAB, there are several options for optimizers, including Sgdm, RMSProp, and Adam.
The Sgdm optimizer is stochastic gradient descent with momentum, which combines the gradient of the current mini-batch with a momentum term accumulated from previous mini-batches to update the model parameters. It has been shown to be effective in improving convergence speed and reducing the likelihood of becoming stuck in local optima. The RMSProp optimizer, on the other hand, adjusts the learning rate adaptively for each parameter based on a moving average of the squared gradients. It is known to be useful for training recurrent neural networks. The Adam optimizer is another popular algorithm that combines the ideas of momentum and adaptive learning rates. It has been shown to be effective in training large-scale deep learning models.
For this research, the Sgdm optimizer was chosen for the YOLOv2 network. The reason for this choice may be related to its effectiveness in improving convergence speed, reducing the likelihood of becoming stuck in local optima, and its ability to handle large datasets. Ultimately, the choice of optimizer depends on the specific problem being addressed and the characteristics of the data.
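As a brief aside (a standard textbook formulation, not reproduced from the paper), the SGDM update combines the current gradient with an accumulated velocity term:
v_{t+1} = γ·v_t − α·∇L(θ_t),   θ_{t+1} = θ_t + v_{t+1}
where α is the learning rate, γ is the momentum coefficient, and v_t carries gradient information from previous mini-batches into the current update.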

Initial Learning Rate

The initial learning rate is a critical hyperparameter that determines the step size at each iteration during model training. It controls the speed of gradient descent and affects the performance of the model. However, choosing an optimal learning rate can be challenging. If the learning rate is set too high, the optimization may overshoot, resulting in convergence problems. Conversely, a learning rate that is too low leads to slow learning, which is inefficient and can result in overfitting or becoming trapped in a local minimum. Therefore, selecting an appropriate learning rate is essential for approaching the global minimum and successfully training the model.

Max Epoch, Mini Batch Size and Iteration

In neural network learning, an epoch refers to a complete iteration over the entire dataset. In MATLAB, the Max Epoch parameter is used to set the total number of epochs before the network training is stopped. However, when the size of each dataset is large, it may not be possible to process all the data at once due to limited memory resources. In such cases, the data are divided into smaller subsets called batches, with each batch containing a certain number of samples. In addition, the number of samples in each batch is referred to as the batch size. It is important to choose an appropriate batch size as it affects the performance of the neural network during training. Using a large batch size may accelerate the training process, but it can also cause overfitting where the network becomes excessively attuned to the training data, resulting in poor performance on new data. On the other hand, a small batch size can lead to slower convergence, but it also makes the training process more robust and generalizable to new data. Thus, choosing the right batch size is crucial in achieving good performance in neural network training.
The concept of Iteration is closely related to batch size. For instance, if a dataset contains 10 samples and the batch size is set to 2, then it would take 5 iterations to complete one epoch of training. During each iteration, the neural network updates its parameters based on the gradients calculated from the samples in the current batch. The relationship between dataset size, batch size, and iterations can be expressed mathematically, as shown in Equation (1):
Data set size = Iteration × Batch size (1 Epoch)        (1)
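As a quick worked example using numbers reported later in this paper: the 1304 augmented training images in Table 5, with a mini-batch size of 16 (Table 7), give roughly 1304/16 ≈ 81 iterations per epoch, which appears consistent with the iteration counts in Table 10 (e.g., 810 iterations at epoch 10).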

2.1.3. Cropping Dental Implant by YOLOv2

After training, the detector returns the position of the object, consisting of the point at the upper-left corner of the object and the length and width of the ROI. The key point of this step is to use the returned values to crop the required image. As noted in Section 2.1.1, the features between the implant and the gingiva should be preserved as much as possible during cropping. Therefore, several pixels are added to the returned coordinates in the horizontal direction, as shown in Figure 3.
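As a rough illustration of this margin-expanded cropping (not the authors' code; the margin value and the (x, y, w, h) box format are assumptions), a minimal sketch could look like this:

```python
def crop_thread_roi(image, bbox, margin=15):
    """Crop the detected thread ROI, expanded horizontally by a small
    margin so the neighboring gingiva is preserved. `image` is assumed
    to be a NumPy array (H x W) and `bbox` an (x, y, w, h) box; the
    15-pixel margin is an illustrative assumption."""
    x, y, w, h = bbox                                  # top-left corner, width, height
    x0 = max(int(x) - margin, 0)                       # extend left, clamp to image
    x1 = min(int(x + w) + margin, image.shape[1])      # extend right, clamp to image
    return image[int(y):int(y + h), x0:x1]
```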
The detection of damages in implant screw threads is not based on a single implant, but rather on a single side. Thus, after cropping the region of interest (ROI) of the implant thread using YOLOv2, further segmentation is necessary. To simplify the classification of items in the CNN model database and enable the model to focus more on damage identification, the cropped image is segmented into the mesial and distal sides. However, cropping poses a challenge as the thread may not be parallel to the Y-axis of the image. Therefore, linear regression analysis [28] is employed to determine the position of the implant in the image for cropping purposes, as shown in Equation (2):
y = β_0 + β_1 x        (2)
where β_0 represents the intercept and β_1 represents the slope. By analyzing the distribution of points on the coordinate axis, a line that represents the overall trend can be obtained. Based on the observation of dental implants in this project, the length of the implant is greater than its width in the photo. Therefore, by placing the implant horizontally on the coordinate axis, a linear equation that passes through the center of the implant can be obtained.
The initial step involves binarizing the image to extract the implant, as illustrated in Figure 4a. The next step entails plotting the extracted implant on the XY plane, as depicted in Figure 4b. Because the implant pixels are closely distributed, the last step uses linear regression analysis to determine the cutting line through the centerline of the implant, as shown in Figure 4c. Padding is applied to maintain the symmetry of the cropped image, resulting in two images that each contain only half of the screw thread, as demonstrated in Figure 4d,e.
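The following sketch illustrates the binarize-regress-split idea described above in Python/OpenCV terms. It is not the authors' MATLAB implementation: the Otsu threshold is an assumed choice, the line is fitted as x = b0 + b1·y (equivalent to the paper's y = β_0 + β_1 x with the implant laid horizontally), and the opposite half is simply blanked rather than padded.

```python
import cv2
import numpy as np

def split_implant_halves(crop_gray):
    """Binarize the implant crop, fit its centerline by linear regression,
    and split the crop into two half-thread images along that line."""
    # 1. Binarize: the metallic implant is much brighter than gum/bone.
    _, mask = cv2.threshold(crop_gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)                 # coordinates of implant pixels

    # 2. Least-squares line through the implant pixels: x = b0 + b1*y.
    b1, b0 = np.polyfit(ys, xs, deg=1)

    # 3. Cut along the fitted centerline, row by row.
    h, w = crop_gray.shape
    left, right = crop_gray.copy(), crop_gray.copy()
    for y in range(h):
        cut = int(round(b0 + b1 * y))         # centerline column at this row
        cut = min(max(cut, 0), w)
        left[y, cut:] = 0                     # keep only the left half
        right[y, :cut] = 0                    # keep only the right half
    return left, right
```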

2.2. Preprocessing

It is crucial to establish a well-characterized database that can effectively aid the CNN model in identifying the presence of peri-implantitis. To achieve this, this research categorized the database into two groups: the control group, consisting of implants without signs of peri-implantitis, and the test group, consisting of implants with signs of peri-implantitis. To classify the database, this research referred to the assessments of three physicians, each with at least five years of clinical experience, as to whether peri-implantitis was present. Although the cropped images could be used directly as a database for the CNN model, the original images still contain significant amounts of noise. This noise hinders the ability of the CNN model to differentiate between damaged and healthy implants. To enhance the learning ability of the CNN model, it is necessary to remove the noise and strengthen the features so that the differences between damaged and healthy implants become more distinct. For instance, in the test group's data, implants with signs of peri-implantitis exhibit obvious black subsidence marks around the alveolar bone on the image, which are not present in the control group's data. Hence, this research proposes image enhancement steps to improve the efficacy of the CNN model in detecting peri-implantitis.
The first step is to filter out any unnecessary noise. This involves converting the RGB images to grayscale and using histogram equalization and adaptive histogram equalization to accomplish this. The resulting images are overlaid onto the original images to enhance their boundaries. The second step is to enhance the features by examining the differences in color levels between the implant and gingiva. The research plots the values of each pixel in a 3D space and colors them accordingly on the original image. These steps are then combined to produce a pre-processed image. A CNN model training database is created using these pre-processed images which possess the necessary features and sufficiently high recognition accuracy to enable more effective CNN model training.

2.2.1. Histogram Equalization and Adaptive Histogram Equalization

The original image in Figure 5a has color scales that are too similar between the gingiva and the screw threads, which makes it difficult to distinguish damaged features due to excessive noise. The main objective of this step is to increase the color-scale separation between the gingiva and the implant while filtering out unnecessary noise. Histogram equalization [29] (Figure 5b) and adaptive histogram equalization [30] (Figure 5c) are used to achieve this goal. The result in Figure 5d is obtained by subtracting one image from the other. Then, the norm of the horizontal and vertical gradients of the image is calculated, and the results are plotted in 3D as shown in Figure 6, capturing the edge features. Finally, the results are combined with the coloring from the next step to complete the preprocessing. In Equation (3), p_x(i) is the probability of occurrence of grayscale value i (from 0 to 255), n_i is the total number of occurrences of grayscale value i in the picture, n is the total number of pixels in the image, and L is 256. Equation (4) presents the cumulative distribution function, which accumulates the probabilities of pixel values from 0 to 255 and linearizes the probability of occurrence of all pixels. Finally, the cumulative distribution function is multiplied by 255, as shown in Equation (5), to scale the cumulative probability from the range 0 to 1 to the range 0 to 255.
p_x(i) = n_i / n,  0 ≤ i < L        (3)
cdf_x(i) = Σ_{j=0}^{i} p_x(j)        (4)
cdf_x(i) × (255 − 0)        (5)
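The paper carries out these steps in MATLAB; the sketch below shows a rough OpenCV equivalent of the same pipeline. CLAHE is used here as a stand-in for adaptive histogram equalization, and the clip limit and tile size are assumed values, not parameters from the paper.

```python
import cv2
import numpy as np

def enhance_boundaries(gray):
    """Histogram equalization, adaptive (CLAHE) equalization, their
    difference image, and the gradient-magnitude map used as an edge
    feature (cf. Figures 5 and 6)."""
    heq = cv2.equalizeHist(gray)                                  # Figure 5b
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    aheq = clahe.apply(gray)                                      # Figure 5c
    diff = cv2.absdiff(heq, aheq)                                 # Figure 5d

    # Norm of the horizontal and vertical gradients (edge map, Figure 6).
    gx = cv2.Sobel(diff, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(diff, cv2.CV_64F, 0, 1, ksize=3)
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    return diff, grad_norm
```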

2.2.2. Image Enhanced with 3D Graphics Technology

In the previous step, we were able to identify edge features. However, to further emphasize the differences between damaged and healthy, it is necessary to use the distribution of gingiva on the image to enhance the distinction between the two categories of features. The main challenge in this step is distinguishing between the gum and dental implant regions. To address this issue, the correlation between the 3D map output of the previous step and the 3D map of the original image is utilized as shown in Figure 7a. The range value is used to determine whether a pixel is located on the edge or on the flat surface. When a pixel is on the flat surface, the Z-axis position in the 3D map is used as reference to determine whether it belongs to the dental implant or gum. If the Z-axis position is higher than the threshold between the dental implant and gum, the pixel is considered a dental implant and is colored green. If it is lower, it is considered gum and colored orange. A gate value is also used to separate the gum region from the rest of the original image. Pixels below this gate are set to 0 which appears black. Finally, to enhance the discriminability, a red reference line is added at the position of the damaged platform on each image as shown in Figure 7b.
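A minimal sketch of this coloring step is given below. All threshold values, the edge criterion, and the reference-line row are illustrative assumptions (the paper derives them from the 3D maps and the detected platform position), and colors are written in RGB order.

```python
import numpy as np

def colorize_implant_and_gum(gray, grad_norm, implant_thr=170,
                             gum_gate=40, edge_thr=30, platform_row=25):
    """Color flat implant pixels green and flat gingiva pixels orange,
    black out pixels below the gum gate, and draw a red reference line
    at the (assumed) platform row."""
    h, w = gray.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)                # start fully black

    flat = grad_norm < edge_thr                              # non-edge ("flat") pixels
    implant = flat & (gray >= implant_thr)                   # bright flat pixels -> thread
    gum = flat & (gray < implant_thr) & (gray >= gum_gate)   # mid-gray flat pixels -> gingiva

    out[implant] = (0, 255, 0)                               # thread colored green
    out[gum] = (255, 165, 0)                                 # gingiva colored orange
    # pixels darker than gum_gate stay black (the "gate" in the text)

    out[platform_row, :] = (255, 0, 0)                       # red damage reference line
    return out
```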

2.3. Image Classification

To monitor the learning progress of the model during training, the project divides the training data into an 80% training set and a 20% validation set, as listed in Table 5. The validation set is used to observe whether the model is overly focused on the training data and therefore makes incorrect predictions on new data, a situation known as overfitting. Insufficient data is one factor contributing to model overfitting, so this project augments the data by horizontally and vertically flipping the images, increasing the data volume by a factor of four. To ensure the accuracy of the training process, the numbers of damaged and healthy samples in the training and validation sets are kept at a ratio of approximately 1:1 so that each category has a consistent chance of being predicted correctly.
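A small sketch of the flip-based augmentation described above (the authors' actual MATLAB augmentation code is not given in the paper; the image size here is only an example):

```python
import numpy as np

def flip_augment(img):
    """Return the original image plus its horizontal, vertical, and
    combined flips -- a 4x augmentation of the training set."""
    return [img, np.fliplr(img), np.flipud(img), np.flipud(np.fliplr(img))]

# Toy usage: one 450 x 250 crop becomes four training samples, matching
# the counts in Table 5 (162 healthy -> 648, 164 damaged -> 656).
dummy = np.zeros((450, 250, 3), dtype=np.uint8)
print(len(flip_augment(dummy)))   # 4
```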

2.3.1. CNN Model

The hardware setup used to train the CNN image classification model is the same as described in Table 1 in Section 2.1.2. The CNN model is built using the Deep Network Designer app of MATLAB with AlexNet as the base model. However, the input size differs from the original 227 × 227 × 3 and is set to 450 × 250 × 3 to accommodate the elongated shape of dental implants and avoid the distortion caused by stretching rectangular images into squares, as shown in Figure 8a. This approach also prevents the excessive padding that results from filling the square, as shown in Figure 8b. The final architecture of AlexNet is presented in Table 6. To ensure accuracy in the training process, the quantities of damaged and healthy data in the training and validation sets are adjusted to approximately 1:1.
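The model itself is assembled in MATLAB's Deep Network Designer; for orientation, a loosely analogous PyTorch sketch is shown below. It is only an approximation of Table 6 (torchvision's AlexNet keeps its standard fully connected sizes rather than the 1152/144 units listed there), with a 2-class head and the rectangular 450 × 250 input.

```python
import torch
import torch.nn as nn
from torchvision import models

# Loose PyTorch analogue of the modified AlexNet classifier.
model = models.alexnet(weights=None)          # AlexNet backbone, no pretraining
model.classifier[6] = nn.Linear(4096, 2)      # replace the 1000-way head: damaged / healthy

x = torch.randn(1, 3, 450, 250)               # elongated crop, not stretched into a square
logits = model(x)                             # adaptive pooling handles the rectangular size
print(logits.shape)                           # torch.Size([1, 2])
```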

2.3.2. Hyperparameter

To train a model effectively, it is crucial to tune the appropriate training parameters according to the data characteristics. The parameters used in the YOLOv2 model trained in Section 2.1.2 differ from those used in the AlexNet model in this step. In this section, we will provide details about the Initial Learning Rate, Mini Batch Size, Max Epoch, and Dropout Factor. Moreover, the parameters used to train AlexNet are presented in Table 7.

Learning Rate Dropout

Machine learning models must be generalized to all types of data within their respective domains to make accurate predictions. Overfitting happens when a model becomes too closely fitted to the training dataset and fails to generalize. Preventing overfitting is crucial for successful model training. Dropout is a regularization technique utilized to address overfitting. It involves assigning a probability of dropping out hidden layer neurons during each iteration or epoch of training. The dropped-out neurons do not transmit any information.
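To make the Table 7 settings concrete, the sketch below expresses them as rough PyTorch equivalents. The mapping is an assumption made for illustration: LearnRateDropPeriod and LearnRateDropFactor are rendered as a StepLR schedule, the momentum value and the 0.5 dropout probability are guesses, and the classifier head mirrors only the last layers of Table 6.

```python
import torch
import torch.nn as nn

head = nn.Sequential(
    nn.Linear(1152, 144), nn.ReLU(),
    nn.Dropout(p=0.5),                          # randomly silences hidden units each iteration
    nn.Linear(144, 2))                          # damaged / healthy output

optimizer = torch.optim.SGD(head.parameters(), lr=6e-5, momentum=0.9)   # Sgdm, LR 0.00006
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.75)

for epoch in range(50):                         # Max Epoch = 50
    # ... one pass over mini-batches of size 16 would go here ...
    scheduler.step()                            # multiply the LR by 0.75 every 30 epochs
```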

3. Results

This section is divided into two parts: the first focuses on the training process and results of the YOLOv2 object detection model, while the second covers the CNN image classification model. Both models are compared with those proposed in other papers, and the precision and accuracy achieved by this project are assessed using confusion matrices.

3.1. YOLOv2 Object Detector

Table 8 presents a comprehensive overview of the YOLOv2 training process employed in this study. A model trained without a validation set was used to detect dental implants because the YOLOv2 training function in MATLAB does not support validation during training. The results of the detection process are detailed in the confusion matrix provided in Table 9, and Figure 9 depicts the training process for the YOLOv2 loss function. To evaluate the accuracy of the CNN model, the validation set was used as the input to the network, and the accuracy of the AlexNet model was evaluated by comparing its predictions with the correct answers obtained from the images.
The appropriate selection of hyperparameters is crucial for the success of any machine learning algorithm, and in this study the hyperparameters of YOLOv2 were carefully selected based on the data characteristics. As shown in Table 9, the YOLOv2 model correctly predicted 287 implants across all test cases, achieving a recall of 90.5%. Additionally, it correctly predicted 89 cases of normal teeth, resulting in a true negative rate of 78%. The accuracy rate of the model in this study is 89.3%, and the precision for the implant class is 95%. The YOLOv2 model shows little tendency to erroneously detect healthy teeth as implants, but another issue was encountered during testing. As depicted in Figure 10, the system tends to repeatedly detect incomplete implants in the same tooth, leading to high false negative values. In contrast to the literature [31,32], which employs positioning and identification technology, the proposed method in this study integrates automatic image cropping, with a resulting difference in accuracy of less than 2%. This proposal attains high precision and accuracy in dental implant detection and image classification. In general, these outcomes suggest that the proposed models have potential for clinical applications and could serve as a valuable tool for dental implant planning.

3.2. CNN AlexNet Image Classification

To monitor the training progress of the model, a validation set was employed in this project. The training process is presented in Table 10, while Figure 11 and Figure 12 display the accuracy and loss of training AlexNet, respectively. The black line in both figures represents the validation set.
Based on the data presented in Table 11, it is evident that using distorted images, which were not adjusted for relative size, as the training database led to a very high validation loss and an accuracy of less than 50%. These findings suggest that such a database has serious flaws, such as indistinct features resulting from image stretching or excessive noise in the images. The second column of Table 11 presents the results of training using histogram equalization, the overlaid original image, and adaptive histogram equalization. As a result of correcting the image size and enhancing the features, the loss decreased significantly and the accuracy increased to 81.43%. Nonetheless, these results were still below the standards set by this project; furthermore, during the training process, the loss rebounded after reaching 0.5, indicating the need for further image preprocessing. The third column of Table 11 shows the result of this project, which significantly enhances the features of an image by coloring different regions and adding damage reference lines. This enhancement helped the model perform better in image classification, achieving a validation set accuracy of 97.5%, while the loss dropped below the 0.5 threshold to 0.08.
The evaluation of the prediction results in terms of accuracy and precision was performed by comparing the predicted outcomes with the ground truth using a confusion matrix based on the test set, as presented in Table 12. Initially, the AlexNet model was trained on the original images, and after continuous adjustments the accuracy increased from 60% to 80%, where it reached a bottleneck. Further improvement was achieved with the use of preprocessing; the final training outcomes are depicted in Table 12. Nevertheless, some images in the test data have unclear boundaries, such as those without obvious screw threads or with gums whose grayscale values are similar to those of the screw threads, which may also lead to misjudgment by the human eye. Consequently, there were approximately 10% errors in the final testing. The CNN model performed remarkably well on the test set: it correctly predicted 107 of the 117 damaged samples, accounting for 91.4% of the total damaged test data, and 92 of the 103 healthy samples, accounting for 89.3% of the total healthy test data. The accuracy rate was 90.4% and the precision rate was 90.7%.
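For readers who wish to verify these figures, the short sketch below recomputes the Table 12 metrics directly from the reported counts (nothing here is new data; it simply reproduces the arithmetic).

```python
# Recomputing the Table 12 metrics from the reported counts.
tp, fn = 107, 10          # damaged implants: correctly / incorrectly classified
fp, tn = 11, 92           # healthy implants: incorrectly / correctly classified

recall = tp / (tp + fn)                       # 107/117 ≈ 0.914
precision = tp / (tp + fp)                    # 107/118 ≈ 0.907
accuracy = (tp + tn) / (tp + fn + fp + tn)    # 199/220 ≈ 0.904
print(f"recall={recall:.3f} precision={precision:.3f} accuracy={accuracy:.3f}")
```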
The direct classification of dental implant damage has not yet been addressed in the current state of the art. The paper closest in technique is [33], which focuses solely on determining the fit between the two sides of a dental implant rather than directly detecting damage on one side of the implant, as is done in this project. Table 13 presents a comparison with that technology, revealing that the image enhancement techniques utilized in this study contribute to the final recognition result of the CNN, with the accuracy rate increasing to 90.4%. This marks significant progress, as this research currently represents the highest reported performance in detecting implant damage.

4. Discussion

The YOLOv2 model achieved an 89.3% accuracy rate in detecting the position of dental implants, surpassing the performance of existing methods [34]. The multiple-identification issue revealed that the system may repeatedly detect the same tooth when there are incomplete implants, leading to high false negative values. In addition to the positioning and recognition technology used in [31,32], this study introduced automatic image cropping, resulting in less than a 2% accuracy difference, which is a promising direction. Another study [35] used YOLOv3 to identify dental implants, with TP ratios and APs ranging from 0.50 to 0.82 and 0.51 to 0.85 for each implant system, respectively; the resulting mAP and mIoU of the model were 0.71 and 0.72. However, only a small amount of training data was used, which may have compromised the accuracy of that model. For the AlexNet data used in this study, grayscale images were initially used for training, resulting in lower accuracy rates, and the accuracy was even lower when distorted images were used. Therefore, this research strengthens the high-precision image preprocessing process, improves the damage-detection accuracy of the model to 90.4%, and advances beyond the most closely related recent research.
The most related investigation [33] utilized Faster R-CNN to identify marginal bone loss surrounding implants (the κ value for the bone loss site was 0.547, while the κ value for the bone loss implant was 0.568) and compared the judgments of the AI to those of MD students and resident dentists on the same data. The results showed significant differences in the judgments of human observers. Therefore, training a consistent and accurate model can greatly facilitate healthcare by providing real-time treatment. However, the model is limited in its ability to detect finer levels of bone loss or the number of threads affected. Future research could address this limitation by exploring the use of additional imaging techniques or developing more sophisticated algorithms to detect these features, reducing misjudgment, and avoiding medical disputes.

5. Conclusions

The YOLOv2 model achieved an accuracy rate of 89.3% in capturing the implant position, while the AlexNet damage detection model achieved an accuracy rate of 90.4%. Moving forward, this research will continue to optimize the models and investigate better methods to improve accuracy. In terms of capturing dental implants, this study improves the cropping method by cutting through the interdental space to avoid capturing teeth outside the target area. In addition, this study handles the different grayscale ranges and the blurred gum and thread edges in the image. This automatic image preprocessing method greatly improves on the current manual preprocessing process; the automated approach not only improves efficiency and consistency but also reduces manual operation errors. Moreover, it is advisable to investigate the potential of incorporating advanced imaging techniques or developing more sophisticated algorithms that can accurately detect even subtle levels of bone loss or the number of affected threads. Finally, creating a user interface could improve user satisfaction and increase the ease of use and efficiency of the system, leading to improved work efficiency and product quality.

Author Contributions

Conceptualization, Y.-C.C. and M.-Y.C.; Data curation, Y.-C.C. and M.-Y.C.; Formal analysis, M.-L.C.; Funding acquisition, S.-L.C. and C.-A.C.; Methodology, T.-Y.C. and M.-L.C.; Resources, C.-A.C., S.-L.C. and P.A.R.A.; Software, S.-L.C., Y.-L.L., P.-T.L., G.-J.L. and T.-F.L.; Supervision, C.-A.C. and S.-L.C.; Validation, P.-T.L., G.-J.L. and T.-F.L.; Visualization, Y.-Y.H., K.-C.L. and P.A.R.A.; Writing—original draft, T.-Y.C.; Writing—review and editing, M.-L.C., K.-C.L., C.-A.C. and P.A.R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Technology (MOST), Taiwan, under grant numbers of MOST-111-2221-E-033-041, MOST-111-2823-8-033-001, MOST-111-2622-E-131-001, MOST-110-2223-8-033-002, MOST-110-2221-E-027-044-MY3, MOST-110-2218-E-035-007, MOST-110-2622-E-131-002, MOST-109-2622-E-131-001-CC3, MOST-109-2221-E-131-025, and MOST-109-2410-H-197-002-MY3.

Institutional Review Board Statement

Chang Gung Medical Foundation Institutional Review Board; IRB number: 02102023B0C503; Date of Approval: 1 December 2020; Protocol Title: A Convolutional Neural Network Approach for Dental Bite-Wing, Panoramic and Periapical Radiographs Classification. Executing Institution: Chang-Geng Medical Foundation Taoyuan Chang-Geng Memorial Hospital of Taoyuan; Duration of Approval: From 1 December 2020 to 30 November 2021. The IRB reviewed and determined that it is an expedited review according to case research or cases treated or diagnosed by clinical routines. However, this does not include HIV-positive cases.

Informed Consent Statement

The IRB approved the waiver of the participants' consent.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tricio, J.; Laohapand, P.; Van Steenberghe, D.; Quirynen, M.; Naert, I. Mechanical state assessment of the implant-bone continuum: A better understanding of the Periotest method. Int. J. Oral Maxillofac. Implant. 1995, 10, 43–49. [Google Scholar]
  2. Wright, W.E.; Davis, M.L.; Geffen, D.B.; Martin, S.E.; Nelson, M.J.; Straus, S.E. Alveolar bone necrosis and tooth loss: A rare complication associated with herpes zoster infection of the fifth cranial nerve. Oral Surg. Oral Med. Oral Pathol. 1983, 56, 39–46. [Google Scholar] [CrossRef] [PubMed]
  3. Eckerbom, M.; Magnusson, T.; Martinsson, T. Reasons for and incidence of tooth mortality in a Swedish population. Dent. Traumatol. 1992, 8, 230–234. [Google Scholar] [CrossRef] [PubMed]
  4. Krall, E.A.; Garvey, A.J.; Garcia, R.I. Alveolar bone loss and tooth loss in male cigar and pipe smokers. J. Am. Dent. Assoc. 1999, 130, 57–64. [Google Scholar] [CrossRef] [PubMed]
  5. Duong, H.; Roccuzzo, A.; Stähli, A.; Salvi, G.E.; Lang, N.P.; Sculean, A. Oral health-related quality of life of patients rehabilitated with fixed and removable implant-supported dental prostheses. Periodontology 2000 2022, 88, 201–237. [Google Scholar] [CrossRef]
  6. Kanehira, Y.; Arai, K.; Kanehira, T.; Nagahisa, K.; Baba, S. Oral health-related quality of life in patients with implant treatment. J. Adv. Prosthodont. 2017, 9, 476–481. [Google Scholar] [CrossRef]
  7. Dental Implants Market Size, Share & Growth Report. 2030. Available online: https://www.grandviewresearch.com/industry-analysis/dental-implants-market (accessed on 6 March 2023).
  8. Fiorillo, L.; Meto, A.; Cicciù, M. Bioengineering Applied to Oral Implantology, a New Protocol: “Digital Guided Surgery”. Prosthesis 2023, 5, 234–250. [Google Scholar] [CrossRef]
  9. Abraham, C.M. A Brief Historical Perspective on Dental Implants, Their Surface Coatings and Treatments. Open Dent. J. 2014, 8, 50–55. [Google Scholar] [CrossRef]
  10. Block, M.S. Dental Implants: The Last 100 Years. J. Oral Maxillofac. Surg. 2018, 76, 11–26. [Google Scholar] [CrossRef]
  11. Alqahtani, N.D.; Alzahrani, B.; Ramzan, M.S. Deep Learning Applications for Dyslexia Prediction. Appl. Sci. 2023, 13, 2804. [Google Scholar] [CrossRef]
  12. Sethi, Y.; Patel, N.; Kaka, N.; Desai, A.; Kaiwan, O.; Sheth, M.; Sharma, R.; Huang, H.; Chopra, H.; Khandaker, M.U.; et al. Artificial Intelligence in Pediatric Cardiology: A Scoping Review. J. Clin. Med. 2022, 11, 7072. [Google Scholar] [CrossRef] [PubMed]
  13. Zhang, G.; Luo, L.; Zhang, L.; Liu, Z. Research Progress of Respiratory Disease and Idiopathic Pulmonary Fibrosis Based on Artificial Intelligence. Diagnostics 2023, 13, 357. [Google Scholar] [CrossRef] [PubMed]
  14. Basu, K.; Sinha, R.; Ong, A.; Basu, T. Artificial intelligence: How is it changing medical sciences and its future? Indian J. Dermatol. 2020, 65, 365–370. [Google Scholar] [CrossRef]
  15. Chen, S.-L.; Chen, T.-Y.; Huang, Y.-C.; Chen, C.-A.; Chou, H.-S.; Huang, Y.-Y.; Lin, W.-C.; Li, T.-C.; Yuan, J.-J.; Abu, P.A.R.; et al. Missing Teeth and Restoration Detection Using Dental Panoramic Radiography Based on Transfer Learning with CNNs. IEEE Access 2022, 10, 118654–118664. [Google Scholar] [CrossRef]
  16. Mao, Y.-C.; Chen, T.-Y.; Chou, H.-S.; Lin, S.-Y.; Liu, S.-Y.; Chen, Y.-A.; Liu, Y.-L.; Chen, C.-A.; Huang, Y.-C.; Chen, S.-L.; et al. Caries and Restoration Detection Using Bitewing Film Based on Transfer Learning with CNNs. Sensors 2021, 21, 4613. [Google Scholar] [CrossRef]
  17. Li, C.-W.; Lin, S.-Y.; Chou, H.-S.; Chen, T.-Y.; Chen, Y.-A.; Liu, S.-Y.; Liu, Y.-L.; Chen, C.-A.; Huang, Y.-C.; Chen, S.-L.; et al. Detection of Dental Apical Lesions Using CNNs on Periapical Radiograph. Sensors 2021, 21, 7049. [Google Scholar] [CrossRef]
  18. Chuo, Y.; Lin, W.-M.; Chen, T.-Y.; Chan, M.-L.; Chang, Y.-S.; Lin, Y.-R.; Lin, Y.-J.; Shao, Y.-H.; Chen, C.-A.; Chen, S.-L.; et al. A High-Accuracy Detection System: Based on Transfer Learning for Apical Lesions on Periapical Radiograph. Bioengineering 2022, 9, 777. [Google Scholar] [CrossRef]
  19. Geetha, V.; Aprameya, K.S.; Hinduja, D.M. Dental caries diagnosis in digital radiographs using back-propagation neural network. Health Inf. Sci. Syst. 2020, 8, 8. [Google Scholar] [CrossRef] [PubMed]
  20. Muramatsu, C.; Morishita, T.; Takahashi, R.; Hayashi, T.; Nishiyama, W.; Ariji, Y.; Zhou, X.; Hara, T.; Katsumata, A.; Ariji, E.; et al. Tooth detection and classification on panoramic radiographs for automatic dental chart filing: Improved classification by multi-sized input data. Oral Radiol. 2021, 37, 13–19. [Google Scholar] [CrossRef]
  21. Kohlakala, A.; Coetzer, J.; Bertels, J.; Vandermeulen, D. Deep learning-based dental implant recognition using synthetic X-ray images. Med. Biol. Eng. Comput. 2022, 60, 2951–2968. [Google Scholar] [CrossRef]
  22. Chen, S.-L.; Chen, T.-Y.; Mao, Y.-C.; Lin, S.-Y.; Huang, Y.-Y.; Chen, C.-A.; Lin, Y.-J.; Hsu, Y.-M.; Li, C.-A.; Chiang, W.-Y.; et al. Automated Detection System Based on Convolution Neural Networks for Retained Root, Endodontic Treated Teeth, and Implant Recognition on Dental Panoramic Images. IEEE Sens. J. 2022, 22, 23293–23306. [Google Scholar] [CrossRef]
  23. Lin, S.-Y.; Chang, H.-Y. Tooth Numbering and Condition Recognition on Dental Panoramic Radiograph Images Using CNNs. IEEE Access 2021, 9, 166008–166026. [Google Scholar] [CrossRef]
  24. Widiasri, M.; Arifin, A.Z.; Suciati, N.; Fatichah, C.; Astuti, E.R.; Indraswari, R.; Putra, R.H.; Za’In, C. Dental-YOLO: Alveolar Bone and Mandibular Canal Detection on Cone Beam Computed Tomography Images for Dental Implant Planning. IEEE Access 2022, 10, 101483–101494. [Google Scholar] [CrossRef]
  25. Yadalam, P.K.; Trivedi, S.S.; Krishnamurthi, I.; Anegundi, R.V.; Mathew, A.; Al Shayeb, M.; Narayanan, J.K.; Jaberi, M.A.; Rajkumar, R. Machine Learning Predicts Patient Tangible Outcomes After Dental Implant Surgery. IEEE Access 2022, 10, 131481–131488. [Google Scholar] [CrossRef]
  26. Hashim, D.; Cionca, N.; Combescure, C.; Mombelli, A. The diagnosis of peri-implantitis: A systematic review on the predictive value of bleeding on probing. Clin. Oral Implant. Res. 2018, 29 (Suppl. 16), 276–293. [Google Scholar] [CrossRef] [PubMed]
  27. Prathapachandran, J.; Suresh, N. Management of peri-implantitis. Dent. Res. J. 2012, 9, 516–521. [Google Scholar] [CrossRef]
  28. Isobe, T.; Feigelson, E.D.; Akritas, M.G.; Babu, G.J. Linear regression in astronomy. Astrophys. J. 1990, 364, 104–113. [Google Scholar] [CrossRef]
  29. Lu, L.; Zhou, Y.; Panetta, K.; Agaian, S. Comparative study of histogram equalization algorithms for image enhancement. Mob. Multimed. Image Process. Secur. Appl. 2010, 7708, 770811. [Google Scholar] [CrossRef]
  30. Zhu, Y.; Huang, C. An Adaptive Histogram Equalization Algorithm on the Image Gray Level Mapping. Phys. Procedia 2012, 25, 601–608. [Google Scholar] [CrossRef]
  31. Chen, H.; Zhang, K.; Lyu, P.; Li, H.; Zhang, L.; Wu, J.; Lee, C.-H. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci. Rep. 2019, 9, 3840–3851. [Google Scholar] [CrossRef]
  32. Jang, W.S.; Kim, S.; Yun, P.S.; Jang, H.S.; Seong, Y.W.; Yang, H.S.; Chang, J.-S. Accurate detection for dental implant and peri-implant tissue by transfer learning of faster R-CNN: A diagnostic accuracy study. BMC Oral Health 2022, 22, 591. [Google Scholar] [CrossRef]
  33. Liu, M.; Wang, S.; Chen, H.; Liu, Y. A pilot study of a deep learning approach to detect marginal bone loss around implants. BMC Oral Health 2022, 22, 11. [Google Scholar] [CrossRef] [PubMed]
  34. Lin, N.-H.; Lin, T.-L.; Wang, X.; Kao, W.-T.; Tseng, H.-W.; Chen, S.-L.; Chiou, Y.-S.; Lin, S.-Y.; Villaverde, J.F.; Kuo, Y.-F. Teeth Detection Algorithm and Teeth Condition Classification Based on Convolutional Neural Networks for Dental Panoramic Radiographs. J. Med. Imaging Health Inform. 2018, 8, 507–515. [Google Scholar] [CrossRef]
  35. Takahashi, T.; Nozaki, K.; Gonda, T.; Mameno, T.; Wada, M.; Ikebe, K. Identification of dental implants using deep learning—Pilot study. Int. J. Implant. Dent. 2020, 6, 53. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The flowchart of this research.
Figure 2. Illustrating the marking of the Region of Interest (ROI) in this study. (a) The dental implant is depicted, including the area above the screw thread. (b) Only dental implants with screw threads are included in the ROI.
Figure 3. The cropping example with and without extending: (a,c) the image cropped directly; (b,d) the image cropped after extending.
Figure 4. The result of the images: (a) image binarization; (b) dental implants plot on coordinate axis; (c) dental implant midline; (d) the left half of the image after cutting; (e) the right half of the image after cutting.
Figure 5. The results of pre-processing: (a) original image; (b) image after histogram equalization, (c) image after adaptive histogram equalization; (d) result after subtraction.
Figure 6. The plot of gradient result in 3D map.
Figure 7. Enhanced schematics through 3D graphics technology: (a) the plot of the original image in a 3D map; (b) changing the color of the dental implant and gums in the image with added reference lines.
Figure 8. The image used for training the CNN: (a) distortion image; (b) over padding image.
Figure 9. The training process for the YOLOv2 loss function. The blue line represents the change in the loss function over the number of iterations.
Figure 10. The image of detecting an incomplete implant.
Figure 11. The accuracy of AlexNet model in the validation set (black line) and training set (blue line) during training process.
Figure 12. The loss of the AlexNet model on the validation set (black line) and the training set (orange line) during the training process.
Table 1. Experimental environment specifications.

Hardware Platform     Version
CPU                   Intel i5-12400
GPU                   GeForce RTX 3080
DRAM                  DDR4 3200 32 GB
Software Platform     Version
MATLAB                R2022b
Table 2. The layers of the YOLOv2 model.

No.   Type                    Activations
1     Image Input             416 × 416 × 3
2     2-D Convolution         416 × 416 × 16
3     Batch Normalization     416 × 416 × 16
4     Leaky ReLU              416 × 416 × 16
5     2-D Max Pooling         208 × 208 × 16
6     2-D Convolution         208 × 208 × 32
7     Batch Normalization     208 × 208 × 32
8     Leaky ReLU              208 × 208 × 32
9     2-D Max Pooling         104 × 104 × 32
10    2-D Convolution         104 × 104 × 64
11    Batch Normalization     104 × 104 × 64
12    Leaky ReLU              104 × 104 × 64
13    2-D Max Pooling         52 × 52 × 64
14    2-D Convolution         52 × 52 × 128
15    Batch Normalization     52 × 52 × 128
16    Leaky ReLU              52 × 52 × 128
17    2-D Max Pooling         26 × 26 × 128
18    2-D Convolution         26 × 26 × 256
19    Batch Normalization     26 × 26 × 256
20    Leaky ReLU              26 × 26 × 256
21    2-D Max Pooling         13 × 13 × 256
22    2-D Convolution         13 × 13 × 512
23    Batch Normalization     13 × 13 × 512
24    Leaky ReLU              13 × 13 × 512
25    2-D Max Pooling         13 × 13 × 512
26    2-D Convolution         13 × 13 × 1024
27    Batch Normalization     13 × 13 × 1024
28    Leaky ReLU              13 × 13 × 1024
29    2-D Convolution         13 × 13 × 512
30    Batch Normalization     13 × 13 × 512
31    Leaky ReLU              13 × 13 × 512
32    2-D Convolution         13 × 13 × 30
33    Transform Layer         13 × 13 × 30
34    Output                  13 × 13 × 30
Table 3. Hyperparameters for YOLOv2 training.

Hyperparameters           Value
Optimizer                 sgdm
Initial Learning Rate     0.001
Max Epoch                 24
Mini Batch Size           16
Table 4. The distribution of data in the original periapical images obtained from clinical sources.

The Number of Original Periapical Images
            Training    Test    The Others    Total
Quantity    147         46      263           456
Table 5. Image classification of the periapical images before and after preprocessing.

The Number of Periapical Images before and after Preprocessing
Before       Training Set       Validation Set
Healthy      162                40
Damaged      164                40
Total        326                80

After        Training Set       Validation Set
Healthy      648 (Augmented)    40
Damaged      656 (Augmented)    40
Total        1304               80
Table 6. The model architecture of AlexNet.

No.   Type                           Activations
1     Image Input                    450 × 250 × 3
2     2-D Convolution                110 × 60 × 96
3     ReLU                           110 × 60 × 96
4     Cross Channel Normalization    110 × 60 × 96
5     2-D Max Pooling                54 × 29 × 96
6     2-D Grouped Convolution        54 × 29 × 256
7     ReLU                           54 × 29 × 256
8     Cross Channel Normalization    54 × 29 × 256
9     2-D Max Pooling                26 × 14 × 256
10    2-D Convolution                26 × 14 × 384
11    ReLU                           26 × 14 × 384
12    2-D Grouped Convolution        26 × 14 × 384
13    ReLU                           26 × 14 × 384
14    2-D Grouped Convolution        26 × 14 × 256
15    ReLU                           26 × 14 × 256
16    2-D Max Pooling                12 × 6 × 256
17    Fully Connected                1 × 1 × 1152
18    ReLU                           1 × 1 × 1152
19    Dropout                        1 × 1 × 1152
20    Fully Connected                1 × 1 × 144
21    ReLU                           1 × 1 × 144
22    Dropout                        1 × 1 × 144
23    Fully Connected                1 × 1 × 2
24    Softmax                        1 × 1 × 2
25    Classification Output          1 × 1 × 2
Table 7. Hyperparameters in the AlexNet model.

Hyperparameters           Value
Optimizer                 Sgdm
Initial Learning Rate     0.00006
Max Epoch                 50
Mini Batch Size           16
LearnRateDropFactor       0.75
LearnRateDropPeriod       30
Table 8. Training process for YOLOv2.

Epoch    Iteration    Time Elapsed    Mini-Batch RMSE    Mini-Batch Loss
1        1            00:03           7.75               60.0
6        500          00:27           1.13               1.3
12       1000         00:51           0.77               0.6
17       1500         01:14           0.56               0.3
23       2000         01:37           0.44               0.2
24       2160         01:43           0.53               0.3
Table 9. The confusion matrix of the YOLOv2 test.

Output Class \ Target Class    Implant         Tooth          Subtotal
Implant                        287 (68.2%)     15 (3.6%)      95%
Tooth                          30 (7.1%)       89 (21.1%)     74.8%
Subtotal                       90.5%           78%            89.3%
Table 10. The detailed process of AlexNet training.

Epoch    Iteration    Mini-Batch Accuracy    Validation Accuracy    Mini-Batch Loss    Validation Loss
1        1            56.25                  50.00                  8.3222             3.1206
10       810          81.25                  87.50                  0.2822             0.2692
20       1620         100.00                 95.00                  0.0358             0.1354
30       2430         100.00                 96.25                  0.0376             0.1144
40       3240         100.00                 96.25                  0.0177             0.0951
50       4050         100.00                 96.25                  0.0126             0.0676
60       4860         100.00                 97.50                  0.0070             0.0989
70       5670         100.00                 97.50                  0.0109             0.0740
80       6480         100.00                 97.50                  0.0051             0.0672
90       7290         100.00                 97.50                  0.0035             0.0551
100      8100         100.00                 97.50                  0.0055             0.0766
Table 11. Comparison of the datasets used in various stages and the validation results.

                        Original Images    Adaptive Histogram Equalization    Adaptive Histogram Equalization + Damage Reference Lines
Validation Accuracy     48.64              81.43                              97.5
Validation Loss         2.21               0.54                               0.08
Net                     AlexNet            AlexNet                            AlexNet
Table 12. The confusion matrix of the AlexNet test.

Output Class \ Target Class    Damaged         Healthy         Subtotal
Damaged                        107 (48.6%)     11 (5%)         90.7%
Healthy                        10 (4.5%)       92 (41.8%)      90.2%
Subtotal                       91.4%           89.3%           90.4%
Table 13. The comparison table between the prior art and this study.

            This Work    Method in [33]
Method      CNN          Faster R-CNN
Accuracy    90.4%        81%