Article

Convolutional Neural Network-Based Deep Learning Methods for Skeletal Growth Prediction in Dental Patients

by Miran Hikmat Mohammed 1, Zana Qadir Omer 2, Barham Bahroz Aziz 3, Jwan Fateh Abdulkareem 3, Trefa Mohammed Ali Mahmood 4, Fadil Abdullah Kareem 5,* and Dena Nadhim Mohammad 6
1 Department of Basic Sciences, College of Dentistry, University of Sulaimani, Sulaimaniyah 46001, Iraq
2 Department of POP, College of Dentistry, Hawler Medical University, Erbil 44001, Iraq
3 Department of Prosthodontics, College of Dentistry, University of Sulaimani, Sulaimaniyah 46001, Iraq
4 Department of Orthodontics, College of Dentistry, University of Sulaimani, Sulaimaniyah 46001, Iraq
5 Department of Pedodontics and Community Oral Health, College of Dentistry, University of Sulaimani, Sulaimaniyah 46001, Iraq
6 Department of Oral Diagnosis, College of Dentistry, University of Sulaimani, Sulaimaniyah 46001, Iraq
* Author to whom correspondence should be addressed.
J. Imaging 2024, 10(11), 278; https://doi.org/10.3390/jimaging10110278
Submission received: 29 September 2024 / Revised: 27 October 2024 / Accepted: 31 October 2024 / Published: 2 November 2024
(This article belongs to the Section AI in Imaging)

Abstract: This study aimed to predict skeletal growth maturation using convolutional neural network (CNN)-based deep learning methods applied to cervical vertebral maturation and the calcification level of the lower second molar, so that skeletal maturation can be detected from orthopantomography using multiclass classification. About 1200 cephalometric radiographs and 1200 OPGs were selected from patients seeking treatment in dental centers. The level of skeletal maturation was detected by a CNN using multiclass classification: each cephalometric image was assigned a cervical vertebral maturation index (CVMI) stage, while the chronological age was estimated from the calcification level of the second molar. The final model predicts each stage, for each gender, with a high degree of accuracy: cervical vertebral maturation reached its highest accuracy in males (98%), while females showed the highest accuracy for second molar calcification (97%). CNN multiclass classification is an accurate method to detect the level of maturation, whether from cervical maturation or from calcification of the lower second molar; since the calcification level of the lower second molar is a reliable indicator of the growth level, a traditional OPG is sufficient for this purpose.

1. Introduction

Artificial intelligence (AI) significantly enhances efficiency, accuracy, and treatment planning in dental care. It improves healthcare decision-making, therapy, and rehabilitation, and it aids in the identification of dental and skeletal abnormalities through radiographic interpretation and the assessment of growth and development [1,2]. Estimation of craniofacial growth represents the main target of preventive and treatment programs in orthodontics [3].
Proper diagnosis of a patient’s tooth development and craniofacial growth significantly impacts treatment plans. Facial and dental structures change over time, which helps orthodontists anticipate growth patterns and assess the potential impact of orthodontic interventions. It informs decisions regarding the timing and nature of orthodontic treatment and its expected outcomes [4,5]. Understanding the development of teeth and craniofacial structures in this young age group forms the basis for intervention strategies that aim to reduce malocclusion within the population [6].
Various biological indicators predict the human growth stage, such as chronological age, dental development, sexual maturation, and skeletal age. Many researchers have found that skeletal maturity is closely related to craniofacial growth [7].
The cervical vertebrae can indirectly influence orthodontic treatment due to their connection to the craniofacial complex. The relationship between the cervical vertebrae and orthodontics is often considered when assessing skeletal maturity and growth patterns. Orthodontists may consider the growth status of the cervical vertebrae when planning treatments related to jaw relationships, tooth alignment, and overall facial harmony [8,9]. Determining the growth potential at this stage of development could enable early short-term interceptive orthodontic treatment with simple appliances that reduce the complexity or even bypass the need to use other expensive orthodontic interventions [10].
Previous studies indicated that lateral cephalometric radiographs could detect cervical vertebral maturation (CVM) level as a dependable factor for the circum-pubertal growth phase [9,11]. This knowledge supports preventive orthodontic measures by enabling more precise timing for orthodontic interventions [10].
Tooth development is often linked with skeletal maturity. Panoramic radiographic interpretation can provide information about the stage of dental development. Interceptive orthodontic treatment may be required to manage specific malocclusion during the mixed dentition phase, and root development influences such intervention [12,13].
The use of osseointegrated implants has been increasingly widespread in the adult population. The literature shows a certain lack of application of this technique in children. The bone growth and development must be well analyzed, and the pediatric dentist might suggest using this treatment option for oral rehabilitation when necessary [14].
Dental implant placement is one of the possible modes of rehabilitation in pediatric patients with conditions such as congenital partial anodontia and traumatic tooth loss. Systematic planning of treatment is required to achieve optimum esthetic and functional outcomes. Furthermore, growth assessment accompanied by alveolar bone evaluation is mandatory to plan implant treatment. For more significant outcomes of implant treatments, all surgical and orthodontic procedures should be initiated about a year before the implant placement. In children, the greater the physiologic harmony that can be created within the dentition, alveolar bone, and skeletal growth changes, the higher the chances of successful implant placement. In determining the optimal individual time point of implant insertion, the status of skeletal growth, the degree of hypodontia, and the extension of related psychological stress should be considered in addition to the status of a pediatric patient’s existing dentition and dental compliance [15].
Deep learning models have led to remarkable improvements in image processing and segmentation. They have shifted the interpretation of dental radiographs toward automatic diagnosis and treatment planning, performing dental structure segmentation, classification, and identification of dental diseases with significantly high accuracy [16].
A study by Kafieh and Aghazadeh used CNNs to classify tooth maturity stages from panoramic radiographs, highlighting the effectiveness of traditional imaging techniques for dental assessment. Their CNN model demonstrated high accuracy in classifying different stages of tooth maturity, with effective performance on metrics including precision, recall, and F1-score, indicating that the model could reliably differentiate between maturity levels [17]. Since no study has correlated skeletal maturity with dental stage classification, this study aimed to predict skeletal growth maturation with convolutional neural network-based deep learning methods, using cervical vertebral maturation and the lower second molar calcification level, so that skeletal maturation can be detected from an orthopantomography (OPG) image. No previous study has applied multiclass classification to this image identification task, marking a significant advance in the application of AI to preventive measures in dentistry and orthodontics.

2. Materials and Methods

2.1. Registration for Study

The ethics committee of the College of Dentistry, University of Sulaimani, approved this study (approval no. 199, 10 December 2023). The authors disclose that the study was carried out in adherence to the principles and guidelines of the Helsinki Declaration. Additionally, the participants provided informed consent to be included in the study.

2.2. Sample

Two thousand four hundred radiographic images (1200 cephalometric radiographs and 1200 OPGs, as shown in Figure 1 and Figure 2) were chosen from patients aged 8 to 16 years seeking dental treatment in private dental clinics. None had undergone previous orthodontic treatment or orthognathic surgery, and none had a medical history or medication that interfered with the natural growth process. All radiographic images were obtained by an experienced technician.
The X-ray machine used in the study was a Vatech PaX-i3D (Vatech Company, Hwaseong-si, Republic of Korea), model PHT6500, with a 0.5 mm focal spot (IEC 60336), an output of 90 kV and 10 mA, and total filtration of 2.8 mm Al. The machine’s software adjusts radiation and exposure time for OPG and cephalometric imaging automatically according to the patient’s age.
CVMI was evaluated by classifying the cervical vertebrae C2, C3, and C4 into six stages depending on their maturation patterns on a lateral cephalogram, using the method of McNamara, Baccetti, and Franchi (2005) [18].
The analysis consisted of both cephalometric (quantitative) and visual (qualitative) appraisals of the morphologic characteristics of the cervical vertebrae, revealing that statistically significant distinctions can be made between the stages of cervical vertebral maturation (Appendix A). Moreover, the mandibular right second molar was used (Figure 1). Tooth calcification was rated according to the index described by Demirjian et al. (Demirjian index, DI; 1973), in which one of eight stages of calcification (A to H) was assigned to the tooth (Appendix B) [19].

2.3. Study Protocol

This work proposes a new method using AI techniques, specifically a deep learning mechanism, the convolutional neural network (CNN), which performs image classification as binary or multiclass classification. Binary classification distinguishes between two classes, whereas multiclass classification distinguishes between more than two classes.
This paper, however, is concerned with differentiating between multiple classes: six classes of cervical vertebral maturation extracted from cephalometric radiographs, named CS1 to CS6, and, in a separate model, the lower second molar calcification level according to the Demirjian index (1973). As the patients seeking orthodontic treatment were mainly aged between 8 and 16 years, the first two calcification stages (A and B) were not involved in this study, leaving the six stages C to H.
Each classification task is run as a separate process, and for each process a CNN model is used with a different network architecture, that is, a different number of layers. Both sets of predictions were then tested for correctness using several classification metrics: precision, recall, F1-score, the confusion matrix, and accuracy. These metrics were applied to both the cervical and molar class predictions.
Moreover, the cervical and molar model architectures were implemented per gender, for males and females, and the same classification metrics were computed for both genders in each CNN model.
The strategy began by collecting images from different patients, with no fewer than 200 images per class for the molar (OPG) and cervical image sets. These images were separated into groups and categories (molar, cervical, and gender within each), with the main classification based on growth stage. Each patient has two X-ray images, a cephalometric radiograph and an OPG. The required action is therefore to find the stage of the predicted molar from the predicted cervical image, for both genders; the reverse process, predicting the cervical stage from the molar image, is implemented as well. The correlation between the two images after prediction is counted as 1 if the two growth stages are correlated and 0 if they are not. Since the implementation covers both genders, gender is considered in both the correlation and the stage prediction for the two images.
The image datasets for each model architecture are then split into a training set and a validation set. The training set is used to train the model so that it generalizes to unseen data, while the validation set serves as unseen data for estimating the model’s accuracy before it is finalized for production. The split is 20% for validation and the remaining 80% for training, and it is applied after the image datasets have been organized into folders by class name.
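As a rough illustration of this split, the following Python sketch collects image paths from class-named folders and holds out 20% for validation; the directory layout and file pattern are assumptions, and stratification is added here so that every class keeps its share.

```python
# A minimal sketch of the 80/20 split described above, assuming images are
# organized in class-named folders; paths and the file pattern are illustrative.
import glob
import os
from sklearn.model_selection import train_test_split

DATA_DIR = "cervical/male"  # hypothetical directory layout
CLASSES = ["CS1", "CS2", "CS3", "CS4", "CS5", "CS6"]

paths, labels = [], []
for label, cls in enumerate(CLASSES):
    for p in glob.glob(os.path.join(DATA_DIR, cls, "*.png")):
        paths.append(p)
        labels.append(label)

# 80% training / 20% validation, stratified so every class keeps its share
train_p, val_p, train_y, val_y = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=42)
```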
In the case of molar predictions, there are six classes, from C to H, and these are the names of the folders. In the same way, cervical images are categorized into six classes named CS1 to CS6, giving six folders. This is done for both genders, in separate folders named females and males.
After the images have been split into folders by category, the next step is to design the AI model using a CNN. The model takes the images in a single greyscale channel, cropped to capture the region of interest: C2, C3, and C4 in the vertebral X-rays, and the lower second molar in the OPG images.
Subsequently, the images were resized to 250 by 250 pixels for each image category; this size helps capture more features from the images during the CNN process. If the images do not share the same width and height, performance suffers and accuracy is low.
Another important step is downscaling: the min–max scaling technique maps image pixel values into a common range between 0 and 1. This is important because the model works on many images whose pixel values span different ranges in different regions, and scaling improves the model’s performance compared with working on large raw pixel values.
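A minimal preprocessing sketch along these lines, using OpenCV, is shown below; the file path is illustrative, and the per-image min–max normalization shown here is one plausible reading of the described scaling (dividing by 255 is the other common variant).

```python
# Sketch of the preprocessing described above: greyscale read, resize to
# 250x250, and min-max scaling to [0, 1]. Path and helper name are illustrative.
import cv2
import numpy as np

def preprocess(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # single grey channel
    img = cv2.resize(img, (250, 250))             # uniform width and height
    img = img.astype(np.float32)
    # min-max scaling: map pixel values into the range [0, 1]
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return img[..., np.newaxis]                   # shape (250, 250, 1)
```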
In addition, to avoid overfitting, the images go through an augmentation process that takes each image and creates variants of different shapes. Augmentation in CNN training involves transformations such as scaling, flipping, translation, and rotation. At the end of this process, many more images are available, which reduces overfitting and increases prediction accuracy.
Augmentation is applied to the training set only, because the CNN model should train on more data and more varied image shapes; the validation set is reserved for testing how the model performs on unseen data.
After that, the CNN model is created with four convolution layers using 3-by-3 filters of different counts, each accompanied by batch normalization and a dropout layer, which improve the model’s performance and lessen overfitting.
These filters extract the most important features from each image during training, such as edges and other relevant content; content that can serve as a feature is content that is repeated most often or that distinguishes one image from others.
In addition to the filtering process, a stride of one is used, shifting the filter one column of pixels at a time across the image. After each convolution level, max pooling is applied to make the extracted features more focused and to reduce the image to the parts most relevant for prediction.
At the last step of the CNN process, all pixels of the generated feature maps are flattened into a one-dimensional array and passed into a two-level ANN (artificial neural network), with batch normalization of 0.3 after each level; this step produces the predicted output. Figure 3 shows the CNN model architecture, which is designed for efficient image classification and requires the following combination of layers (a hedged code sketch of the full stack follows the list).
  • The convolutional neural network extracts features using 3 × 3 kernel filters, with Conv2D sizes of 64, 128, 256, and 512 in sequence.
  • Batch normalization prevents gradient vanishing and accelerates the training process.
  • The activation function extracts more complex patterns from the images; the rectified linear unit (ReLU), a non-linear activation, is used as the activation function type.
  • Max pooling reduces the spatial dimensionality of the extracted feature maps, which decreases computational time and computer resources; a 2 × 2 max pooling size is used.
  • The dropout layer forces the network to learn more robust and generalized features; a rate of 0.25 is used during the convolution process, while 0.5 is used at the flattening and fully connected layers, the last stage of the training model.
  • The flatten layer converts the 2D feature maps into a 1D vector, preparing the data for the next layer, the fully connected layer.
  • The fully connected layer learns the patterns in the image data; at this stage, a higher-level representation is formed to make the final predictions on the input images.
  • Output layer: this is the final layer of the model, and it uses the SoftMax function to convert the output into probability distributions. These distributions represent the likelihood that the input image belongs to a particular class, and the class with the highest probability is selected as the final prediction.
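The following Keras sketch illustrates this stack under the stated settings (3 × 3 kernels, filters 64 to 512, batch normalization, ReLU, 2 × 2 max pooling, dropout of 0.25 and 0.5, two dense levels, SoftMax output). The widths of the two dense layers (256 and 128) are assumptions, since the text does not fix them; this is an illustrative reconstruction, not the authors’ exact code.

```python
# Hedged Keras sketch of the described architecture; dense widths are assumed.
from tensorflow.keras import layers, models

def build_model(n_classes, input_shape=(250, 250, 1)):
    model = models.Sequential([layers.Input(shape=input_shape)])
    # Four convolution blocks with 64, 128, 256, and 512 filters in sequence
    for filters in (64, 128, 256, 512):
        model.add(layers.Conv2D(filters, (3, 3), padding="same"))
        model.add(layers.BatchNormalization())
        model.add(layers.Activation("relu"))
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.Dropout(0.25))        # dropout in the conv blocks
    model.add(layers.Flatten())
    for units in (256, 128):                   # assumed two-level dense head
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))             # dropout at the dense stage
    model.add(layers.Dense(n_classes, activation="softmax"))
    return model

model = build_model(n_classes=6)  # six CVM (or six molar) classes
```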
Figure 4 illustrates the data handling and model training strategies in our study. This flowchart provides an overview of the steps taken to ensure and optimize model quality.

2.3.1. Reading Images

The process starts with reading cervical and molar images, which are the primary input to the model for subsequent steps.

2.3.2. Preprocessing Step

Both image types undergo preprocessing, which involves resizing, normalization, and noise reduction; this step is essential for standardizing the input data. A non-local means filter is used, which compares and averages similar pixel values throughout the image rather than just nearby ones, reducing noise while preserving detail.
This noise reduction filter operates on all image patches with similar content, not only neighboring pixels. A weighted average of the similar patches is then taken, so that more similar patches contribute more to the final pixel value. The result is much more effective noise reduction, specifically for repetitive structures or textures, so the model can cope with noisy inputs: the noise is removed before the image is passed to the CNN model.
Some of the images in our dataset contain noise, which affects the accuracy of the model’s predictions because it obscures important features such as edges and textures, losing important information. Applying the non-local means filter cleans all the images in the dataset while keeping the relevant information. Since the model extracts features (edges, shapes, and textures) from the input images, removing the noise increases the model’s accuracy, because the model focuses more on the features relevant for detection, as shown in Figure 5 and Figure 6.
The noise reduction process starts after reading images from the image directory for both categories, the cervical and molar images. Each image is read and inserted into the non-local means noise reduction process. The output image is saved into the new directory, preparing for the next operation, which is image augmentation.
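For illustration, OpenCV provides a standard non-local means implementation; in the sketch below, the filter strength h and the window sizes are assumed values, not parameters reported by the study, and the paths are illustrative.

```python
# Sketch of the non-local means denoising step using OpenCV; the filter
# strength and window sizes are assumptions, and the file paths are illustrative.
import cv2

img = cv2.imread("cervical/raw/img_001.png", cv2.IMREAD_GRAYSCALE)
denoised = cv2.fastNlMeansDenoising(
    img, None, h=10, templateWindowSize=7, searchWindowSize=21)
cv2.imwrite("cervical/denoised/img_001.png", denoised)
```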

2.3.3. Augmentation Step

After preprocessing, the images are augmented to enhance the model’s training. This step includes techniques such as flipping, zooming, and rotation, and it helps the trained model become more general. These variations generate new training samples that differ from the original images, making the model learn patterns and features across variations, which improves robustness and the ability to generalize to unseen images of different variations and shapes.
The augmentation step is needed because, without it, the model trains on a static set of images and is not exposed to different perspectives of the dataset. This can lead to overfitting, where the model trains on only a small image dataset, performs well on the training data, and performs poorly on unseen images whose patterns it has not learned and generalized.
After the augmentation step, the number of images in our dataset increases. Each cervical and molar image undergoes flipping, zooming, shifting, and rotation, so the model trains on different image orientations and sizes and becomes more generalizable to unseen data. For this purpose, the Keras ImageDataGenerator utility in Python is used, as shown in Figure 7.
This step begins once noise reduction is finalized; the set of images is then fed into the augmentation process. A list of new images is generated and fed into the CNN model for training. In this way, the proposed model becomes more generalized to unseen and future data, because it learns different patterns and features of the input images.
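A sketch of this augmentation step with Keras’ ImageDataGenerator is shown below; the specific parameter values are illustrative choices matching the named operations (flipping, zooming, shifting, rotation), not values reported by the study.

```python
# Sketch of the augmentation step; parameter values and paths are illustrative.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # applied during training
    horizontal_flip=True,
    zoom_range=0.1,
    width_shift_range=0.05,
    height_shift_range=0.05,
    rotation_range=10)

val_datagen = ImageDataGenerator(rescale=1.0 / 255)  # no augmentation

train_gen = train_datagen.flow_from_directory(
    "molar/train", target_size=(250, 250), color_mode="grayscale",
    class_mode="categorical", batch_size=16)
val_gen = val_datagen.flow_from_directory(
    "molar/val", target_size=(250, 250), color_mode="grayscale",
    class_mode="categorical", batch_size=16)
```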

2.3.4. CNN Model Architecture

The preprocessed and augmented images are fed into separate convolutional neural network (CNN) model architectures for the cervical and molar images, respectively. These models extract the relevant features from the images.

Model Training Process

The CNN models are trained separately for the two image types. Training optimizes each model to predict the image characteristics accurately, which is essential for growth stage analysis.

Prediction Process

After training is finalized, the models predict outcomes on unseen cervical and molar images. This is important for evaluating the models’ quality and accuracy.

Growth Stage Determination

The final stage includes the integration of prediction outcomes from both models to investigate the overall stage of growth based on the combined cervical and molar images.
In addition, the model follows a set of mathematical equations that define how predictions are computed.

Convolution Operation Equation

Z_{i,j,k} = (X \ast W)_k = \sum_{m=1}^{M} \sum_{n=1}^{N} X_{i+m-1,\,j+n-1} \cdot W_{m,n,k} + b_k
where W is the kernel filter applied to the input image X to produce the feature map Z. The terms of the CNN model equation are as follows (a worked numeric example follows the list).
  • Z_{i,j,k}: the output value of the feature map at position (i, j) in the k-th channel.
  • X_{i+m-1, j+n-1}: the input value at the corresponding position.
  • W_{m,n,k}: the weight of the filter at position (m, n) in the k-th channel.
  • b_k: the bias term for the k-th channel.
  • ∗: the convolution operation.
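The following NumPy toy example implements this operation (strictly, cross-correlation, as is standard in CNN practice) for a single channel and a single filter, to make the index arithmetic concrete; all values are illustrative.

```python
# Toy NumPy implementation of the convolution equation above, for one
# input channel and one filter; values are illustrative only.
import numpy as np

def conv2d_single(X, W, b):
    """Valid cross-correlation of X (H x W_in) with kernel W (M x N) plus bias b."""
    M, N = W.shape
    H, W_in = X.shape
    Z = np.zeros((H - M + 1, W_in - N + 1))
    for i in range(Z.shape[0]):
        for j in range(Z.shape[1]):
            # Z[i, j] = sum_{m,n} X[i+m, j+n] * W[m, n] + b
            Z[i, j] = np.sum(X[i:i + M, j:j + N] * W) + b
    return Z

X = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
W = np.array([[1.0, 0.0], [0.0, -1.0]])       # toy 2x2 kernel
print(conv2d_single(X, W, 0.0))               # 3x3 feature map
```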
The activation function used is the rectified linear unit (ReLU):
f(x) = max(0, x)
where f(x) is the output of the function for input x.
  • x: the input to the function; here, x is the output of the previous CNN layer, to which ReLU is applied.
  • max(0, x): returns the larger of the two values 0 and x; if x is positive or zero, the function returns x, and if x is negative, it returns 0.
The SoftMax function is:
\mathrm{SoftMax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}
where:
  • e^{z_i}: the standard exponential function applied to each element of the input vector;
  • K: the number of classes in the multiclass classifier.
A NumPy sketch of both activations follows.
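Both activations are simple to state in code; the toy sketch below mirrors the two formulas (subtracting the maximum in SoftMax is a standard numerical stability step, not part of the formula above).

```python
# Toy NumPy versions of the ReLU and SoftMax formulas above; illustrative only.
import numpy as np

def relu(x):
    # f(x) = max(0, x), applied element-wise
    return np.maximum(0.0, x)

def softmax(z):
    # Subtract the max for numerical stability; the result sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])
print(relu(logits))     # [2.  0.  0.5]
print(softmax(logits))  # probability distribution over the 3 classes
```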
Moreover, the hyperparameters used in the models were set as follows: learning rate = 0.0001, batch size = 16, number of epochs = 100, Adam optimizer, kernel size 3 × 3, dropout = 0.5 in the dense layers, and 64 filters in the first convolution layer, followed by 128, 256, and 512 in the subsequent layers.
The CNN model was implemented in Python 3.10.12 with TensorFlow and Keras 2.17.0, on the Google Colab cloud service, which provides free GPU and memory.
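Putting the stated hyperparameters together, a compile-and-fit sketch might look as follows; `model`, `train_gen`, and `val_gen` are assumed to come from the earlier sketches.

```python
# Sketch of compiling and training with the stated hyperparameters
# (Adam, learning rate 0.0001, batch size 16, 100 epochs); `model`,
# `train_gen`, and `val_gen` are assumed from the earlier sketches.
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(train_gen,               # batches of 16 augmented images
                    validation_data=val_gen,
                    epochs=100)
```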

2.3.5. Implementation Details

  • I. Second molar image predictions (a condensed runnable sketch follows this list)
1.
Initializing the directories and classes
-
Define the directories that hold the datasets for males and females.
-
Define the class list: [C, D, E, F, G, H].
2.
Loading dataset images from directories
-
Create two empty lists for holding image paths and image labels.
-
For each class name in classes, get all image files for that class.
-
Add the image paths and their labels to two separate lists.
3.
Splitting dataset into training and testing
-
Split all images and labels into train images, test images, train labels, and test labels using the train_test_split() method with 80% for training and 20% for testing.
4.
Convert labels to categorical format
-
Convert train labels and test labels to one-hot encoding using the to_categorical() function.
5.
Handle class imbalance
-
Compute class_weights using compute_class_weight() to handle imbalanced classes.
6.
Create a data generator using Data augmentation
-
Initialize image data generator with rescaling and light augmentation (zoom, shear, and brightness adjustment) using ImageDataGenerator().
-
Create generators for images to be implemented with the augmentation process.
7.
Building CNN model
-
Define the sequential model.
-
Add a convolution layer with 32 filters and ReLU activation.
-
Add max pooling layer
-
Add batch normalization layer
-
Add a convolution layer with 64 filters and ReLU activation.
-
Add max pooling layer
-
Add batch normalization layer
-
Flatten the output
-
Add a dense layer with 256 units and ReLU activation.
-
Add a dropout layer to prevent overfitting
-
Add a final dense layer with SoftMax activation, which is used to classify the input into one of the 6 classes.
8.
Compile the model
-
Compile the model with adam optimizer, categorical_crossentropy loss and accuracy as the metric.
9.
Train the model
-
Train the model with the train_generator method for a fixed number of epochs, which is 20.
-
Validating the model with a validation_generator function.
10.
Evaluate the model with test set
-
Create the test_generator using the same ImageDataGenerator.
-
Evaluate the model using the test set and print the accuracy result.
11.
Predict on test set
-
Get the true label from test_generator()
-
Use model.predict() to predict the probabilities for each image in the test set.
-
Use argmax() to convert the predicted probabilities to class labels.
12.
Calculate the performance metrics
-
Calculate the weighted F1-score using f1_score() based on true and predicted labels.
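The condensed, runnable sketch below follows steps 1–12 for the molar model; the directory layout, file pattern, and in-memory loading are assumptions made for brevity, and the cervical model (part II below) differs only in its class list and in having one extra convolution block and dense layer.

```python
# Hedged, condensed sketch of steps 1-12 for the molar model; directory layout
# and loading strategy are assumptions, not the authors' exact code.
import glob, os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight
from sklearn.metrics import f1_score
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical

CLASSES = ["C", "D", "E", "F", "G", "H"]
DATA_DIR = "molar/male"  # hypothetical layout; one folder per class

# Steps 1-2: collect image paths and integer labels from class-named folders
paths, labels = [], []
for idx, cls in enumerate(CLASSES):
    for p in glob.glob(os.path.join(DATA_DIR, cls, "*.png")):
        paths.append(p)
        labels.append(idx)

def load(paths):
    imgs = [cv2.resize(cv2.imread(p, cv2.IMREAD_GRAYSCALE), (250, 250))
            for p in paths]
    return np.array(imgs, dtype=np.float32)[..., np.newaxis]

# Steps 3-4: 80/20 split, then one-hot encode the labels
tr_p, te_p, tr_y, te_y = train_test_split(paths, labels, test_size=0.2,
                                          stratify=labels, random_state=42)
X_tr, X_te = load(tr_p), load(te_p)
Y_tr, Y_te = to_categorical(tr_y, 6), to_categorical(te_y, 6)

# Step 5: class weights to compensate for any imbalance
cw = compute_class_weight("balanced", classes=np.arange(6), y=np.array(tr_y))
class_weight = dict(enumerate(cw))

# Step 6: rescaling plus light augmentation for the training generator
datagen = ImageDataGenerator(rescale=1 / 255., zoom_range=0.1,
                             shear_range=0.1, brightness_range=(0.9, 1.1))
train_gen = datagen.flow(X_tr, Y_tr, batch_size=16)

# Steps 7-8: two-block CNN (32 then 64 filters) ending in a 6-way softmax
model = models.Sequential([
    layers.Input((250, 250, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D(), layers.BatchNormalization(),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(), layers.BatchNormalization(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"), layers.Dropout(0.5),
    layers.Dense(6, activation="softmax")])
model.compile("adam", "categorical_crossentropy", metrics=["accuracy"])

# Steps 9-10: train for 20 epochs, then evaluate on the held-out set
model.fit(train_gen, epochs=20, class_weight=class_weight,
          validation_data=(X_te / 255., Y_te))
print(model.evaluate(X_te / 255., Y_te))

# Steps 11-12: predict, take the argmax, and compute the weighted F1-score
pred = np.argmax(model.predict(X_te / 255.), axis=1)
print("weighted F1:", f1_score(te_y, pred, average="weighted"))
```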
  • II. Cervical image predictions
1.
Initializing the directories and classes
-
Define the directories that hold the datasets for males and females.
-
Define the class list: [CS1, CS2, CS3, CS4, CS5, CS6].
2.
Loading dataset images from directories
-
Create two empty lists for holding image paths and image labels.
-
For each class name in classes, obtain all image files for each class
-
Add the image paths and their labels to two separate lists.
3.
Splitting dataset into training and testing
-
Split all images and labels into train images, test images, train labels, and test labels using the train_test_split() method with 80% for training and 20% for testing.
4.
Convert labels to categorical format
-
Convert train labels and test labels to one-hot encoding using the to_categorical() function.
5.
Handle class imbalance
-
Compute class_weights using compute_class_weight() to handle imbalanced classes.
6.
Create a data generator using data augmentation
-
Initialize Image Data Generator with rescaling and light augmentation (zoom, shear, and brightness adjustment) using ImageDataGenerator().
-
Create generators for images to be implemented with the augmentation process.
7.
Building CNN model
-
Define the sequential model.
-
Add a convolution layer with 32 filters and ReLU activation.
-
Add max pooling layer
-
Add batch normalization layer
-
Add a convolution layer with 64 filters and ReLU activation.
-
Add max pooling layer
-
Add batch normalization layer
-
Add a convolution layer with 128 filters and ReLU activation.
-
Add max pooling layer
-
Add batch normalization layer
-
Flatten the output
-
Add a dense layer with 256 units and ReLU activation.
-
Add a dense layer with 128 units and ReLU activation.
-
Add a dropout layer to prevent overfitting
-
Add a final dense layer with SoftMax activation, which classifies the input into one of the 6 classes.
8.
Compile the model
-
Compile the model with adam optimizer, categorical_crossentropy loss and accuracy as the metric.
9.
Train the model
-
Train the model with the train_generator method for a fixed number of epochs, which is 20.
-
Validating the model with a validation_generator function.
10.
Evaluate the model with test set
-
Create the test_generator using the same ImageDataGenerator.
-
Evaluate the model using the test set and print the accuracy result.
11.
Predict on test set
-
Get the true label from test_generator()
-
Use model.predict() to predict the probabilities for each image in the test set.
-
Use argmax() to convert the predicted probabilities to class labels.
12.
Calculate the performance metrics
-
Calculate the weighted F1-score using f1_score() based on true and predicted labels.

3. Results

The model’s final results demonstrate the high degree of accuracy with which each stage and gender can be predicted. The models perform well on image classification for both genders, particularly for medical images, with accuracy exceeding 95% for each category. The findings are broken down by gender and class in Table 1.
Furthermore, the accuracy and validation curves demonstrate excellent performance and a discernible drop in loss, indicating a decreased likelihood of overfitting and incorrect predictions in the model as a whole. Figure 8 displays the model’s training accuracy, validation accuracy, and decreasing loss.
In addition, the model was validated using previously unseen (blinded) imaging data, with additional images of the cervical and molar areas included. The model accurately predicted the class for each of them, for both genders. It also demonstrates that each cervical and molar class can be anticipated and that the predicted cervical stage corresponds to a molar stage, establishing the association between cervical and molar maturation. Throughout the test, the stages in the female dataset were consistently one stage ahead of the male dataset, in both cervical and molar classifications.
Another metric used to test the model’s efficiency is the F1-score, the harmonic mean of two essential metrics: precision and recall.
Precision measures the accuracy of true positive predictions: it calculates the proportion of true positives among all positive predictions, i.e., how many of the predicted positives are actually positive.
\mathrm{Precision} = \frac{TP}{TP + FP}
Recall, in contrast, measures the model’s ability to identify all actual positive instances: the proportion of true positives among the actual positives. Recall thus focuses on how many positive instances the model correctly predicted.
\mathrm{Recall} = \frac{TP}{TP + FN}
The F1-score metric balances precision and recall to provide a single metric that calculates both false positives and negatives. The value of the F1-score ranges from 0 to 1, with the best value being 1 and the worst being 0. The results of the F1-score for our model are shown in Table 2.
\mathrm{F1\ Score} = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
In addition, another way of showing the model’s performance is the confusion matrix, a table that evaluates a classification model by comparing the predicted classes against the actual classes in terms of TP (true positive), FP (false positive), TN (true negative), and FN (false negative) for each class. Table 3 and Table 4 show the confusion matrices for the cervical and second molar image predictions (a scikit-learn sketch of these metric computations follows).
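For reference, all four quantities can be computed with scikit-learn as in the short sketch below; y_true and y_pred are toy stand-ins for the actual and predicted test-set labels.

```python
# Sketch of computing the reported metrics with scikit-learn; the label
# arrays are illustrative stand-ins for the test-set results.
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             confusion_matrix)

y_true = [0, 1, 2, 2, 1]  # actual class indices (toy values)
y_pred = [0, 1, 2, 1, 1]  # predicted class indices (toy values)

print(precision_score(y_true, y_pred, average="weighted"))
print(recall_score(y_true, y_pred, average="weighted"))
print(f1_score(y_true, y_pred, average="weighted"))
print(confusion_matrix(y_true, y_pred))  # rows: actual, columns: predicted
```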

4. Discussion

Dental examinations in modern dental clinics are supported by computer technologies that employ computational intelligence to detect potential health concerns more efficiently, and deep learning algorithms produce some of the most precise results among these applications. A single-stage oriented deep learning model scanning panoramic images for five dental treatments and tooth-class diagnosis reported precision scores of 82.63% for root canal treatment and 18.78% for impacted teeth [20]. Furthermore, the TeethU2Net deep learning model, used to detect tooth saliency in dental panoramic radiographs, achieved a high accuracy of 0.97 [21].
In this paper, a unique correlation learning mechanism (CLM) for deep neural network topologies that blends a CNN with a traditional design was presented. The support neural network assists the CNN in determining the best filters for the pooling and convolution layers, so the primary neural classifier learns more quickly and efficiently. The results reveal that our CLM model can achieve 96% accuracy, 95% precision, and 95% recall.
Skeletal growth prediction is critical in orthodontics and orthognathic surgery to arrange procedures corresponding to the patient’s development trajectory. It helps orthodontists to resolve skeletal abnormalities and optimize treatment outcomes properly. Regular exams and communication between orthodontists and other healthcare experts help to anticipate and treat skeletal development. The pubertal growth spurt is a key time of skeletal development throughout adolescence. It is critical for treatment planning to assess the time and amount of this growth surge. Skeletal growth prediction for dental implants is especially relevant when contemplating implant placement in developing persons or in places where continuing skeletal development may impair the implants’ long-term success.
Some specialized software programs are intended to forecast skeletal growth based on a variety of criteria. These instruments can help predict bone position in the future and guide implant placement.
Dental practitioners should think about long-term treatment plans for their patients, especially children. It might be done in stages, with interim remedies in place until skeletal growth stabilizes. The type of prosthetic repair chosen for the dental implant should take into account prospective skeletal expansion. This guarantees that the repair remains in sync with the evolving anatomy. It is critical to evaluate the implant site’s connection to surrounding tissues, such as sinuses or nerves. Skeletal development may have an effect on these interactions over time.
Predicting skeletal growth for dental implants requires an in-depth examination of the patient’s age, growth stage, and skeletal maturity, as well as modern imaging tools and coordination among several dental professionals. This method guarantees that dental implants are put in a way that allows for continuous bone growth while also maximizing long-term success.
Despite AI’s potentially revolutionary significance, the existing literature is lacking in recommended automatic solutions, despite some attempts in recent years.
Bone age is a measure of bone maturity that may be used in a variety of pediatric illnesses as well as legal matters. Traditional bone age evaluation, based on the study of distinct skeletal segments and teeth, is a sophisticated and time-consuming technique prone to inter- and intra-observer variability. Fully automated systems are in high demand, but developing an accurate and dependable solution has proven challenging. Deep learning technologies, machine learning, and convolutional neural network-based systems have demonstrated promising results in automated bone age assessment. The evolution of bone age estimation, its uses, conventional evaluation techniques, and current artificial intelligence-based solutions have been reviewed elsewhere [22].
Ameli et al. (2023) analyzed the form and pattern of the cervical vertebrae; machine learning models were applied to 3D cephalometric images to forecast patients’ developmental stage [23]. However, the amount of radiation exposure, which exceeds that of a 2D cephalogram, is the main source of debate concerning its use in dental imaging [24]. Although CBCT images are reported to be a reliable and effective technique for assessing skeletal age using CVM, they should not be acquired primarily for that purpose [25].
In one such study, fifty-eight percent training and fifty-seven percent test accuracy were obtained after 40 epochs of training; the model produced test results extremely close to those obtained during training. It performed best in terms of accuracy and F1-score in CVM Stage 1 and best in terms of recall in CVM Stage 2. According to the experimental data, the constructed model had reasonable success, with a classification accuracy of 58.66% in CVM stage classification [26].
Machine learning algorithms can assist in cephalometric analysis by automatically identifying landmarks on X-rays and measuring various parameters related to facial and dental structures. This can help orthodontists in diagnosing malocclusions and planning treatment [27]. A study used a customized open-source CNN deep learning algorithm analysis in comparing an automated cephalometric analysis to the experienced human examiner in determining 18 landmarks on a total of 1792 cephalometric X-rays, and they reported no statistically significant differences between humans’ gold standard and the AI’s predictions [28].
Classifying images is a complex problem in the field of computer vision. The deep learning algorithm is a computerized model that simulates human brain functions and operations. Training the deep learning model is costly in machine resources and time. Investigating the performance of the deep learning algorithm is mostly needed. The convolutional neural network (CNN) is most commonly used to build a structure of deep learning models. The final results evaluate the deep learning algorithm as a state-of-the-art method for an image classification task [29].
The CNN model is used in farming to identify leaf diseases and support healthy plant growth [30], and to detect breast cancer abnormalities [31], skin cancer [32,33], lung disease [34], and so on. Still, this study is the first to deal with growth prediction. A study by Rauf et al. (2023) recommends using K-nearest neighbors to predict arch perimeters rather than linear regression [35].
Two studies were conducted in the Iranian population to estimate dental age; one revealed that mandibular third molar calcification could be used as a dental age predictor, especially in males [13]. Another study indicated a high correlation between mandibular second molar calcification and skeletal maturity in the post-pubertal growth phase [36].
The rationale for relying on OPG and cephalometric radiographs is that these methods are widely available and cost-effective, making them accessible to a larger patient population. In many clinical settings, OPG and cephalometric radiographs are the standard imaging techniques, allowing easier integration into routine dental and orthodontic practice. Additionally, the study aimed to establish foundational data using these established methods before potentially exploring more advanced 3D imaging techniques in future research. By starting with OPG and cephalometric radiographs, researchers can build a comprehensive dataset that could later be enhanced with 3D imaging for more nuanced analyses. Moreover, while 2D imaging has limitations, it still provides valuable insights into skeletal relationships and developmental patterns, which can serve as a basis for understanding broader trends in skeletal maturation. In addition, CBCT cannot be recommended for every orthodontic patient unless there is an impacted tooth, a surgical correction, or a pathological condition, due to the increased radiation dose and the associated risk, safety, ethical, and medico-legal considerations.
Object detection is among the most important problems in computer vision, and a CNN (convolutional neural network) can perform well even with moderately sized datasets. The reason is that convolutional structures inherently capture local spatial relationships, which is particularly advantageous in image classification tasks. CNNs also do not strictly require large-scale datasets and can perform well on smaller ones, unlike transformers, which require more data to perform well and learn effectively.
In terms of performance and computational efficiency, transformers such as the vision transformer (ViT) excel on large datasets but tend to have high computational demands, whereas CNNs have been optimized to be lightweight and computationally efficient while maintaining strong accuracy, which is advantageous in real-world applications where resources are limited.
In addition, CNNs are best suited to image classification, object detection, and image segmentation, while transformers are best suited to NLP (natural language processing) tasks such as text recognition and summarization; transformers can also be used for multimodal tasks such as video understanding and captioning.
Given the medium-sized dataset used in this study and the priority of optimizing computational efficiency, CNNs were chosen over transformer-based models. CNNs are well suited to tasks requiring lower computational resources while maintaining high predictive accuracy on moderate datasets, which makes them an ideal choice.
CNNs make image problems easier to handle because of their translational invariance and inductive bias. Because transformers lack these features, a transformer model needs much more data, or more robust data augmentation, to learn image attributes [37].
Accuracy is not a trustworthy metric on its own when working with skewed datasets. It may be deceptive, since a model may perform poorly overall even while it consistently predicts the majority class correctly. The following significant quantities were taken into account: false positives (FP) are instances incorrectly predicted as the positive class, false negatives (FN) are positive instances incorrectly predicted as negative, true positives (TP) are instances of the positive class that are correctly predicted, and true negatives (TN) are instances of the negative class that are correctly predicted.
Precision, which gauges the accuracy of positive predictions, is another derived metric. Recall (sensitivity) assesses the capacity to identify every positive example. The F1-score, the harmonic mean of the two, is helpful for balancing precision and recall. The confusion matrix is a table that makes it possible to see how well a classification model performs. For machine learning initiatives to succeed, it is essential to grasp the appropriate evaluation criteria and strike a balance in class representation. The dangers of class imbalance can be lessened with careful consideration of the evaluation technique, algorithm choice, and data characteristics [38].
In this study, by providing a thorough evaluation of each preprocessing step and discussing its limitations and challenges, we present a balanced and comprehensive view, helping readers understand not only the benefits but also the complexities involved in preprocessing for improved model performance. The results further confirm that each cervical and molar class can be anticipated, that the predicted cervical stage corresponds to a molar stage, and that cervical and molar maturation are associated.

5. Conclusions

In this study, a unique correlation learning mechanism (CLM) for deep neural network topologies that blends a CNN with a traditional design was presented. The support neural network assists the CNN in determining the best filters for the pooling and convolution layers. The results reveal that our CLM model can achieve 96% accuracy, 95% precision, and 95% recall. The calcification level of the lower second molar is therefore a reliable indicator of the growth level, so a traditional OPG is sufficient. CNN multiclass classification is an accurate method to detect the level of maturation of dental patients seeking treatment, whether from cervical maturation or from the calcification of the lower second molar.

Author Contributions

M.H.M. and T.M.A.M. conceptualization, data curation, methodology, writing—review, and editing; Z.Q.O., B.B.A. and J.F.A. data curation, investigation, writing—original draft and editing; F.A.K. and D.N.M. methodology, supervision, writing—original draft and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The ethics committee of the College of Dentistry, University of Sulaimani, approved this study (approval no. 199, 10 December 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data from the present study are available from the corresponding author on request.

Conflicts of Interest

The authors declare no competing financial interests or personal relationships that could potentially influence the work of this article.

Appendix A

Figure A1. Schematic representation of the stages of the cervical vertebrae according to the newly modified method (Vijayashree et al.: second molar calcification as a skeletal maturity indicator). Cited by McNamara et al., 2005 [18].

Appendix B

The level of growth is captured from the amount of root calcification of the second molar, representing level E based on Demirjian et al., 1973 [19], as shown in Figure A2.
Figure A2. Developmental stages of the tooth—Demirjian et al. (1973) [19]; Color Atlas of Dental Medicine: Orthodontic Diagnosis; Thomas Graber; 1992, Thieme Medical Publishers (p. 100).

References

  1. Wong, K.F.; Lam, X.Y.; Jiang, Y.; Yeung, A.W.K.; Lin, Y. Artificial intelligence in orthodontics and orthognathic surgery: A bibliometric analysis of the 100 most-cited articles. Head. Face Med. 2023, 19, 38. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  2. Strunga, M.; Urban, R.; Surovková, J.; Thurzo, A. Artificial Intelligence Systems Assisting in the Assessment of the Course and Retention of Orthodontic Treatment. Healthcare 2023, 11, 683. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  3. Ionescu, E.; Teodorescu, E.; Badarau, A.; Grigore, R.; Popa, M. Prevention perspective in orthodontics and dentofacial orthopedics. J. Med. Life 2008, 1, 397–402. [Google Scholar]
  4. Kim, E.; Kuroda, Y.; Soeda, Y.; Koizumi, S.; Yamaguchi, T. Validation of Machine Learning Models for Craniofacial Growth Prediction. Diagnostics 2023, 13, 3369. [Google Scholar] [CrossRef] [PubMed]
  5. Kök, H.; Acilar, A.M.; İzgi, M.S. Usage and comparison of artificial intelligence algorithms for determination of growth and development by cervical vertebrae stages in orthodontics. Prog. Orthod. 2019, 20, 41. [Google Scholar] [CrossRef] [PubMed]
  6. van Meijeren-van Lunteren, A.W.; Liu, X.; Veenman, F.C.; Grgic, O.; Dhamo, B.; van der Tas, J.T.; Prijatelj, V.; Roshchupkin, G.V.; Rivadeneira, F.; Wolvius, E.B.; et al. Oral and craniofacial research in the Generation R study: An executive summary. Clin. Oral. Investig. 2023, 27, 3379–3392. [Google Scholar] [CrossRef]
  7. Saraç, F.; Baydemir Kılınç, B.; Çelikel, P.; Büyüksefil, M.; Yazıcı, M.B.; Şimşek Derelioğlu, S. Correlations between Dental Age, Skeletal Age, and Mandibular Morphologic Index Changes in Turkish Children in Eastern Anatolia and Their Chronological Age during the Pubertal Growth Spurt Period: A Cross-Sectional Study. Diagnostics 2024, 14, 887. [Google Scholar] [CrossRef]
  8. Felemban, N.H. Correlation between Cervical Vertebral Maturation Stages and Dental Maturation in a Saudi Sample. Acta Stomatol. Croat. 2017, 51, 283–289. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  9. Fernandes-Retto, P.; Matos, D.; Ferreira, M.; Bugaighis, I.; Delgado, A. Cervical vertebral maturation and its relationship to circum-pubertal phases of the dentition in a cohort of Portuguese individuals. J. Clin. Exp. Dent. 2019, 11, e642–e649. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  10. Schnider-Moser, U.E.M.; Moser, L. Very early orthodontic treatment: When, why, and how? Dental Press. J. Orthod. 2022, 27, e22spe2. [Google Scholar] [CrossRef]
  11. Jiménez-Silva, A.; Carnevali-Arellano, R.; Vivanco-Coke, S.; Tobar-Reyes, J.; Araya-Díaz, P.; Palomino-Montenegro, H. Craniofacial growth predictors for class II and III malocclusions: A systematic review. Clin. Exp. Dent. Res. 2021, 7, 242–262. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  12. Selmanagić, A.; Ajanović, M.; Kamber-Ćesir, A.; Redžepagić-Vražalica, L.; Jelešković, A.; Nakaš, E. Radiological Evaluation of Dental Age Assessment Based on the Development of Third Molars in Population of Bosnia and Herzegovina. Acta Stomatol. Croat. 2020, 54, 161–167. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  13. Monirifard, M.; Yaraghi, N.; Vali, A.; Vali, A.; Vali, A. Radiographic assessment of third molars development and its relation to dental and chronological age in an Iranian population. Dent. Res. J. 2015, 12, 64–70. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  14. Fernandes, A.P.; Battistella, M.A. Dental Implants in Pediatric Dentistry: A Literature Review. Braz. J. Implantol. Health Sci. 2020, 2, 1–2. [Google Scholar] [CrossRef]
  15. Nedumgottil, B.M.; Sam, S.; Abraham, S. Dental implants in children. Int. J. Oral. Care Res. 2020, 8, 57–59. [Google Scholar] [CrossRef]
  16. Singh, N.K.; Raza, K. Progress in deep learning-based dental and maxillofacial image analysis: A systematic review. Expert. Syst. Appl. 2022, 199, 116968. [Google Scholar] [CrossRef]
  17. Kafieh, R.; Aghazadeh, F. A deep learning approach for classification of tooth maturity stages using panoramic radiographs. J. Biomed. Phys. Eng. 2020, 10, 419–426. [Google Scholar]
  18. McNamara, J.A., Jr.; Franchi, L.; Baccetti, T. The cervical vertebral maturation (CVM) method for the assessment of optimal treatment timing in dentofacial orthopedics. Semin. Orthod. 2005, 11, 119–129. [Google Scholar]
  19. Demirjian, A.; Goldstein, H.; Tanner, J.M. A new system of dental age assessment. Hum. Biol. 1973, 45, 211–227. [Google Scholar] [PubMed]
  20. Singh, N.K.; Faisal, M.; Hasan, S.; Goswami, G.; Raza, K. A Single-Stage Deep Learning Approach for Multiple Treatment and Diagnosis in Panoramic X-ray. In Intelligent Systems Design and Applications; ISDA 2023; Lecture Notes in Networks and Systems; Abraham, A., Bajaj, A., Hanne, T., Siarry, P., Eds.; Springer: Cham, Switzerland, 2024; Volume 1046. [Google Scholar] [CrossRef]
  21. Singh, N.K.; Raza, K. TeethU2Net: A Deep Learning-Based Approach for Tooth Saliency Detection in Dental Panoramic Radiographs. In Neural Information Processing; ICONIP 2022; Communications in Computer and Information Science; Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A., Eds.; Springer: Singapore, 2023; Volume 1794. [Google Scholar] [CrossRef]
  22. Caloro, E.; Ce, M.; Gibelli, D.; Palamenghi, A.; Martinenghi, C.; Oliva, G.; Cellina, M. Artificial Intelligence (AI)-Based Systems for Automatic Skeletal Maturity Assessment through Bone and Teeth Analysis: A Revolution in the Radiological Workflow? Appl. Sci. 2023, 13, 3860. [Google Scholar] [CrossRef]
  23. Ameli, N.; Lagravere, M.; Lai, H. Application of deep learning to classify skeletal growth phase on 3D radiographs. medRxiv 2023. [Google Scholar] [CrossRef]
  24. Pereira, S.A.; Corte-Real, A.; Melo, A.; Magalhães, L.; Lavado, N.; Santos, J.M. Diagnostic Accuracy of Cone Beam Computed Tomography and Periapical Radiography for Detecting Apical Root Resorption in Retention Phase of Orthodontic Patients: A Cross-Sectional Study. J. Clin. Med. 2024, 13, 1248. [Google Scholar] [CrossRef] [PubMed]
  25. Bonfim, M.A.; Costa, A.L.; Fuziy, A.; Ximenez, M.E.; Cotrim-Ferreira, F.A.; Ferreira-Santos, R.I. Cervical vertebrae maturation index estimates on cone beam CT: 3D reconstructions vs sagittal sections. Dentomaxillofacial Radiol. 2016, 45, 20150162. [Google Scholar] [CrossRef]
  26. Akay, G.; Akcayol, M.A.; Özdem, K.; Güngör, K. Deep convolutional neural network—The evaluation of cervical vertebrae maturation. Oral. Radiol. 2023, 39, 629–638. [Google Scholar] [CrossRef]
  27. Subramanian, A.K.; Chen, Y.; Almalki, A.; Sivamurthy, G.; Kafle, D. Cephalometric Analysis in Orthodontics Using Artificial Intelligence-A Comprehensive Review. Biomed. Res. Int. 2022, 16, 1880113. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  28. Kunz, F.; Stellzig-Eisenhauer, A.; Zeman, F.; Boldt, J. Artificial intelligence in orthodontics: Evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J. Orofac. Orthop. 2020, 81, 52–68. [Google Scholar] [CrossRef] [PubMed]
  29. Ezat, W.A.; Dessouky, M.M.; Ismail, N.A. Multi-class image classification using a deep learning algorithm. J. Phys. Conf. Ser. 2020, 1447, 012021. [Google Scholar] [CrossRef]
  30. Negi, A.; Kumar, K.; Chauhan, P. Deep neural network-based multi-class image classification for plant diseases. In Agricultural Informatics: Automation Using the IoT and Machine Learning; Wiley: Hoboken, NJ, USA, 2021; pp. 117–129. [Google Scholar]
  31. Heenaye-Mamode Khan, M.; Boodoo-Jahangeer, N.; Dullull, W.; Nathire, S.; Gao, X.; Sinha, G.R.; Nagwanshi, K.K. Multi-class classification of breast cancer abnormalities using Deep Convolutional Neural Network (CNN). PLoS ONE 2021, 16, e0256500. [Google Scholar] [CrossRef] [PubMed]
  32. Chaturvedi, S.S.; Tembhurne, J.V.; Diwan, T. A multi-class skin Cancer classification using deep convolutional neural networks. Multimed. Tools Appl. 2020, 79, 28477–28498. [Google Scholar] [CrossRef]
  33. Arshed, M.A.; Mumtaz, S.; Ibrahim, M.; Ahmed, S.; Tahir, M.; Shafi, M. Multi-class skin cancer classification using vision transformer networks and convolutional neural network-based pre-trained models. Information 2023, 14, 415. [Google Scholar] [CrossRef]
  34. Karaddi, S.H.; Sharma, L.D. Automated multi-class classification of lung diseases from CXR images using pre-trained convolutional neural networks. Expert. Syst. Appl. 2023, 211, 118650. [Google Scholar] [CrossRef]
  35. Rauf, A.M.; Mahmood, T.M.A.; Mohammed, M.H.; Omer, Z.Q.; Kareem, F.A. Orthodontic Implementation of Machine Learning Algorithms for Predicting Some Linear Dental Arch Measurements and Preventing Anterior Segment Malocclusion: A Prospective Study. Medicina 2023, 59, 1973. [Google Scholar] [CrossRef]
  36. Toodehzaeim, M.H.; Rafiei, E.; Hosseini, S.H.; Haerian, A.; Hazeri-Baqdad-Abad, M. Association between mandibular second molars calcification stages in the panoramic images and cervical vertebral maturity in the lateral cephalometric images. J. Clin. Exp. Dent. 2020, 12, e148–e153. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  37. Arkin, E.; Yadikar, N.; Xu, X.; Aysa, A.; Ubul, K. A survey: Object detection methods from CNN to transformer. Multimed. Tools Appl. 2023, 82, 21353–21383. [Google Scholar] [CrossRef]
  38. Roccetti, M.; Delnevo, G.; Casini, L.; Cappiello, G. Is bigger always better? A controversial journey to the center of machine learning design, with uses and misuses of big data for predicting water meter failures. J. Big Data 2019, 6, 1–23. [Google Scholar] [CrossRef]
Figure 1. OPG of an 11.5-year-old female showing open apices of the lower second molars.
Figure 2. A cephalometric radiograph of the same patient revealed cervical vertebral maturation.
Figure 3. The architecture of the CNN model (purple: convolution layers; light green: batch normalization; light pink: ReLU activation; light purple: max pooling; pinkish red: dropout; pale yellow: flatten; light blue: the final two dense layers).
Figure 4. The workflow of the CNN models; the same model is implemented for both genders (blue box: cervical vertebral maturation index; orange box: second molar calcification level).
Figure 5. Applying the non-local means filter to a cervical image.
Figure 6. Applying the non-local means filter to a molar image.
Figure 7. Implementing the augmentation technique.
Figure 8. Accuracy and loss for training and testing for all CNN models.
Table 1. The accuracy result per class for each gender.

No. | Gender | Cervical Prediction Accuracy | Second Molar Prediction Accuracy
1 | Male | 98% | 96%
2 | Female | 96% | 97%
Table 2. The F1-score result per class for each gender.

No. | Gender | Cervical F1-Score | Second Molar F1-Score
1 | Male | 0.93 | 0.91
2 | Female | 0.91 | 0.92
Table 3. Cervical image confusion matrix table for male and female.

 | Predicted CS1 | Predicted CS2 | Predicted CS3 | Predicted CS4 | Predicted CS5 | Predicted CS6
Actual CS1 | 199 | 1 | 0 | 0 | 0 | 0
Actual CS2 | 2 | 198 | 0 | 0 | 0 | 0
Actual CS3 | 0 | 0 | 199 | 1 | 0 | 0
Actual CS4 | 0 | 0 | 1 | 198 | 1 | 0
Actual CS5 | 0 | 0 | 0 | 1 | 197 | 2
Actual CS6 | 0 | 0 | 0 | 1 | 1 | 198
Table 4. Second molar image confusion matrix table for male and female.

 | Predicted C | Predicted D | Predicted E | Predicted F | Predicted G | Predicted H
Actual C | 196 | 2 | 1 | 1 | 0 | 0
Actual D | 1 | 198 | 1 | 0 | 0 | 0
Actual E | 1 | 1 | 197 | 1 | 0 | 0
Actual F | 0 | 0 | 0 | 198 | 1 | 1
Actual G | 0 | 0 | 0 | 0 | 199 | 1
Actual H | 0 | 0 | 0 | 0 | 2 | 198
