Convolutional Neural Network-Based Deep Learning Methods for Skeletal Growth Prediction in Dental Patients
Abstract
1. Introduction
2. Materials and Methods
2.1. Registration for Study
2.2. Sample
2.3. Study Protocol
- The convolutional layers extract features using 3 × 3 kernel filters, with Conv2D filter counts of 64, 128, 256, and 512 in sequence.
- Batch normalization prevents vanishing gradients and accelerates the training process.
- The activation function allows the network to extract more complex patterns from the images; the rectified linear unit (ReLU), a non-linear activation, is used.
- Max pooling reduces the spatial dimensionality of the extracted feature maps, which decreases computational time and resource use; a 2 × 2 pooling size is used.
- The dropout layer forces the network to learn more robust and generalized features; a rate of 0.25 is used during the convolution stages, while a rate of 0.5 is used in the flatten and fully connected stage, the last stage of the training model.
- The flatten layer converts the 2D feature maps into a 1D vector, preparing the data for the next layer, the fully connected layer.
- The fully connected layer learns patterns from the extracted image features; at this stage, a higher-level representation is formed to make the final predictions on the input images.
- Output layer: the final layer of the model, which uses the softmax function to convert the output into a probability distribution. This distribution represents the likelihood that the input image belongs to each class, and the class with the highest probability is selected as the final prediction.
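The layer stack described above can be sketched in Keras. This is an illustrative sketch, not the authors' exact configuration: the input size (128 × 128 grayscale), the `padding="same"` choice, and the 256-unit dense layer are assumptions for illustration.

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 6                # six maturation stages per imaging modality
INPUT_SHAPE = (128, 128, 1)    # assumed grayscale input size (not stated in the text)

model = models.Sequential([layers.Input(shape=INPUT_SHAPE)])
# Four convolution blocks with 3 x 3 kernels and 64/128/256/512 filters in sequence
for filters in (64, 128, 256, 512):
    model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
    model.add(layers.BatchNormalization())  # counters vanishing gradients, speeds training
    model.add(layers.MaxPooling2D((2, 2)))  # halves the spatial dimensions
    model.add(layers.Dropout(0.25))         # dropout rate used in the convolution stages
model.add(layers.Flatten())                 # 2D feature maps -> 1D vector
model.add(layers.Dense(256, activation="relu"))
model.add(layers.Dropout(0.5))              # heavier dropout in the fully connected stage
model.add(layers.Dense(NUM_CLASSES, activation="softmax"))  # class probabilities
```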
2.3.1. Reading Images
2.3.2. Preprocessing Step
2.3.3. Augmentation Step
2.3.4. CNN Model Architecture
Model Training Process
Prediction Process
Growth Stage Determination
Convolution Operation Equation

$$Y_{i,j}^{(k)} = (X * W^{(k)})_{i,j} + b^{(k)} = \sum_{m}\sum_{n} X_{i+m,\,j+n}\, W_{m,n}^{(k)} + b^{(k)}$$

- $Y_{i,j}^{(k)}$: the output value of the feature map at position (i, j) in the k-th channel.
- $X_{i+m,\,j+n}$: the input value at the corresponding position.
- $W_{m,n}^{(k)}$: the weight of the filter at position (m, n) for the k-th channel.
- $b^{(k)}$: the bias term for the k-th channel.
- $*$: the convolution operation.

The ReLU activation is defined as $f(x) = \max(0, x)$:

- $x$: the input to the function, i.e., the output of the previous CNN layer.
- $\max(0, x)$: returns the larger of the two values 0 and x; if x is positive or zero, the function returns x; if x is negative, it returns 0.

The softmax function is defined as $\sigma(z)_i = e^{z_i} / \sum_{j=1}^{K} e^{z_j}$:

- $e^{z_i}$: the standard exponential function applied to each element of the output vector z.
- $K$: the number of classes in the multi-class classifier.
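The three operations defined above can be checked numerically with a small NumPy sketch (the function names here are mine; as in deep learning frameworks, "convolution" is implemented as cross-correlation):

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 'convolution' (cross-correlation, as CNN frameworks implement it)
    of a 2D input x with a single filter w and a scalar bias b."""
    kh, kw = w.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Y[i, j] = sum_m sum_n X[i+m, j+n] * W[m, n] + b
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def relu(x):
    # max(0, x): passes positive values through, zeroes out negatives
    return np.maximum(0, x)

def softmax(z):
    e = np.exp(z - z.max())  # subtracting the max improves numerical stability
    return e / e.sum()       # e^{z_i} / sum_j e^{z_j}

feature_map = conv2d(np.ones((5, 5)), np.ones((3, 3)), b=0.0)  # every entry is 9
activated = relu(np.array([-2.0, 0.0, 3.0]))                   # [0, 0, 3]
probs = softmax(np.array([0.0, 0.0]))                          # [0.5, 0.5]
```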
2.3.5. Implementation Details
I. 2nd molar image predictions

1. Initializing the directories and classes
   - Define the directories that hold the datasets for male and female patients.
   - Define the class list: [C, D, E, F, G, H].
2. Loading dataset images from directories
   - Create two empty lists to hold image paths and image labels.
   - For each class name in the class list, collect all image files for that class.
   - Append the image paths and their labels to the two lists.
3. Splitting the dataset into training and testing sets
   - Split all images and labels into train images, test images, train labels, and test labels using the train_test_split() method, with 80% for training and 20% for testing.
4. Converting labels to categorical format
   - Convert train_labels and test_labels to one-hot encoding using the to_categorical() function.
5. Handling class imbalance
   - Compute class_weights using compute_class_weight() to handle imbalanced classes.
6. Creating data generators with data augmentation
   - Initialize an image data generator with rescaling and light augmentation (zoom, shear, and brightness adjustment) using ImageDataGenerator().
   - Create generators that apply the augmentation process to the images.
7. Building the CNN model
   - Define a sequential model.
   - Add a convolution layer with 32 filters and ReLU activation.
   - Add a max pooling layer.
   - Add a batch normalization layer.
   - Add a convolution layer with 64 filters and ReLU activation.
   - Add a max pooling layer.
   - Add a batch normalization layer.
   - Flatten the output.
   - Add a dense layer with 256 units and ReLU activation.
   - Add a dropout layer to prevent overfitting.
   - Add a final dense layer with softmax activation to classify the input into one of the six classes.
8. Compiling the model
   - Compile the model with the Adam optimizer, categorical_crossentropy loss, and accuracy as the metric.
9. Training the model
   - Train the model with the training generator for a fixed number of epochs (20).
   - Validate the model with the validation generator.
10. Evaluating the model on the test set
    - Create the test generator using the same ImageDataGenerator.
    - Evaluate the model on the test set and print the accuracy.
11. Predicting on the test set
    - Get the true labels from the test generator.
    - Use model.predict() to predict the probabilities for each image in the test set.
    - Use argmax() to convert the predicted probabilities to class labels.
12. Calculating the performance metrics
    - Calculate the weighted F1-score using f1_score() based on the true and predicted labels.
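The data-handling steps above (splitting, one-hot encoding, class weighting, and scoring) can be sketched with scikit-learn and NumPy. The label array and predicted probabilities below are hypothetical stand-ins for the study's image directories and model output:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight
from sklearn.metrics import f1_score

classes = ["C", "D", "E", "F", "G", "H"]

# Hypothetical labels; in the study these come from the per-class image directories.
labels = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 4, 5, 5] * 10)
paths = np.array([f"img_{i}.png" for i in range(len(labels))])

# Step 3: 80/20 train/test split.
train_p, test_p, train_y, test_y = train_test_split(
    paths, labels, test_size=0.2, random_state=42, stratify=labels)

# Step 4: one-hot encoding (equivalent to keras.utils.to_categorical).
train_onehot = np.eye(len(classes))[train_y]

# Step 5: class weights to counter class imbalance.
weights = compute_class_weight("balanced", classes=np.unique(labels), y=train_y)
class_weights = dict(enumerate(weights))

# Steps 11-12: convert predicted probabilities to labels and score them.
rng = np.random.default_rng(0)
probs = rng.random((len(test_y), len(classes)))  # stand-in for model.predict()
pred_y = probs.argmax(axis=1)
f1 = f1_score(test_y, pred_y, average="weighted")
```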
II. Cervical image predictions

1. Initializing the directories and classes
   - Define the directories that hold the datasets for male and female patients.
   - Define the class list: [CS1, CS2, CS3, CS4, CS5, CS6].
2. Loading dataset images from directories
   - Create two empty lists to hold image paths and image labels.
   - For each class name in the class list, collect all image files for that class.
   - Append the image paths and their labels to the two lists.
3. Splitting the dataset into training and testing sets
   - Split all images and labels into train images, test images, train labels, and test labels using the train_test_split() method, with 80% for training and 20% for testing.
4. Converting labels to categorical format
   - Convert train_labels and test_labels to one-hot encoding using the to_categorical() function.
5. Handling class imbalance
   - Compute class_weights using compute_class_weight() to handle imbalanced classes.
6. Creating data generators with data augmentation
   - Initialize an image data generator with rescaling and light augmentation (zoom, shear, and brightness adjustment) using ImageDataGenerator().
   - Create generators that apply the augmentation process to the images.
7. Building the CNN model
   - Define a sequential model.
   - Add a convolution layer with 32 filters and ReLU activation.
   - Add a max pooling layer.
   - Add a batch normalization layer.
   - Add a convolution layer with 64 filters and ReLU activation.
   - Add a max pooling layer.
   - Add a batch normalization layer.
   - Add a convolution layer with 128 filters and ReLU activation.
   - Add a max pooling layer.
   - Add a batch normalization layer.
   - Flatten the output.
   - Add a dense layer with 256 units and ReLU activation.
   - Add a dense layer with 128 units and ReLU activation.
   - Add a dropout layer to prevent overfitting.
   - Add a final dense layer with softmax activation to classify the input into one of the six classes.
8. Compiling the model
   - Compile the model with the Adam optimizer, categorical_crossentropy loss, and accuracy as the metric.
9. Training the model
   - Train the model with the training generator for a fixed number of epochs (20).
   - Validate the model with the validation generator.
10. Evaluating the model on the test set
    - Create the test generator using the same ImageDataGenerator.
    - Evaluate the model on the test set and print the accuracy.
11. Predicting on the test set
    - Get the true labels from the test generator.
    - Use model.predict() to predict the probabilities for each image in the test set.
    - Use argmax() to convert the predicted probabilities to class labels.
12. Calculating the performance metrics
    - Calculate the weighted F1-score using f1_score() based on the true and predicted labels.
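The cervical model described in step 7 is one convolution block and one dense layer deeper than the 2nd molar model. A hedged Keras sketch, assuming a 128 × 128 grayscale input and a 0.5 dropout rate (the step list specifies only "a dropout layer"):

```python
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(128, 128, 1))])  # assumed input size
for filters in (32, 64, 128):        # three convolution blocks for cervical images
    model.add(layers.Conv2D(filters, (3, 3), activation="relu"))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.BatchNormalization())
model.add(layers.Flatten())
model.add(layers.Dense(256, activation="relu"))
model.add(layers.Dense(128, activation="relu"))
model.add(layers.Dropout(0.5))       # assumed rate, matching the dense-stage rate in Section 2.3
model.add(layers.Dense(6, activation="softmax"))  # CS1-CS6
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```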
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Appendix B
References
- Wong, K.F.; Lam, X.Y.; Jiang, Y.; Yeung, A.W.K.; Lin, Y. Artificial intelligence in orthodontics and orthognathic surgery: A bibliometric analysis of the 100 most-cited articles. Head Face Med. 2023, 19, 38.
- Strunga, M.; Urban, R.; Surovková, J.; Thurzo, A. Artificial Intelligence Systems Assisting in the Assessment of the Course and Retention of Orthodontic Treatment. Healthcare 2023, 11, 683.
- Ionescu, E.; Teodorescu, E.; Badarau, A.; Grigore, R.; Popa, M. Prevention perspective in orthodontics and dentofacial orthopedics. J. Med. Life 2008, 1, 397–402.
- Kim, E.; Kuroda, Y.; Soeda, Y.; Koizumi, S.; Yamaguchi, T. Validation of Machine Learning Models for Craniofacial Growth Prediction. Diagnostics 2023, 13, 3369.
- Kök, H.; Acilar, A.M.; İzgi, M.S. Usage and comparison of artificial intelligence algorithms for determination of growth and development by cervical vertebrae stages in orthodontics. Prog. Orthod. 2019, 20, 41.
- van Meijeren-van Lunteren, A.W.; Liu, X.; Veenman, F.C.; Grgic, O.; Dhamo, B.; van der Tas, J.T.; Prijatelj, V.; Roshchupkin, G.V.; Rivadeneira, F.; Wolvius, E.B.; et al. Oral and craniofacial research in the Generation R study: An executive summary. Clin. Oral Investig. 2023, 27, 3379–3392.
- Saraç, F.; Baydemir Kılınç, B.; Çelikel, P.; Büyüksefil, M.; Yazıcı, M.B.; Şimşek Derelioğlu, S. Correlations between Dental Age, Skeletal Age, and Mandibular Morphologic Index Changes in Turkish Children in Eastern Anatolia and Their Chronological Age during the Pubertal Growth Spurt Period: A Cross-Sectional Study. Diagnostics 2024, 14, 887.
- Felemban, N.H. Correlation between Cervical Vertebral Maturation Stages and Dental Maturation in a Saudi Sample. Acta Stomatol. Croat. 2017, 51, 283–289.
- Fernandes-Retto, P.; Matos, D.; Ferreira, M.; Bugaighis, I.; Delgado, A. Cervical vertebral maturation and its relationship to circum-pubertal phases of the dentition in a cohort of Portuguese individuals. J. Clin. Exp. Dent. 2019, 11, e642–e649.
- Schnider-Moser, U.E.M.; Moser, L. Very early orthodontic treatment: When, why, and how? Dental Press J. Orthod. 2022, 27, e22spe2.
- Jiménez-Silva, A.; Carnevali-Arellano, R.; Vivanco-Coke, S.; Tobar-Reyes, J.; Araya-Díaz, P.; Palomino-Montenegro, H. Craniofacial growth predictors for class II and III malocclusions: A systematic review. Clin. Exp. Dent. Res. 2021, 7, 242–262.
- Selmanagić, A.; Ajanović, M.; Kamber-Ćesir, A.; Redžepagić-Vražalica, L.; Jelešković, A.; Nakaš, E. Radiological Evaluation of Dental Age Assessment Based on the Development of Third Molars in Population of Bosnia and Herzegovina. Acta Stomatol. Croat. 2020, 54, 161–167.
- Monirifard, M.; Yaraghi, N.; Vali, A.; Vali, A.; Vali, A. Radiographic assessment of third molars development and its relation to dental and chronological age in an Iranian population. Dent. Res. J. 2015, 12, 64–70.
- Fernandes, A.P.; Battistella, M.A. Dental Implants in Pediatric Dentistry: A Literature Review. Braz. J. Implantol. Health Sci. 2020, 2, 1–2.
- Nedumgottil, B.M.; Sam, S.; Abraham, S. Dental implants in children. Int. J. Oral Care Res. 2020, 8, 57–59.
- Singh, N.K.; Raza, K. Progress in deep learning-based dental and maxillofacial image analysis: A systematic review. Expert Syst. Appl. 2022, 199, 116968.
- Kafieh, R.; Aghazadeh, F. A deep learning approach for classification of tooth maturity stages using panoramic radiographs. J. Biomed. Phys. Eng. 2020, 10, 419–426.
- McNamara, J.A., Jr.; Franchi, L.; Baccetti, T. The cervical vertebral maturation (CVM) method for the assessment of optimal treatment timing in dentofacial orthopedics. Semin. Orthod. 2005, 11, 119–129.
- Demirjian, A.; Goldstein, H.; Tanner, J.M. A new system of dental age assessment. Hum. Biol. 1973, 45, 211–227.
- Singh, N.K.; Faisal, M.; Hasan, S.; Goswami, G.; Raza, K. A Single-Stage Deep Learning Approach for Multiple Treatment and Diagnosis in Panoramic X-ray. In Intelligent Systems Design and Applications; ISDA 2023; Lecture Notes in Networks and Systems; Abraham, A., Bajaj, A., Hanne, T., Siarry, P., Eds.; Springer: Cham, Switzerland, 2024; Volume 1046.
- Singh, N.K.; Raza, K. TeethU2Net: A Deep Learning-Based Approach for Tooth Saliency Detection in Dental Panoramic Radiographs. In Neural Information Processing; ICONIP 2022; Communications in Computer and Information Science; Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A., Eds.; Springer: Singapore, 2023; Volume 1794.
- Caloro, E.; Ce, M.; Gibelli, D.; Palamenghi, A.; Martinenghi, C.; Oliva, G.; Cellina, M. Artificial Intelligence (AI)-Based Systems for Automatic Skeletal Maturity Assessment through Bone and Teeth Analysis: A Revolution in the Radiological Workflow? Appl. Sci. 2023, 13, 3860.
- Ameli, N.; Lagravere, M.; Lai, H. Application of deep learning to classify skeletal growth phase on 3D radiographs. medRxiv 2023.
- Pereira, S.A.; Corte-Real, A.; Melo, A.; Magalhães, L.; Lavado, N.; Santos, J.M. Diagnostic Accuracy of Cone Beam Computed Tomography and Periapical Radiography for Detecting Apical Root Resorption in Retention Phase of Orthodontic Patients: A Cross-Sectional Study. J. Clin. Med. 2024, 13, 1248.
- Bonfim, M.A.; Costa, A.L.; Fuziy, A.; Ximenez, M.E.; Cotrim-Ferreira, F.A.; Ferreira-Santos, R.I. Cervical vertebrae maturation index estimates on cone beam CT: 3D reconstructions vs sagittal sections. Dentomaxillofacial Radiol. 2016, 45, 20150162.
- Akay, G.; Akcayol, M.A.; Özdem, K.; Güngör, K. Deep convolutional neural network—The evaluation of cervical vertebrae maturation. Oral Radiol. 2023, 39, 629–638.
- Subramanian, A.K.; Chen, Y.; Almalki, A.; Sivamurthy, G.; Kafle, D. Cephalometric Analysis in Orthodontics Using Artificial Intelligence-A Comprehensive Review. Biomed. Res. Int. 2022, 16, 1880113.
- Kunz, F.; Stellzig-Eisenhauer, A.; Zeman, F.; Boldt, J. Artificial intelligence in orthodontics: Evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J. Orofac. Orthop. 2020, 81, 52–68.
- Ezat, W.A.; Dessouky, M.M.; Ismail, N.A. Multi-class image classification using a deep learning algorithm. J. Phys. Conf. Ser. 2020, 1447, 012021.
- Negi, A.; Kumar, K.; Chauhan, P. Deep neural network-based multi-class image classification for plant diseases. In Agricultural Informatics: Automation Using the IoT and Machine Learning; Wiley: Hoboken, NJ, USA, 2021; pp. 117–129.
- Heenaye-Mamode Khan, M.; Boodoo-Jahangeer, N.; Dullull, W.; Nathire, S.; Gao, X.; Sinha, G.R.; Nagwanshi, K.K. Multi-class classification of breast cancer abnormalities using Deep Convolutional Neural Network (CNN). PLoS ONE 2021, 16, e0256500.
- Chaturvedi, S.S.; Tembhurne, J.V.; Diwan, T. A multi-class skin Cancer classification using deep convolutional neural networks. Multimed. Tools Appl. 2020, 79, 28477–28498.
- Arshed, M.A.; Mumtaz, S.; Ibrahim, M.; Ahmed, S.; Tahir, M.; Shafi, M. Multi-class skin cancer classification using vision transformer networks and convolutional neural network-based pre-trained models. Information 2023, 14, 415.
- Karaddi, S.H.; Sharma, L.D. Automated multi-class classification of lung diseases from CXR images using pre-trained convolutional neural networks. Expert Syst. Appl. 2023, 211, 118650.
- Rauf, A.M.; Mahmood, T.M.A.; Mohammed, M.H.; Omer, Z.Q.; Kareem, F.A. Orthodontic Implementation of Machine Learning Algorithms for Predicting Some Linear Dental Arch Measurements and Preventing Anterior Segment Malocclusion: A Prospective Study. Medicina 2023, 59, 1973.
- Toodehzaeim, M.H.; Rafiei, E.; Hosseini, S.H.; Haerian, A.; Hazeri-Baqdad-Abad, M. Association between mandibular second molars calcification stages in the panoramic images and cervical vertebral maturity in the lateral cephalometric images. J. Clin. Exp. Dent. 2020, 12, e148–e153.
- Arkin, E.; Yadikar, N.; Xu, X.; Aysa, A.; Ubul, K. A survey: Object detection methods from CNN to transformer. Multimed. Tools Appl. 2023, 82, 21353–21383.
- Roccetti, M.; Delnevo, G.; Casini, L.; Cappiello, G. Is bigger always better? A controversial journey to the center of machine learning design, with uses and misuses of big data for predicting water meter failures. J. Big Data 2019, 6, 1–23.
No. | Gender | Cervical Prediction Accuracy | Second Molar Prediction Accuracy |
---|---|---|---|
1. | Male | 98% | 96% |
2. | Female | 96% | 97% |
No. | Gender | Cervical F1-Score | Second Molar F1-Score |
---|---|---|---|
1. | Male | 0.93 | 0.91 |
2. | Female | 0.91 | 0.92 |
 | Predicted CS1 | Predicted CS2 | Predicted CS3 | Predicted CS4 | Predicted CS5 | Predicted CS6
---|---|---|---|---|---|---
Actual CS1 | 199 | 1 | 0 | 0 | 0 | 0 |
Actual CS2 | 2 | 198 | 0 | 0 | 0 | 0 |
Actual CS3 | 0 | 0 | 199 | 1 | 0 | 0 |
Actual CS4 | 0 | 0 | 1 | 198 | 1 | 0 |
Actual CS5 | 0 | 0 | 0 | 1 | 197 | 2 |
Actual CS6 | 0 | 0 | 0 | 1 | 1 | 198 |
 | Predicted C | Predicted D | Predicted E | Predicted F | Predicted G | Predicted H
---|---|---|---|---|---|---
Actual C | 196 | 2 | 1 | 1 | 0 | 0 |
Actual D | 1 | 198 | 1 | 0 | 0 | 0 |
Actual E | 1 | 1 | 197 | 1 | 0 | 0 |
Actual F | 0 | 0 | 0 | 198 | 1 | 1 |
Actual G | 0 | 0 | 0 | 0 | 199 | 1 |
Actual H | 0 | 0 | 0 | 0 | 2 | 198 |
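As a worked example of the reported metrics, accuracy and weighted F1 can be derived directly from the cervical-stage confusion matrix above. This is a sketch of the calculation only; values computed from the aggregate matrix need not match the per-gender scores reported in the earlier tables.

```python
import numpy as np

# Cervical stage confusion matrix from the table above (rows = actual, cols = predicted)
cm = np.array([
    [199,   1,   0,   0,   0,   0],
    [  2, 198,   0,   0,   0,   0],
    [  0,   0, 199,   1,   0,   0],
    [  0,   0,   1, 198,   1,   0],
    [  0,   0,   0,   1, 197,   2],
    [  0,   0,   0,   1,   1, 198],
])

accuracy = np.trace(cm) / cm.sum()      # correct predictions / all predictions

# Weighted F1: per-class F1 averaged with each class's support as the weight
tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)         # column sums = predicted counts per class
recall = tp / cm.sum(axis=1)            # row sums = actual counts per class
f1 = 2 * precision * recall / (precision + recall)
weighted_f1 = np.average(f1, weights=cm.sum(axis=1))
```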
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Mohammed, M.H.; Omer, Z.Q.; Aziz, B.B.; Abdulkareem, J.F.; Mahmood, T.M.A.; Kareem, F.A.; Mohammad, D.N. Convolutional Neural Network-Based Deep Learning Methods for Skeletal Growth Prediction in Dental Patients. J. Imaging 2024, 10, 278. https://doi.org/10.3390/jimaging10110278