Article

Utilizing Deep Learning for Diagnosing Radicular Cysts

1 Clinic for Tumors, Clinical Hospital Center “Sisters of Mercy”, Ilica 197, 10000 Zagreb, Croatia
2 Faculty of Electrical Engineering and Computing, University of Zagreb, Unska ulica 3, 10000 Zagreb, Croatia
3 Department of Otorhinolaryngology and Maxillofacial Surgery, General Hospital Zadar, 23000 Zadar, Croatia
4 Department of Maxillofacial and Oral Surgery, Dubrava University Hospital, Avenija Gojka Šuška 6, 10000 Zagreb, Croatia
* Authors to whom correspondence should be addressed.
Diagnostics 2024, 14(13), 1443; https://doi.org/10.3390/diagnostics14131443
Submission received: 5 June 2024 / Revised: 25 June 2024 / Accepted: 2 July 2024 / Published: 6 July 2024

Abstract

Objectives: The purpose of this study was to develop a deep learning algorithm capable of diagnosing radicular cysts in the lower jaw on panoramic radiographs. Materials and Methods: In this study, we conducted a comprehensive analysis of 138 radicular cysts and 100 normal panoramic radiographs collected from 2013 to 2023 at Clinical Hospital Dubrava. The images were annotated by a team comprising a radiologist and a maxillofacial surgeon, utilizing the GNU Image Manipulation Program. Furthermore, the dataset was enriched through the application of various augmentation techniques to improve its robustness. The evaluation of the algorithm’s performance and a deep dive into its mechanics were achieved using performance metrics and EigenCAM maps. Results: In the task of diagnosing radicular cysts, the initial algorithm performance, without the use of augmentation techniques, yielded the following scores: a precision of 85.8%, a recall of 66.7%, a mean average precision at the 50% IoU threshold (mAP@50) of 70.9%, and an mAP@50-95 of 60.2%. The introduction of image augmentation techniques yielded a precision of 74%, a recall of 77.8%, an mAP@50 of 89.6%, and an mAP@50-95 of 71.7%. Also, precision and recall were transformed into F1 scores to provide a balanced evaluation of model performance. The weighted function of these metrics determined the overall efficacy of our models. In our evaluation, non-augmented data achieved an F1 score of 0.750, while augmented data achieved a slightly higher score of 0.758. Conclusion: Our study underscores the pivotal role that deep learning is poised to play in the future of oral and maxillofacial radiology. Furthermore, the algorithm developed through this research demonstrates a capability to diagnose radicular cysts accurately, heralding a significant advancement in the field.

1. Introduction

A radicular cyst, also known as a periapical cyst, is the most common jaw cyst and represents one of the complications of dental caries. The development of radicular cysts is primarily associated with chronic periapical inflammation resulting from dental caries, although their formation can also be triggered by dental trauma or inadequate root canal treatment [1,2]. Radicular cysts are usually asymptomatic and are often discovered incidentally during routine radiological examinations. Many patients with a radicular cyst do not show symptoms until the cyst reaches a significant size or secondary infection occurs. Symptoms may include mild to moderate pain, swelling in the area of the affected tooth, and sometimes a visible change in the color of the tooth. Additionally, the causative tooth is often sensitive to percussion and usually tests negative for sensitivity [3]. A radicular cyst is typically a solitary lesion that can develop in both the upper and lower jaws, more commonly occurring in the upper jaw. This is likely due to the increased incidence of pulpal damage to the anterior maxillary teeth from trauma or palatal invaginations. In the lower jaw, they are most frequently located in the molar and premolar regions. Diagnostically, radicular cysts are most often identified on radiographs as radiolucent lesions surrounding the tooth root, often encased by a dense rim of cortical bone. Common radiographic techniques include panoramic radiographs, periapical radiographs, and cone beam computed tomography (CBCT) [4].
Treatment of radicular cysts involves surgical removal, which can range from enucleation to marsupialization, depending on the size and location of the cyst. Marsupialization is particularly significant for large cysts as it allows for the reduction in lesion size, facilitating eventual enucleation or healing. Recent case studies highlight the successful management of large radicular cysts through marsupialization, followed by orthodontic treatment to correct tooth displacement. Additionally, new perspectives suggest that some radicular cysts may heal after conventional root canal therapy, emphasizing the importance of conservative treatment approaches in certain cases [5,6,7].
Deep learning represents a specialized discipline within machine learning, focused on solving problems in the field of AI. Deep learning techniques have revolutionized the way computers recognize patterns and make sense of complex, unstructured data, enabling advancements in various areas such as speech recognition, visual object recognition, autonomous vehicles, medical diagnostics, and many others. Its ability to extract complex patterns from large datasets makes it ideal for solving problems that were unattainable with traditional algorithms. Deep learning employs algorithms inspired by the structure and function of the human brain, known as artificial neural networks [8,9]. These networks consist of layers of nodes or “neurons”, where each layer performs specific computations on the input data. The data pass through the layers, with each layer extracting and enhancing features relevant to the task at hand [10].
Panoramic radiographs depict the bone structures and teeth of the upper and lower jaw, facilitating the assessment of the size and position of the cyst relative to other anatomical elements. The imaging process is simple and non-invasive and can even be performed on patients unable to open their mouths. Additionally, panoramic radiography is relatively cost-effective and easily available, making it an ideal choice as the first-line diagnostic tool in cases of suspected cystic lesions of the lower jaw.
While this method offers numerous advantages, there are certain limitations to consider. Panoramic radiography does not provide the same level of detail as CT scans, especially in more complex situations. Overlapping structures can occur, making it difficult to identify smaller cysts or lesions in the early stages of development [11,12].
Since it is a two-dimensional representation, accurately determining depth and precise spatial relationships can be challenging. Therefore, proper interpretation of panoramic images requires the experience and expertise of dentists, radiologists, or surgeons [13].
Panoramic radiographs serve as a cornerstone in the field of dental radiology, providing a unique and comprehensive view of the oral cavity and playing a crucial role in the diagnosis, treatment planning, and monitoring of various dental and maxillofacial conditions, despite certain limitations.
AI is increasingly being integrated into oral and maxillofacial surgery, offering significant improvements in diagnostics, treatment planning, and the execution of surgical operations. AI can quickly and accurately interpret panoramic radiographs, identifying anomalies such as tumors, fractures, and other pathologies with high precision.
Chattopadhyay et al. developed a computer vision algorithm that can recognize occlusion disorders of the upper and lower jaws with an accuracy of 81%, while Xi et al. used automatic segmentation to determine bone volume and density [14,15].
Xiong et al., in their pilot study, created a deep neural network model for detecting caries based on intraoral images. They tested the model’s performance against a dentist with one year of professional experience. The developed model diagnosed caries at a higher rate than the clinician [16]. Ma et al. successfully developed an algorithm for detecting dental pulp inflammation using a convolutional neural network. The sample consisted of 348 intraoral periapical images, and the algorithm’s accuracy was 85% [17].
YOU ONLY LOOK ONCE (YOLO) refers to an object detection algorithm that represents a pivotal breakthrough in the field of computer vision technology. This algorithm has set new standards in efficiency and accuracy, surpassing the performance of renowned methods such as R-CNN, Fast R-CNN, Faster R-CNN, and SSD [18,19,20]. Since its initial introduction in 2016 with YOLOv1, YOLO models have undergone several upgrades and new versions, leading up to the latest iteration, YOLOv8, developed by the Ultralytics team. Each version has brought improvements in speed, accuracy, and model architecture, significantly contributing to object detection. YOLOv8 has introduced significant upgrades compared to previous models and utilizes an anchor-free approach. In other words, the model predicts an object’s position directly, rather than as an adjustment to a predefined anchor box, simplifying the detection process and reducing the number of required predictions. Changes have also been made to the convolutional blocks of the model, where the first 6 × 6 convolution is replaced with a 3 × 3 convolution, and mosaic augmentation is employed during training, whereby four different images from the dataset are merged into one image, with each of the original images occupying one quadrant of the new image. Alternatives to the YOLOv8 series include other frameworks such as EfficientDet, Faster R-CNN, and SSD (Single Shot MultiBox Detector). YOLOv8 has been favored over these alternatives primarily due to several key advantages. YOLO processes images much faster than many other object detection models, achieving real-time performance by processing frames in milliseconds. Unlike some alternatives that involve multiple stages or region proposals, YOLO utilizes a single neural network to directly predict bounding boxes and class probabilities from full images. This streamlined approach simplifies the pipeline and enhances efficiency.
Moreover, YOLO models are open source and benefit from robust community support, which facilitates easier adoption and customization. Its architecture is adaptable and can be optimized for various hardware platforms and deployment scenarios, including edge devices, GPUs, and specialized hardware like TPUs (Tensor Processing Units). This flexibility enables efficient deployment in real-time applications where quick decision-making based on visual data is crucial, such as in robotics, surveillance, and medical imaging [21,22].
The purpose of this study was to diagnose radicular cysts in the lower jaw by developing a deep learning model based on a real-time object detection system, YOLOv8. Furthermore, the aim of this research was to explore and validate the potential applications of deep learning and AI in maxillofacial surgery.

2. Materials and Methods

This study was approved by the ethics committee of Clinical Hospital Dubrava (2023/2103-01) and performed in accordance with the tenets of the Declaration of Helsinki.

2.1. Panoramic Radiographs Selection

Patients were searched within the Hospital Information System (HIS) of Clinical Hospital Dubrava and the Department of Oral Surgery from 2013 to 2023, using the diagnosis K04.8 according to the International Classification of Diseases (ICD). The inclusion criteria required the presence of a radicular cyst in the lower jaw, confirmed by an oral surgeon, a radiologist, and a maxillofacial surgeon, along with histopathological verification of the diagnosis.
A total of 138 panoramic radiographs with radicular cysts were obtained, and an additional 100 panoramic radiographs excluding underlying medical conditions were included for comparative purposes (Figure 1). The digital panoramic radiographs were obtained using CRANEX 3D (Planmeca OY, Helsinki, Finland).

2.2. Preparation of the Imaging Datasets

The digital panoramic radiographs were carefully retrieved from the image database and saved in the Joint Photographic Experts Group (.JPEG) format, maintaining a resolution of 2776 × 1480 pixels. To develop an AI model that can label and diagnose radicular cysts of the lower jaw, the previously saved images needed to be prepared for annotation. Preparation was carried out by an oral surgeon, a radiologist, and a maxillofacial surgeon. The process involved marking the radiolucent lesion on each image using the GIMP program (GNU Image Manipulation Program). GIMP is free and open-source image editing software available for various operating systems, including Linux, macOS, and Windows. It was developed by a volunteer team of programmers through the GNU project and offers a rich set of tools for image processing, drawing, color correction, and format conversion. One of the key features of GIMP is the ability to import and save files in different formats, meeting the needs of a wide range of users.
The previously saved images in JPEG format were imported into GIMP 2.10.32. Each image was then converted into a semi-transparent layer by setting the opacity parameter to 60 percent. This makes the loaded image 60 percent opaque and 40 percent transparent, which facilitates easier lesion marking. To ensure precise annotation, the brush settings used to draw the lesion edges needed to be adjusted; in this study, a circular brush with 100 percent hardness was used. After completing these steps, the zoom-in option was selected, and the lesion was marked with the brush. During the marking process, errors occasionally occurred when filling the lesion with the red brush; these were corrected using the paint bucket tool. The fully marked radiolucent lesion was saved in XCF (eXperimental Computing Facility) format for further preparation for AI model development (Figure 2).
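For YOLO training, lesion masks marked this way must eventually be reduced to bounding-box labels in YOLO’s normalized text format (class index followed by the box center, width, and height, all as fractions of the image size). The following pure-Python sketch illustrates one way to perform this conversion; it is our own illustration, not the authors’ exact export pipeline:

```python
def mask_to_yolo_bbox(mask, class_id=0):
    """Convert a binary lesion mask (list of rows of 0/1 values) into a
    YOLO-format label line: 'class x_center y_center width height',
    with all coordinates normalized to [0, 1]."""
    h, w = len(mask), len(mask[0])
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        raise ValueError("mask contains no annotated pixels")
    # Tight bounding box around the marked pixels, then normalize
    x_c = (min(xs) + max(xs) + 1) / 2 / w
    y_c = (min(ys) + max(ys) + 1) / 2 / h
    bw = (max(xs) - min(xs) + 1) / w
    bh = (max(ys) - min(ys) + 1) / h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {bw:.6f} {bh:.6f}"

# Toy 10 x 20 mask with a marked lesion spanning rows 2-5, columns 4-11
mask = [[0] * 20 for _ in range(10)]
for y in range(2, 6):
    for x in range(4, 12):
        mask[y][x] = 1
print(mask_to_yolo_bbox(mask))  # 0 0.400000 0.400000 0.400000 0.400000
```

One label file per image, with one such line per lesion, is the convention YOLO-family trainers expect.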

2.3. Model

For this research, we chose to utilize the pretrained YOLOv8m and YOLOv8l models due to their advanced capabilities and proven effectiveness in various computer vision tasks. YOLOv8 stands out for its exceptional flexibility, capable of accurately identifying both singular and multiple object types depending on the training it undergoes. This model follows a highly efficient architecture, featuring a “backbone” for robust feature extraction and a “head” for precise predictions. The backbone of YOLOv8 employs a complex structure that includes multiple layers such as convolutional, batch normalization, and spatial pyramid pooling layers. These layers work together to extract and refine features throughout the network, capturing intricate details in the images. The head makes detailed predictions using these refined features, incorporating concatenation and upsampling techniques to enhance feature map resolution. These technical aspects contribute to YOLOv8’s ability to deliver high accuracy and fast processing speeds, making it ideal for real-time applications. YOLOv8m refers to a medium-sized version of the YOLOv8 architecture. It typically strikes a balance between speed and accuracy, making it suitable for applications where real-time processing is a priority but computational resources are moderate. YOLOv8l denotes a larger version of the YOLOv8 architecture. It is designed to prioritize accuracy over speed, making it suitable for applications where high precision in object detection is critical, even at the expense of slightly slower processing times and higher computational requirements.
We applied various augmentation techniques, including translation, scaling, horizontal flipping, rotation, and mosaic augmentation. These techniques were chosen to reflect natural variations in image capture, enhancing the model’s generalization capability. Despite appearing minor to the human eye, these transformations are perceived as new data by the model, significantly boosting its performance. Mosaic augmentation, in particular, has demonstrated outstanding results in numerous computer vision applications. The configuration file manages these augmentations, specifying the probability of applying mosaic and horizontal flipping, and setting the ranges for translation, scaling, and rotation. Augmentation was applied during each training epoch, ensuring that the model was exposed to a rich and varied dataset.
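In Ultralytics-style YOLOv8 training, these augmentations are controlled from the configuration file. A sketch of the relevant hyperparameter block is shown below; the parameter names follow the Ultralytics convention, but the values are illustrative placeholders, not the settings used in this study:

```yaml
# Illustrative Ultralytics-style augmentation hyperparameters
# (values are placeholders, not this study's actual configuration)
degrees: 5.0      # random rotation range, in degrees
translate: 0.1    # random translation, as a fraction of image size
scale: 0.2        # random scaling gain
fliplr: 0.5       # probability of a horizontal flip
mosaic: 1.0       # probability of mosaic augmentation
```

Because these transformations are sampled anew each epoch, the model effectively never sees the exact same training image twice.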
We configured the training process similarly to our previous research, with 100 epochs and a batch size of 4, constrained by the memory limits of our hardware and the high resolution of our images (2776 × 1480). Stochastic gradient descent was selected as the optimization algorithm, and training was executed on an NVIDIA RTX A6000 GPU to ensure rapid processing. To maintain consistent evaluation metrics on the test set, all training sessions were conducted deterministically. The dataset was divided into subsets according to the ratio of 3:1:1, with a larger portion for training and smaller portions for testing and validation.
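The 3:1:1 division described above can be sketched as a simple shuffled split. This is a generic illustration; the file names and the seed are hypothetical, not the study’s actual assignment:

```python
import random

def split_dataset(paths, ratios=(3, 1, 1), seed=42):
    """Shuffle image paths and split them into train/test/validation
    subsets according to the given ratio (here 3:1:1)."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = paths[:]
    rng.shuffle(shuffled)
    total = sum(ratios)
    n_train = round(len(shuffled) * ratios[0] / total)
    n_test = round(len(shuffled) * ratios[1] / total)
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    val = shuffled[n_train + n_test:]
    return train, test, val

# 138 cyst radiographs, as in this study, with hypothetical file names
images = [f"pano_{i:03d}.jpg" for i in range(138)]
train, test, val = split_dataset(images)
print(len(train), len(test), len(val))  # 83 28 27
```

Fixing the random seed keeps the subsets identical across runs, which matches the deterministic training setup described above.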

2.4. EigenCAM

EigenCAM refers to a technique used in computer vision and image processing to highlight the regions of an image that contribute most significantly to the predictions made by a deep learning model, particularly convolutional neural networks (CNNs). This technique is based on the concept of eigenvalues and eigenvectors from linear algebra, which help identify the principal components, that is, the most significant features, in a dataset. EigenCAM provides insights into what the model is focusing on when making a prediction, which helps in understanding and interpreting the model’s decisions. This is particularly important in fields like medical imaging, where understanding why a model made a particular diagnosis is crucial.
By visualizing the important regions, we identified whether the model was focusing on the correct parts of the image or if it was being misled by irrelevant features. This aids in improving the model by refining the training data or the model architecture.
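At its core, EigenCAM projects a convolutional layer’s activations onto their first principal component to obtain a spatial saliency map. The numpy snippet below is a simplified sketch of that idea, our own illustration rather than the implementation used in this study:

```python
import numpy as np

def eigencam(activations):
    """activations: (C, H, W) feature maps from a convolutional layer.
    Returns an (H, W) saliency map: the projection of each spatial
    position's feature vector onto the first principal component."""
    c, h, w = activations.shape
    flat = activations.reshape(c, h * w).T          # (H*W, C)
    flat = flat - flat.mean(axis=0, keepdims=True)  # center the features
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = flat @ vt[0]                              # project onto 1st PC
    cam = cam.reshape(h, w)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()                            # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4, 4))  # toy 8-channel, 4 x 4 feature map
heat = eigencam(feats)
print(heat.shape)  # (4, 4)
```

The normalized map is then resized to the input image and overlaid as a heat map, producing visualizations like those in Figure 6.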

2.5. Metric Analysis

To examine the diagnostic accuracy of the AI model, metric analyses of precision, recall, and mean average precision were employed.
Mathematically, precision can be defined as the ratio of true positive predictions (TP) to the sum of true positive and false positive predictions (TP + FP), where:
TP (True Positives): The number of examples correctly classified as positive.
FP (False Positives): The number of examples incorrectly classified as positive.
The mathematical formula is:
P = TP / (TP + FP)
High precision indicates that the model effectively avoids false positive errors, meaning that when the model classifies something as positive, there is a high probability that it is correct. However, precision does not consider all true positive examples (including those the model has failed to identify), which can be a limitation. Hence, we also used the metric of mean average precision and recall to obtain a more comprehensive view of the model’s performance.
Recall shows what portion of actual positive examples the model successfully identified as positive. Mathematically, recall is defined as the ratio of true positive predictions (TP) to the sum of true positive and false negative predictions (TP + FN), where:
TP (True Positives): The number of examples correctly classified as positive.
FN (False Negatives): The number of examples incorrectly classified as negative, though they are actually positive.
The formula for calculating recall is:
R = TP / (TP + FN)
High recall indicates that the model identifies actual positive examples well. Additionally, to demonstrate the model’s success, mean average precision (mAP) and average precision (AP) were utilized.
mAP considers both precision and recall of the model, providing a comprehensive insight into the quality and accuracy of the model considering the entire set of predictions.
The mathematical formula is:
mAP = (1/C) × Σ_{c=1}^{C} AP_c
C is the number of classes or queries and AP_c is the average precision for class or query c. In the context of object detection, mAP can be computed at different intersection over union (IoU) thresholds. IoU is a measure of overlap between the predicted bounding box and the ground truth bounding box. For example, mAP@50 refers to mAP calculated at an IoU threshold of 0.50.
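As an illustration, IoU for axis-aligned bounding boxes can be computed as follows. This is a generic sketch, not code from the study’s pipeline:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (clamped to zero when the boxes are disjoint)
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted box shifted 2 px right of a 10 x 10 ground-truth box
print(iou((0, 0, 10, 10), (2, 0, 12, 10)))  # 0.666..., a hit at IoU >= 0.5
```

A detection whose IoU with the ground truth meets the threshold counts as a true positive; mAP@50-95 averages this evaluation over thresholds from 0.50 to 0.95.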
Additionally, the F1 score was utilized to measure the harmonic mean of precision and recall. It balances the precision and recall of a model and is especially useful when the class distribution is imbalanced.
The mathematical formula is:
F1 score = 2 × (Precision × Recall) / (Precision + Recall)
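To make these definitions concrete, the snippet below computes precision, recall, and F1 from detection counts. The counts are purely illustrative, not this study’s confusion matrix:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and their harmonic mean (F1) from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical example: 21 cysts correctly boxed, 4 false alarms, 7 missed
p, r, f1 = precision_recall_f1(tp=21, fp=4, fn=7)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```

Plugging the rounded precision and recall values reported in the Results into the same F1 formula reproduces the reported scores to within rounding.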
Data were collected and stored in a database in MS Excel.

3. Results

The test set performance of the trained model is presented in Table 1 and Figure 3.
Precision and recall were transformed into F1 scores to provide a balanced evaluation of model performance. The weighted function of these metrics determined the overall efficacy of our models. In our evaluation, non-augmented data achieved F1 scores of 0.750, while augmented data achieved slightly higher scores of 0.758. Additionally, our models achieved mAP@50 scores of 0.709 and 0.896 for non-augmented and augmented data, respectively. Furthermore, the mAP@50-95 scores were 0.602 and 0.717 for non-augmented and augmented data, respectively. Despite the lower precision observed when trained on augmented data, the improved F1 scores suggest an overall enhancement in detection performance.
These results underscore the significance of considering precision, recall, and mAP scores together when evaluating model performance, especially in medical imaging tasks like cyst diagnosis. The precision, recall, mAP@50, and mAP@50-95 scores without augmentation were 85.8%, 66.7%, 70.9%, and 60.2%, respectively. With augmentation, they were 74%, 77.8%, 89.6%, and 71.7%, respectively.
The trained model is capable of predicting bounding boxes and probabilities as shown in Figure 4 and Figure 5.
In addition to evaluating the model’s performance on cases with cysts, we also conducted tests on a separate set of 100 orthopantomograms devoid of lesions and excluding any other underlying medical issues. Remarkably, the model demonstrated impeccable performance on this set, yielding no false positives, affirming its robustness in accurately identifying the absence of anomalies in such cases.
Furthermore, EigenCAM was employed to gain deeper insights into the model, as depicted in Figure 6. In an EigenCAM heat map, each pixel’s intensity, rendered on a thermal color scale, indicates how strongly that region contributes to the model’s predictions. There are some cold zones in the maxillary sinus, indicating that our algorithm faces challenges in distinguishing between the sinus and cysts. The red (hot) zone in the lower jaw demonstrates higher accuracy, particularly around the dental roots. The concentration of intensity on the heat map around the area of the lower jaw confirms the previous assertion and highlights the model’s capability to discern features pertinent to detecting mandibular cysts.

4. Discussion

The existence of radiolucent lesions in the lower jaw presents complex challenges in clinical diagnosis and treatment due to their varied morphology and potential complications. Consequently, the development of automated tools for precise detection, accurate segmentation, and reliable diagnostic classification holds significant potential in advancing patient care and treatment outcomes in the field of oral radiology and maxillofacial surgery.
AI has found applications in various fields, including medicine and dentistry. As early as the 1970s, the MYCIN system for diagnosing bacterial infections was developed; it was significant for its ability to draw conclusions from uncertain or incomplete information, using rules grounded in expert knowledge from the medical field. It functioned through questionnaires filled out by physicians about patients and a set of 600 rules for differentially diagnosing various bacterial infections and recommending antibiotics [23,24,25]. Leveraging the MYCIN platform, Henry Pople developed the INTERNIST-1 medical diagnostic algorithm, later upgraded to the CADUCEUS expert system. CADUCEUS, based on the knowledge of American molecular biologist John Edgar Myers, could diagnose over a thousand different diseases. DXplain, developed at the University of Massachusetts, provided students and physicians insight into a list of differential diagnoses for over five thousand different diseases based on complex symptoms [26,27].
Esteva et al. trained a neural network to recognize and differentiate melanoma from other skin lesions, achieving the ability to distinguish at a level comparable to experienced dermatologists [28]. Gargeya et al. developed an impressive algorithm for diabetic retinopathy detection, achieving sensitivity and specificity of 94% and 98%, respectively [29].
Notably, Kim et al. from Yonsei University in South Korea developed an algorithm for myocardial infarction screening based on deep neural networks and a database of over 2000 participants, demonstrating impressive metric results including precision, recall, and F1-score of 99.38% [30].
Moreover, AI has been used in diagnosing Alzheimer’s disease by analyzing amyloid imaging databases and accurately detecting response to therapy [31]. The sheer volume of input data required to develop such AI models is itself a potential challenge, as issues such as data imbalance can arise. For instance, most Western countries compile databases and employ personnel in healthcare institutions and universities solely dedicated to data entry, enabling them to amass tens of thousands of input data points for developing algorithms using neural networks.
Diagnosing and treating periapical lesions can be challenging for clinicians. The characteristics of periapical radiolucencies and alveolar bone resorption allow for the development of various AI algorithms. Endras et al. developed a model for diagnosing periapical lesions and tested it against maxillofacial surgeons, showing better diagnostic results than 14 out of 24 participating surgeons [32].
Ryong Ha et al. attempted, through machine learning, to recognize factors influencing the prognosis of osseointegration and the success of dental implant placement, finding that implants placed mesiodistally had the greatest impact on the final success of dental rehabilitation [33]. Additionally, in forensic dentistry, there is increasing use of deep neural networks in analyzing bite marks and estimating age and gender based on permanent dentition.
In light of AI’s wide application within the pathology of the oral and maxillofacial sphere, it is imperative to initiate research projects aimed at integrating such technological solutions into a coherent clinical decision support system.
In this study, a model for diagnosing radicular cysts of the lower jaw was developed using the most advanced version of YOLO, YOLOv8. In our previous work, we utilized YOLOv8 for lesion segmentation and detection, while in this study we extended its application to the diagnosis of radicular cysts; according to the available literature, YOLOv8 has not previously been used for this purpose [34]. Kwon et al. used YOLOv3 in their study and utilized a dataset of 1282 radiolucent lesions. The developed model’s accuracy in detecting radiolucent lesions of the lower jaw was 87%, with a sensitivity of 83% [35]. Despite having a much larger dataset, their detection metrics were similar to ours. The precision in diagnosing radicular cysts of the lower jaw in our model was 74%, with an mAP@50 of 89.6% and a sensitivity of 77.8% (Table 1).
Additionally, Ariji et al. used the Detect-Net deep neural network in an attempt to develop an AI model for detecting jaw lesions, achieving a detection accuracy of 81% for radicular cysts [36]. In our study, the mAP@50 with augmentation was 89.6% (Table 1).
Van Berne et al. utilized YOLOv3 and MobileNetV2 to develop an algorithm for classifying radicular cysts and periapical granulomas. Their final dataset comprised 80 radicular cysts and 72 periapical granulomas, with images resized to a resolution of 256 × 256 pixels. The model achieved a sensitivity of 1.00 (95% CI 0.63–1.00) for radicular cysts, a specificity of 0.95 (95% CI 0.86–0.99), and an AUC of 0.97 [37]. The images in our study had a resolution of 2776 × 1480 pixels.
The relatively small amount of data is a noted limitation of this study, although it is consistent with similar research.
Innovative visualization techniques, such as EigenCAM, further enrich our understanding of how models make predictions. EigenCAM generates heat maps that visually represent regions the model deems crucial for its predictions. These maps offer a clear visual insight into parts of the image critical for recognizing pathological conditions, often highlighting specific regions, such as the lower jaw, with heightened intensity, emphasizing potential locations of radiolucent lesions (Figure 6).
The findings of this study underscore the remarkable diagnostic proficiency of the developed model in identifying radicular cysts located in the lower jaw. A limitation of this study is the relatively small amount of input data used for model development and testing, though it is in line with similar studies. Implementing AI algorithms in medical practice presents numerous ethical considerations and challenges. AI systems depend on extensive patient data, which raises concerns about privacy breaches and the security of confidential medical information. Moreover, ensuring the accuracy, reliability, and safety of AI algorithms in medical settings necessitates rigorous clinical validation. Regulatory frameworks must evolve to effectively oversee the integration of AI in healthcare. Furthermore, AI’s involvement in medical decision-making has the potential to reshape the traditional physician–patient relationship, potentially diminishing direct interaction and human judgment. Access to AI-driven medical technologies may also widen existing disparities in healthcare access, particularly affecting underserved or marginalized communities.
Furthermore, it is essential to highlight that our research was limited to radicular cysts of the lower jaw. The algorithm encountered difficulties with the maxillary sinus when detecting radicular cysts in the upper jaw. Future studies could address this issue by expanding the application of YOLOv8 to include the upper jaw and different types of cysts, thereby enhancing its potential utility. The plan is to expand the dataset and adapt the algorithm for use in the upper jaw. Generalizing the algorithm is crucial to ensuring its effectiveness and reliability in clinical applications. Future research will focus on collecting panoramic radiographs from various sources and demographic groups, as well as adapting the algorithm based on feedback from clinical practice. Additionally, continuous testing on new, unseen data is planned to ensure that the algorithm maintains a high level of accuracy even in novel situations.

5. Conclusions

Considering the limiting factors of this study, we conclude that artificial intelligence is a powerful tool capable of detailed analysis of medical images. Moreover, the developed deep learning algorithm can diagnose radicular cysts of the lower jaw with strong performance metrics. By analyzing how the model processes data through activation visualizations and heat maps, we pave the way for its further refinement.

Author Contributions

M.R. conceived the presented idea, developed the theory, performed the sample gathering, and drafted the manuscript while M.T. performed the analytic calculations. J.P.-B. substantially contributed to sample gathering while I.Č. aided in interpreting the results and worked on the manuscript. M.S. verified the numerical results and provided critical feedback. E.D. supervised the work and was in charge of the overall direction. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was approved by the ethics committee of Clinical Hospital Dubrava (2023/2103-01) and performed in accordance with the tenets of the Declaration of Helsinki.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

No author received any material or financial gain or personal advancement in the production of this manuscript. The authors have no competing interests.

Figure 1. Examples of the included radicular cysts.
Figure 2. Radicular cyst in the lower jaw, manually annotated in the GIMP program.
Figure 3. Metrical analysis of non-augmented and augmented data.
Figure 4. (A) Panoramic radiograph of a radicular cyst in the lower jaw; (B) the developed model's detection of the radicular cyst with a precision of 93%.
Figure 5. (A) Panoramic radiograph of a radicular cyst in the lower jaw with surgically removed roots; (B) the developed model's detection of the radicular cyst with a precision of 72%.
Figure 6. An intensified thermal zone is visible in the lower jaw area.
Table 1. Performance of the model in the detection of radicular cysts with and without augmentation: precision (positive predictive value), recall (sensitivity), and mean average precision.
                        P      R      mAP@50  mAP@50-95
Without augmentation    0.858  0.667  0.709   0.602
With augmentation       0.740  0.778  0.896   0.717
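The F1 scores reported in the abstract (0.750 without and 0.758 with augmentation) follow directly from the precision and recall values in Table 1 as their harmonic mean; a minimal check in Python:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values from Table 1
print(f1_score(0.858, 0.667))  # ~0.75 without augmentation
print(f1_score(0.740, 0.778))  # ~0.76 with augmentation
```

Note how augmentation trades some precision for recall yet yields a slightly higher F1, which is why the augmented model was judged the more balanced detector.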

