Article

Application of AI-Driven Software Diagnocat in Managing Diagnostic Imaging in Dentistry: A Retrospective Study

1 Department of Clinical Disciplines, University ‘Alexander Xhuvani’ of Elbasan, 3001 Elbasan, Albania
2 Department of Life, Health & Environmental Sciences, University of L’Aquila, 67100 L’Aquila, Italy
3 Department of Physical and Chemical Sciences, University of L’Aquila, 67100 L’Aquila, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9790; https://doi.org/10.3390/app15179790
Submission received: 25 July 2025 / Revised: 27 August 2025 / Accepted: 5 September 2025 / Published: 6 September 2025
(This article belongs to the Special Issue Advanced Dental Imaging Technology)

Abstract

Background: This study investigates the diagnostic reliability of an artificial intelligence (AI)-based software (Diagnocat) in detecting caries, dental restorations, missing teeth, and periodontal bone loss on panoramic radiographs (PRs), comparing its performance with evaluations from three independent dental experts serving as ground truth. Methods: A total of 104 PRs were analyzed using Diagnocat, which assigned a likelihood score (0–100%) for each condition. The same images were independently evaluated by three experts. The diagnostic performance of Diagnocat was evaluated using sensitivity, specificity, and receiver operating characteristic (ROC) curve analysis, while inter-rater agreement was assessed through Cohen’s kappa (κ). Results: Diagnocat showed high overall sensitivity (99.2%), identifying nearly all conditions marked as present by human evaluators. Specificity was low (8.7%), indicating a tendency to overdiagnose. Overall accuracy was 96%, likely influenced by the coexistence of multiple conditions. Sensitivity by condition ranged from 77% to 96%, while specificity varied: dental restorations (66%), missing teeth (68%), periodontal bone loss (71%), and caries signs (47%). Agreement was fair for dental restorations (κ = 0.39) and missing teeth (κ = 0.37), but poor for caries signs (κ = −0.15) and periodontal bone loss (κ = −0.62). Conclusions: Diagnocat shows promise as a screening tool due to its high sensitivity, but low specificity and poor agreement for certain conditions warrant cautious interpretation alongside clinical evaluation.

1. Introduction

Artificial intelligence (AI) has emerged as a transformative force across medical disciplines, reshaping both diagnostic procedures and therapeutic strategies [1]. Its ability to process vast datasets and uncover subtle patterns often surpasses human capabilities. Within the realm of imaging diagnostics, AI offers precision and speed that far exceed those of traditional methods [2,3]. Additionally, AI enables predictive modeling based on patient data, allowing for more personalized and effective treatment plans [4,5]. Machine learning (ML), a key subfield of AI, empowers systems to learn autonomously from data inputs and refine their predictive accuracy over time [6]. Among ML techniques, deep learning (DL) has gained prominence, particularly in image analysis, due to its use of complex neural networks trained on large-scale datasets [7]. Unlike traditional ML, DL facilitates automatic feature extraction, minimizing the need for manual input. In the field of dentistry, AI has made remarkable advancements, seamlessly integrating into routine practice [8], enhancing diagnostic precision, streamlining treatment planning, and optimizing patient management workflows [9,10,11]. Dental imaging remains a cornerstone of oral healthcare, and AI-driven tools offer the potential to elevate both the efficiency and reliability of radiographic interpretation [12,13].
Currently, convolutional neural networks (CNNs) represent the most effective architecture for image-based analysis. These networks comprise multiple layers, such as convolutional, pooling, and fully connected layers [14], that enable robust image classification and pattern recognition across diverse domains, including medical imaging, object detection, and natural language processing [15]. In dentistry, CNNs can assist in identifying and analyzing several conditions [16], including dental caries [17,18], tooth numbering [19,20,21], tooth classification [22], periapical pathologies [23,24], periodontal disease [25], and oral cancer [26,27]. Additionally, this technology can identify root fractures, analyze root canal anatomy, and assist in determining working lengths [28,29,30]. These systems also support population-level oral health monitoring [31] and help mitigate diagnostic errors linked to examiner fatigue [32]. By offering algorithmic second opinions, AI tools can reinforce clinical decision-making and enhance communication with patients regarding treatment rationale. One notable CNN-based system is Diagnocat (DGNCT LLC, Miami, FL, USA), which utilizes CNN-based algorithms to analyze various radiographic formats, including panoramic radiographs (PRs), intraoral images, and cone beam computed tomography (CBCT) scans, in real time [33,34,35,36]. The system is capable of making detailed assessments of various conditions, including caries signs, dental restorations, and periodontal bone loss, among others [37], and can serve as a valuable aid in patient communication and treatment planning. Reports may also include recommendations for further imaging or specialist consultations. However, despite these advancements, the clinical adoption of Diagnocat remains in its early stages. Further research is necessary to validate its diagnostic reliability, particularly in terms of accuracy, sensitivity, and specificity.
This retrospective study aims to evaluate the diagnostic performance of Diagnocat in comparison to the consensus of three experienced dentists. The focus is on identifying caries, dental restorations, missing teeth, and periodontal bone loss using panoramic radiographs.

2. Materials and Methods

2.1. Ethical Approval

The study protocol (No. 2968, dated 6 December 2023) received approval from the Internal Review Board of the University of Elbasan, Albania. This research was conducted independently, without financial support from external sources. All procedures adhered to the ethical standards outlined in the Declaration of Helsinki for research involving human participants [38].

2.2. Data Collection

This study analyzed 104 PRs, comprising 54 female and 50 male patients, obtained from the Radiology Department of the International Hospital (Hygeia) Tiranë, Albania. The images were collected between November 2024 and February 2025. All PRs were de-identified prior to analysis. Anonymization was performed by removing patient identifiers from the image metadata and assigning a unique study code to each radiograph. Radiographs were eligible for inclusion if they were of diagnostically acceptable or high quality, depicted at least six teeth, and allowed for evaluation of both marginal and apical periodontium. To maintain a diverse and representative sample, no limitations were placed on participants’ age, sex, or dental status. Radiographs were excluded if they exhibited poor image quality, including ghosting, pronounced artifacts, or signs of developmental anomalies.

2.3. Imaging and Diagnocat Analysis

Each PR was cataloged and assigned a unique identifier. Subsequently, the images were uploaded to the Diagnocat platform (DGNCT LLC, Miami, FL, USA), which automatically generated a radiological report for each case. PRs were analyzed using Diagnocat (version 1.0), accessed via its online platform on 4 February 2025. No manual region-of-interest selection or contrast/brightness adjustments were applied prior to analysis. Diagnocat’s internal processing pipeline was used without API customization. According to the vendor’s documentation, Diagnocat has been trained on a large dataset of annotated dental images, although specific details about the training data remain proprietary.
The AI system assigned a percentage likelihood (ranging from 0 to 100%) for caries detection, dental restoration, missing teeth, and periodontal bone loss. Diagnocat’s percentage score served as a confidence measure, with a predefined threshold of ≥50% used to classify a tooth as positive for the condition. This threshold represents the standard decision point embedded in the software’s diagnostic logic and was adopted to maintain consistency with its intended clinical use.
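The decision rule described above can be sketched in a few lines of illustrative Python; Diagnocat's internal implementation is proprietary, and the scores below are invented:

```python
# Hypothetical sketch of the >=50% decision rule described above;
# Diagnocat's actual pipeline is proprietary.
THRESHOLD = 50.0

def classify(scores, threshold=THRESHOLD):
    """Map percentage likelihood scores (0-100) to binary tooth-level labels."""
    return [1 if s >= threshold else 0 for s in scores]

# Invented likelihood scores for four teeth:
print(classify([12.0, 50.0, 87.5, 49.9]))  # [0, 1, 1, 0]
```

Note the boundary case: a score of exactly 50% is classified as positive, matching the "≥50%" rule.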
Each tooth was treated as an independent unit for performance evaluation. Although multiple teeth per patient were included, clustering effects were not adjusted for in the analysis.

2.4. Human Assessment

The same set of PRs was independently reviewed by three qualified dental professionals (S.A., D.P., and M.G.), each with clinical experience in general dentistry, restorative dentistry, periodontology, and dental radiology. A customized evaluation form was designed for consistent data entry across all cases. To maintain objectivity, each evaluator conducted their assessment separately and without access to the Diagnocat results. Their findings were documented in structured spreadsheets, organized by diagnostic category, tooth number, and evaluator identity. For each tooth, evaluators recorded either the presence or absence of the specified condition. To establish a reference standard (ground truth), Diagnocat’s outputs were compared against the consensus derived from the human evaluations. A diagnosis was considered valid if at least two of the three experts concurred on the condition’s presence or absence.
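The two-of-three consensus rule can be expressed compactly; the sketch below is illustrative Python, not the evaluation form actually used:

```python
def consensus(votes):
    """Ground-truth label: 1 if at least two of the three raters marked
    the condition as present, else 0."""
    assert len(votes) == 3, "expects exactly three raters"
    return 1 if sum(votes) >= 2 else 0

print(consensus([1, 1, 0]))  # 1 (two raters agree the condition is present)
print(consensus([0, 1, 0]))  # 0 (majority says absent)
```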

2.5. Statistical Analysis

To ensure terminological precision and facilitate the interpretation of diagnostic performance, the following metrics are defined as applied in this study:
  • Sensitivity (True Positive Rate): The proportion of actual positive cases correctly identified by the algorithm. This metric reflects the model’s capacity to detect a condition when it is truly present.
  • Specificity (True Negative Rate): The proportion of actual negative cases correctly classified as such. It indicates the model’s ability to exclude a condition when it is genuinely absent.
  • Accuracy: The overall proportion of correct classifications, comprising both true positives and true negatives, relative to the total number of evaluated cases. It provides a general measure of diagnostic correctness.
  • Reliability: In the context of this study, reliability refers to the consistency of Diagnocat’s diagnostic outputs in comparison to the reference standard. It was assessed using inter-rater agreement metrics, specifically Cohen’s kappa coefficient.
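The first three metrics follow directly from the counts of a 2×2 confusion matrix; the short Python sketch below (with arbitrary example counts, not the study's data) makes the definitions concrete:

```python
def sensitivity(tp, fn):
    """True positive rate: fraction of truly positive cases detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: fraction of truly negative cases correctly cleared."""
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    """Overall fraction of correct classifications."""
    return (tp + tn) / (tp + tn + fp + fn)

# Arbitrary example counts:
print(sensitivity(90, 10))       # 0.9
print(specificity(40, 60))       # 0.4
print(accuracy(90, 40, 60, 10))  # 0.65
```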

2.5.1. ROC Analysis

Statistical computations were performed using R (version 4.2.1); the pROC package was used to generate ROC curves and compute the area under the curve (AUC) for each diagnostic category. Diagnocat’s continuous percentage scores were treated as the predictor variable, while the experts’ binary assessment (presence or absence) served as the response variable. Binary outputs were also generated by applying the <50%/≥50% threshold to Diagnocat’s scores and were compared directly with human evaluations. While this cutoff offers a balanced starting point between sensitivity and specificity, threshold selection can significantly influence diagnostic outcomes; we therefore included ROC and precision–recall (P–R) curve analyses to illustrate model performance across varying thresholds. To evaluate the diagnostic accuracy of the AI system, key performance metrics, including sensitivity, specificity, true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), were extracted using the coords() function. These measures provided a detailed understanding of Diagnocat’s performance in identifying oral conditions.
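The analysis itself was run with R's pROC; as a language-neutral illustration of what the AUC measures, the sketch below uses the equivalent rank-based (Mann–Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. Scores and labels are invented:

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of positive/negative pairs where the
    positive case receives the higher score (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [95, 80, 60, 55, 40, 20]  # Diagnocat-style 0-100 likelihoods (invented)
labels = [1, 1, 0, 1, 0, 0]        # expert consensus labels (invented)
print(round(auc(scores, labels), 3))  # 0.889: one positive is outscored by a negative
```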

2.5.2. Confusion Matrices and Visualization

Confusion matrices were generated using the yardstick package to illustrate Diagnocat’s classification performance across the four conditions. To enhance interpretability, confusion matrices and ROC curves were plotted using ggplot2 and patchwork, ensuring a consistent and clear visual representation of diagnostic accuracy.

2.5.3. Differential Analysis

Differences in age distribution between sexes were analyzed using independent t-tests. Additionally, sex-based variations in the prevalence of dental caries, dental restoration, missing teeth, and periodontal bone loss were assessed using Pearson’s Chi-squared tests. These analyses helped determine whether demographic factors influenced disease occurrence within the study.

3. Results

In this retrospective analysis, the diagnostic performance of Diagnocat, an AI-driven platform, was assessed using PRs from a sample of 104 patients with permanent dentition. The analysis focused on four key dental conditions—caries signs, dental restoration, missing teeth, and periodontal bone loss—comparing Diagnocat’s performance against the assessments of three expert evaluators. Figure 1 displays a representative PR used for the present study, showing diagnoses automatically generated by Diagnocat.

3.1. Study Design and Data Structure

The study population consisted of 104 subjects, with a nearly balanced gender distribution: 54 females (51.9%) and 50 males (48.1%). The overall mean age was 33.18 years, with a standard deviation of 16.53 (Table 1). Age distributions did not differ significantly between sexes (females: mean = 32.41, SD = 16.41; males: mean = 34.02, SD = 16.77; p = 0.621, ANOVA).
A total of 1686 teeth were analyzed across all participants (females, N = 880; males, N = 806), with each participant contributing multiple teeth for evaluation. Individual teeth could exhibit one or more coexisting pathologies (e.g., caries and bone loss), reflecting the hierarchical nature of the data. This structure—participants as clusters of teeth and teeth as clusters of conditions—necessitated careful interpretation of prevalence and diagnostic metrics to avoid overestimating statistical independence. For each diagnostic condition, only teeth that met specific inclusion criteria, such as sufficient image quality and anatomical visibility, were considered evaluable and included in the analysis. Consequently, the number of teeth analyzed varies across conditions and does not correspond to the full dataset of 1686 teeth. This condition-specific filtering was applied to ensure the validity of diagnostic performance metrics.

3.2. Prevalence of Dental Conditions

Condition prevalence was calculated at the tooth level to account for multiplicity within participants, as reported above in Table 1. Caries signs were identified in 119 teeth (7% of a total of 1686), with comparable rates between females (N = 57) and males (N = 58) (Table 2). Dental restoration was detected in 450 teeth (26.5%), with a slightly higher, statistically non-significant, prevalence in males (52.89%) than in females (47.11%). The number of missing teeth (N = 316; 18.6%) was significantly higher in females (61.1%; N = 193) than in males (38.9%; N = 123) (p < 0.001). Periodontal bone loss was recorded in 473 teeth (27.9%), with a similar rate between females and males (49.9% vs. 50.1%, respectively).
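The reported sex difference in missing teeth can be checked from the table counts with Pearson's chi-squared statistic for a 2×2 table. The sketch below is a pure-Python check, without the continuity correction some packages apply by default, so it may differ slightly from R's output:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Missing vs. present teeth by sex: 193 of 880 (females), 123 of 806 (males).
stat = chi2_2x2(193, 687, 123, 683)
print(round(stat, 2))  # ~12.29, above the df = 1 critical value of 10.83,
                       # consistent with the reported p < 0.001
```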

3.3. Inter-Rater Agreement Analysis (Cohen’s Kappa)

Cohen’s kappa (κ) was calculated to assess the level of agreement among human evaluators, as well as between the Diagnocat software and these human raters, while adjusting for chance concordance. The statistical analysis demonstrated substantial agreement among all inter-evaluator assessments, with a κ value of approximately 0.77. This finding highlights the high credibility of the ground truth and the reproducibility of the reports. However, the statistical evaluation of the agreement between the ground truth and the results produced by Diagnocat revealed poor reliability in detecting signs of caries and periodontal bone loss, with κ values of −0.15 and −0.62, respectively. The observed negative values of Cohen’s κ for periodontal bone loss and caries may reflect the influence of low prevalence and unbalanced marginal distributions, which are known to affect κ estimates. In contrast, a fair level of agreement was observed for dental restorations and missing teeth, with κ values of 0.39 and 0.37, respectively. Table 3 summarizes the levels of agreement between Diagnocat and the human raters based on these criteria. Supplementary Table S1 reports the diagnostic performance (AUC with 95% CI) of Diagnocat and of each practitioner.
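Negative κ values of this kind arise when both raters' marginal distributions are heavily skewed; the toy computation below (arbitrary counts, not the study's data) shows how 90% raw agreement can still yield a negative κ:

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa from 2x2 counts: observed agreement corrected for
    the agreement expected by chance given each rater's marginals."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                           # observed
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # by chance
    return (po - pe) / (1 - pe)

# Both raters call 95% of cases positive; they agree on 90 of 100 cases,
# yet kappa is negative because chance agreement alone would give 90.5%.
print(round(cohens_kappa(tp=90, fp=5, fn=5, tn=0), 3))  # -0.053
```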

3.4. Overall Performance (All Conditions Combined)

Diagnocat demonstrated a remarkably high sensitivity of 99.2%, effectively identifying nearly all cases classified as “condition present” by human evaluators (Figure 2). However, it exhibited low specificity, at approximately 9%, meaning it rarely predicted “no condition” when the condition was truly absent. The overall correct classification rate was high, with an accuracy of 96%, largely attributable to the high prevalence of dental conditions among the evaluated teeth. Supplementary Figure S1 displays the P–R curve, which offers a clearer view of overall performance under class imbalance. The confusion matrix provided further insight into the model’s performance, showing true positives (TP): 1303; false negatives (FN): 10; false positives (FP): 42; true negatives (TN): 4.
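These figures can be recomputed directly from the confusion matrix reported above:

```python
# Counts taken from the overall confusion matrix in the text.
tp, fn, fp, tn = 1303, 10, 42, 4

sensitivity = tp / (tp + fn)               # 1303/1313
specificity = tn / (tn + fp)               # 4/46
accuracy = (tp + tn) / (tp + fn + fp + tn)

print(f"sensitivity = {sensitivity:.1%}")  # 99.2%
print(f"specificity = {specificity:.1%}")  # 8.7%
print(f"accuracy    = {accuracy:.1%}")     # 96.2%
```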

3.5. Caries Signs

Diagnocat demonstrated a moderate ability to discriminate the likelihood of caries signs, with an area under the curve (AUC) of 0.73. At the optimal threshold, it exhibited very high sensitivity at 0.95, but its specificity was lower at 0.47, suggesting a tendency for frequent over-reporting (Figure 3). Supplementary Figure S2 displays the P–R curve for caries signs, offering a clearer view of performance under class imbalance. The confusion matrix revealed 118 TP and 1 FN, along with 1228 FP and 13 TN.

3.6. Dental Restoration

In terms of detecting dental restorations, Diagnocat also displayed moderate discriminative ability, with an AUC of 0.76 (Figure 4). At the selected threshold, the model reached a sensitivity of 77%, successfully detecting most cases with restorations. Its specificity was 66%, indicating a fair ability to recognize teeth without restoration. Supplementary Figure S3 displays the P–R curve for dental restoration, offering a clearer view of performance under class imbalance. The confusion matrix indicated 450 TP, 0 FN, 896 FP, and 14 TN.

3.7. Missing Teeth

For the detection of missing teeth, Diagnocat’s performance remained moderate, with an AUC of 0.73 (Figure 5). It showed a high sensitivity of 0.81, indicating strong detection of missing teeth, although its specificity was lower at 0.68. Supplementary Figure S4 displays the P-R curve for missing teeth, offering a clearer view of performance under class imbalance. The confusion matrix showed 303 TP and 13 FN, alongside 1043 FP and 1 TN.

3.8. Periodontal Bone Loss

For periodontal bone loss, Diagnocat exhibited a strong performance with an AUC of 0.85. The sensitivity was very high at 0.96, while the specificity was moderate at 0.71 (Figure 6). Supplementary Figure S5 displays the P–R curve for periodontal bone loss, offering a clearer view of performance under class imbalance. The model classified 14 non-bone loss cases (TN) and 473 true bone loss cases (TP) but misclassified 873 healthy teeth as pathological (FP) and 0 true cases as healthy (FN).

4. Discussion

This retrospective analysis explored the diagnostic reliability of Diagnocat, an AI-powered tool, in detecting key dental conditions, including caries, dental restorations, missing teeth, and periodontal bone loss, using PRs from a sample of 104 individuals. The results were then compared with the consensus of three expert dentists, which served as the ground truth. The Diagnocat software demonstrated a high sensitivity of 99.2% across all conditions, indicating that it is very effective at correctly identifying positive cases (TP). However, it showed low specificity at 8.7%, meaning it tends to mistakenly identify many negative cases as positives. Consequently, while Diagnocat rarely missed a diseased tooth, it was more prone to over-diagnosing certain conditions. Overall, the correct classification rate was high, with an accuracy of 96% across all conditions. While there was a fair level of agreement between Diagnocat and the ground truth for dental restorations and missing teeth (with κ values of 0.39 and 0.37, respectively), the statistical analysis indicated lower reliability for detecting signs of caries and periodontal bone loss, with κ values of −0.15 and −0.62, respectively. These negative values of Cohen’s κ may reflect agreement lower than expected by chance. This phenomenon can occur in contexts of low prevalence or when marginal distributions are highly unbalanced. In such cases, κ may underestimate the true level of agreement. The interpretation of these values was therefore made with caution, considering the influence of prevalence and distribution asymmetry. Moreover, statistical analysis revealed that Diagnocat’s performance varied by condition. For instance, Diagnocat was highly sensitive in identifying the likelihood of caries signs and periodontal bone loss, achieving sensitivities of 95% and 96%, respectively.
A slightly lower sensitivity of the software was found in recording the probability of dental restorations and missing teeth (0.77 and 0.81, respectively). However, a low specificity of 47% was observed for caries identification, with the confusion matrix showing 1228 FP. This suggests that the software frequently misidentified negative cases as positives, inflating the number of FP. In contrast, Diagnocat showed moderate discriminative performance in identifying dental restorations, missing teeth, and periodontal bone loss, achieving specificities of 66%, 68%, and 71%, respectively.
Our findings suggest that Diagnocat excels in sensitivity but falls short in specificity, positioning it as a potentially valuable screening tool for detecting oral health issues. In this scenario, Diagnocat’s high sensitivity would allow it to catch most of the actual positive cases, which could be further evaluated to confirm the presence of the condition. This approach could be particularly useful in clinical settings where it is crucial not to miss positive cases, but it is also important to confirm the diagnosis with further evaluations to avoid FP and unnecessary treatments.
Previous studies have explored the diagnostic reliability of AI-based software Diagnocat in dental imaging. In a study by Zadrożny et al., 30 panoramic radiographs (PRs) were analyzed using Diagnocat’s automated AI-based evaluation and compared with manual assessments conducted by three dentists with varying levels of experience [37]. The authors concluded that Diagnocat is useful for initial evaluations of screening PRs for dental applications. Additionally, the system-generated report identifies potential pathologies that should be evaluated by specialists or analyzed using more accurate methods such as CBCT [37].
In a study by Issa et al., the AI system achieved a sensitivity of 92.3%, specificity of 97.8%, and an overall accuracy of 96.6% in detecting periapical lesions on 2D radiographs [39]. Despite these high metrics, the algorithm misclassified one FN and one FP case, underscoring the importance of further validation. The authors emphasize the need for continued research to better understand the limitations and optimize AI integration into clinical workflows [39]. Complementing these findings, Orhan et al. evaluated Diagnocat’s performance across 4497 teeth using 100 PRs [40]. The system and three human experts independently assessed various dental conditions, including caries, restorations, calculus, interproximal contact loss, and periapical lesions. The study revealed that Diagnocat’s lowest reliability was observed in diagnosing caries, periapical lesions, voids in root canals, and overhangs. The authors stressed that caries detection remains a clinically nuanced process, requiring both direct examination and radiographic interpretation [40]. In another analysis by Szabó et al., the Diagnocat system was evaluated for its ability to detect carious lesions on 323 interproximal surfaces [41]. Two observers independently assessed the radiographs, followed by a second evaluation four months later with Diagnocat assistance. The data were analyzed using a convolutional neural network (CNN). Despite this rigorous multi-phase approach, the authors identified the lack of clinical validation as the study’s main limitation, emphasizing the need for real-world testing to confirm AI reliability [41]. Kazimierczak et al. extended the evaluation by comparing Diagnocat’s performance on PRs and CBCT scans from the same patients [42]. They also compared the program’s results with those of a human expert. Diagnocat demonstrated high sensitivity and specificity in detecting periapical lesions on CBCT images, although its sensitivity was lower in PR images.
Despite this, Diagnocat maintained high specificity and positive predictive value across both modalities, suggesting its potential as a reliable adjunct in dental diagnostics when paired with appropriate imaging techniques [42]. In the study conducted by Ezhov et al., the diagnostic performance of Diagnocat was evaluated using 30 CBCT scans, assessed by two groups of dentists, one aided by the AI system and the other unaided [36]. The aided group achieved a sensitivity of 85.4% and specificity of 96.7%, while the unaided group recorded 76.7% sensitivity and 96.2% specificity. These results indicate that Diagnocat significantly enhanced the sensitivity of dental pathology detection without compromising specificity. The authors concluded that AI assistance can improve diagnostic accuracy in CBCT-based evaluations, supporting its integration into clinical workflows to enhance decision-making and patient outcomes [36].
Despite the promising results, our study has some significant limitations. The primary concern is the lack of a direct clinical evaluation. The diagnostic accuracy of AI-based tools such as Diagnocat may be limited when relying solely on PRs, especially for early-stage or subtle conditions such as incipient caries or mild periodontal lesions [43]. Although this retrospective study did not include clinical examination, we acknowledge that integrating radiographic imaging with clinical assessment would provide a more comprehensive and clinically relevant evaluation. Future prospective studies should incorporate clinical findings, patient history, and intraoral examination to validate AI-based diagnoses more thoroughly. Moreover, this study used expert consensus (agreement of at least two out of three experienced clinicians) as the reference standard. While pragmatic, this approach may introduce subjective bias and does not represent an absolute gold standard.
A further limitation concerns the statistical approach adopted. The analysis did not account for intra-subject clustering, treating teeth as independent observations. This may have overestimated confidence in diagnostic metrics, as teeth belonging to the same patient tend to share similar pathological patterns. Failure to account for this clustering introduces potential bias and reduces the robustness of the results. In many cases, the teeth had concomitant pathological conditions, such as tooth decay and bone loss. These overlaps can interfere with the Diagnocat software’s ability to distinguish between different pathologies, leading to misclassification. For example, inflammation associated with tooth decay can mask bone loss, while the coexistence of multiple conditions can confound the automatic extraction of radiographic features. Although clustering corrections were not applied in this analysis, future research should consider mixed-effects models or cluster-based bootstrapping to account for intra-subject variability and improve statistical robustness. Moreover, this study did not include stratified validation of disease-specific sample sizes, which may affect the reliability of performance estimates for less prevalent or mild conditions. The lack of stratified sampling may limit the model’s ability to generalize across different disease severities. Future studies should incorporate subgroup analyses or stratified sampling strategies to better assess diagnostic performance across a range of clinical presentations. This study did not include certain performance metrics such as positive and negative predictive values, likelihood ratios, F1 score, or micro/macro-averaged measures, due to limitations in data structure and computational feasibility. Nonetheless, we provided 95% CIs for all reported estimates and included P–R curves for low-prevalence outcomes, which offer a more informative assessment of model performance in imbalanced scenarios.
Future work may explore a broader set of evaluation metrics to further characterize diagnostic performance across clinical conditions.
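A patient-level (cluster) bootstrap of the kind suggested above could be sketched as follows; the data layout, patient IDs, and metric are hypothetical:

```python
import random

def cluster_bootstrap(patients, metric, n_boot=1000, seed=42):
    """Resample whole patients (clusters of teeth) with replacement, pool
    their teeth, and recompute the metric; returns a percentile 95% CI
    that respects intra-patient correlation."""
    rng = random.Random(seed)
    ids = list(patients)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(ids) for _ in ids]               # resample patients
        teeth = [t for pid in sample for t in patients[pid]]  # pool their teeth
        stats.append(metric(teeth))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

def tooth_sensitivity(teeth):
    """Sensitivity over (prediction, truth) pairs."""
    positives = sum(y == 1 for _, y in teeth)
    hits = sum(p == 1 and y == 1 for p, y in teeth)
    return hits / max(1, positives)

# Hypothetical data: each patient maps to (prediction, truth) pairs per tooth.
patients = {
    "p1": [(1, 1), (1, 0)], "p2": [(1, 1), (0, 0)],
    "p3": [(1, 1), (1, 1)], "p4": [(0, 1), (1, 1)],
}
lo, hi = cluster_bootstrap(patients, tooth_sensitivity)
print(f"95% CI for sensitivity: [{lo:.2f}, {hi:.2f}]")
```

Because entire patients are resampled, teeth from the same mouth always enter or leave the bootstrap sample together, so the resulting interval reflects between-patient rather than between-tooth variability.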
Although Diagnocat showed high sensitivity, the specificity—particularly in the detection of caries—was significantly lower. The software tends to incorrectly classify healthy teeth as pathological, generating a high number of false positives. This tendency toward overdiagnosis limits its usefulness as a stand-alone diagnostic tool and raises concerns about possible unnecessary treatments and patient anxiety. While high sensitivity is desirable in screening settings, specificity is critical to ensure diagnostic accuracy.
The absence of a direct comparison between Diagnocat and other dental AI tools on the same dataset represents a limitation in assessing the relative performance of the system. Benchmarking studies using standardized datasets are needed to evaluate the competitiveness and generalizability of different AI-based diagnostic platforms. Such comparative analyses would provide valuable insights into the strengths and weaknesses of each tool and guide clinical adoption.
The poor diagnostic consistency observed for dental caries and periodontal bone loss may be attributed to several technical factors. First, an imbalance in the training data may have led the AI model to overfit common conditions while underperforming on less frequent or early-stage lesions. Second, limitations in feature extraction, particularly the reduced sensitivity of CNNs to subtle radiographic changes, could have affected the detection of early caries and mild bone loss. Third, the resolution constraints of panoramic radiographs inherently limit the anatomical detail available for identifying small or incipient lesions. These factors collectively suggest that the current model may not be optimized for detecting early-stage pathology. Future algorithm development should consider the use of balanced training datasets, improved feature extraction techniques, and validation using higher-resolution imaging modalities such as CBCT to enhance diagnostic accuracy and clinical applicability. Additionally, Diagnocat operates without access to patient-specific clinical information, including symptoms, medical history, and risk factors. This limits the system’s ability to detect relevant clinical signals or accurately interpret radiographic findings in asymptomatic patients [17,44]. Finally, this study was conducted using data from a single hospital, with a relatively young patient population (mean age: 33.18 years), which may limit the generalizability of the findings. The lack of representation of pediatric and geriatric patients restricts the applicability of the results across diverse clinical scenarios. Future research should include more heterogeneous populations to better assess the performance of AI-based diagnostics in broader clinical contexts.
In summary, although Diagnocat represents a promising tool for radiographic screening, the limitations highlighted necessitate a cautious interpretation of the results and underscore the importance of integrating the outputs of artificial intelligence within a broader diagnostic framework, based on clinical evaluation and professional judgment.

5. Conclusions

Considering our findings, Diagnocat is confirmed as a promising tool for radiographic screening in dentistry, given its high sensitivity and capacity for automated analysis. However, the limitations encountered, from the lack of clinical evaluation to the low specificity and the absence of contextual patient information, dictate a cautious reading of the results. AI cannot replace clinical judgment, but it can serve as valuable support when integrated into a multidimensional diagnostic approach.

Several future directions are proposed to improve the diagnostic performance and clinical applicability of AI-based radiographic tools. First, integrating clinical metadata, such as patient age, oral hygiene status, and medical history, into the AI model through multimodal learning frameworks may enhance the contextual interpretation of radiographic findings and improve diagnostic specificity. Second, multimodal image fusion techniques, combining panoramic radiographs with higher-resolution imaging modalities such as CBCT or intraoral scans, could provide complementary anatomical and pathological information, particularly for conditions that are difficult to detect on PRs alone. Finally, to address the challenge of overlapping diseases, future models should incorporate multi-label classification architectures and attention mechanisms to better distinguish coexisting pathologies. These enhancements may help disentangle complex radiographic features and reduce misclassification, increasing Diagnocat’s reliability across clinical settings and ultimately leading to more accurate and efficient dental diagnostics.
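The multi-label formulation proposed above can be sketched in a few lines: each condition receives an independent sigmoid probability, so coexisting findings on the same tooth do not compete as they would under a single softmax. This is a schematic illustration only; Diagnocat’s actual architecture and label set are not public, and the logit values below are invented:

```python
import math

CONDITIONS = ["caries", "restoration", "missing_tooth", "bone_loss"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_predict(logits, threshold=0.5):
    """One independent sigmoid per condition, so a tooth can be flagged
    for several findings at once (a softmax would force the labels to
    compete for a single probability mass)."""
    probs = {c: sigmoid(z) for c, z in zip(CONDITIONS, logits)}
    return {c: (p, p >= threshold) for c, p in probs.items()}

# Invented logits for a tooth with both a restoration and bone loss:
# both labels fire independently.
print(multilabel_predict([-2.0, 1.5, -3.0, 0.8]))
```

The per-label threshold could also be tuned per condition, which is one practical lever for trading the high sensitivity observed here against better specificity.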

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/app15179790/s1.

Author Contributions

Conceptualization, D.P., H.M., and S.A.; methodology, D.P., H.M., and S.A.; software, H.M.; validation, D.P. and S.A.; formal analysis, D.P.; investigation, H.M., E.G., Y.A., and M.G. (Mitilda Gugu); resources, S.T.; data curation, H.M.; writing—original draft preparation, S.A. and D.P.; writing—review and editing, D.P. and M.G. (Mario Giannoni); visualization, D.P.; supervision, D.P., M.G. (Mario Giannoni), and S.T.; project administration, H.M. and D.P.; funding acquisition, D.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The research was conducted in accordance with the Declaration of Helsinki and approved by the Internal Review Board (Protocol n. 2968, 6 December 2023) of the University of Elbasan (Albania).

Informed Consent Statement

Patient consent was waived due to the retrospective nature of the study and the use of anonymized radiographic data, as approved by the institutional ethics committee.

Data Availability Statement

The data supporting the findings of this study are not publicly available due to privacy and ethical restrictions. The panoramic radiographs were obtained from clinical archives and analyzed retrospectively under institutional approval.

Acknowledgments

During the preparation of this manuscript, the authors used Microsoft Copilot (version July 2025) for the purposes of language refinement and structural editing. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PRs: Panoramic radiographs
P-R: Precision–recall
ROC: Receiver operating characteristic
AI: Artificial intelligence
ML: Machine learning
DL: Deep learning
CNNs: Convolutional neural networks
CBCT: Cone beam computed tomography
TP: True positive
TN: True negative
FP: False positive
FN: False negative
AUC: Area under curve

References

  1. Aung, Y.Y.M.; Wong, D.C.S.; Ting, D.S.W. The promise of artificial intelligence: A review of the opportunities and challenges of artificial intelligence in healthcare. Br. Med. Bull. 2021, 139, 4–15. [Google Scholar] [CrossRef]
  2. Howard, J. Artificial intelligence: Implications for the future of work. Am. J. Ind. Med. 2019, 62, 917–926. [Google Scholar] [CrossRef]
  3. Sarker, I.H. AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems. SN Comput. Sci. 2022, 3, 158. [Google Scholar] [CrossRef] [PubMed]
  4. Amisha; Malik, P.; Pathania, M.; Rathaur, V.K. Overview of artificial intelligence in medicine. J. Family Med. Prim. Care 2019, 8, 2328–2331. [Google Scholar] [CrossRef] [PubMed]
  5. Old, O.; Friedrichson, B.; Zacharowski, K.; Kloka, J.A. Entering the new digital era of intensive care medicine: An overview of interdisciplinary approaches to use artificial intelligence for patients’ benefit. Eur. J. Anaesthesiol. Intensive Care 2023, 2, e0014. [Google Scholar] [CrossRef]
  6. Woschank, M.; Rauch, E.; Zsifkovits, H. A Review of Further Directions for Artificial Intelligence, Machine Learning, and Deep Learning in Smart Logistics. Sustainability 2020, 12, 3760. [Google Scholar] [CrossRef]
  7. Morid, M.A.; Borjali, A.; Del Fiol, G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput. Biol. Med. 2021, 128, 104115. [Google Scholar] [CrossRef] [PubMed]
  8. Heo, M.S.; Kim, J.E.; Hwang, J.J.; Han, S.S.; Kim, J.S.; Yi, W.J.; Park, I.W. Artificial intelligence in oral and maxillofacial radiology: What is currently possible? Dentomaxillofac. Radiol. 2021, 50, 20200375. [Google Scholar] [CrossRef]
  9. Shan, T.; Tay, F.R.; Gu, L. Application of Artificial Intelligence in Dentistry. J. Dent. Res. 2021, 100, 232–244. [Google Scholar] [CrossRef]
  10. Carrillo-Perez, F.; Pecho, O.E.; Morales, J.C.; Paravina, R.D.; Della Bona, A.; Ghinea, R.; Pulgar, R.; Perez, M.D.M.; Herrera, L.J. Applications of artificial intelligence in dentistry: A comprehensive review. J. Esthet. Restor. Dent. 2022, 34, 259–280. [Google Scholar] [CrossRef]
  11. Rokhshad, R.; Ducret, M.; Chaurasia, A.; Karteva, T.; Radenkovic, M.; Roganovic, J.; Hamdan, M.; Mohammad-Rahimi, H.; Krois, J.; Lahoud, P.; et al. Ethical considerations on artificial intelligence in dentistry: A framework and checklist. J. Dent. 2023, 135, 104593. [Google Scholar] [CrossRef]
  12. Hwang, J.J.; Jung, Y.H.; Cho, B.H.; Heo, M.S. An overview of deep learning in the field of dentistry. Imaging Sci. Dent. 2019, 49, 1–7. [Google Scholar] [CrossRef]
  13. Thurzo, A.; Urbanova, W.; Novak, B.; Czako, L.; Siebert, T.; Stano, P.; Marekova, S.; Fountoulaki, G.; Kosnacova, H.; Varga, I. Where Is the Artificial Intelligence Applied in Dentistry? Systematic Review and Literature Analysis. Healthcare 2022, 10, 1269. [Google Scholar] [CrossRef]
  14. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.; van Ginneken, B.; Sanchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
  15. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [PubMed]
  16. Schwendicke, F.; Golla, T.; Dreher, M.; Krois, J. Convolutional neural networks for dental image diagnostics: A scoping review. J. Dent. 2019, 91, 103226. [Google Scholar] [CrossRef] [PubMed]
  17. Mohammad-Rahimi, H.; Motamedian, S.R.; Rohban, M.H.; Krois, J.; Uribe, S.E.; Mahmoudinia, E.; Rokhshad, R.; Nadimi, M.; Schwendicke, F. Deep learning for caries detection: A systematic review. J. Dent. 2022, 122, 104115. [Google Scholar] [CrossRef]
  18. Amasya, H.; Alkhader, M.; Serindere, G.; Futyma-Gabka, K.; Aktuna Belgin, C.; Gusarev, M.; Ezhov, M.; Rozylo-Kalinowska, I.; Onder, M.; Sanders, A.; et al. Evaluation of a Decision Support System Developed with Deep Learning Approach for Detecting Dental Caries with Cone-Beam Computed Tomography Imaging. Diagnostics 2023, 13, 3471. [Google Scholar] [CrossRef]
  19. Tuzoff, D.V.; Tuzova, L.N.; Bornstein, M.M.; Krasnov, A.S.; Kharchenko, M.A.; Nikolenko, S.I.; Sveshnikov, M.M.; Bednenko, G.B. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac. Radiol. 2019, 48, 20180051. [Google Scholar] [CrossRef]
  20. Estai, M.; Tennant, M.; Gebauer, D.; Brostek, A.; Vignarajan, J.; Mehdizadeh, M.; Saha, S. Deep learning for automated detection and numbering of permanent teeth on panoramic images. Dentomaxillofac. Radiol. 2022, 51, 20210296. [Google Scholar] [CrossRef] [PubMed]
  21. Ghorbani, Z.; Mirebeigi-Jamasbi, S.S.; Hassannia Dargah, M.; Nahvi, M.; Hosseinikhah Manshadi, S.A.; Akbarzadeh Fathabadi, Z. A novel deep learning-based model for automated tooth detection and numbering in mixed and permanent dentition in occlusal photographs. BMC Oral Health 2025, 25, 455. [Google Scholar] [CrossRef] [PubMed]
  22. Miki, Y.; Muramatsu, C.; Hayashi, T.; Zhou, X.; Hara, T.; Katsumata, A.; Fujita, H. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput. Biol. Med. 2017, 80, 24–29. [Google Scholar] [CrossRef]
  23. Orhan, K.; Bayrakdar, I.S.; Ezhov, M.; Kravtsov, A.; Ozyurek, T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int. Endod. J. 2020, 53, 680–689. [Google Scholar] [CrossRef]
  24. Sadr, S.; Mohammad-Rahimi, H.; Motamedian, S.R.; Zahedrozegar, S.; Motie, P.; Vinayahalingam, S.; Dianat, O.; Nosrat, A. Deep Learning for Detection of Periapical Radiolucent Lesions: A Systematic Review and Meta-analysis of Diagnostic Test Accuracy. J. Endod. 2023, 49, 248–261. [Google Scholar] [CrossRef]
  25. Patil, S.; Joda, T.; Soffe, B.; Awan, K.H.; Fageeh, H.N.; Tovani-Palone, M.R.; Licari, F.W. Efficacy of artificial intelligence in the detection of periodontal bone loss and classification of periodontal diseases: A systematic review. J. Am. Dent. Assoc. 2023, 154, 795–804. [Google Scholar] [CrossRef]
  26. Sultan, A.S.; Elgharib, M.A.; Tavares, T.; Jessri, M.; Basile, J.R. The use of artificial intelligence, machine learning and deep learning in oncologic histopathology. J. Oral Pathol. Med. 2020, 49, 849–856. [Google Scholar] [CrossRef]
  27. Al-Rawi, N.; Sultan, A.; Rajai, B.; Shuaeeb, H.; Alnajjar, M.; Alketbi, M.; Mohammad, Y.; Shetty, S.R.; Mashrah, M.A. The Effectiveness of Artificial Intelligence in Detection of Oral Cancer. Int. Dent. J. 2022, 72, 436–447. [Google Scholar] [CrossRef] [PubMed]
  28. Fukuda, M.; Inamoto, K.; Shibata, N.; Ariji, Y.; Yanashita, Y.; Kutsuna, S.; Nakata, K.; Katsumata, A.; Fujita, H.; Ariji, E. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol. 2020, 36, 337–343. [Google Scholar] [CrossRef] [PubMed]
  29. Aminoshariae, A.; Kulild, J.; Nagendrababu, V. Artificial Intelligence in Endodontics: Current Applications and Future Directions. J. Endod. 2021, 47, 1352–1357. [Google Scholar] [CrossRef]
  30. Khanagar, S.B.; Alfadley, A.; Alfouzan, K.; Awawdeh, M.; Alaqla, A.; Jamleh, A. Developments and Performance of Artificial Intelligence Models Designed for Application in Endodontics: A Systematic Review. Diagnostics 2023, 13, 414. [Google Scholar] [CrossRef]
  31. Turosz, N.; Checinska, K.; Checinski, M.; Rutanski, I.; Sielski, M.; Sikora, M. Oral Health Status and Treatment Needs Based on Artificial Intelligence (AI) Dental Panoramic Radiograph (DPR) Analysis: A Cross-Sectional Study. J. Clin. Med. 2024, 13, 3686. [Google Scholar] [CrossRef]
  32. Leite, A.F.; Gerven, A.V.; Willems, H.; Beznik, T.; Lahoud, P.; Gaeta-Araujo, H.; Vranckx, M.; Jacobs, R. Artificial intelligence-driven novel tool for tooth detection and segmentation on panoramic radiographs. Clin. Oral Investig. 2021, 25, 2257–2267. [Google Scholar] [CrossRef]
  33. Chan, M.; Dadul, T.; Langlais, R.; Russell, D.; Ahmad, M. Accuracy of extraoral bite-wing radiography in detecting proximal caries and crestal bone loss. J. Am. Dent. Assoc. 2018, 149, 51–58. [Google Scholar] [CrossRef]
  34. Cosson, J. Interpreting an orthopantomogram. Aust. J. Gen. Pract. 2020, 49, 550–555. [Google Scholar] [CrossRef] [PubMed]
  35. Vinayahalingam, S.; Goey, R.S.; Kempers, S.; Schoep, J.; Cherici, T.; Moin, D.A.; Hanisch, M. Automated chart filing on panoramic radiographs using deep learning. J. Dent. 2021, 115, 103864. [Google Scholar] [CrossRef] [PubMed]
  36. Ezhov, M.; Gusarev, M.; Golitsyna, M.; Yates, J.M.; Kushnerev, E.; Tamimi, D.; Aksoy, S.; Shumilov, E.; Sanders, A.; Orhan, K. Clinically applicable artificial intelligence system for dental diagnosis with CBCT. Sci. Rep. 2021, 11, 15006. [Google Scholar] [CrossRef] [PubMed]
  37. Zadrozny, L.; Regulski, P.; Brus-Sawczuk, K.; Czajkowska, M.; Parkanyi, L.; Ganz, S.; Mijiritsky, E. Artificial Intelligence Application in Assessment of Panoramic Radiographs. Diagnostics 2022, 12, 224. [Google Scholar] [CrossRef]
  38. World Medical Association. World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. JAMA 2013, 310, 2191–2194. [Google Scholar] [CrossRef]
  39. Issa, J.; Jaber, M.; Rifai, I.; Mozdziak, P.; Kempisty, B.; Dyszkiewicz-Konwinska, M. Diagnostic Test Accuracy of Artificial Intelligence in Detecting Periapical Periodontitis on Two-Dimensional Radiographs: A Retrospective Study and Literature Review. Medicina 2023, 59, 768. [Google Scholar] [CrossRef]
  40. Orhan, K.; Aktuna Belgin, C.; Manulis, D.; Golitsyna, M.; Bayrak, S.; Aksoy, S.; Sanders, A.; Onder, M.; Ezhov, M.; Shamshiev, M.; et al. Determining the reliability of diagnosis and treatment using artificial intelligence software with panoramic radiographs. Imaging Sci. Dent. 2023, 53, 199–208. [Google Scholar] [CrossRef]
  41. Szabo, V.; Szabo, B.T.; Orhan, K.; Veres, D.S.; Manulis, D.; Ezhov, M.; Sanders, A. Validation of artificial intelligence application for dental caries diagnosis on intraoral bitewing and periapical radiographs. J. Dent. 2024, 147, 105105. [Google Scholar] [CrossRef] [PubMed]
  42. Kazimierczak, W.; Wajer, R.; Wajer, A.; Kiian, V.; Kloska, A.; Kazimierczak, N.; Janiszewska-Olszowska, J.; Serafin, Z. Periapical Lesions in Panoramic Radiography and CBCT Imaging-Assessment of AI’s Diagnostic Accuracy. J. Clin. Med. 2024, 13, 2709. [Google Scholar] [CrossRef]
  43. Keenan, J.R.; Keenan, A.V. Accuracy of dental radiographs for caries detection. Evid. Based Dent. 2016, 17, 43. [Google Scholar] [CrossRef] [PubMed]
  44. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Representative panoramic radiograph (PR) showing diagnoses automatically generated by Diagnocat (DGNCT LLC, Miami, FL, USA; version 1.0). (A) Diagnocat report including a simple tooth diagram with a legend of findings and referral recommendations indicating specific specialists for specific teeth. (B) Diagnocat report including captions and descriptions for specific teeth, with the associated percentage likelihood score.
Figure 2. ROC curve illustrating the diagnostic performance of the model for overall performance with AUC, optimal threshold, sensitivity, and specificity values. The accompanying confusion matrix compares model predictions with human assessments and is based on a condition-specific subset of the sample, including only teeth that met the predefined evaluability criteria.
Figure 3. ROC curve illustrating the diagnostic performance of the model for detecting caries signs, with AUC, optimal threshold, sensitivity, and specificity values. The accompanying confusion matrix compares model predictions with human assessments and is based on a condition-specific subset of the sample, including only teeth that met the predefined evaluability criteria.
Figure 4. ROC curve illustrating the diagnostic performance of the model for detecting dental restoration, with AUC, optimal threshold, sensitivity, and specificity values. The accompanying confusion matrix compares model predictions with human assessments and is based on a condition-specific subset of the sample, including only teeth that met the predefined evaluability criteria.
Figure 5. ROC curve illustrating the diagnostic performance of the model for detecting missing teeth, with AUC, optimal threshold, sensitivity, and specificity values. The accompanying confusion matrix compares model predictions with human assessments and is based on a condition-specific subset of the sample, including only teeth that met the predefined evaluability criteria.
Figure 6. ROC curve illustrating the diagnostic performance of the model for detecting periodontal bone loss, with AUC, optimal threshold, sensitivity, and specificity values. The accompanying confusion matrix compares model predictions with human assessments and is based on a condition-specific subset of the sample, including only teeth that met the predefined evaluability criteria.
Table 1. Clinical and demographic data of patients.
|  | Overall | Female | Male | p-Value |
|---|---|---|---|---|
| N | 104 | 54 | 50 |  |
| Age, mean (SD) | 33.18 (16.53) | 32.41 (16.41) | 34.02 (16.77) | 0.621 |
Table 2. Characteristics of evaluated teeth.
| Condition | Mean Age (Years) | Female N (%) | Male N (%) | Teeth N (%) | X-Rays (N) | Diagnocat Positive (%) | Human Consensus Positive (%) |
|---|---|---|---|---|---|---|---|
| Caries | 18.5 | 57 (47.9) | 58 (48.74) | 119 (7) | 46 | 99.2 | 95 |
| Restorations | 28.9 | 212 (47.11) | 238 (52.89) | 450 (26.5) | 76 | 100 | 98.5 |
| Missing | 40.8 | 193 (61.1) | 123 (38.9) | 316 (18.6) | 87 | 95.9 | 97.5 |
| Periodontal bone loss | 46.6 | 236 (49.9) | 237 (50.1) | 473 (27.9) | 61 | 100 | 94.9 |
Table 3. Diagnostic performance of Diagnocat and its agreement levels with ground truth across the evaluated conditions.
| Condition | AUC | Sensitivity | Specificity | Cohen’s κ | Agreement Level |
|---|---|---|---|---|---|
| Caries signs | 0.73 | 0.95 | 0.47 | −0.15 | Poor |
| Dental restoration | 0.76 | 0.77 | 0.66 | 0.39 | Fair |
| Missing teeth | 0.73 | 0.81 | 0.68 | 0.37 | Fair |
| Periodontal bone loss | 0.85 | 0.96 | 0.71 | −0.62 | Poor |
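For context, the “optimal threshold” reported alongside ROC curves (as in Figures 2–6) is commonly chosen by maximizing Youden’s J statistic, J = sensitivity + specificity − 1. The paper does not state which criterion it used, so the sketch below assumes Youden’s J, with invented ROC sample points:

```python
def youden_optimal_threshold(points):
    """Select the ROC operating point maximizing Youden's J = TPR - FPR.

    `points` is a list of (threshold, tpr, fpr) tuples sampled along
    the ROC curve.
    """
    return max(points, key=lambda p: p[1] - p[2])

# Hypothetical ROC samples (threshold, TPR, FPR); not the study's data.
roc_points = [(0.2, 0.99, 0.90), (0.5, 0.95, 0.53), (0.8, 0.77, 0.20)]
threshold, tpr, fpr = youden_optimal_threshold(roc_points)
print(threshold, tpr, fpr)
```

Raising the operating threshold in this way trades some sensitivity for specificity, which is the lever a deployment could use to curb the overdiagnosis tendency reported above.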
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

