Article

Automatic Segmentation of the Nasolacrimal Canal: Application of the nnU-Net v2 Model in CBCT Imaging

1 Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Sakarya University, Sakarya 54050, Turkey
2 Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Inonu University, Malatya 44000, Turkey
3 Independent Researcher, Sakarya 54100, Turkey
4 Mathematics and Computer Science, Faculty of Science, Eskişehir Osmangazi University, Eskişehir 26040, Turkey
5 Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskişehir Osmangazi University, Eskişehir 26040, Turkey
* Author to whom correspondence should be addressed.
J. Clin. Med. 2025, 14(3), 778; https://doi.org/10.3390/jcm14030778
Submission received: 13 November 2024 / Revised: 17 January 2025 / Accepted: 18 January 2025 / Published: 25 January 2025
(This article belongs to the Topic AI in Medical Imaging and Image Processing)

Abstract

Background/Objectives: There are various challenges in the segmentation of anatomical structures with artificial intelligence, owing to the differing structural features of the region or tissue of interest. The aim of this study was to detect the nasolacrimal canal (NLC) in cone-beam computed tomography (CBCT) images using the nnU-Net v2 convolutional neural network (CNN) model and to evaluate the model's performance in automatic segmentation. Methods: CBCT images of 100 patients were randomly selected from the data archive. The raw data were transferred in DICOM format to the 3D Slicer imaging software (version 4.10.2; MIT, Cambridge, MA, USA). The NLC was labeled manually using the polygonal method. The dataset was split into training, validation and test sets at a ratio of 8:1:1. The nnU-Net v2 architecture was applied to the training and test datasets to predict and generate the appropriate algorithm weight factors. A confusion matrix was used to check the accuracy and performance of the model. From the test results, the Dice coefficient (DC), intersection over union (IoU), F1-Score and 95% Hausdorff distance (95% HD) metrics were calculated. Results: On testing, the DC, IoU, F1-Score and 95% HD values were 0.8465, 0.7341, 0.8480 and 0.9460, respectively. From these data, the receiver-operating characteristic (ROC) curve was drawn, and the area under the curve (AUC) was 0.96. Conclusions: These results show that the proposed nnU-Net v2 model achieves NLC segmentation on CBCT images with high precision and accuracy. The automated segmentation of the NLC may assist clinicians in determining the surgical technique to be used to remove lesions, especially those affecting the anterior wall of the maxillary sinus.

1. Introduction

The nasolacrimal canal (NLC), part of the lacrimal drainage system, consists of a bony and a membranous canal. The bony part is flanked laterally by the sulcus lacrimalis and medially by the processus lacrimalis. It extends from the lower end of the lacrimal sac to the meatus nasi inferior, forming an angle of approximately 15 to 30 degrees with the coronal plane. The average width of the intraosseous canal is 4 mm, and its length ranges between 13 and 28 mm. The mucosa lining the lacrimal drainage system forms folds and sinuses, with the most crucial being the Hasner valve at the meatus nasi inferior [1,2,3].
Epiphora is a condition characterized by the overflow of tears due to partial or complete blockage of the NLC, which may be congenital or acquired [3,4,5]. Primary acquired dacryostenosis, an idiopathic fibroinflammatory obstruction, is the leading cause. Secondary acquired dacryostenosis often results from infections, trauma, surgery, tumors or radiation therapy [4,6,7].
Over the years, various imaging methods have been utilized to examine the morphological changes in the NLC, as well as to detect, treat and follow up duct and sac diseases. These methods include conventional dacryocystography (DCG) [8], digital subtraction dacryocystography (DS-DCG) [9], computed tomography (CT) [4], computerized tomography–dacryocystography (CT–DCG) [10], magnetic resonance imaging (MRI) [11], and cone beam-computed tomography (CBCT) [12]. Accurate detection of the NLC is essential for maxillofacial surgical procedures, as damage to the lower opening of the NLC or the anterior wall of the lacrimal sac may occur during surgery. Therefore, selecting an imaging method that is both accurate and effective is crucial [13,14].
The widespread use of three-dimensional (3D) imaging methods in dentistry, particularly CBCT, has addressed the limitations of two-dimensional (2D) imaging [15]. In addition, the effectiveness and success of artificial intelligence (AI) algorithms have been enhanced by the use of CBCT and other 3D imaging methods. Diagnostic images have become a primary resource for developing AI systems, which can be used to detect pathological changes, segment craniofacial anatomical structures, and classify maxillofacial tumors and cysts [16,17,18,19,20]. For example, Widiasri et al. [21] developed an AI model for detecting alveolar bone thickness and the mandibular canal to aid in dental implant planning. In another study, AI algorithms were utilized to assess growth and development based on cervical vertebra stages [22]. Zhang et al. [23] successfully predicted postoperative facial swelling following impacted mandibular third molar extraction using an artificial neural network-based model. These capabilities are invaluable for clinicians during both intraoperative and postoperative decision-making. AI technology has the potential to minimize errors, assist in surgical treatment planning, reduce diagnosis and treatment times and improve overall operational efficiency [24,25,26].
However, most current deep learning algorithms require manual intervention for lesion detection and segmentation, making the process time-consuming and less practical for routine clinical use [27,28]. To address these limitations and improve usability, researchers are focusing on developing algorithms that can fully automate these steps [16,29,30].
To the best of our knowledge, no previous study has evaluated automatic segmentation of the NLC on CBCT images using nnU-Net v2. This model simplifies and streamlines the pipeline by employing a systematic, self-configuring approach, avoiding the need for additional network structures and reducing system complexity. Furthermore, the nnU-Net v2 model can be adapted into a semi-automatic system by incorporating manual inputs, enabling external intervention to address deficiencies and enhance network performance [31].
The aim of this study is to develop a CNN algorithm utilizing the nnU-Net v2 architecture and evaluate its performance in automatically detecting the NLC in axial CBCT images. A successful model could serve as a reliable tool for clinicians, aiding in the assessment of the relationship between pathologies affecting the paranasal sinuses, nasal region and the NLC. This could improve preoperative planning and intraoperative guidance, ultimately enhancing surgical outcomes.
Additionally, the development of an automatic segmentation tool for the NLC could streamline the diagnostic process, reducing the need for manual segmentation and saving time for radiologists and surgeons. This could lead to increased efficiency in clinical workflows. Our hypothesis is that a CNN algorithm utilizing the nnU-Net v2 architecture will achieve high accuracy and efficiency in automatically detecting the NLC in axial CBCT images.

2. Materials and Methods

2.1. Study Design

In this study, CBCT data of patients who had been referred to the Inonu University Faculty of Dentistry, Oral and Maxillofacial Radiology Clinic for any reason were used. Since the aim was to develop and validate an AI model for automatic NLC detection, the study consisted of three phases: development, training and testing. A fully convolutional neural network algorithm (nnU-Net v2) was applied to the CBCT scans for automated 3D detection of the NLC. The Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and the Standards for Reporting of Diagnostic Accuracy Studies (STARD) were followed in the preparation of this article. The study was designed retrospectively, and all procedures were carried out in accordance with the Declaration of Helsinki and comparable ethical standards. The Inonu University Non-Interventional Clinical Research Ethics Committee approved the study protocol (decision no: 2024/6575). As part of the routine protocol, all patients in the CBCT archive had been informed about, and had provided written consent for, the use of their data for scientific research.

2.2. Data

The sample size was determined using the G*Power software (version 3.1.9.7; Franz Faul, University of Kiel, Kiel, Germany). With an α error probability of 0.05 and a target power of 0.95, the achieved power was calculated to be 95% when at least 100 samples were included [32]. The final sample comprised 100 CBCT images (200 NLCs, right and left) from 50 males and 50 females randomly selected from the archival records. The patients' ages ranged from 18 to 72 years, with a mean age of 52 years. The inclusion criteria were as follows:
  • Individuals over 18 years of age.
  • Individuals without any syndrome or bone disease.
  • Images in which the bony boundaries of the NLC were clearly identifiable.
The exclusion criteria were as follows:
  • Individuals with known pre-existing infections, neoplasms or malformations involving the NLC.
  • Individuals who had undergone surgery or sustained trauma involving the maxillofacial region and the NLC.
  • Images with motion or metal artifacts that prevented the NLC from being displayed and degraded diagnostic quality.

2.3. Obtaining and Evaluating CBCT Images

The scans were performed using a NewTom 5G CBCT unit (Quantitative Radiology, Verona, Italy) at 110 kVp and 1–11 mA, with a 3.6 s exposure time, fields of view (FOV) of 8 × 8 cm, 12 × 8 cm and 15 × 12 cm, and voxel sizes of 0.2–0.3 mm. The CBCT images had been acquired between 2021 and 2023 for various clinical indications.

2.4. Ground Truth

After reconstruction of the raw data, the DICOM images were transferred to the 3D Slicer imaging software (version 4.10.2; MIT, Cambridge, MA, USA), an open-source program, for manual segmentation. The NLC was manually labeled in the coronal, sagittal and axial planes. The labeled DICOM data were converted to NIfTI (Neuroimaging Informatics Technology Initiative) format and exported for processing. The ground truth was defined by the consensus of two maxillofacial radiologists (E.H. and I.G.), both of whom had at least 6 years of experience in maxillofacial radiology. Upon completion of the manual segmentation, the labels were reviewed by two senior maxillofacial radiologists (I.S.B. and F.K.), each with at least 10 years of experience, and full agreement was reached on all labels.
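For illustration, the DICOM-to-NIfTI conversion step can also be reproduced programmatically. The following is a minimal sketch assuming the SimpleITK library, with folder and file names as placeholders (the study itself exported labels via 3D Slicer):

```python
# Minimal sketch of DICOM-to-NIfTI conversion with SimpleITK.
# Paths are hypothetical placeholders; the study exported labels via 3D Slicer.
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir: str, output_path: str) -> None:
    """Read a CBCT DICOM series and write it out as a single NIfTI volume."""
    reader = sitk.ImageSeriesReader()
    series_files = reader.GetGDCMSeriesFileNames(dicom_dir)  # sorted slice files
    reader.SetFileNames(series_files)
    volume = reader.Execute()  # 3D volume; spacing, origin and direction preserved
    sitk.WriteImage(volume, output_path)  # a .nii.gz path writes compressed NIfTI

dicom_series_to_nifti("case_001/dicom", "case_001.nii.gz")
```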

2.5. Testing Data

The 100 CBCT datasets were divided into training, validation and test sets at a ratio of 8:1:1 (Figure 1), and the best-performing model was then selected through comprehensive comparison.
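As a minimal sketch of this partition (the case identifiers and random seed are illustrative assumptions, not taken from the paper), the 8:1:1 split can be expressed as:

```python
# Reproducible 8:1:1 split of 100 cases into training/validation/test sets.
# Case IDs and the seed are illustrative placeholders.
import random

case_ids = [f"case_{i:03d}" for i in range(1, 101)]  # 100 CBCT volumes
random.Random(42).shuffle(case_ids)

train_ids = case_ids[:80]    # 80 cases (80%)
val_ids = case_ids[80:90]    # 10 cases (10%)
test_ids = case_ids[90:]     # 10 cases (10%)
print(len(train_ids), len(val_ids), len(test_ids))  # -> 80 10 10
```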

2.6. Model

The optimal model was selected using 10-fold cross-validation on the training set and then evaluated on the validation and test sets. The training configuration was an nnU-Net v2-based FCNN model trained for 1000 epochs with a learning rate of 0.00001. The algorithm for automatic segmentation of the NLC was developed in a Python environment (v3.6.1; Python Software Foundation, Wilmington, DE, USA) using the PyTorch library. The CranioCatch AI software (version 2.1; CranioCatch, Eskisehir, Turkey) was used in the deep learning model development and training process described by Bayrakdar et al. [32].
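The paper does not reproduce its configuration files. As a hedged sketch of how an nnU-Net v2 raw dataset for a single-label (background/NLC) task is conventionally prepared, with the dataset name, label value and paths as assumptions:

```python
# Hedged sketch: laying out an nnU-Net v2 raw dataset for binary NLC
# segmentation. Dataset ID/name, paths and label value are assumptions.
import json
from pathlib import Path

dataset_dir = Path("nnUNet_raw/Dataset101_NLC")  # hypothetical dataset folder
(dataset_dir / "imagesTr").mkdir(parents=True, exist_ok=True)
(dataset_dir / "labelsTr").mkdir(parents=True, exist_ok=True)

# Convention: imagesTr/case_001_0000.nii.gz holds the CBCT volume (channel
# suffix _0000) and labelsTr/case_001.nii.gz the matching binary NLC mask.
dataset_json = {
    "channel_names": {"0": "CBCT"},
    "labels": {"background": 0, "NLC": 1},
    "numTraining": 80,
    "file_ending": ".nii.gz",
}
(dataset_dir / "dataset.json").write_text(json.dumps(dataset_json, indent=2))

# Preprocessing and fold-wise training would then follow nnU-Net v2's standard
# command-line entry points, e.g.:
#   nnUNetv2_plan_and_preprocess -d 101 --verify_dataset_integrity
#   nnUNetv2_train 101 3d_fullres 0
```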
U-Net was introduced in 2015 by Olaf Ronneberger, Philipp Fischer and Thomas Brox in the article "U-Net: Convolutional Networks for Biomedical Image Segmentation" to improve segmentation, particularly of biomedical images [33]. The fully convolutional neural network (FCNN) model used in this study, nnU-Net ("no new U-Net") v2, is an improved version of the U-Net architecture. Classical convolutional neural network models require large datasets for training: the images in these datasets are labeled and presented to the network, and the network learns to recognize images from this label information. This labeling process is particularly challenging for biomedical images because of their pixel-based nature, requiring significant human and hardware resources. Unlike classical CNN models, U-Net offers an architecture designed for pixel-based image segmentation, addressing these challenges.

2.7. Evaluation

The model’s performance in the automated 3D segmentation of the NLC on CBCT volumes was evaluated using a confusion matrix. The confusion matrix is an essential performance evaluation tool for measuring algorithm accuracy and establishing success parameters. Probabilities of segmentation models were derived from the true positive (TP), true negative (TN), false positive (FP) and false negative (FN) values in the confusion matrix. The test results yielded the confusion matrix for the nnU-Net v2 model.
The recall (sensitivity) and precision of the proposed model were calculated from the confusion matrix using the expressions in Table 1. In the present study, the results are presented using the Dice coefficient (DC), F1-Score and intersection over union (IoU) metrics. Additionally, the area under the curve (AUC) was obtained, and the 95% Hausdorff distance (95% HD) was calculated in mm. These metrics are frequently used in the literature to measure segmentation performance, and they are summarized in this section.
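For concreteness, a minimal sketch of how these voxel-wise metrics can be computed from a binary ground-truth mask and a binary prediction (NumPy assumed; this is not the authors' own evaluation code):

```python
# Voxel-wise overlap metrics from Table 1, computed from two binary masks.
import numpy as np

def overlap_metrics(gt: np.ndarray, pred: np.ndarray) -> dict:
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.logical_and(gt, pred).sum()    # true positives
    fp = np.logical_and(~gt, pred).sum()   # false positives
    fn = np.logical_and(gt, ~pred).sum()   # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                             # sensitivity
    dice = 2 * tp / (2 * tp + fp + fn)                  # Dice coefficient (DC)
    iou = tp / (tp + fp + fn)                           # intersection over union
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "dice": dice, "iou": iou, "f1": f1}
```

Note that for a single pair of binary masks the DC and F1-Score are algebraically identical; the slightly different values reported in Table 1 (0.8465 vs. 0.8480) presumably reflect averaging over the test cases.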

3. Results

In this study, the nnU-Net v2 model, one of the deep learning networks, was used for NLC segmentation. A total of 80% of the dataset was assigned to the training group, 10% to the validation group and 10% to the test group. Training was run for 1000 epochs. Adam was used as the optimization algorithm, and ReLU activation was used in the proposed model. The training parameters of the nnU-Net v2 model are given in Table 2. The predictive analysis of the proposed nnU-Net v2 model is shown in Figure 2.
The precision and recall were calculated as 0.7888 and 0.9168, respectively (Table 1). Following segmentation with the nnU-Net v2 architecture, the DC, IoU, F1-Score and 95% HD values were 0.8465, 0.7341, 0.8480 and 0.9460, respectively (Table 1). The DC and IoU metrics for the test data are shown in Figure 3 and Figure 4.
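As an illustration of the 95% HD, and under the assumption that it is computed over voxel coordinates scaled by the scan spacing (the spacing below is a placeholder within the study's 0.2–0.3 mm range), a sketch using SciPy:

```python
# Hedged sketch: 95th-percentile symmetric Hausdorff distance (mm) between
# two binary masks. The voxel spacing is a placeholder assumption.
import numpy as np
from scipy.spatial import cKDTree

def hd95(gt: np.ndarray, pred: np.ndarray, spacing=(0.3, 0.3, 0.3)) -> float:
    a = np.argwhere(gt) * np.asarray(spacing)    # mm coordinates of GT voxels
    b = np.argwhere(pred) * np.asarray(spacing)  # mm coordinates of predicted voxels
    d_ab, _ = cKDTree(b).query(a)  # each GT voxel's distance to the prediction
    d_ba, _ = cKDTree(a).query(b)  # each predicted voxel's distance to the GT
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```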
From the test results, the receiver-operating characteristic (ROC) curve was drawn, and the area under the curve (AUC) was 0.96 (Figure 5). To give more insight into the training of the model, the Dice score and loss function values were plotted against the number of epochs, starting from the first epoch (Figure 6).
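A curve such as the one in Figure 5 can be obtained by pooling voxel-wise predicted probabilities against the ground-truth labels; a brief sketch (scikit-learn assumed, not the authors' code):

```python
# Sketch: voxel-wise ROC curve and AUC from predicted probabilities.
import numpy as np
from sklearn.metrics import roc_curve, auc

def voxelwise_auc(gt: np.ndarray, prob: np.ndarray) -> float:
    fpr, tpr, _ = roc_curve(gt.ravel().astype(int), prob.ravel())
    return auc(fpr, tpr)
```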

4. Discussion

Paranasal sinuses and nasal anatomical structures are closely related to the NLC. Due to this anatomical proximity, paranasal sinus diseases and pathologies involving the nasal region often affect the NLC [34]. In cases where the anterior wall and floor of the maxillary sinus are involved, surgical intervention can become challenging. In such situations, the anterior wall of the maxillary sinus can be accessed using the prelacrimal approach [35]. This technique may lead to the removal of the NLC’s bony component and displacement of the mucosa. To determine whether NLC dislocation or resection is required, the distance between the anterior wall of the maxillary sinus and the NLC should be considered [36]. Automatic segmentation of the NLC can help clinicians practically and easily assess the distance between the NLC and the anterior wall of the maxillary sinus, aiding in determining the appropriate surgical technique for planned interventions in the maxillary sinus and nasal region. Moreover, accurate segmentation of the NLC provides crucial guidance for maxillofacial surgical procedures and pathology evaluations, serving as an essential tool for preoperative and intraoperative planning and ultimately enhancing surgical outcomes. This study introduces a new automatic segmentation tool that uses the nnU-Net v2-based AI model to achieve high accuracy and efficiency in NLC segmentation. It is innovative in that it is one of the first studies in the literature to use AI for NLC segmentation.
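As a purely illustrative sketch of this use case (not part of the study): once both the NLC and the maxillary sinus are segmented, the minimum canal-to-sinus distance can be read from a Euclidean distance transform of one mask sampled at the other, for example:

```python
# Illustrative sketch: minimum distance (mm) between a segmented NLC and the
# maxillary sinus via a distance transform. Spacing is a placeholder.
import numpy as np
from scipy.ndimage import distance_transform_edt

def min_distance_mm(nlc_mask: np.ndarray, sinus_mask: np.ndarray,
                    spacing=(0.3, 0.3, 0.3)) -> float:
    # Distance from every voxel to the nearest sinus voxel (sinus voxels = 0).
    dist_to_sinus = distance_transform_edt(~sinus_mask.astype(bool),
                                           sampling=spacing)
    return float(dist_to_sinus[nlc_mask.astype(bool)].min())
```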
To the best of our knowledge, there are no studies on NLC segmentation using AI in the literature yet. This study aimed to evaluate the performance of the nnU-Net v2 model in segmenting the NLC from CBCT scans. The DC was 0.8465, indicating successful NLC segmentation. An IoU metric of 0.7341 further confirmed accurate segmentation, as an IoU above 0.5 is typically deemed successful [37]. The F1-Score of 0.8480 demonstrated the model’s high sensitivity and precision. The ROC curve indicated an AUC value of 0.96, highlighting the model’s strong ability to accurately distinguish the NLC from CBCT images. These findings confirmed our hypothesis (H1). This suggests the model’s potential for use in diagnosing NLC, planning treatment and evaluating NLC-related diseases before maxillofacial surgery.
Manual labeling of CBCT scans is labor-intensive, time-consuming and costly. Even with semi-automatic labeling, operator-dependent errors are unavoidable. Precise segmentation can aid forensic science by providing detailed anatomical information that is critical for identifying individuals and understanding trauma mechanisms. In addition, precise automatic segmentation can process large datasets in a fraction of the time, significantly reducing the workload for researchers and clinicians. Moreover, automatic segmentation provides consistent and reproducible results. This consistency is crucial for comparative studies and longitudinal research where precise measurements are essential. Thus, there is a growing need for fully automated labeling of CBCT scans [38,39]. Preda et al. [40] investigated automatic segmentation of the maxillofacial complex using CBCT data and reported that automated segmentation takes less than a minute and is 204 times faster than manual segmentation. They reported mean DC, 95% HD and IoU values of 0.926, 0.621 and 0.862, respectively. Their findings showed high similarity between automatically and manually segmented CBCT data. These results highlight that automated segmentation increases efficiency in the workflow and saves time [41].
AI and deep learning (DL) algorithms are used for various purposes in medical imaging, including image segmentation and preprocessing, disease detection and diagnosis, personalized treatment planning, predictive analytics, quality control, and monitoring and follow-up [42]. In particular, DL models can be used to support the radiotherapy process by automating the segmentation of organs and tumors. Moreover, DL algorithms enable adaptive radiotherapy, in which treatment plans are continuously optimized over time according to changes in a patient's anatomy. This adaptive approach ensures that radiation is delivered with the highest possible accuracy, improving treatment outcomes while minimizing side effects [43]. Among the different deep learning models, nnU-Net and its variants are widely used in the literature for similar studies (Table 3). This model has been employed for segmenting various anatomical structures and detecting pathologies, such as mandibular canal abnormalities [34], maxillary sinus pathologies [41] and the classification of jaw lesions [44].
Zhu et al. [49] utilized the nnU-Net model for diagnosing impacted teeth, dental caries, crowns, missing teeth and residual roots. However, there are no studies yet on the automatic segmentation of NLC. Three-dimensional imaging of the NLC not only enhances anatomical understanding but also facilitates the creation of virtual models to guide surgical planning in the maxillofacial region. In this context, the precise and comprehensive segmentation of the NLC is a crucial preliminary step.
Shi et al. [50] employed the nnU-Net model to segment CBCT data from 48 patients with class II malocclusion who underwent orthognathic surgery. They evaluated the success of automatic segmentation in identifying condylar changes. The DC values for the automatic segmentation of the maxilla, mandible and condylar changes were 0.9263, 0.9387 and 0.971, respectively. This study concluded that artificial intelligence exhibits high sensitivity in detecting condylar changes [50]. Oztürk et al. [37] performed maxillary sinus segmentation on CBCT images with the U-Net network. They used F1-Score and IoU metrics to evaluate the model’s performance. The segmentation results demonstrated an F1-Score of 0.9784 and an IoU value of 0.9275. These metric values confirmed the model’s success in maxillary sinus segmentation [37]. As highlighted in the aforementioned studies, artificial intelligence-based segmentation serves diverse purposes, and its performance can be evaluated using various metrics.
Morita et al. [17] assessed the segmentation of eight anatomical structures in the facial region using a 2D U-Net model on CT data. The mean DC values for segmentation of the maxilla, mandible, right zygoma and left zygoma were 0.909 ± 0.036, 0.984 ± 0.017, 0.936 ± 0.029 and 0.926 ± 0.049, respectively, demonstrating high accuracy. However, the mean DC values for the nasal bone (0.838 ± 0.084) and frontal bone (0.858 ± 0.060) showed lower accuracy than the others. The authors attributed this to the nasal bone being thinner and smaller than the other bones, and also reported that sutures on the nasal and frontal bones can affect segmentation success [17]. In the present study, the DC value for NLC segmentation was comparable to the values reported by Morita et al. [17] for the nasal and frontal bones, yet lower than those for the other anatomical structures.
Yağmur et al. [45] utilized the nnU-Net v2 model, similar to the methodology employed in this study, and reported a DC of 0.76 for mandibular canal segmentation. The current study demonstrated that NLC segmentation achieved higher success using the same model. However, the performance was lower than in some other studies [40,41,51,52]. This discrepancy can be attributed to the anatomical connection of the NLC's bony component with adjacent spaces. In particular, its connection to the inferior meatus is wide and angulated, making manual segmentation difficult. Consequently, accurate segmentation proves challenging where distinct bony boundaries are not clearly discernible.
Chang et al. [48] developed an automatic method for staging periodontitis using a deep learning hybrid framework. They proposed a novel approach that combines deep learning architecture for detection with conventional CAD (Computer-Aided Diagnosis) processing for classification. The deep learning model was employed to detect the radiographic bone level as a simple structure for the entire jaw on panoramic radiographs. They found that this hybrid framework, which integrates deep learning and traditional CAD methods, demonstrated high accuracy and excellent reliability in automatically diagnosing periodontal bone loss and staging periodontitis.
As evident from previous studies, the U-Net network architecture has been employed for various purposes in medicine and dentistry [17,37,45,46]. Its high performance when trained correctly on appropriate data is one reason this model is preferred. The U-Net architecture achieves success even with very limited data and offers superior speed compared to other models. Owing to these qualities, it is frequently preferred in the biomedical field, especially for segmentation tasks [46,52].
This study had several limitations. Firstly, while the developed model allowed for the bone canal border of the NLC to be clearly determined, it did not achieve the same success rate in distinguishing the membranous canal. The use of multimodal segmentation models (CT-MR, CBCT-MR) to improve soft tissue contrast may be considered in the future, which can enhance the model’s robustness and performance. Furthermore, issues related to low resolution could be addressed. Secondly, CBCT data were obtained from a single institution. Data from different centers would increase heterogeneity and allow for the results to be generalized. Finally, there is the difficulty of separating the NLC from the spaces it connects. Therefore, using more advanced multi-stream (multi-angle, multi-scale and multi-modality) models would be beneficial for improving the accuracy of NLC segmentation in 3D images.

5. Conclusions

This study evaluated the nnU-Net v2 model for segmenting the NLC in CBCT images using a dataset of 100 patients. The DC value of 0.8465 and the AUC value of 0.96 demonstrated high accuracy. The complex anatomical structure of the maxillofacial region makes the manual labeling of CBCT images laborious and time-consuming in practice. The NLC is an important anatomical structure that must be considered before surgical procedures in this region, particularly due to its proximity to the anterior wall of the maxillary sinus. The proposed nnU-Net v2 model demonstrated promising performance in segmenting the NLC on CBCT images. Our results suggest that the automated segmentation of the nasolacrimal canal can facilitate clinical decision-making, improve work efficiency and save time during diagnosis and treatment.
Future efforts will focus on addressing the current limitations by expanding the dataset with additional CBCT images and labels. Furthermore, specific adjustments to the nnU-Net v2 model, such as hyperparameter tuning, enhanced data augmentation techniques and architectural modifications (e.g., attention mechanisms), are planned to further improve the DC and other performance metrics. These refinements aim to develop a more robust and accurate model for clinical applications.

Author Contributions

Conceptualization, I.G., E.H. and M.S.D.; Methodology, I.G., S.B.D. and M.S.D.; Software, I.G., E.H., O.C. and M.C.E.; Validation, I.S.B., F.K. and S.B.D.; Formal Analysis, I.G. and O.C.; Investigation, I.G., S.B.D. and M.S.D.; Data Curation, M.C.E. and E.H.; Writing—Original Draft Preparation, I.G., E.H. and O.C.; Writing—Review and Editing, F.K., M.S.D. and I.S.B.; Visualization, E.H. and M.C.E.; Supervision, I.G. and E.H.; Project Administration, I.G. All authors have read and agreed to the published version of the manuscript.

Funding

This study was not funded by any organization.

Institutional Review Board Statement

Ethical approval was obtained from the Health Sciences Ethics Committee of Inonu University, Malatya, Türkiye (reference date/no: 9 October 2024; 2024/6575). All procedures performed in studies involving human participants were in accordance with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.

Informed Consent Statement

For this type of study, formal consent is not required.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Lee, S.; Lee, U.Y.; Yang, S.W.; Lee, W.J.; Kim, D.H.; Youn, K.H.; Kim, Y.S. 3D morphological classification of the nasolacrimal duct: Anatomical study for planning treatment of tear drainage obstruction. Clin. Anat. 2021, 34, 624–633. [Google Scholar] [CrossRef]
  2. Jañez-Garcia, L.; Saenz-Frances, F.; Ramirez-Sebastian, J.M.; Toledano-Fernandez, N.; Urbasos-Pascual, M.; Jañez-Escalada, L. Three-dimensional reconstruction of the bony nasolacrimal canal by automated segmentation of computed tomography images. PLoS ONE 2016, 11, e0155436. [Google Scholar] [CrossRef] [PubMed]
  3. Maliborski, A.; Różycki, R. Diagnostic imaging of the nasolacrimal drainage system. Part I. Radiological anatomy of lacrimal pathways. Physiology of tear secretion and tear outflow. Med. Sci. Monit. 2014, 20, 628. [Google Scholar] [PubMed]
  4. Czyz, C.N.; Bacon, T.S.; Stacey, A.W.; Cahill, E.N.; Costin, B.R.; Karanfilov, B.I.; Cahill, K.V. Nasolacrimal system aeration on computed tomographic imaging: Sex and age variation. Ophthalmic Plast. Reconstr. Surg. 2016, 32, 11–16. [Google Scholar] [CrossRef] [PubMed]
  5. Atum, M.; Alagöz, G. Blood cell ratios in patients with primary acquired nasolacrimal duct obstruction. Ophthalmol. J. 2020, 5, 76–80. [Google Scholar] [CrossRef]
  6. Kousoubris, P.D.; Rosman, D.A. Radiologic evaluation of lacrimal and orbital disease. Otolaryngol. Clin. N. Am. 2006, 39, 865–893. [Google Scholar] [CrossRef] [PubMed]
  7. Ansari, S.A.; Pak, J.; Shields, M. Pathology and imaging of the lacrimal drainage system. Neuroimaging Clin. N. Am. 2005, 15, 221–237. [Google Scholar] [CrossRef]
  8. Francisco, F.; Carvalho, A.; Francisco, V.; Francisco, M.; Neto, G. Evaluation of 1000 lacrimal ducts by dacryocystography. Br. J. Ophthalmol. 2007, 91, 43–46. [Google Scholar] [CrossRef] [PubMed]
  9. Saleh, G.; Gauba, V.; Tsangaris, P.; Tharmaseelan, K. Digital subtraction dacryocystography and syringing in the management of epiphora. Orbit 2007, 26, 249–253. [Google Scholar] [CrossRef]
  10. Ali, M.J.; Singh, S.; Naik, M.N.; Kaliki, S.; Dave, T.V. Interactive navigation-guided ophthalmic plastic surgery: The utility of 3D CT-DCG-guided dacryolocalization in secondary acquired lacrimal duct obstructions. Clin. Ophthalmol. 2016, 11, 127–133. [Google Scholar] [CrossRef]
  11. Coskun, B.; Ilgit, E.; Onal, B.; Konuk, O.; Erbas, G. MR dacryocystography in the evaluation of patients with obstructive epiphora treated by means of interventional radiologic procedures. AJNR Am. J. Neuroradiol. 2012, 33, 141–147. [Google Scholar] [CrossRef]
  12. Nakamura, J.; Kamao, T.; Mitani, A.; Mizuki, N.; Shiraishi, A. Analysis of lacrimal duct morphology from cone-beam computed tomography dacryocystography in a Japanese population. Clin. Ophthalmol. 2022, 16, 2057–2067. [Google Scholar] [CrossRef]
  13. Janssen, A.G.; Mansour, K.; Bos, J.J.; Castelijns, J.A. Diameter of the bony lacrimal canal: Normal values and values related to nasolacrimal duct obstruction: Assessment with CT. AJNR Am. J. Neuroradiol. 2001, 22, 845–850. [Google Scholar]
  14. Moran, I.; Virdee, S.; Sharp, I.; Sulh, J. Postoperative complications following LeFort 1 maxillary advancement surgery in cleft palate patients: A 5-year retrospective study. Cleft Palate Craniofac. J. 2018, 55, 231–237. [Google Scholar] [CrossRef] [PubMed]
  15. Gumussoy, I.; Duman, S.B. Morphometric analysis of occipital condyles using alternative imaging technique. Surg. Radiol. Anat. 2020, 42, 161–169. [Google Scholar] [CrossRef] [PubMed]
  16. Abdolali, F.; Zoroofi, R.A.; Otake, Y.; Sato, Y. Automatic segmentation of maxillofacial cysts in cone beam CT images. Comput. Biol. Med. 2016, 72, 108–119. [Google Scholar] [CrossRef] [PubMed]
  17. Morita, D.; Mazen, S.; Tsujiko, S.; Otake, Y.; Sato, Y.; Numajiri, T. Deep-learning-based automatic facial bone segmentation using a two-dimensional U-Net. Int. J. Oral Maxillofac. Surg. 2023, 52, 787–792. [Google Scholar] [CrossRef]
  18. Lee, J.-H.; Kim, D.-h.; Jeong, S.-N.; Choi, S.-H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J. Periodontal Implant. Sci. 2018, 48, 114–123. [Google Scholar] [CrossRef]
  19. Lee, J.-H.; Kim, D.-H.; Jeong, S.-N.; Choi, S.-H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef]
  20. Setzer, F.C.; Shi, K.J.; Zhang, Z.; Yan, H.; Yoon, H.; Mupparapu, M.; Li, J. Artificial intelligence for the computer-aided detection of periapical lesions in cone-beam computed tomographic images. J. Endod. 2020, 46, 987–993. [Google Scholar] [CrossRef]
  21. Widiasri, M.; Arifin, A.Z.; Suciati, N.; Fatichah, C.; Astuti, E.R.; Indraswari, R.; Putra, R.H.; Za’in, C. Dental-yolo: Alveolar bone and mandibular canal detection on cone beam computed tomography images for dental implant planning. IEEE Access 2022, 10, 101483–101494. [Google Scholar] [CrossRef]
  22. Kök, H.; Acilar, A.M.; İzgi, M.S. Usage and comparison of artificial intelligence algorithms for determination of growth and development by cervical vertebrae stages in orthodontics. Prog. Orthod. 2019, 20, 41. [Google Scholar] [CrossRef] [PubMed]
  23. Zhang, W.; Li, J.; Li, Z.-B.; Li, Z. Predicting postoperative facial swelling following impacted mandibular third molars extraction by using artificial neural networks evaluation. Sci. Rep. 2018, 8, 12281. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, L.; Li, W.; Lv, J.; Xu, J.; Zhou, H.; Li, G.; Ai, K. Advancements in oral and maxillofacial surgery medical images segmentation techniques: An overview. J. Dent. 2023, 138, 104727. [Google Scholar] [CrossRef] [PubMed]
  25. Koç, U.; Sezer, E.A.; Özkaya, Y.A.; Yarbay, Y.; Beşler, M.S.; Taydaş, O.; Yalçın, A.; Evrimler, Ş.; Kızıloğlu, H.A.; Kesimal, U.; et al. Elevating healthcare through artificial intelligence: Analyzing the abdominal emergencies data set (Tr_Abdomen_Rad_Emergency) At Teknofest-2022. Eur. Radiol. 2024, 34, 3588–3597. [Google Scholar] [CrossRef]
  26. Mutlu, F.; Çetinel, G.; Gül, S. A fully-automated computer-aided breast lesion detection and classification system. Biomed. Signal Process. Control 2020, 62, 102157. [Google Scholar] [CrossRef]
  27. Yilmaz, E.; Kayikcioglu, T.; Kayipmaz, S. Computer-aided diagnosis of periapical cyst and keratocystic odontogenic tumor on cone beam computed tomography. Comput. Methods Programs Biomed. 2017, 146, 91–100. [Google Scholar] [CrossRef] [PubMed]
  28. Nicolielo, L.F.P.; Van Dessel, J.; Van Lenthe, G.H.; Lambrichts, I.; Jacobs, R. Computer-based automatic classification of trabecular bone pattern can assist radiographic bone quality assessment at dental implant site. Br. J. Radiol. 2018, 91, 20180437. [Google Scholar] [CrossRef] [PubMed]
  29. Abdolali, F.; Zoroofi, R.A.; Otake, Y.; Sato, Y. Automated classification of maxillofacial cysts in cone beam CT images using contourlet transformation and Spherical Harmonics. Comput. Methods Programs Biomed. 2017, 139, 197–207. [Google Scholar] [CrossRef] [PubMed]
  30. Lee, J.H.; Kim, D.H.; Jeong, S.N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020, 26, 152–158. [Google Scholar] [CrossRef]
  31. Isensee, F.; Jäger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. Automated design of deep learning methods for biomedical image segmentation. arXiv 2019, arXiv:1904.08128. [Google Scholar]
  32. Bayrakdar, I.S.; Elfayome, N.S.; Hussien, R.A.; Gulsen, I.T.; Kuran, A.; Gunes, I.; Al-Badr, A.; Celik, O.; Orhan, K. Artificial intelligence system for automatic maxillary sinus segmentation on cone beam computed tomography images. DentoMaxilloFacial Radiol. 2024, 53, 256–266. [Google Scholar] [CrossRef] [PubMed]
  33. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, part III 18. Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  34. Razavi, M.; Shams, N.; Afshari, F.; Nanai, S. Investigating the Morphology of the Nasolacrimal Canal in Cone Beam Computed Tomography Images and Its Relationship with Age and Gender. Maedica 2024, 19, 303. [Google Scholar] [CrossRef] [PubMed]
  35. Duman, S.B.; Gumussoy, İ. Assesment of Prelacrimal Recess in Patients With Maxillary Sinus Hypoplasia Using Cone Beam Computed Tomography. Am. J. Rhinol. Allergy 2021, 35, 361–367. [Google Scholar] [CrossRef] [PubMed]
  36. Simmen, D.; Veerasigamani, N.; Briner, H.R.; Jones, N.; Schuknecht, B. Anterior maxillary wall and lacrimal duct relationship-CT analysis for prelacrimal access to the maxillary sinus. Rhinology 2017, 55, 170–174. [Google Scholar] [CrossRef]
  37. Ozturk, B.; Taspinar, Y.S.; Koklu, M.; Tassoker, M. Automatic segmentation of the maxillary sinus on cone beam computed tomographic images with U-Net deep learning model. Eur. Arch. Oto-Rhino-Laryngol. 2024, 281, 6111–6121. [Google Scholar] [CrossRef] [PubMed]
  38. Jaskari, J.; Sahlsten, J.; Järnstedt, J.; Mehtonen, H.; Karhu, K.; Sundqvist, O.; Hietanen, A.; Varjonen, V.; Mattila, V.; Kaski, K. Deep learning method for mandibular canal segmentation in dental cone beam computed tomography volumes. Sci. Rep. 2020, 10, 5842. [Google Scholar] [CrossRef]
  39. Morgan, N.; Van Gerven, A.; Smolders, A.; de Faria Vasconcelos, K.; Willems, H.; Jacobs, R. Convolutional neural network for automatic maxillary sinus segmentation on cone-beam computed tomographic images. Sci. Rep. 2022, 12, 7523. [Google Scholar] [CrossRef] [PubMed]
  40. Preda, F.; Morgan, N.; Van Gerven, A.; Nogueira-Reis, F.; Smolders, A.; Wang, X.; Nomidis, S.; Shaheen, E.; Willems, H.; Jacobs, R. Deep convolutional neural network-based automated segmentation of the maxillofacial complex from cone-beam computed tomography: A validation study. J. Dent. 2022, 124, 104238. [Google Scholar] [CrossRef]
  41. Jung, S.-K.; Lim, H.-K.; Lee, S.; Cho, Y.; Song, I.-S. Deep active learning for automatic segmentation of maxillary sinus lesions using a convolutional neural network. Diagnostics 2021, 11, 688. [Google Scholar] [CrossRef] [PubMed]
  42. Obuchowicz, R.; Strzelecki, M.; Piórkowski, A. Clinical Applications of Artificial Intelligence in Medical Imaging and Image Processing—A Review. Cancers 2024, 16, 1870. [Google Scholar] [CrossRef] [PubMed]
  43. Damilakis, J.; Stratakis, J. Descriptive overview of AI applications in x-ray imaging and radiotherapy. J. Radiol. Prot. 2024, 44, 041001. [Google Scholar] [CrossRef]
  44. Liu, W.; Li, X.; Liu, C.; Gao, G.; Xiong, Y.; Zhu, T.; Zeng, W.; Guo, J.; Tang, W. Automatic classification and segmentation of multiclass jaw lesions in cone-beam CT using deep learning. DentoMaxilloFacial Radiol. 2024, 53, 439–446. [Google Scholar] [CrossRef] [PubMed]
  45. Yağmur, Ü.S.; Namdar, P.F. Evaluation of the mandibular canal by CBCT with a deep learning approach. Balk. J. Dent. Med. 2024, 28, 122–128. [Google Scholar] [CrossRef]
  46. Asci, E.; Kilic, M.; Celik, O.; Cantekin, K.; Bircan, H.B.; Bayrakdar, İ.S.; Orhan, K. A Deep Learning Approach to Automatic Tooth Caries Segmentation in Panoramic Radiographs of Children in Primary Dentition, Mixed Dentition, and Permanent Dentition. Children 2024, 11, 690. [Google Scholar] [CrossRef] [PubMed]
  47. Icoz, D.; Terzioglu, H.; Ozel, M.A.; Karakurt, R. Evaluation of an artificial intelligence system for the diagnosis of apical periodontitis on digital panoramic images. Niger. J. Clin. Pract. 2023, 8, 1085–1090. [Google Scholar] [CrossRef]
  48. Chang, H.J.; Lee, S.J.; Yong, T.H.; Shin, N.Y.; Jang, B.G.; Kim, J.E.; Huh, K.H.; Lee, S.-S.; Heo, M.-S.; Choi, S.-C.; et al. Deep Learning Hybrid Method to Automatically Diagnose Periodontal Bone Loss and Stage Periodontitis. Sci. Rep. 2020, 10, 7531. [Google Scholar] [CrossRef]
  49. Zhu, J.; Chen, Z.; Zhao, J.; Yu, Y.; Li, X.; Shi, K.; Zhang, F.; Yu, F.; Shi, K.; Sun, Z.; et al. Artificial intelligence in the diagnosis of dental diseases on panoramic radiographs: A preliminary study. BMC Oral Health 2023, 23, 358. [Google Scholar] [CrossRef] [PubMed]
  50. Shi, J.; Lin, G.; Bao, R.; Zhang, Z.; Tang, J.; Chen, W.; Chen, H.; Zuo, X.; Feng, Q.; Liu, S. An automated method for assessing condyle head changes in patients with skeletal class II malocclusion based on Cone-beam CT images. DentoMaxilloFacial Radiol. 2024, 53, 325–335. [Google Scholar] [CrossRef] [PubMed]
  51. Choi, H.; Jeon, K.J.; Kim, Y.H.; Ha, E.-G.; Lee, C.; Han, S.-S. Deep learning-based fully automatic segmentation of the maxillary sinus on cone-beam computed tomographic images. Sci. Rep. 2022, 12, 14009. [Google Scholar] [CrossRef] [PubMed]
  52. Wu, X.; Zhao, L. Study on iris segmentation algorithm based on dense U-Net. IEEE Access 2019, 7, 123959–123968. [Google Scholar] [CrossRef]
Figure 1. Workflow model of automatic segmentation of the nasolacrimal canal.
Figure 2. Automatic segmentation of the nasolacrimal canal using the artificial intelligence model in axial CBCT slices.
Figure 3. Dice coefficient (DC) scores of the test data.
Figure 4. IoU metrics of the test data.
Figure 5. Receiver-operating characteristic (ROC) curve and AUC value.
Figure 6. Dice score and loss function values at each epoch of the model.
Table 1. Metrics used to evaluate the performance of the nnU-Net v2 model, and the results obtained.

Metric | Formula | Value
True Positive (TP) | — | 16,297.7
False Positive (FP) | — | 4214.2
False Negative (FN) | — | 1624.5
Precision | TP/(TP + FP) | 0.7888
Recall (Sensitivity) | TP/(TP + FN) | 0.9168
Dice Coefficient (DC) | 2TP/(2TP + FP + FN) | 0.8465
Intersection over Union (IoU) | |A∩B| / |A∪B| | 0.7341
F1-Score | 2 × (Precision × Recall)/(Precision + Recall) | 0.8480
95% Hausdorff Distance (95% HD), mm | dH95(A, B) = max(d95(A, B), d95(B, A)) | 0.9460

Notes: TP: True Positive, FP: False Positive, FN: False Negative.
Table 2. Training parameters and values of the proposed nnU-Net v2 model.

Parameter | Value
Model | nnU-Net v2
Epochs | 1000
Batch Size | 2
Learning Rate | 0.00001
Optimizer | Adam
Activation | ReLU
Table 3. A summary of several studies in the literature related to automatic segmentation of dental structures.

Authors | Aim | Sample | Segmentation Model | Imaging Method | Evaluation Metrics
Ozturk et al. [37] | To develop a deep learning-based method for maxillary sinus segmentation using CBCT images. | 100 scans | U-Net | CBCT | F1-Score: 0.9784; IoU: 0.9275
Preda et al. [40] | To investigate the accuracy, consistency and time-efficiency of a novel deep convolutional neural network (CNN)-based model for automated maxillofacial bone segmentation from CBCT images. | 144 patients | U-Net | CBCT | DC: 0.926; 95% HD: 0.621; IoU: 0.862
Shi et al. [50] | To propose an automated method to measure condylar changes in patients with skeletal class II malocclusion following surgical orthodontic treatment. | 48 patients | nnU-Net | CBCT | Maxilla DC: 0.9263; Mandible DC: 0.9387; Condyle DC: 0.971
Yağmur et al. [45] | To evaluate the mandibular canal on CBCT using a deep learning approach. | 300 patients | nnU-Net v2 | CBCT | DC: 0.76
Ascı et al. [46] | To evaluate the effectiveness of dental caries segmentation on panoramic radiographs of children in primary, mixed and permanent dentition with AI models developed using deep learning. | 6075 patients | U-Net | Panoramic radiographs | Sensitivity: 0.8269; Precision: 0.9123; F1-Score: 0.8675
İçöz et al. [47] | To evaluate the effectiveness of an AI system in detecting roots with apical periodontitis on digital panoramic radiographs. | 306 scans | YOLOv3 | Panoramic radiographs | Sensitivity: 98%; Specificity: 56%; F1-Score: 71%
Chang et al. [48] | To develop an automated method for diagnosing periodontal bone loss and staging periodontitis on dental panoramic radiographs using a deep learning hybrid method. | 340 scans | Mask R-CNN | Panoramic radiographs | Periodontal bone level: IoU 0.88, Accuracy 0.92, DC 0.93; Cementoenamel junction level: IoU 0.84, Accuracy 0.87, DC 0.91; Teeth and implants: IoU 0.83, Accuracy 0.87, DC 0.91
