Article

Novel Procedure for Automatic Registration between Cone-Beam Computed Tomography and Intraoral Scan Data Supported with 3D Segmentation

Yoon-Ji Kim, Jang-Hoon Ahn, Hyun-Kyo Lim, Thong Phi Nguyen, Nayansi Jha, Ami Kim and Jonghun Yoon
1 Department of Orthodontics, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea
2 Department of Orthodontics, Chungang University Gwangmyeong Hospital, Gwangmyeong 14353, Republic of Korea
3 Department of Mechanical Design Engineering, Hanyang University, Seoul 04763, Republic of Korea
4 BK21 FOUR ERICA-ACE Center, Hanyang University, Ansan 15588, Republic of Korea
5 Seoul Ami Orthodontic Private Practice, Incheon 22011, Republic of Korea
6 Department of Mechanical Engineering, Hanyang University, Ansan 15588, Republic of Korea
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Bioengineering 2023, 10(11), 1326; https://doi.org/10.3390/bioengineering10111326
Submission received: 16 October 2023 / Revised: 10 November 2023 / Accepted: 15 November 2023 / Published: 17 November 2023
(This article belongs to the Section Biomedical Engineering and Biomaterials)

Abstract:
In contemporary practice, intraoral scans and cone-beam computed tomography (CBCT) are widely adopted techniques for tooth localization and the acquisition of comprehensive three-dimensional models. Despite their utility, each dataset presents inherent merits and limitations, prompting the pursuit of an amalgamated solution for optimization. Thus, this research introduces a novel 3D registration approach aimed at harmonizing these distinct datasets to offer a holistic perspective. In the pre-processing phase, a retrained Mask-RCNN is deployed on both sagittal and panoramic projections to partition upper and lower teeth from the encompassing CBCT raw data. Simultaneously, a chromatic classification model is proposed for segregating gingival tissue from tooth structures in intraoral scan data. Subsequently, the segregated datasets are aligned based on dental crowns, employing the robust RANSAC and ICP algorithms. To assess the proposed methodology’s efficacy, the Euclidean distance between corresponding points is statistically evaluated. Additionally, dental experts, including two orthodontists and an experienced general dentist, evaluate the clinical potential by measuring distances between landmarks on tooth surfaces. The computed error in corresponding point distances between intraoral scan data and CBCT data in the automatically registered datasets utilizing the proposed technique is quantified at 0.234 ± 0.019 mm, which is significantly below the 0.3 mm CBCT voxel size. Moreover, the average measurement discrepancy among expert-identified landmarks ranges from 0.368 to 1.079 mm, underscoring the promise of the proposed method.

1. Introduction

With the advancement of new Computer-Aided Design and Computer-Aided Manufacturing (CAD/CAM) technologies in dentistry, clinicians are increasingly employing virtual simulations for various dental treatments. Dental cone-beam computed tomography (CBCT) is a widely utilized imaging modality known for its low-dose radiation capabilities, enabling the visualization of teeth and craniofacial structures. It provides comprehensive 3D diagnostic capabilities and finds extensive application in various dental treatments, including orthodontics, dental implants, and orthognathic surgery.
For instance, the visualization of impacted teeth supports the precise formulation of treatment plans to align teeth and establish their final positions according to orthodontic treatment goals. In the case of dental implants, the fixture diameter and positioning can be determined based on a patient’s alveolar bone morphology. In orthognathic surgery, a 3D virtual model of the patient’s jaws can be generated, enabling computer-simulated surgery to accurately correct jaw deformities. Subsequently, leveraging the simulation data, various devices, such as clear aligners for orthodontic treatment and 3D-printed surgical guides and splints for dental implants and orthognathic surgery, can be manufactured.
However, it is essential to acknowledge that the spatial resolution of dental CBCT may be insufficient for precisely defining tooth crowns and occlusion. Therefore, high-resolution intraoral scan (IOS) data must be acquired and integrated with the CBCT data to overcome this limitation [1,2].
The integration between the IOS and CBCT data, referred to as registration, aligns these two data sets by utilizing common reference points captured separately via the IOS and CBCT machines. The accuracy of this registration process is of utmost importance, especially in the context of appliance manufacturing, as it directly influences the success of treatments following the translation of virtual simulations based on the registration results.
For example, even a small amount of variance during the registration process between CBCT and IOS data for the creation of surgical splints in orthognathic surgery can lead to inaccuracies in jaw positions during surgery, which can have permanent consequences [3]. While manual registration, supported by various commercial software [4,5,6,7], is a feasible option, it demands a significant investment of time and effort and is susceptible to the operator’s level of expertise [8,9].
The automated registration of heterogeneous CBCT and IOS data presents several challenges, despite their shared patient origin. CBCT comprises a range of cross-sectional images, including the skin, craniofacial skeleton, tooth crown, root, and soft tissues. The low-dose imaging modality often results in limited contrast between different anatomical structures, making object recognition difficult. In contrast, IOS provides high-resolution surface data of the tooth crown and adjacent soft tissues. To achieve accurate registration between CBCT and IOS data, an initial segmentation of the tooth crown from the CBCT data is essential.
Jang et al. [10] developed a method to generate metal-artifact-free panoramic images from CT scans, combined with a tooth detection approach that classified teeth into four types based on their morphology. However, identifying neighboring teeth in panoramic images, especially in cases involving missing teeth of similar types, remains a challenging task. Hyun et al. [11] introduced a deep-learning-based metal artifact reduction method that utilized intra-oral scan data as supplementary information to aid tooth segmentation. Nonetheless, the training data were limited to axial views, which had limitations in achieving a clear separation between the upper and lower teeth in the 3D CBCT model.
To improve the alignment between CBCT and IOS data, particularly in the crown region, an initial alignment was attempted using principal component analysis (PCA) [12], a commonly used statistical method for dimensionality reduction [13]. PCA transforms data into a new coordinate system, effectively capturing data variation in fewer dimensions. However, conventional PCA can be sensitive to noise [14,15], making it challenging to differentiate noise from signal variance and handle missing observations.
In contrast, the random sample consensus (RANSAC) algorithm [16], a machine learning technique, estimates model parameters via iterative sampling, enabling the achievement of optimal fitting results in datasets containing both inliers and outliers [17].
In this paper, we not only implemented a Mask-RCNN [18], retrained on CBCT slices in the sagittal direction in the natural head position (NHP), to facilitate the clear separation of the upper and lower teeth from the entire CBCT raw data as the appropriate region of interest (ROI), but we also measured the position of the tooth arch in the sagittal view, supported by a tooth arch extraction scheme along the axial view [19]. This approach allowed us to separate and categorize the teeth into upper and lower teeth based on the panoramic view obtained using the Mask-RCNN. Once the CBCT and IOS data were properly extracted, focusing on the crown part, we applied the RANSAC algorithm for the initial alignment; in the case of CBCT, the data included both the crown and root, while the IOS data encompassed only the crown. After achieving the initial alignment, matching the datasets based on the center position of the crown, a fine alignment was performed using the point-to-plane algorithm [20].
To validate the integration of the automatically registered data, we employed the Bland–Altman method [21,22,23], focusing on the cross-sectional tooth area. The quantitative assessment of the integration involved the chamfer and Hausdorff distances [24,25] between the matched CBCT and IOS data. For the clinical assessment, three experienced orthodontists manually measured four key crown feature positions.

2. Data Preparation

2.1. 3D Dental Data

This study was approved by the institutional review board of Hallym University (2022-04-018). We collected CBCT scans and plaster molds from 200 patients who had visited the Department of Orthodontics at Kangnam Sacred Heart Hospital, Hallym University, Seoul, Republic of Korea, for orthodontic diagnosis. IOS 3D data were collected from the patients’ plaster molds. Patient consent was waived due to the retrospective nature of this study.
A single CBCT scan with a full field of view measuring 230 mm × 170 mm was acquired using an i-CAT CBCT scanning machine (KaVo Dental GmbH, Biberach, Germany). The operational parameters were set at 120 kV and 37.1 mA, with a voxel size of 0.3 mm and a scan time of 8.9 s. Each scan consists of 576 slices at a resolution of 768 × 768 × 576 voxels with a slice thickness of 0.3 mm. The capture conditions were standardized: each patient’s eyes were instructed to focus on a 400 mm × 500 mm mirror positioned on the wall approximately 1500 mm away from the patient’s head, maintaining the natural head position (NHP) [26]. Prior to operating the CBCT scanning machine, patients were asked to tilt their heads up and down in accordance with Solow and Tallgren’s method [26].
IOS data were acquired using the Trios3 intraoral scanner (3Shape, Copenhagen, Denmark). This scanner falls into the category of structured light scanners and employs confocal microscopy and ultra-optical scanning technologies. It offers a field of view measuring 20.6 mm × 17.28 mm with an accuracy of 6.9 ± 0.9 µm.
Figure 1 illustrates the CBCT and IOS systems, and Table 1 provides the specifications of the measuring machines used in this study.

2.2. ROI Extraction

To enhance both the efficiency and accuracy of the automated registration between CBCT and IOS data, only the pertinent target objects, namely the teeth, must be selectively extracted from the comprehensive dataset. Traditional CBCT datasets encompass not only the geometric attributes of the teeth but also a composite depiction of the craniofacial skeleton, muscles, skin, and additional soft tissues. Similarly, IOS data inherently encompass the geometric profiles of both teeth and gingival tissues.
In the case of CBCT, the region of interest considered for this segmentation ranged from the A-point to the Pog-point along the z-axis and between the two mandibular heads with respect to the y-axis.
In the extracted CBCT image $X$, $X(x, y, z)$ represents the Hounsfield unit at the voxel position $(x, y, z)$. Firstly, along the sagittal direction, the 3D segmentation process is performed by applying the re-trained Mask-RCNN [18] on every slice.
Unlike previous research [10,11], which processes the input CBCT along the z direction, our approach must handle cases in which patients clench their teeth. In this situation, if processing proceeds along the z direction, there are multiple positions with no empty space between the upper and lower teeth, so the boundary cannot be defined. Therefore, in this research, tooth segmentation is performed along the y direction, which allows the cross-sectional shape of a tooth, including the crown and root, to be observed and provides the most convenient circumstances for defining the boundary between the upper and lower teeth.
In every slice $s$, the segmented mask $M_{\mathrm{sagittal}}^{n}(s)$ is represented by the center point $p_{M_{\mathrm{sagittal}}^{n}(s)}$ of the detected bounding box $(x_n^s, z_n^s, h_n^s, w_n^s)$, as shown in Figure 2. The coordinate of the representative point is given in Equation (1):

$$p_{M_{\mathrm{sagittal}}^{n}(s)}(x, y, z) = \left( x_n^s + \frac{w_n^s}{2},\; s,\; z_n^s + \frac{h_n^s}{2} \right) \tag{1}$$
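For concreteness, Equation (1) amounts to taking the center of each detected bounding box and tagging it with the slice index. The following sketch is our own illustrative code, not the authors’ implementation; the function name and array layout are assumptions:

```python
import numpy as np

def representative_points(boxes, s):
    """Center points of Mask-RCNN boxes detected on sagittal slice s (Eq. (1)).

    boxes: iterable of (x, z, h, w) bounding boxes for the detected tooth masks.
    Returns an (N, 3) array of (x, y, z) voxel coordinates, with y fixed to s.
    """
    boxes = np.asarray(boxes, dtype=float)
    x = boxes[:, 0] + boxes[:, 3] / 2.0  # x_n^s + w_n^s / 2
    z = boxes[:, 1] + boxes[:, 2] / 2.0  # z_n^s + h_n^s / 2
    y = np.full(len(boxes), float(s))    # the slice index supplies the y coordinate
    return np.stack([x, y, z], axis=1)

# Example: two boxes detected on slice s = 120.
print(representative_points([(10, 40, 20, 8), (30, 45, 22, 9)], s=120))
```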
Secondly, on the axial view, the dental arch curve $C$ is detected on the mean intensity projection using the technique proposed in the previous study of Ahn et al. [19]. Based on the obtained dental arch curve, the panoramic image is extracted using Equation (2):

$$P_X(c, z) = \int_{-w}^{w} X\big(r(c) + t\,n(c),\; z\big)\, dt \tag{2}$$

where $c$ is the index of the point in the dental arch, $r(c) \in C$, $n(c)$ is the unit normal vector at $r(c)$, and $w$ is the considered range from the dental arch along the normal vector $n(c)$, as shown in Figure 3.
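As a worked illustration of Equation (2), the integral can be approximated by sampling the volume at interpolated positions along each arch normal and averaging. The NumPy/SciPy sketch below assumes a volume indexed as (x, y, z) and arch points and normals given in axial-plane voxel coordinates; these conventions, and the sampling density, are our own assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def panoramic_projection(volume, arch_pts, normals, w, n_samples=32):
    """Approximate the panoramic image P_X(c, z) of Eq. (2). Unoptimized for clarity.

    volume: CBCT array indexed (x, y, z); arch_pts: (C, 2) arch points r(c);
    normals: (C, 2) unit normals n(c); w: half-range along each normal (voxels).
    """
    n_arch, n_z = len(arch_pts), volume.shape[2]
    t = np.linspace(-w, w, n_samples)  # integration variable t in [-w, w]
    pano = np.zeros((n_z, n_arch))
    for c in range(n_arch):
        # Sample positions r(c) + t * n(c) on the axial plane.
        xy = arch_pts[c][None, :] + t[:, None] * normals[c][None, :]
        for z in range(n_z):
            coords = np.stack([xy[:, 0], xy[:, 1], np.full(n_samples, float(z))])
            pano[z, c] = map_coordinates(volume, coords, order=1).mean()
    return pano
```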
Thirdly, based on the obtained panoramic view, tooth segmentation is performed by applying the Mask-RCNN. The obtained masks are split into upper masks $M_{\mathrm{pano}}^{\mathrm{upper}}$ and lower masks $M_{\mathrm{pano}}^{\mathrm{lower}}$, which represent the upper and lower teeth, respectively, by applying linear regression to estimate the boundary between the jaws [19], as shown in Figure 4a.
The representative points $p_{M_{\mathrm{sagittal}}^{n}(s)}$, which represent the segmented masks along the sagittal axis, are projected onto the panoramic image as $p_{\mathrm{pano}}^{\mathrm{sagi}}$, analogously to the panoramic conversion, as expressed in Equation (3):

$$p_{\mathrm{pano}}^{\mathrm{sagi}} = \{\, p_r = (q_r, z_r) : 1 \le r \le N_r \,\} \tag{3}$$

where $N_r$ is the number of projected representative points.
Consequently, as shown in Figure 4b, considering cells $px_{\mathrm{pano}}^{\mathrm{upper}} \in M_{\mathrm{pano}}^{\mathrm{upper}}$, if $p_r \in px_{\mathrm{pano}}^{\mathrm{upper}}$, the segmented mask $M_{\mathrm{sagittal}}^{n}(s)$ represented by $p_r$ is collected into the group of upper-teeth masks $\tilde{X}_{\mathrm{upper}}$. Similarly, if $p_r \in px_{\mathrm{pano}}^{\mathrm{lower}}$, the segmented mask $M_{\mathrm{sagittal}}^{n}(s)$ is assigned to the group of lower teeth $\tilde{X}_{\mathrm{lower}}$, with the obtained results shown in Figure 4c; a sketch of this grouping step follows.
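A minimal sketch of the grouping step, assuming the panoramic Mask-RCNN outputs are available as boolean arrays indexed by (z, q) and that the projected points are integer pixel coordinates (both assumptions of ours):

```python
def split_upper_lower(points_pano, masks_3d, upper_mask, lower_mask):
    """Assign each sagittal tooth mask to the upper or lower jaw group.

    points_pano: (N, 2) projected representative points p_r = (q_r, z_r);
    masks_3d: the N corresponding 3D masks M_sagittal^n(s);
    upper_mask / lower_mask: boolean panoramic masks from Mask-RCNN.
    """
    upper, lower = [], []
    for (q, z), mask in zip(points_pano, masks_3d):
        if upper_mask[z, q]:    # p_r falls inside M_pano^upper -> X~_upper
            upper.append(mask)
        elif lower_mask[z, q]:  # p_r falls inside M_pano^lower -> X~_lower
            lower.append(mask)
    return upper, lower
```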
Concerning the IOS data, to optimize the efficacy of the matching procedure, exclusive focus is placed on the segment corresponding to dental structures; this selected dental portion is then employed for alignment with the CBCT data. Consequently, a methodology is needed to segregate the teeth from the gums in the original IOS dataset. The color attributes inherent in the IOS point cloud are harnessed to categorize each individual point into one of two distinct groups. Within the input IOS point cloud, every point carries RGB color values, denoting the red, green, and blue color channels. Because the Hue channel isolates color information, these values are converted into the Hue–Saturation–Value (HSV) color model. In this proposed framework, the K-Nearest Neighbor (KNN) machine learning model [27] is enlisted to categorize each individual point within the IOS dataset into two primary color groups: one representing teeth and the other representing gums. In effectuating this classification, the KNN model assesses neighboring points that lie within a pre-defined distance based on the H, S, and V values of each given point; the predominant color group among these proximate points is then assigned as the color category of the input point under consideration, as illustrated in Figure 5.
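As an illustration, a minimal version of this color-based classification can be built with scikit-learn’s KNeighborsClassifier. The helper name, the toy training sample, and the choice of k below are hypothetical; the paper does not specify its exact KNN configuration:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.neighbors import KNeighborsClassifier

def classify_ios_points(rgb, train_rgb, train_labels, k=15):
    """Label each IOS point as tooth (1) or gingiva (0) from its color.

    rgb: (N, 3) point colors in [0, 1]. A small manually annotated sample
    (train_rgb / train_labels) trains the classifier. Colors are converted
    to HSV so that hue carries most of the discriminative information.
    """
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(rgb_to_hsv(train_rgb), train_labels)
    return knn.predict(rgb_to_hsv(rgb))

# Toy usage: whitish points (teeth) vs. reddish points (gingiva).
train_rgb = np.array([[0.95, 0.93, 0.88], [0.90, 0.88, 0.80],   # tooth-like
                      [0.80, 0.45, 0.45], [0.75, 0.40, 0.42]])  # gum-like
train_labels = np.array([1, 1, 0, 0])
points_rgb = np.array([[0.92, 0.90, 0.85], [0.78, 0.42, 0.44]])
print(classify_ios_points(points_rgb, train_rgb, train_labels, k=3))  # -> [1 0]
```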
In some cases, CBCT data and IOS data did not align due to patients having orthodontic appliances or completed treatment. For these patients, we utilized their initial plaster models obtained at the outset of treatment, coinciding with CBCT data acquisition, for this study. These plaster models were scanned to obtain IOS models via the process depicted in Figure 6. Since some of these IOS data had a fixed white color, manual separation of teeth and gum was performed within the Trios 3 program as an alternative to the color-based separation method proposed in this study.

3. Registration Algorithm

Regarding the data derived from CBCT and IOS sources, both manifested as point clouds, an initial step involves the application of voxel down-sampling. This reduction in point cloud density serves a dual purpose: diminishing computational workload and refining registration precision. The automated registration procedure then unfolds in two distinct phases: a preliminary coarse alignment and a subsequent refined alignment. This division is grounded in the specific aim of bringing the two datasets into correspondence about a common origin. Notably, the coarse alignment phase is intentionally engineered to avoid extensive iterations, thereby bolstering registration efficiency, while the refined alignment stage consummates the automated registration process, affording heightened precision via a localized matching algorithm.
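The paper does not name a point cloud library; as one plausible realization, the sketches in this section use the open-source Open3D package (v0.12 or later). This first fragment covers loading, voxel down-sampling, normal estimation, and the FPFH features consumed by the feature-based matching below; the file names and parameter values (keyed to the 0.3 mm CBCT voxel size) are illustrative assumptions:

```python
import open3d as o3d

VOXEL = 0.3  # mm, matching the CBCT voxel size

def preprocess(pcd, voxel=VOXEL):
    """Down-sample and compute normals plus FPFH features for matching."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

# Extracted crown surfaces (file names are placeholders for the ROI outputs).
source = o3d.io.read_point_cloud("ios_teeth.ply")    # IOS crowns
target = o3d.io.read_point_cloud("cbct_teeth.ply")   # CBCT teeth
source_down, source_fpfh = preprocess(source)
target_down, target_fpfh = preprocess(target)
```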
Initially, the RANSAC algorithm is employed to achieve a rough alignment of the two models’ positions during the initial matching process. The RANSAC algorithm consists of two main stages: the hypothesis process and the verification process. In the hypothesis process, a sample model is created by randomly selecting a subset of source data. The verification process then compares this model with the target data, recording the count of matches. This iterative procedure selects matched data points with match rates surpassing a predefined threshold as the output, as shown in Figure 7.
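Continuing the sketch above, Open3D exposes a feature-based RANSAC routine that mirrors this hypothesis/verification loop; the thresholds and iteration counts here are illustrative, not the study’s settings:

```python
result_ransac = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    source_down, target_down, source_fpfh, target_fpfh,
    mutual_filter=True,
    max_correspondence_distance=1.5 * VOXEL,  # hypothesis acceptance radius
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3,                               # points per sampled hypothesis
    checkers=[
        o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
        o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * VOXEL),
    ],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999),
)
print(result_ransac.fitness, result_ransac.inlier_rmse)  # match-rate-style outputs
```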
After achieving the initial approximate alignment, the focus shifts to fine alignment, aiming to attain a model sufficiently precise for clinical utilization. The fine alignment process employs the iterative closest point (ICP) algorithm, which minimizes point-to-point distances to narrow the gap between the objects. In particular, we employ the point-to-plane variant well suited for three-dimensional alignment. Figure 8a,b depict the principle of the point-to-plane algorithm and the alignment process. A virtual plane is created within the target data points, and the source data points adjust along the direction of their corresponding plane’s normal vector. This iterative process effectively brings the datasets into close proximity, minimizing interpoint distances.
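The refinement stage maps directly onto Open3D’s point-to-plane ICP, seeded with the coarse RANSAC transform; the correspondence threshold is again an illustrative assumption, and the target normals were estimated in the preprocessing step above:

```python
result_icp = o3d.pipelines.registration.registration_icp(
    source_down, target_down,
    max_correspondence_distance=VOXEL,   # fine-alignment search radius
    init=result_ransac.transformation,   # seed with the coarse alignment
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane(),
)
source_down.transform(result_icp.transformation)  # apply the final transform
```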

4. Validation

Following automatic alignment, the alignment’s efficacy is assessed via a triad of evaluation methodologies—statistical, quantitative, and clinical. Bland–Altman analysis, a statistical tool, probes the alignment trend between disparate datasets. For gauging alignment accuracy, we computed the 3D Euclidean distances’ mean and standard deviation, which provided a measure of alignment precision. Lastly, to gauge the aligned data’s clinical viability, dentists employed passive evaluation methods.

4.1. Statistical Validation

Bland–Altman analysis, extensively applied in the medical, healthcare, and chemical domains, is adopted here to assess the matching trend between the two datasets. The graph plots the pairwise difference against the pairwise mean and constructs a 95% confidence interval around the mean difference. As the datasets differ in three-dimensional coverage, the crown segment of the aligned data is coronally sliced, and the cross-sectional areas in this plane are analyzed. Figure 9 showcases the Bland–Altman results for six randomly selected cases. The presented outcomes indicate the absence of a pronounced trend, with over 95% of the 20 points falling within the confidence interval. This suggests that the two datasets align well.
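For reference, a Bland–Altman plot of paired cross-sectional areas takes only a few lines; the data below are synthetic stand-ins, not the study’s measurements:

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(a, b, ax):
    """Bland-Altman plot of paired measurements a (CBCT) and b (IOS)."""
    mean, diff = (a + b) / 2.0, a - b
    md, sd = diff.mean(), diff.std(ddof=1)
    ax.scatter(mean, diff, s=14)
    ax.axhline(md, color="gray")                       # mean difference
    ax.axhline(md + 1.96 * sd, color="gray", ls="--")  # upper 95% limit
    ax.axhline(md - 1.96 * sd, color="gray", ls="--")  # lower 95% limit
    ax.set_xlabel("Mean cross-sectional area (mm^2)")
    ax.set_ylabel("Difference (mm^2)")

# Synthetic stand-in: paired areas from 20 coronal slices of one case.
rng = np.random.default_rng(0)
cbct_area = rng.uniform(40.0, 90.0, 20)
ios_area = cbct_area + rng.normal(0.0, 1.0, 20)
fig, ax = plt.subplots()
bland_altman(cbct_area, ios_area, ax)
plt.show()
```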

4.2. Quantitative Validation

To assess matching accuracy, the mean and standard deviation of the 3D Euclidean distances are calculated, quantifying alignment precision. A color-mapping scheme was implemented to visualize inter-dataset distance discrepancies, with colors ranging from blue (0 mm minimum) to red (maximum). Figure 10 illustrates this technique: Figure 10a depicts the lingual perspective of the same teeth, while Figure 10b presents a distance histogram. The red dotted line within the histogram marks the average distance, which is reported together with the standard deviation and the Fitness/Inlier_rmse ratio. Table 2 outlines the registration accuracy for a set of 10 randomly chosen cases. The mean distance is 0.234 mm, with a standard deviation near 0.132 mm.
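Under the same Open3D assumption, the distance statistics and error map can be reproduced as follows, continuing the earlier registration sketch; the blue-to-red mapping is a simple linear interpolation rather than the authors’ exact color scheme:

```python
import numpy as np

# Distance from every aligned IOS point to its nearest CBCT neighbor.
dists = np.asarray(source_down.compute_point_cloud_distance(target_down))
print(f"mean = {dists.mean():.3f} mm, std = {dists.std():.3f} mm")

# Blue (0 mm) -> red (maximum) error map, as in Figure 10a.
t = dists / dists.max()
source_down.colors = o3d.utility.Vector3dVector(
    np.stack([t, np.zeros_like(t), 1.0 - t], axis=1))
o3d.visualization.draw_geometries([source_down, target_down])
```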

4.3. Clinical Validation

To gauge the practicality of automatically registered data in clinical contexts, dentists employ passive evaluation methods utilizing aligned data. Figure 11a,b demarcate specific parameter locations for assessment. Figure 11a segments human teeth into four sections, utilizing ball cuff, U6MB, L6MB, and C measurements as parameters.
Figure 11c outlines the clinical evaluation process involving these parameters. It encompasses documenting the parameter positions in each registered dataset and noting the corresponding parameter distances. This dual-weekly assessment involves three orthodontists, with reliability ensured by averaging the experts’ outcomes. Euclidean distances between the three-dimensional coordinates of the parameters are measured using the open-source ‘MeshLab’ program.
Manual measurement results for the same models by the clinical experts are shown in Figure 12. The assessments, performed by three distinct specialists across the same ten cases, are presented as a box plot, and the final findings report the average of these measurements. The potential clinical applicability of the research methodology was gauged via the insights gleaned from the specialists’ evaluations.

5. Discussion

In the field of dentistry, the progression of CAD/CAM technologies has transformed the clinical landscape by replacing traditional alginate and rubber impression materials with intraoral scanners. Moreover, the adoption of low-dose CBCT imaging has enabled a cost-effective visualization of bone and teeth. The integration of computer simulation techniques has revolutionized treatment planning and the fabrication of dental appliances and prostheses, ushering in a digital era marked by the replacement of manual laboratory procedures with digital milling and 3D printing.
Despite the rapid technological advancements, certain limitations persist in terms of accuracy. Analogous to conventional diagnostic and treatment methods, errors have been reported in the use of digital tools. These errors may occur throughout the clinical pathway, including data acquisition, curation, and appliance fabrication. These limitations are particularly apparent when attempting to utilize various types of digital data to construct a comprehensive 3D model of a patient’s dental and craniofacial structures.
While CBCT provides images of bones and teeth, its resolution falls short for utilizing images of tooth crowns in appliance fabrication. Although efforts have been made to segment teeth from CBCT, the accuracy remains insufficient for clinical applications [10,28,29]. As a result, a critical necessity emerges for registering intraoral scan data, i.e., high-precision surface data, with CBCT in clinical applications such as the development of surgical guides and orthodontic clear aligners. This integration process, known as registration, relies on a common reference point established using the respective instruments. While the manual alignment of these two data types is possible [4,5,6,7], it is labor-intensive and time-consuming, and its accuracy depends on the operator’s proficiency [8,9], leading to low inter-examiner reliability. Hence, the automated registration of intraoral scan (IOS) and CBCT data becomes of paramount importance. The unique characteristics of IOS and CBCT data call for customized data acquisition strategies that align with the restricted region of interest (ROI) defined by the IOS.
Noh et al. [30] introduced a method for dental registration utilizing the iterative closest point algorithm, encompassing the matching of three distinct registration areas: buccal surfaces, lingual surfaces, and a combination of both. However, their approach lacked a procedure for extracting tooth models from the cone beam computed tomography (CBCT) data, and the research omitted details regarding the removal of gingival margin areas, leaving room for improvement in this aspect of the methodology. In contrast, Park et al. [31] presented a study employing a manual registration function and a point-based registration function to align CBCT and intraoral scanning (IOS) data. Their method required user interaction to select point pairs for matching, and the data used in their research were derived from an artificial skull model, which may not perfectly represent real patient data due to inherent differences. An intriguing approach was proposed by Deferm et al. [32], who introduced a novel soft tissue-based method for registering intraoral scans with CBCT scans. Their study commenced with the alignment of dentate jaws via registration of the palatal mucosal surface, followed by a meticulous evaluation of accuracy at the individual tooth level; their methodology then extended to the registration of fully edentulous jaws, incorporating both the palatal and alveolar crest mucosal surfaces. However, the iterative closest point (ICP) algorithm utilized in their research was drawn from commercial software, and its technical intricacies were not thoroughly elucidated. Piao et al. [8] contributed to the field by comparing multiple registration methods, including deep-learning-based registration, manual registration, surface-based registration, and point-based registration; these methods were integrated into the commercial software packages employed in their research, so detailed insights into the underlying methodologies were not the central focus. Yang et al. [33] used a digital approach based on a single CBCT scan to transfer virtual intraoral scans to a physical mechanical articulator, which eliminates traditional procedures, streamlines workflows, and reduces chairside adjustments. The technique enables accurate intraoral scan mounting and virtual articulator parameter setting in prosthetic dentistry, but it requires an external physical mechanical articulator, unlike the other computational techniques. Hamilton et al. [34] compared registrations between IOS data and CBCT datasets with fields of view (FOV) of multiple sizes. Their research observed an increase in precision errors during intra-oral scan registration; nevertheless, when an adequate number of well-distributed teeth are discernible within the small-FOV CBCT, the precision of digital intra-oral scan registration appears to fall within clinically acceptable limits. However, the registration process was performed manually by a trained investigator. Applying deep-learning methods to integrate CBCT and IOS, Lee et al. [35] introduced a study assessing the precision of integrated tooth models (ITMs) generated through deep learning, which involves the fusion of intraoral scans and CBCT scans, with a primary focus on the three-dimensional (3D) analysis of root position in the context of orthodontic treatment.
Additionally, their study juxtaposed the fabrication of ITMs using deep learning against the conventional manual method; however, the 3D segmentation in their research was performed using a commercial software package.
In this study, we leverage a retrained Mask-RCNN model on sagittal CBCT slices in the NHP posture, addressing limitations identified in previous investigations, particularly concerning the challenge of missing teeth [10,11]. Our method distinctly segregates the upper and lower teeth via customized ROIs. Furthermore, we determine the sagittal position of the dental arch by concurrently employing a tooth arch extraction technique along the axial view [19]. For the IOS data, we effectively separate the teeth from the gingiva using a color-based KNN algorithm. Subsequently, our alignment strategy focuses on the tooth crown, capitalizing on the unique attributes derived from the extracted CBCT and IOS data. Notably, during the initial alignment phase, we opted for the RANSAC algorithm over PCA [12,13,14,15,16]; this choice was grounded in RANSAC’s resilience against outliers and missing data, resulting in a more reliable alignment process.
It is important to note that the careful use of this iterative loop strategy, coarse RANSAC matching followed by ICP refinement, contributes to the precision of the alignment process, enhancing the accuracy of the matched data. However, the matching time may be comparatively extended due to the iterative nature of the loop. Despite the inherent trade-off between precision and efficiency, our approach prioritizes achieving accurate alignment outcomes. The deliberate trade-off of increased processing time for accuracy is strategically balanced to yield results that align with the stringent demands of clinical applications, establishing the basis for the practical integration of the method within dental practice.

6. Conclusions

This paper introduces an AI-driven dental system that integrates automated data extraction and matching techniques for IOS and CBCT data, thereby enhancing the diagnosis and treatment planning processes in dentistry. In CBCT data, a method for the precise segmentation of tooth-contact slices is proposed. This involves applying a retrained Mask-RCNN to sagittal images in NHP posture to isolate teeth. Dental arch positioning is achieved via axial-view tooth projections, while panoramic views are extracted from dental arches. The Mask-RCNN is further employed to distinguish upper and lower jaws via panoramic views and masks. For IOS data, conversion to point cloud format is followed by HSV color model utilization. A color-based KNN approach is applied to segregate teeth and gingiva. Addressing differing dataset scopes, sequential RANSAC and ICP algorithms are employed for matching and prioritizing tooth crown alignment. For validation purposes, a comparative assessment was conducted between the performance of the proposed method and expert-driven procedures. The discerned deviation between the proposed method and manual measurements was determined to be within acceptable bounds, thereby endorsing the potential viability of this method for practical deployment in clinical settings.

Author Contributions

Conceptualization, Y.-J.K. and A.K.; Validation, Y.-J.K., J.-H.A., N.J. and A.K.; Investigation, J.-H.A.; Resources, J.-H.A.; Writing—original draft, H.-K.L. and T.P.N.; Writing—review & editing, J.Y.; Visualization, H.-K.L. and T.P.N.; Supervision, J.Y.; Project administration, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the Ministry of Trade, Industry, and Energy (MOTIE), Korea, under the “170k closed section roll forming and free curvature bending technology development for electric vehicle body” (reference number 20022814) supervised by the Korea Institute for Advancement of Technology (KIAT). This work was also supported by the Industrial Strategic Technology Development Program-A program for win-win type innovation leap between middle market enterprise and small & medium sized enterprise (P0024516, Development and commercialization of a customized dental solution with intelligent automated diagnosis technology based on virtual patient data) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea) and the Korea Institute for Advancement of Technology (KIAT). Finally, this research was financially supported by the Ministry of Trade, Industry, and Energy (MOTIE), Korea, under the “Innovative Digital Manufacturing Platform” (reference number P00223311) supervised by the Korea Institute for Advancement of Technology (KIAT).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Hallym University (2022-04-018).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data sharing is unavailable due to ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Davidowitz, G.; Kotick, P.G. The use of CAD/CAM in dentistry. Dent. Clin. 2011, 55, 559–570. [Google Scholar] [CrossRef] [PubMed]
  2. Mörmann, W.H. The evolution of the CEREC system. J. Am. Dent. Assoc. 2006, 137, 7S–13S. [Google Scholar] [CrossRef] [PubMed]
  3. Alkhayer, A.; Piffkó, J.; Lippold, C.; Segatto, E. Accuracy of virtual planning in orthognathic surgery: A systematic review. Head Face Med. 2020, 16, 34. [Google Scholar] [CrossRef] [PubMed]
  4. Amorim, P.; Moraes, T.; Silva, J.; Pedrini, H. InVesalius: An interactive rendering framework for health care support. In Proceedings of the Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, 14–16 December 2015; Springer International Publishing: Cham, Switzerland, 2015; pp. 45–54. [Google Scholar]
  5. Yushkevich, P.A.; Piven, J.; Hazlett, H.C.; Smith, R.G.; Ho, S.; Gee, J.C.; Gerig, G. User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. Neuroimage 2006, 31, 1116–1128. [Google Scholar] [CrossRef]
  6. Fedorov, A.; Beichel, R.; Kalpathy-Cramer, J.; Finet, J.; Fillion-Robin, J.C.; Pujol, S.; Bauer, C.; Jennings, D.; Fennessy, F.; Sonka, M.; et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn. Reson. Imaging 2012, 30, 1323–1341. [Google Scholar] [CrossRef]
  7. Lim, S.W.; Hwang, H.S.; Cho, I.S.; Baek, S.H.; Cho, J.H. Registration accuracy between intraoral-scanned and cone-beam computed tomography–scanned crowns in various registration methods. Am. J. Orthod. Dentofac. Orthop. 2020, 157, 348–356. [Google Scholar] [CrossRef]
  8. Piao, X.Y.; Park, J.M.; Kim, H.; Kim, Y.; Shim, J.S. Evaluation of different registration methods and dental restorations on the registration duration and accuracy of cone beam computed tomography data and intraoral scans: A retrospective clinical study. Clin. Oral Investig. 2022, 26, 5763–5771. [Google Scholar] [CrossRef]
  9. Flügge, T.; Derksen, W.; Te Poel, J.; Hassan, B.; Nelson, K.; Wismeijer, D. Registration of cone beam computed tomography data and intraoral surface scans–A prerequisite for guided implant surgery with CAD/CAM drilling guides. Clin. Oral Implant. Res. 2017, 28, 1113–1118. [Google Scholar] [CrossRef]
  10. Jang, T.J.; Kim, K.C.; Cho, H.C.; Seo, J.K. A fully automated method for 3D individual tooth identification and segmentation in dental CBCT. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6562–6568. [Google Scholar] [CrossRef]
  11. Hyun, C.M.; Bayaraa, T.; Yun, H.S.; Jang, T.J.; Park, H.S.; Seo, J.K. Deep learning method for reducing metal artifacts in dental cone-beam CT using supplementary information from intra-oral scan. Phys. Med. Biol. 2022, 67, 175007. [Google Scholar] [CrossRef]
  12. Jolliffe, I. Principal Component Analysis; Springer: New York, NY, USA, 1986. [Google Scholar]
  13. Wikipedia. Principal Component Analysis. Available online: https://en.wikipedia.org/wiki/Principal_component_analysis (accessed on 5 May 2023).
  14. Mitra, N.J.; Nguyen, A. Estimating surface normals in noisy point cloud data. In Proceedings of the Nineteenth Annual Symposium on Computational Geometry, San Diego, CA, USA, 8–10 June 2003; pp. 322–328. [Google Scholar]
  15. Bailey, S. Principal component analysis with noisy and/or missing data. Publ. Astron. Soc. Pac. 2012, 124, 1015. [Google Scholar] [CrossRef]
  16. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. In Computer Graphics Forum; Blackwell Publishing Ltd.: Oxford, UK, 2007; pp. 214–226. [Google Scholar]
  17. Wikipedia. Random Sample Consensus. Available online: https://en.wikipedia.org/wiki/Random_sample_consensus (accessed on 5 May 2023).
  18. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. arXiv 2017, arXiv:1703.06870v3. [Google Scholar]
  19. Ahn, J.; Nguyen, T.P.; Kim, Y.J.; Kim, T.; Yoon, J. Automated analysis of three-dimensional CBCT images taken in natural head position that combines facial profile processing and multiple deep-learning models. Comput. Methods Programs Biomed. 2022, 226, 107123. [Google Scholar] [CrossRef] [PubMed]
  20. Li, P.; Wang, R.; Wang, Y.; Tao, W. Evaluation of the ICP algorithm in 3D point cloud registration. IEEE Access 2020, 8, 68030–68048. [Google Scholar] [CrossRef]
  21. Giavarina, D. Understanding bland altman analysis. Biochem. Medica 2015, 25, 141–151. [Google Scholar] [CrossRef]
  22. Bland, J.M.; Altman, D. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986, 327, 307–310. [Google Scholar] [CrossRef]
  23. Bunce, C. Correlation, agreement, and Bland–Altman analysis: Statistical analysis of method comparison studies. Am. J. Ophthalmol. 2009, 148, 4–6. [Google Scholar] [CrossRef]
  24. Liu, M.; Sheng, L.; Yang, S.; Shao, J.; Hu, S.M. Morphing and sampling network for dense point cloud completion. Proc. AAAI Conf. Artif. Intell. 2020, 34, 11596–11603. [Google Scholar] [CrossRef]
  25. Cignoni, P.; Rocchini, C.; Scopigno, R. Metro: Measuring error on simplified surfaces. In Computer Graphics Forum; Blackwell Publishers: Oxford, UK; Blackwell Publishers: Boston, MA, USA, 1998; pp. 167–174. [Google Scholar]
  26. Solow, B.; Tallgren, A. Natural head position in standing subjects. Acta Odontol. Scand. 1971, 29, 591–607. [Google Scholar] [CrossRef]
  27. Imandoust, S.B.; Bolandraftar, M. Application of k-nearest neighbor (knn) approach for predicting economic events: Theoretical background. Int. J. Eng. Res. Appl. 2013, 3, 605–610. [Google Scholar]
  28. Shaheen, E.; Leite, A.; Alqahtani, K.A.; Smolders, A.; Van Gerven, A.; Willems, H.; Jacobs, R. A novel deep learning system for multi-class tooth segmentation and classification on cone beam computed tomography. A validation study. J. Dent. 2021, 115, 103865. [Google Scholar] [CrossRef] [PubMed]
  29. Cui, Z.; Li, C.; Wang, W. ToothNet: Automatic tooth instance segmentation and identification from cone beam CT images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019, Long Beach, CA, USA, 15–20 June 2019; pp. 6368–6377. [Google Scholar]
  30. Noh, H.; Nabha, W.; Cho, J.H.; Hwang, H.S. Registration accuracy in the integration of laser-scanned dental images into maxillofacial cone-beam computed tomography images. Am. J. Orthod. Dentofac. Orthop. 2011, 140, 585–591. [Google Scholar] [CrossRef] [PubMed]
  31. Park, J.H.; Hwang, C.J.; Choi, Y.J.; Houschyar, K.S.; Yu, J.H.; Bae, S.Y.; Cha, J.Y. Registration of digital dental models and cone-beam computed tomography images using 3-dimensional planning software: Comparison of the accuracy according to scanning methods and software. Am. J. Orthod. Dentofac. Orthop. 2020, 157, 843–851. [Google Scholar] [CrossRef] [PubMed]
  32. Deferm, J.T.; Nijsink, J.; Baan, F.; Verhamme, L.; Meijer, G.; Maal, T. Soft tissue-based registration of intraoral scan with cone beam computed tomography scan. Int. J. Oral Maxillofac. Surg. 2022, 51, 263–268. [Google Scholar] [CrossRef] [PubMed]
  33. Yang, S.; Dong, B.; Zhang, Q.; Li, J.; Yuan, Q.; Yue, L. An Indirect Digital Technique to Transfer 3D Printed Casts to a Mechanical Articulator with Individual Sagittal Condylar Inclination Settings Using CBCT and Intraoral Scans. J. Prosthodont. 2022, 31, 822–827. [Google Scholar] [CrossRef] [PubMed]
  34. Hamilton, A.; Singh, A.; Friedland, B.; Jamjoom, F.Z.; Griseto, N.; Gallucci, G.O. The impact of cone beam computer tomography field of view on the precision of digital intra-oral scan registration for static computer-assisted implant surgery: A CBCT analysis. Clin. Oral Implant. Res. 2022, 33, 1273–1281. [Google Scholar] [CrossRef]
  35. Lee, S.C.; Hwang, H.S.; Lee, K.C. Accuracy of deep learning-based integrated tooth models by merging intraoral scans and CBCT scans for 3D evaluation of root position during orthodontic treatment. Prog. Orthod. 2022, 23, 15. [Google Scholar] [CrossRef]
Figure 1. Characteristics of (a) cone-beam computed tomography and (b) IOS data.
Figure 2. Teeth segmentation along sagittal views with Mask-RCNN.
Figure 3. Panoramic view extraction based on the defined dental arch on the axial view.
Figure 4. Combination of segmentation results. (a) Segmentation result on panoramic view for upper and lower teeth, (b) Separated upper and lower representative points, (c) 3D segmentation result for upper and lower teeth.
Figure 5. ROI extraction for teeth data from entire IOS data.
Figure 6. Process for acquiring IOS data from the gypsum model.
Figure 7. Application of RANSAC algorithm to rough alignment for two sets of teeth data.
Figure 8. Concept of point-to-plane method for matching procedure between CBCT and IOS data: (a) Principle of point-to-plane method; (b) Source to access the plane created in the target.
Figure 9. Results of Bland–Altman method: Data on 6 randomly selected patients.
Figure 10. Error map of registration result from lingual view (a) and the histogram of distances between the corresponding points of the two datasets (b).
Figure 11. Position of features for evaluating consistency: (a) Numbering according to the position of the teeth and the positions of U6MB and L6MB; (b) Example of measurement parameter position of teeth; (c) Evaluation sequence on point cloud.
Figure 12. Distance between landmarks measured by experts (dentists) on aligned models.
Table 1. Specifications of the digital dentistry devices.

| | CBCT | Intra-Oral Scanner (IOS) |
|---|---|---|
| Device brand | Imaging Sciences International | 3Shape |
| Device model | Digital i-CAT FLX MV | Trios 3 |
| Accuracy | 0.3 mm (voxel size) | 6.9 ± 0.9 µm |
| Measuring time | ~3 min/case | ~5 min/case |
| Measurement area | Upper part of the neck | Teeth surface, gingiva |
Table 2. Registration accuracy for 10 sample cases.

| Value | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean (mm) | 0.226 | 0.292 | 0.215 | 0.221 | 0.217 | 0.229 | 0.249 | 0.227 | 0.217 | 0.249 | 0.234 |
| Std (mm) | 0.125 | 0.202 | 0.108 | 0.114 | 0.112 | 0.118 | 0.157 | 0.112 | 0.118 | 0.155 | 0.132 |
| F/I | 2.813 | 3.167 | 2.693 | 2.880 | 2.813 | 3.025 | 2.455 | 2.989 | 2.821 | 2.746 | 2.840 |

Std: standard deviation; F/I: Fitness/Inlier_rmse.