Article

Robust H-K Curvature Map Matching for Patient-to-CT Registration in Neurosurgical Navigation Systems

Ki Hoon Kwon and Min Young Kim
1 School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
2 Research Center for Neurosurgical Robotic System, Kyungpook National University, Daegu 41566, Republic of Korea
* Author to whom correspondence should be addressed.
This paper is an extension of the conference paper "A Patient-to-CT Registration Method Based on Spherical Unwrapping and HK Curvature Descriptors for Surgical Navigation System."
Sensors 2023, 23(10), 4903; https://doi.org/10.3390/s23104903
Submission received: 12 April 2023 / Revised: 16 May 2023 / Accepted: 18 May 2023 / Published: 19 May 2023
(This article belongs to the Special Issue Visual Sensing and Sensor Fusion for Machine Intelligence)

Abstract

Image-to-patient registration is a coordinate system matching process between a real patient and medical images, such as computed tomography (CT), that allows the images to be actively utilized during surgery. This paper mainly deals with a markerless method utilizing scan data of patients and 3D data from CT images. The 3D surface data of the patient are registered to the CT data using computer-based optimization methods such as the iterative closest point (ICP) algorithm. However, if a proper initial location is not set up, the conventional ICP algorithm has the disadvantages of a long convergence time and susceptibility to the local minimum problem. We propose an automatic and robust 3D data registration method that can accurately find a proper initial location for the ICP algorithm using curvature matching. The proposed method finds and extracts the matching area for 3D registration by converting the 3D CT data and 3D scan data to 2D curvature images and by performing curvature matching between them. Curvature features are robust to translation, rotation, and even some deformation. The proposed image-to-patient registration is completed by precise 3D registration of the extracted partial 3D CT data and the patient's scan data using the ICP algorithm.

1. Introduction

Image-guided surgery is a technology that helps doctors perform accurate surgical procedures by using augmented reality techniques to match the medical image of the surgical site to the actual surgical site [1,2,3,4]. Unlike conventional surgical methods that rely only on the experience and knowledge of the surgeon, it makes it possible to assist the operation by utilizing image information of the patient's surgical area in real time. In this system, the surgeon can operate on the patient while looking at the medical image matched with the surgical site on the monitor as well as at the actual surgical site. A surgical navigation system is one form of image-guided surgery [5,6,7,8]. It provides an optimal trajectory to the surgical target, just as a vehicle navigation system provides the driver with information on the map and the route to the destination. The system indicates the location of the surgical site, the current location of the surgical tool, and whether the surgical tool is safely approaching the target lesion on a medical image. To implement this system, the relative positional relationship between the surgical site and the surgical tool must be tracked in real time. The coordinates of the surgical tool can then be displayed in real time on the matched medical image using the obtained position and attitude information of the tool. Therefore, image-to-patient registration [9,10,11], a coordinate system matching process between medical images, such as computed tomography (CT) or magnetic resonance imaging (MRI), and the real patient coordinates before surgery, is essential for an accurate surgical navigation system.
In the image-to-patient registration process, the two different coordinate systems of the medical image and the actual patient are transformed into one coordinate system. Paired-point registration is one of the typical image-to-patient registration methods [12]. It utilizes corresponding points on the patient and in the medical image to match the patient's CT/MRI coordinate system with the patient's world coordinate system. If at least three corresponding points between two 3D data sets are known, the rotation and translation between the two data sets can be obtained by calculating the relation between these corresponding points [13]. Skin-attached fiducial markers are usually used to obtain these corresponding points [14,15]. In a pre-operative process, fiducial markers are attached to the patient; then, a medical image is captured. Next, image-to-patient registration is performed in the operating room between the patient with the attached fiducial markers and the medical image. However, marker-based methods are inconvenient to use. In addition, if the pose of the markers attached to the patient when the CT or MRI image is obtained differs from their pose during the operation, the matching error can also increase. To solve these problems, 3D data-based methods have been proposed [16]. They utilize 3D surface measurement data of the patient to perform image-to-patient registration without fiducial markers [17]. In this approach, a 3D surface measurement sensor is used to obtain the 3D surface data of the patient [18,19]. Then, image-to-patient registration is implemented by matching the 3D surface data with the corresponding part of the 3D data converted from the medical image. It is possible to project the CT/MRI data onto the world coordinate system during surgery by matching the 3D surface data of the patient with the CT/MRI data. After image-to-patient registration is completed, when the patient moves, the optical tracker keeps tracking a marker attached to the patient rather than performing patient registration again. Therefore, the registration state can be continuously maintained using the pose information of the tracked marker.
A precise 3D data matching algorithm is essential for transforming a patient's 3D surface data and 3D CT/MRI data into a final common coordinate system. Among several 3D matching algorithms, the iterative closest point (ICP) algorithm is the most representative and widely used one [20,21]. The ICP algorithm repeats the process of defining the closest points between two 3D data sets as corresponding points and minimizing the sum of the distances between the corresponding points for precise data registration. Conventional ICP algorithms without a proper initial location involve a large amount of computation and can also suffer from incorrect matching results due to the local minimum problem [22]. The 3D data registration process can become faster and more accurate if a proper initial location is provided before the ICP algorithm is performed. In this paper, a 3D registration method based on curvature matching that automatically finds a proper initial location for the ICP algorithm is proposed for the head region in neurosurgery. This paper is an expanded version of the conference paper [23], in which the basic concept can also be found. The proposed method utilizes natural features of the skin surface, such as the nose and ears, eliminating the need for fiducial markers. The natural feature data are common between the patient and the CT/MRI image, and due to the relatively large rate of change in the data, these features are easy to distinguish from other parts. Therefore, registration and matching errors can be reduced using these feature data. The proposed method finds the proper initial location for the ICP by converting the 3D data to 2D curvature images and automatically performing curvature image matching. The matching process is based on the characteristic that curvature features are robust to rotation, translation, and even some deformation [24,25]. To implement image-to-patient registration, the ICP algorithm between the 3D CT data and the 3D scan data is employed by utilizing the curvature matching region information rather than complicated 3D template matching methods [26,27].

2. Proposed Patient-to-CT Registration Method

In this paper, an automatic and robust image-to-patient registration method is proposed for neurosurgery using curvature map conversion and matching. Three-dimensional surface data to be matched with the CT/MRI data are obtained using a surface measurement sensor by measuring natural feature surfaces such as the nose, eyes, and ears on the patient's head. The proposed registration method can avoid the local minimum problem, because suitable initial positions in the CT surface data for matching the 3D surface data are found automatically. Figure 1 displays the schematic diagram of the proposed algorithm, which consists of three steps. First, the CT surface data of the patient's head are transformed into a 2D image using spherical unwrapping, since the head's surface data are similar to a sphere [28]. Similarly, the surface measurement data obtained with the sensor are also converted to a 2D image using spherical unwrapping and the mean radius computed from the CT surface data. In the second step, both 2D images are converted to H-K curvature images by calculating partial derivatives of the surrounding pixel intensities.
After the H-K curvature conversion, image matching between the two curvature images is performed. In the final step, a 3D region of interest (ROI) is extracted from the CT surface data by utilizing the curvature matching points obtained in the previous step. Since the curvature image is a projection of 3D data onto a plane, the 2D points can be converted back into 3D coordinates with an inverse operation. The proper initial coordinates for running the ICP algorithm can thus be estimated on the CT surface data by inversely mapping the corresponding 2D coordinates to 3D coordinates. Automatic image-to-patient registration is implemented by matching the CT ROI extracted from the CT surface data with the 3D surface data using the ICP algorithm.

2.1. Mapping 3D CT Data to 2D Image

A CT image is a set of 2D cross-section images obtained by repeatedly scanning a patient along the Z-axis. By aligning the cross-section images along their corresponding Z positions and collecting all the points, 3D point cloud data can be reconstructed from the CT image, as shown in Figure 1, which shows the 3D CT data of a head phantom. The points to be mapped to the 2D image should be skin surface data, which are the outermost points of the 3D CT data. Therefore, a method is required to extract these points from the entire 3D CT data. To do this, the 3D CT data are processed along the X-axis and Y-axis, and the outermost points are extracted with respect to each axis. The 3D CT data are first divided in half along the Y-axis at the center of the data; the outermost points on the Y-axis are then extracted with a maximum value operation on one half and a minimum value operation on the other half. To increase the precision of the data, the 3D CT data are similarly divided in half along the X-axis at the center of the data, and the outermost points on the X-axis are extracted with a maximum value operation on the front half and a minimum value operation on the back half. The outermost points of the total 3D CT data, which constitute the skin surface data, are obtained by merging these two sets of extracted points.
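As an illustration of this outermost-point extraction, the following Python/NumPy sketch (the paper's implementation was written in MATLAB; the function and parameter names here are hypothetical) bins the point cloud over the two remaining axes and keeps the extreme point per bin on each side of the dividing plane.

```python
import numpy as np

def extract_outer_points(points, axis=1, bins=512):
    """Sketch of the outermost-point (skin surface) extraction described above.

    points : (N, 3) array of CT point-cloud coordinates.
    The cloud is split at its centre along `axis`; in every grid cell over the
    two remaining axes, the maximum-coordinate point is kept on one side and
    the minimum-coordinate point on the other side.
    """
    center = points[:, axis].mean()
    other = [a for a in range(3) if a != axis]
    # quantize the two remaining coordinates into a bins x bins grid
    keys = np.stack(
        [np.digitize(points[:, a],
                     np.linspace(points[:, a].min(), points[:, a].max(), bins))
         for a in other], axis=1)
    cell_ids = keys[:, 0] * (bins + 2) + keys[:, 1]
    outer = []
    for side, reducer in ((points[:, axis] >= center, np.argmax),
                          (points[:, axis] < center, np.argmin)):
        pts, cells = points[side], cell_ids[side]
        for cid in np.unique(cells):
            idx = np.where(cells == cid)[0]
            outer.append(pts[idx[reducer(pts[idx, axis])]])
    return np.asarray(outer)

# skin = np.vstack([extract_outer_points(ct_pts, axis=1),   # outermost along Y
#                   extract_outer_points(ct_pts, axis=0)])  # outermost along X
```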
The 3D shape of the CT data extracted from the patient’s head surface is roughly spherical. Therefore, it is more effective to convert the entire 3D CT data of the skin surface into a 2D image using spherical coordinate system conversion, called spherical unwrapping, rather than a simple plane transformation. According to a principle similar to equirectangular projection, which transforms the map of a globe into a 2D plane map, this method converts every point of spherical 3D surface data into a 2D depth image. Spherical unwrapping can be performed along any axis of the 3D data, and when the patient faces in the direction of the Y-axis in 3D CT data, spherical unwrapping based on the Y-axis is given by the following equations:
$$ r_i = \sqrt{x_i^2 + y_i^2 + z_i^2} \qquad (1) $$

$$ \varphi_i = \arctan\!\left(\frac{y_i}{x_i}\right) \qquad (2) $$

$$ \theta_i = \arcsin\!\left(\frac{z_i}{r_i}\right) \qquad (3) $$
The center coordinates of the 3D CT data are obtained by calculating the average coordinates of all the 3D CT data points. Assuming that this center coordinate represents the center of a sphere, the 3D CT data are translated so that the origin of the coordinate system is located at the center of the 3D CT data before the spherical coordinate transformation is performed. The process of mapping the 3D CT data to a 2D image is performed using Equations (1)–(3). As shown in Equation (1), the distance r_i between the center point (origin) and each point p_i of the 3D CT data is used as the image intensity value. φ_i, obtained from the X- and Y-coordinate values of each point p_i, is the width coordinate of the image, and θ_i, obtained from the Z-coordinate value and the r_i value of each point p_i, is the height coordinate of the image, as shown in Equations (2) and (3). All r_i values are mapped to the calculated width and height coordinates of the transformed image. Figure 2 shows how points in the 3D CT data are transformed into a 2D image using the spherical unwrapping conversion equations. To obtain an unwrapped CT image at the desired resolution, φ and θ are each multiplied by the target resolution value. Figure 2 also shows the result of spherical unwrapping of the 3D CT data; the unwrapped CT image is a one-channel depth image.
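The spherical unwrapping of Equations (1)–(3) can be sketched as follows in Python/NumPy; the resolution values and the use of arctan2 instead of arctan (to keep the full azimuth range) are illustrative choices, not taken from the paper.

```python
import numpy as np

def spherical_unwrap(points, width=720, height=360):
    """Spherical unwrapping following Eqs. (1)-(3).

    points : (N, 3) surface points already translated so their centroid
    (the assumed sphere centre) lies at the origin.
    Returns a one-channel depth image whose intensity is the radius r_i.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)              # Eq. (1): radius -> image intensity
    phi = np.arctan2(y, x)                       # Eq. (2): azimuth -> width coordinate
    theta = np.arcsin(z / r)                     # Eq. (3): elevation -> height coordinate
    # scale the angles to the target image resolution
    u = np.round((phi + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = np.round((theta + np.pi / 2) / np.pi * (height - 1)).astype(int)
    img = np.zeros((height, width))
    img[v, u] = r                                # last point per pixel wins; unhit pixels stay 0 (holes)
    return img

# ct_centered = ct_pts - ct_pts.mean(axis=0)
# depth_ct = spherical_unwrap(ct_centered)
```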

2.2. Mapping 3D Scan Data to 2D Image

After converting the 3D CT data to a 2D image, the next step is to convert the 3D scan data obtained with a surface measurement sensor to a 2D image for image matching. The 3D scan data are gathered by projecting a structured light pattern onto the surface of the target and analyzing the deformation of the projected pattern. In order to achieve accurate image matching, the two images to be matched should be on a similar scale. Therefore, the image acquired by mapping the 3D scan data should have a scale similar to that of the unwrapped CT image. Since the 3D scan data are only a partial measurement of the facial surface, they are flat data, unlike the 3D CT data, which are spherical. It would therefore be possible to map the 3D scan data to a 2D image using a simple planar projection, but it is not easy to set a scale similar to that of the previously obtained unwrapped CT image.
To address this issue, the 3D scan data are also mapped to the 2D image using spherical unwrapping and other techniques. Although the 3D scan data are planar, they can be aligned with the axis of the 3D CT data, and the r m e a n of the 3D CT data, as the average radius of the sphere, can be used to roughly place them on the spherical surface of the 3D CT data, as shown in Figure 3. In the same way, 3D scan data can also be mapped to the 2D image using spherical coordinate system conversion. Once the 3D scan data are placed on the spherical surface of the 3D CT data, spherical unwrapping is performed to generate an image on a scale similar to that of the unwrapped CT image. The 3D scan data are the partial area data of the head, and they are only mapped to a portion of the whole image. To ensure accurate image matching, only the area where the 3D scan data are mapped is extracted as a template. Figure 3 shows the results of template extraction in the unwrapped 3D scan image corresponding to the nose and right ear.
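A minimal sketch of this scale-matched unwrapping is given below; the exact way the scan patch is placed on the CT sphere is an assumption made for illustration (here the centered patch is simply translated out to the mean CT radius along the facing axis), and it reuses the spherical_unwrap function sketched in Section 2.1.

```python
import numpy as np

def unwrap_scan(scan_pts, r_mean, facing=np.array([0.0, 1.0, 0.0]),
                width=720, height=360):
    """Assumed placement of the flat scan patch before unwrapping.

    The centered patch is pushed out along the head's facing axis so that it
    lies roughly on the CT sphere of radius r_mean, and the same spherical
    unwrapping is applied so both images share a similar scale.
    """
    patch = scan_pts - scan_pts.mean(axis=0)     # center the scan patch
    patch = patch + facing * r_mean              # place it near the CT sphere surface
    return spherical_unwrap(patch, width, height)

# r_mean = np.linalg.norm(ct_centered, axis=1).mean()   # average radius of the CT surface
# depth_scan = unwrap_scan(scan_pts, r_mean)
```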

2.3. H-K Curvature Image Conversion

Curvature refers to the rate of change indicating the extent to which a curve or a curved surface deviates from a flat plane. The principal curvatures are the maximum and minimum curvatures at a point on a curved surface. Mean (H) curvature is the average of the principal curvatures, while Gaussian (K) curvature is their product. Both curvatures are commonly used for surface shape classification [29]. Since these curvatures are determined only by the surface form, curvature features are robust to rotation, translation, and even some deformation. As the depth values of 3D scan data can be strongly affected by noise and variations in head pose during scanning, a robust image representation based on curvature is necessary. The intensity of the unwrapped image of the 3D CT data and 3D scan data represents the distance between the origin of the coordinate system and the corresponding point. Thus, this value can represent the surface shape of the 3D data, and the H curvature and K curvature at the corresponding image coordinates can be obtained by calculating partial derivatives using an N × M mask operation on the image intensity [30]. This method allows robust curvature features to be extracted from the unwrapped image.
$$ g_{ij}(x, y) = a_{ij} + b_{ij}(x - x_i) + c_{ij}(y - y_j) + d_{ij}(x - x_i)(y - y_j) + e_{ij}(x - x_i)^2 + f_{ij}(y - y_j)^2, \quad (i = 1, \ldots, N,\; j = 1, \ldots, M) \qquad (4) $$

$$ f_x(x_i, y_j) = b_{ij}, \quad f_y(x_i, y_j) = c_{ij}, \quad f_{xy}(x_i, y_j) = d_{ij}, \quad f_{xx}(x_i, y_j) = 2e_{ij}, \quad f_{yy}(x_i, y_j) = 2f_{ij} \qquad (5) $$

$$ H(x, y) = \frac{(1 + f_y^2)\,f_{xx} - 2 f_x f_y f_{xy} + (1 + f_x^2)\,f_{yy}}{2\,(1 + f_x^2 + f_y^2)^{3/2}} \qquad (6) $$

$$ K(x, y) = \frac{f_{xx} f_{yy} - f_{xy}^2}{(1 + f_x^2 + f_y^2)^2} \qquad (7) $$
Equations (4)–(7) describe the process of calculating H-K curvatures from the depth image. Equation (4) is a biquadratic polynomial that uses the intensities of the surrounding image. N and M represent the mask size, while x and y denote the coordinates of the image pixel for which the curvature is to be obtained. To calculate the curvature, the biquadratic polynomial of Equation (4) is fit over an N × M mask around the image coordinate, and its coefficients are calculated using least squares fitting. By substituting the coefficients of Equation (4) into Equation (5), the H and K curvatures at the corresponding coordinates can be obtained using Equations (6) and (7), respectively. These curvature values change dramatically at natural features such as the eyes, nose, and ears while remaining nearly constant in other areas with less variation, which further emphasizes the features. To utilize the characteristics of natural features for image matching, both the unwrapped 3D CT image and the unwrapped 3D scan image are converted to H-K curvature images with the H-K curvature values as image intensity.
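The following Python/NumPy sketch computes H and K from a depth image by the local biquadratic fit of Equations (4)–(7); the 5 × 5 default mask size and the plain per-pixel least-squares loop are illustrative choices, not the authors' implementation.

```python
import numpy as np

def hk_curvature(depth, half=2):
    """H-K curvature from a depth image via a local biquadratic fit (Eqs. (4)-(7)).

    For every pixel, the (2*half+1) x (2*half+1) neighbourhood is fit with
    g(x, y) = a + b*x + c*y + d*x*y + e*x^2 + f*y^2 by least squares, giving
    fx = b, fy = c, fxy = d, fxx = 2e, fyy = 2f as in Eq. (5).
    """
    h, w = depth.shape
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    A = np.stack([np.ones_like(xs), xs, ys, xs * ys, xs**2, ys**2],
                 axis=-1).reshape(-1, 6).astype(float)
    H = np.zeros_like(depth, dtype=float)
    K = np.zeros_like(depth, dtype=float)
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = depth[i - half:i + half + 1, j - half:j + half + 1].ravel()
            a, b, c, d, e, f = np.linalg.lstsq(A, patch, rcond=None)[0]
            fx, fy, fxy, fxx, fyy = b, c, d, 2 * e, 2 * f
            denom = 1 + fx**2 + fy**2
            H[i, j] = ((1 + fy**2) * fxx - 2 * fx * fy * fxy
                       + (1 + fx**2) * fyy) / (2 * denom**1.5)   # Eq. (6)
            K[i, j] = (fxx * fyy - fxy**2) / denom**2            # Eq. (7)
    return H, K

# H_ct, K_ct = hk_curvature(depth_ct)
# H_scan, K_scan = hk_curvature(depth_scan)
```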
The H-K curvature images of the 3D CT data and 3D scan data are visualized as 3D meshes in Figure 4 and Figure 5, respectively. The curvature images are shown as meshes because the curvature values are too small to be seen in a regular intensity image, and a mesh representation shows the relative variation with respect to the surrounding values better. As shown in both figures, natural features such as the nose and ears are emphasized in the H-K curvature images, as these areas have distinct curvature values compared with the surrounding areas.

2.4. Curvature Image Matching

To perform image matching between the two curvature images acquired from the CT data and scan data, normalized cross-correlation (NCC) is used [31]. In signal processing, cross-correlation is a method for measuring the similarity of two waveforms by shifting one waveform relative to the other. To apply cross-correlation in image processing for measuring the similarity between two images, the images should first be normalized by subtracting the mean and dividing by the standard deviation. This ensures that the correlation measure is independent of differences in the absolute values of the image intensities. Equation (8) shows the formula for calculating NCC for image matching.
$$ \gamma(u,v) = \frac{\sum_{x,y}\left[f(x,y)-\bar{f}_{u,v}\right]\left[t(x-u,\,y-v)-\bar{t}\,\right]}{\left\{\sum_{x,y}\left[f(x,y)-\bar{f}_{u,v}\right]^2 \sum_{x,y}\left[t(x-u,\,y-v)-\bar{t}\,\right]^2\right\}^{0.5}} \qquad (8) $$
The image matching process measures the linear variation and geometric similarity between two images by shifting the smaller image as a template, creating a correlation map between the two images, and selecting the largest value as the matching point. NCC estimates the correlation between two images using normalization; it is independent of linear differences between the intensities of the two images and is less influenced by absolute intensity values. Thus, NCC is appropriate for matching relative shapes and is utilized for curvature image matching in this research. Since NCC is a kind of template matching, one image must be chosen as the template. In this case, the curvature image of the 3D scan data, which is smaller than the curvature image of the 3D CT data, is selected as the template image for curvature image matching. As shown in Figure 6, NCC is utilized to match the curvature images of the 3D scan data and CT data. Matching values are obtained separately for the H and K curvature images, and the coordinates with the largest average of the two matching values are taken as the matching coordinates. The results of curvature image matching are displayed in Figure 6, where the matching area is shown by a bounding box in the unwrapped CT image for convenience.
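A brute-force version of Equation (8) is sketched below in Python/NumPy; in practice an optimized routine such as OpenCV's cv2.matchTemplate with the TM_CCOEFF_NORMED method computes the same normalized score far faster.

```python
import numpy as np

def ncc_match(image, template):
    """Brute-force normalized cross-correlation (Eq. (8)).

    Returns the (row, col) of the template position with the highest score.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t**2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for v in range(image.shape[0] - th + 1):
        for u in range(image.shape[1] - tw + 1):
            win = image[v:v + th, u:u + tw]
            f = win - win.mean()
            denom = np.sqrt((f**2).sum()) * t_norm
            score = (f * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (v, u)
    return best_pos, best_score

# The H and K curvature images are matched separately; the position with the largest
# average of the two scores is taken as the matching coordinate.
# pos_h, s_h = ncc_match(ct_curv_H, scan_curv_H)
# pos_k, s_k = ncc_match(ct_curv_K, scan_curv_K)
```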

2.5. CT ROI Extraction

After successfully matching the curvature images, the coordinates of the four matching points in the unwrapped CT image are calculated. These 2D coordinates and image intensity values can be converted to 3D coordinates using Equations (9) and (10), which are inverse operations of spherical unwrapping.
$$ \tan(\varphi_i) = \frac{y_i}{x_i} \qquad (9) $$

$$ \sin(\theta_i) = \frac{z_i}{r_i} \qquad (10) $$
To calculate φ_i, the corresponding width coordinate of the image is divided by the image width value, and to calculate θ_i, the corresponding height coordinate of the image is divided by the image height value. r_i represents the intensity of the corresponding image coordinate, as shown in Equation (1), and is the distance between the origin and the corresponding 3D point of the CT data. These simultaneous equations are used to obtain the coordinates of the converted 3D points. Then, the 3D ROI is extracted from the 3D CT data using the four converted 3D matching coordinates. A boundary region is established based on the four converted coordinates, and the 3D point data included within this boundary are extracted from the 3D CT data. These extracted data represent the proper initial position for applying the ICP algorithm and form the CT ROI that will be matched to the 3D scan data. Figure 7 shows the 3D ROI extracted using the four converted 3D matching coordinates.
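The inverse mapping and box-based ROI extraction might look like the following sketch, which inverts the unwrapping convention used in the earlier spherical_unwrap sketch; the axis-aligned bounding box with an optional margin is an assumed implementation detail.

```python
import numpy as np

def image_to_3d(u, v, r, width=720, height=360):
    """Invert the spherical unwrapping (Eqs. (9)-(10)) for one matching point.

    u, v : image column/row of the point; r : depth-image intensity (radius r_i).
    """
    phi = u / (width - 1) * 2 * np.pi - np.pi       # undo the width scaling
    theta = v / (height - 1) * np.pi - np.pi / 2    # undo the height scaling
    x = r * np.cos(theta) * np.cos(phi)
    y = r * np.cos(theta) * np.sin(phi)             # tan(phi) = y / x   (Eq. (9))
    z = r * np.sin(theta)                           # sin(theta) = z / r (Eq. (10))
    return np.array([x, y, z])

def extract_ct_roi(ct_pts, corners_3d, margin=0.0):
    """Keep CT points inside the axis-aligned box spanned by the four converted corners."""
    lo = corners_3d.min(axis=0) - margin
    hi = corners_3d.max(axis=0) + margin
    inside = np.all((ct_pts >= lo) & (ct_pts <= hi), axis=1)
    return ct_pts[inside]

# corners = np.array([image_to_3d(u, v, depth_ct[v, u]) for (v, u) in match_corners])
# ct_roi = extract_ct_roi(ct_centered, corners)
```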

2.6. Accurate Surface Registration Using ICP Algorithm

The ICP algorithm is one of the representative methods for matching different data sets and, in particular, is mainly used for matching 3D point cloud data. The ICP algorithm repeatedly finds the closest corresponding points between two point clouds and minimizes the sum of the distances between them to estimate the transformation. The data to be matched are then rotated and translated according to this transformation to align them with the existing data. The ICP algorithm is suitable for point cloud registration, since it can achieve high accuracy with a simple calculation. However, when there is a significant difference between the two data sets or the matching area is small, coarse registration should be performed before the ICP algorithm is applied.
Coarse registration is a rough matching process that approximately aligns two data sets based on a proper initial location for the ICP algorithm. Using the ICP algorithm without this process can lead to convergence to a local minimum and failure of the matching process. In this study, the CT ROI, which is extracted in advance, is used instead of the entire 3D CT data to precisely match the 3D scan data using the ICP algorithm. This improves the computational speed of the ICP algorithm and makes matching more accurate because the local minimum problem is avoided. Figure 8 shows the result of matching the two data sets using the ICP algorithm. The green points represent the 3D scan data, and the remainder is the CT ROI. Since the extracted CT ROI already provides a proper initial location for the ICP, coarse registration is not essential; however, for the stability of data matching, coarse registration is performed between the two data sets before ICP matching. Then, as shown in Figure 8, the two 3D data sets are matched using the ICP algorithm.
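For reference, a minimal point-to-point ICP in the spirit of [13,20] is sketched below using SciPy's cKDTree for nearest-neighbour search; the convergence criterion and iteration limit are illustrative, not the settings used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-6):
    """Minimal point-to-point ICP: nearest neighbours + least-squares rigid fit [13].

    source : (N, 3) scan points (moved); target : (M, 3) CT ROI points (fixed).
    Returns the accumulated 4x4 transform and the final mean point distance.
    """
    tree = cKDTree(target)
    T = np.eye(4)
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)                  # closest corresponding points
        p, q = src, target[idx]
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)          # Arun et al. least-squares rotation
        if np.linalg.det(Vt.T @ U.T) < 0:            # guard against reflections
            Vt[-1] *= -1
        R = Vt.T @ U.T
        t = q.mean(0) - R @ p.mean(0)
        src = src @ R.T + t                          # apply the incremental rigid motion
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:                # stop when the error stops improving
            break
        prev_err = err
    return T, err

# T_scan_to_ct, mean_err = icp(scan_pts_in_ct_frame, ct_roi)
```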

3. Experimental Results and Discussion

3.1. Experimental Environment and Settings

The equipment utilized to obtain 3D scan data for the experiment was a 3D surface measurement sensor based on structured light, which is currently under development. The equipment comprises a projector and a camera. The principle of this 3D sensor is based on projection moiré profilometry, in which a sine wave pattern is projected onto the object under investigation by the projector and the object with the projected pattern is imaged by the camera. The 3D shape of the object is reconstructed by analyzing the deformation of the projected pattern in the acquired image. Detailed specifications of the surface measurement sensor are presented in Table 1. Unlike laser line scanners that use line-based scanning, this equipment scans the entire field of view, allowing a wide range of 3D data to be obtained with just a few captures. Thus, it is well suited to image-to-patient registration in operating rooms and offers advantages over other scanners in terms of accuracy and ease of use.
Figure 9 shows the experimental environment. As shown in Figure 9, a head phantom was used instead of a human head in the experiment. The 3D sensor was used to measure the surface of the natural features of the head phantom, such as eyes, nose, and ears, and the 3D CT data had the entire surface information of the head phantom. The 3D scan data consisted of approximately 20,000 points, and the 3D CT data consisted of approximately 350,000 points. The proposed algorithm was implemented using MATLAB.

3.2. Curvature Image Matching and CT ROI Extraction

In the proposed method, the depth image obtained using spherical unwrapping is converted into a curvature image, and matching points are then obtained to extract the CT ROI using curvature image matching. Although the conversion process is cumbersome, it is an essential step for robust image matching based on the characteristics of the curvature image. To demonstrate this, as shown in Figure 10, image matching was performed with NCC both between the depth images before conversion and between the curvature images. The result of depth image matching is shown in Figure 10a, where matching failed due to the large difference in intensities between the 3D scan data and the 3D CT data. In contrast, matching performed on the curvature images was successful and accurate, as shown in Figure 10b. These experimental results show that curvature image matching is more robust than depth image matching.
Matching between depth images did not work properly because of their sensitivity to intensity changes caused by holes, rotations, and translations that occur during the unwrapping process. In contrast, curvature image matching uses curvature, which is computed from the relative relationship of surrounding image intensity values. This means that some degree of rotation and translation, and even some deformation, can be tolerated. Furthermore, by emphasizing only the targeted natural feature, robust image matching can be achieved. Figure 11 shows additional experimental results of curvature image matching. It can be seen that curvature image matching works well even when there is some deformation or rotation in the 3D scan data, as can occur during the scanning process.
Four matching points are obtained with curvature image matching, and these points are then converted into 3D points to extract the CT ROI based on the coordinates of the converted points. In this experiment, 30 sets of 3D scan data obtained with the 3D measurement sensor were used for CT ROI extraction. The accurate matching result of the proposed method ensures the precise extraction of the CT ROI. Part of the CT ROI extraction experimental results are shown in Figure 12.

3.3. Surface Registration Results Using ICP Algorithm

Image-to-patient registration is achieved by matching the 3D scan data with the extracted CT ROI. The results of an experiment performed to verify the proposed algorithm using several 3D scan data are shown in Figure 13 and Figure 14. ICP registration between the whole 3D CT data and the 3D scan data without preprocessing, such as coarse registration, failed due to the local minimum problem and scale difference, as shown in Figure 13. However, the proposed ICP registration between the CT ROI and the 3D scan data avoided the local minima during the matching process and was performed correctly, as shown in Figure 14.
Table 2 shows the ICP registration errors of the results in Figure 14. The mean ICP registration error, calculated as the average distance over all corresponding points between the two data sets, was about 473–840 μm, and the standard deviation of the ICP registration error was about 263–804 μm. The registration error depended on the 3D scan data used.
In addition, when ICP matching was performed after manually providing appropriate initial location information for coarse registration, it took about 40 to 70 s due to the difference in scale between the two data sets. In contrast, the proposed method applies the ICP algorithm after extracting the CT ROI, so the ICP matching process was completed in about 6 s, and CT ROI extraction took about 11 s. Since the proposed registration works on data converted to a similar scale, it is both more accurate and requires fewer operations.
As previously mentioned, H-K curvature can also be used for 3D surface shape classification. The 3D shape of a specific area can be distinguished using the inequality relations of the H-K curvature values. Therefore, various studies have been conducted to detect and extract natural features such as the nose and eyes from 3D head data, leveraging the distinctive properties of H-K curvature [29,30]. The proposed method can likewise exploit the characteristics of H-K curvature to identify natural features. The previous experiments used 3D scan data taken without roll rotation when measuring the surface of the head phantom. However, by obtaining the direction vector of the natural feature region from the 3D scan data using H-K classification, 3D scan data with some roll rotation can also be matched. Although image matching algorithms that consider templates with roll rotation exist, they take a long time to execute due to a large amount of computation [32,33,34]. In contrast, this approach utilizes the already obtained H-K curvature image without the need for additional complicated operations.
Figure 15 shows the nose vector obtained by extracting the nose region, an elliptic convex region, using the surface shape classification of H-K curvature in the curvature image of the 3D scan data. The extracted nose region is represented as a binary image simply to verify the extraction. The nose vector was obtained using the maximum–minimum values and the rate of change of the intensity values of the unwrapped image corresponding to the nose region. In the unwrapped CT image, the nose vector always points vertically upward. Therefore, aligning the nose vector of the template image with this direction before NCC allows data with roll rotation to be used in the proposed algorithm. This improvement can increase the degree of freedom and convenience for the user.
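A possible reading of this nose-region extraction is sketched below: the elliptic convex region is taken (under an assumed sign convention) as the pixels with K > 0 and H < 0, and the nose vector is approximated by the mean intensity gradient inside that region; both choices are assumptions for illustration rather than the authors' exact procedure.

```python
import numpy as np

def nose_vector(H, K, depth):
    """Assumed nose-region extraction via H-K sign classification.

    Elliptic convex pixels are taken as K > 0 and H < 0 (sign convention assumed);
    the 2D nose vector is approximated by the mean intensity gradient of the
    unwrapped depth image inside that region.
    """
    mask = (K > 0) & (H < 0)                 # binary nose-candidate region
    vs, us = np.nonzero(mask)
    if len(vs) == 0:
        return None
    gy, gx = np.gradient(depth)              # rate of change of the unwrapped intensities
    vec = np.array([gx[vs, us].mean(), gy[vs, us].mean()])
    return vec / (np.linalg.norm(vec) + 1e-9)

# The template curvature image can then be rotated so this vector points vertically
# upward, matching the nose orientation in the unwrapped CT image, before NCC.
```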
Figure 16 shows the result of ICP registration of 3D scan data with roll rotation using the proposed vector alignment. Because the nose vector of the template image was aligned before NCC, the CT ROI was correctly extracted, and ICP registration was successfully completed. Although this is not yet a complete algorithm, it confirms the possibility of extending the approach to other natural feature areas, such as the ear; further experiments are needed in the future.

4. Conclusions

This study concentrated on coordinate matching between patient coordinates and image coordinates for image-to-patient registration, which is used in surgical navigation systems. Instead of using specific markers attached to the patient, the proposed method utilizes a surface measurement sensor and H-K curvature for coordinate matching to improve accuracy and convenience. The main contribution is making the image-to-patient registration process automatic: with the proposed H-K curvature image-based registration method, the image-to-patient registration process using surface measurement data can be fully automated. Additionally, the local minimum problem of the ICP process is solved by providing a proper initial location. Since the 3D ROI is extracted and used for matching, the amount of data to be matched in the ICP step is reduced, and faster and more accurate matching results can be obtained. The proposed algorithm utilizes natural features of the patient's face, such as the eyes, nose, and ears, for matching the coordinates between the patient and the CT data. To improve the computation speed, the 3D data are mapped to 2D depth images. Then, to achieve robust image matching, these are converted into H-K curvature images that emphasize the features of the depth images, and image matching is performed. The 3D ROI of the CT data to be used for ICP matching is obtained with the inverse operation of the matched image points. Finally, the extracted CT ROI and the surface measurement data are matched with the ICP algorithm. Various matching experiments confirm that the proposed method does not converge to a local minimum and that the data are matched correctly. Further work will focus on experiments to compare the performance of the proposed method with that of traditional 3D data matching methods and to test the method on human data instead of head phantom data.

Author Contributions

This study was conceived by K.H.K. and M.Y.K., who also set up the experiments. K.H.K. conducted the experiments and developed the methodology. All authors analyzed and interpreted the data. K.H.K. and M.Y.K. wrote the manuscript. M.Y.K. acquired the funding. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (2022R1A2C2008133) and in part by Basic Science Research Program through NRF, funded by the Ministry of Education (2021R1A6A1A03043144).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Grimson, W.E.L.; Kikinis, R.; Jolesz, F.A.; Black, P.M. Image-guided surgery. Sci. Am. 1999, 280, 62–69. [Google Scholar] [CrossRef] [PubMed]
  2. Labadie, R.F.; Davis, B.M.; Fitzpatrick, J.M. Image-guided surgery: What is the accuracy? Curr. Opin. Otolaryngol. Head Neck Surg. 2005, 13, 27–31. [Google Scholar] [CrossRef] [PubMed]
  3. Gioux, S.; Choi, H.S.; Frangioni, J.V. Image-guided surgery using invisible near-infrared light: Fundamentals of clinical translation. Mol. Imaging 2010, 9, 237–255. [Google Scholar] [CrossRef] [PubMed]
  4. Keereweer, S.; Kerrebijn, J.D.; Van Driel, P.B.; Xie, B.; Kaijzel, E.L.; Snoeks, T.J.; Que, I.; Hutteman, M.; Van Der Vorst, J.R.; Mieog, J.S.D. Optical image-guided surgery—Where do we stand? Mol. Imaging Biol. 2011, 13, 199–207. [Google Scholar] [CrossRef]
  5. Widmann, G.; Widmann, R.; Widmann, E.; Jaschke, W.; Bale, R. Use of a surgical navigation system for CT-guided template production. Int. J. Oral Maxillofac. Implant. 2007, 22, 72–78. [Google Scholar]
  6. Lee, T.Y.T.; Zaid, W.S. Broken dental needle retrieval using a surgical navigation system: A case report and literature review. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2015, 119, e55–e59. [Google Scholar] [CrossRef]
  7. Chen, X.; Xu, L.; Wang, Y.; Wang, H.; Wang, F.; Zeng, X.; Wang, Q.; Egger, J. Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display. J. Biomed. Inform. 2015, 55, 124–131. [Google Scholar] [CrossRef]
  8. Edström, E.; Burström, G.; Nachabe, R.; Gerdhem, P.; Terander, A.E. A novel augmented-reality-based surgical navigation system for spine surgery in a hybrid operating room: Design, workflow, and clinical applications. Oper. Neurosurg. 2020, 18, 496–502. [Google Scholar] [CrossRef]
  9. Eggers, G.; Mühling, J.; Marmulla, R. Image-to-patient registration techniques in head surgery. Int. J. Oral Maxillofac. Surg. 2006, 35, 1081–1095. [Google Scholar] [CrossRef]
  10. Suenaga, H.; Tran, H.H.; Liao, H.; Masamune, K.; Dohi, T.; Hoshi, K.; Takato, T. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: A pilot study. BMC Med. Imaging 2015, 15, 1–11. [Google Scholar] [CrossRef]
  11. Liu, H.; Baena, F.R.Y. Automatic markerless registration and tracking of the bone for computer-assisted orthopaedic surgery. IEEE Access 2020, 8, 42010–42020. [Google Scholar] [CrossRef]
  12. Knott, P.D.; Batra, P.S.; Butler, R.S.; Citardi, M.J. Contour and paired-point registration in a model for image-guided surgery. Laryngoscope 2006, 116, 1877–1881. [Google Scholar] [CrossRef]
  13. Arun, K.S.; Huang, T.S.; Blostein, S.D. Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 9, 698–700. [Google Scholar] [CrossRef]
  14. Hong, J.; Hashizume, M. An effective point-based registration tool for surgical navigation. Surg. Endosc. 2010, 24, 944–948. [Google Scholar] [CrossRef]
  15. Eggers, G.; Mühling, J. Template-based registration for image-guided skull base surgery. Otolaryngol.—Head Neck Surg. 2007, 136, 907–913. [Google Scholar] [CrossRef]
  16. Ji, S.; Fan, X.; Paulsen, K.D.; Roberts, D.W.; Mirza, S.K.; Lollis, S.S. Patient registration using intraoperative stereovision in image-guided open spinal surgery. IEEE Trans. Biomed. Eng. 2015, 62, 2177–2186. [Google Scholar] [CrossRef]
  17. Marmulla, R.; Lüth, T.; Mühling, J.; Hassfeld, S. Markerless laser registration in image-guided oral and maxillofacial surgery. J. Oral Maxillofac. Surg. 2004, 62, 845–851. [Google Scholar] [CrossRef]
  18. Hellwich, O.; Rose, A.; Bien, T.; Malolepszy, C.; Mucha, D.; Krüger, T. Patient registration using photogrammetric surface reconstruction from smartphone imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 829. [Google Scholar]
  19. Lee, H.; Kim, M.Y.; Moon, J.I. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming. Opt. Eng. 2017, 56, 124107. [Google Scholar] [CrossRef]
  20. Zhang, Z. Iterative point matching for registration of free-form curves and surfaces. Int. J. Comput. Vis. 1994, 13, 119–152. [Google Scholar] [CrossRef]
  21. Yang, J.; Li, H.; Jia, Y. Go-icp: Solving 3d registration efficiently and globally optimally. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 1457–1464. [Google Scholar]
  22. Li, H.; Hartley, R. The 3D-3D registration problem revisited. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8. [Google Scholar]
  23. Kwon, K.H.; Lee, S.H.; Kim, M.Y. A patient-to-CT registration method based on spherical unwrapping and HK curvature descriptors for surgical navigation system. In Proceedings of the 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Daegu, Republic of Korea, 16–18 November 2017; pp. 1–6. [Google Scholar]
  24. Zhang, D.; Lu, G.; Li, W.; Zhang, L.; Luo, N. Palmprint recognition using 3-D information. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2009, 39, 505–519. [Google Scholar] [CrossRef]
  25. Akagündüz, E. Scale and orientation invariant 3D interest point extraction using HK curvatures. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Kyoto, Japan, 27 September–4 October 2009; pp. 697–702. [Google Scholar]
  26. Wang, P.; DeNunzio, A.; Okunieff, P.; O’Dell, W.G. Lung metastases detection in CT images using 3D template matching. Med. Phys. 2007, 34, 915–922. [Google Scholar] [CrossRef] [PubMed]
  27. Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. A model-based 3D template matching technique for pose acquisition of an uncooperative space object. Sensors 2015, 15, 6360–6382. [Google Scholar] [CrossRef] [PubMed]
  28. Rieck, B.; Mara, H.; Krömker, S. Unwrapping highly-detailed 3d meshes of rotationally symmetric man-made objects. In Proceedings of the 2013 XXIV International CIPA Symposium, Strasbourg, France, 2–6 September 2013; pp. 259–264. [Google Scholar]
  29. Besl, P.J.; Jain, R.C. Segmentation through variable-order surface fitting. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 167–192. [Google Scholar] [CrossRef]
  30. Colombo, A.; Cusano, C.; Schettini, R. 3D face detection using curvature analysis. Pattern Recognit. 2006, 39, 444–455. [Google Scholar] [CrossRef]
  31. Yoo, J.C.; Han, T.H. Fast normalized cross-correlation. Circuits Syst. Signal Process. 2009, 28, 819–843. [Google Scholar] [CrossRef]
  32. Lin, Y.H.; Chen, C.H. Template matching using the parametric template vector with translation, rotation and scale invariance. Pattern Recognit. 2008, 41, 2413–2421. [Google Scholar] [CrossRef]
  33. Zhang, Z.; Chen, J.; Li, X.; Li, W.; Yuan, W. An image matching method based on fourier and LOG-Polar transform. Sens. Transducers 2014, 169, 61. [Google Scholar]
  34. Yang, H.; Huang, C.; Wang, F.; Song, K.; Zheng, S.; Yin, Z. Large-scale and rotation-invariant template matching using adaptive radial ring code histograms. Pattern Recognit. 2019, 91, 345–356. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the proposed registration process.
Figure 2. Result of mapping 3D CT data to 2D image using spherical unwrapping.
Figure 3. Results of mapping 3D scan data to 2D image using spherical unwrapping.
Figure 4. Results of curvature image conversion of the unwrapped CT image.
Figure 5. Results of curvature image conversion of the unwrapped image of 3D scan data.
Figure 6. Curvature image matching using NCC.
Figure 7. CT ROI extraction using matching points.
Figure 8. Registration results using ICP algorithm.
Figure 9. Three-dimensional measurement sensor under development and experimental environment.
Figure 10. Image matching result using NCC; (a) depth image matching and (b) curvature image matching.
Figure 11. Results of curvature image matching; (a) using data with pitch rotation, (b) using data scanned on the side, and (c) using data with some deformation.
Figure 12. Results of CT ROI extraction.
Figure 13. Results of ICP registration without preprocessing; (a) nose and eye data, and (b) right ear data.
Figure 14. Results of proposed ICP registration; (a) nose and eye data, (b) right ear data, (c) data of tip of nose, and (d) left ear data.
Figure 15. Nose region extraction and alignment using a nose vector.
Figure 16. Result of proposed ICP registration using vector alignment.
Table 1. Specifications of the surface measurement sensor.

Parameter               Value          Unit
Resolution              2048 × 1088    pixels
Accuracy                134            μm
Measurement distance    25–35          cm
Measurement area        13 × 8         cm

Table 2. ICP registration errors of each result.

              Figure 14a    Figure 14b    Figure 14c    Figure 14d
Mean error    473.3 μm      834.9 μm      839.7 μm      759.1 μm
Std error     262.8 μm      803.6 μm      654.6 μm      406.8 μm