Article

DRHT: A Hybrid Mathematical Model for Accurate Ultrasound Probe Calibration and Efficient 3D Reconstruction

by Xuquan Ji 1, Yonghong Zhang 2, Huaqing Shang 3, Lei Hu 2, Xiaozhi Qi 3,* and Wenyong Liu 1,*
1 School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
2 School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China
3 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(8), 1359; https://doi.org/10.3390/math13081359
Submission received: 1 March 2025 / Revised: 1 April 2025 / Accepted: 9 April 2025 / Published: 21 April 2025
(This article belongs to the Special Issue Robust Perception and Control in Prognostic Systems)

Abstract

The calibration of ultrasound probes is essential for three-dimensional ultrasound reconstruction and navigation. However, existing calibration methods are often cumbersome and insufficiently accurate. In this paper, a hybrid mathematical model, Dimensionality Reduction and Homography Transformation (DRHT), is proposed. The model characterizes the relationship between the ultrasound image plane and the projected calibration lines through a homography transformation. This homography transformation, which can be estimated using the singular value decomposition method, reduces the dimensionality of the calibration data and significantly accelerates the computation of image points in ultrasonic three-dimensional reconstruction. Experiments comparing the DRHT method with the PLUS library demonstrated that DRHT outperformed the PLUS algorithm in terms of accuracy (0.89 mm vs. 0.92 mm) and efficiency (268 ms vs. 761 ms). Furthermore, high-precision calibration can be achieved with only four images, which greatly simplifies the calibration process and enhances the feasibility of the clinical application of this model.

1. Introduction

Currently, there are dozens of commercially available spinal surgery robots that rely on intraoperative imaging equipment, such as CT, for the guidance of robot-assisted spinal surgery [1,2,3]. However, these approaches involve radiation and cannot provide real-time access to the actual subcutaneous instrument location. Ultrasound scanning, with its real-time capabilities and absence of radiation, holds promise when combined with robotic systems to acquire real-time guidance [4,5].
The calibration of the spatial relationship between two-dimensional ultrasound images and three-dimensional space is a prerequisite for ultrasound-guided robotic-assisted surgery [6]. Three mainstream calibration methods exist: the constraint method, the image-based method, and the tracking method. The constraint method involves restricting the ultrasound probe motion using known and accurate motion relationships to stitch together ultrasound sequence frames. The motion constraint method often employs robots [7,8] or high-precision motors [9] to control the probe motion or uses mechanical constraints to ensure the probe captures fixed points [10]. The tracking method does not constrain the motion of the ultrasound probe but measures the position of the probe using tracking devices, such as optical tracking, electromagnetic tracking, and accelerometers [11], completing pose transformation through pre-calibrated transformation matrices.
The above methods are relatively complex or demanding in terms of the equipment required. In contrast, image-based methods, such as deep-learning-based matching methods [12,13,14,15,16] and Hough-transform-based image matching methods [17], neither constrain the motion of the ultrasound probe nor measure its orientation; instead, they infer the probe motion from correspondences between the ultrasound images themselves. Because ultrasound images suffer from severe distortion, extracting features to determine motion relationships is more difficult than for images of natural scenes, so the final accuracy is insufficient and these methods remain largely in the research phase.
The spatial calibration between the ultrasound probe and the localization device (referred to as ultrasound probe calibration) is a prerequisite for the tracking method, which mainly includes hand–eye calibration and multi-point calibration. Hand–eye calibration calculates the ultrasound probe calibration matrix by solving the matrix equation AX=XB, typically requiring movement at multiple positions. In 2020, Cai et al. [18] proposed a three-dimensional freehand ultrasound spatial calibration method based on independent general motion, also solved by the equation AX=XB, with an average angular error of 2.09° and an average distance error of 4.90 mm. However, this method often requires complex motion control and multiple trials, which can be time consuming and may accumulate errors due to the sequential nature of the process. Iommi et al. [19] published an evaluation study on three-dimensional ultrasound guidance, comparing two different 3D ultrasound modes, 3D freehand and 3D sweeping mode, to identify the most suitable mode for clinical guidance applications. Here, 2D ultrasound calibration was performed using pointer-based and image-based methods, with a final registration error of about 1 mm for both, while the 3D hand–eye calibration method had an error of around 2 mm. Although these 2D methods achieve reasonable accuracy, they do not fully address the challenges of 3D spatial calibration required for advanced robotic surgical guidance.
The multi-point calibration method is the most widely used; it employs models such as the visual probe model, cross-line model, N-line model, and 3D-printed model. The calibration is achieved by calculating the three-dimensional spatial position of the projected points in the model. Shen et al. [20] proposed a rapid and simple automatic 3D ultrasound probe calibration method based on a 3D-printed phantom and untracked markers, requiring only a single marker without the need for tracking it throughout the calibration process. The method adopts a calibration object with known dimensions, and the coordinates during the ultrasound calibration process are taken directly from the designed values. Its final calibration and three-dimensional reconstruction errors were 0.83 mm and 1.26 mm. While this method simplifies the calibration procedure, it relies on the accuracy of the 3D-printed model and may not account for variations in real-world imaging conditions. In 2020, Wen et al. [21] designed a calibration model combined with a visual probe for the spatial calibration of ultrasound probes. An infrared camera is used to track the spatial position of the probe tip, and the pixel positions of the probe in the two-dimensional ultrasound image are simultaneously extracted. The spatial position of the probe tip was modeled via a Gaussian distribution, and an iterative nearest neighbor optimization algorithm was used to estimate the spatial transformation between the two sets of points. The calibration can be completed within 5 min, with an error of less than 1 mm. However, this approach depends on the precision of the infrared camera and the accuracy of pixel position extraction, which can be affected by environmental factors and image quality.
Meza et al. [22] proposed a low-cost multimodal medical imaging system combining fringe projection profilometry and three-dimensional freehand ultrasound. They use a binocular camera system and an annular marker to track the handheld ultrasound device. In their ultrasound calibration model, they introduced two scale factors in the x and y directions and then calculated the homogeneous matrices through the LM algorithm to complete the calibration. However, the use of multiple sensors and the complexity of the LM algorithm may increase the computational load and affect the real-time performance of the system. Rong et al. [23] proposed a geometric calibration method for a freehand ultrasound system based on electromagnetic tracking. Their calibration process employs a multi-layer N-line model and a fully automatic image segmentation algorithm, achieving an average calibration error of 1.42 mm. Although this method improves automation, the electromagnetic tracking may be susceptible to interference from metallic objects in the operating environment. In 2021, Meza [24] proposed a three-dimensional multimodal medical imaging system based on freehand ultrasound and structured light, in which the pose of the ultrasound probe is optically estimated using convolutional neural networks. A CT-like scanning effect is achieved, with a layer thickness precision of 0.12 mm. However, convolutional neural networks require extensive training data and computational resources, and the generalization ability of the model across different devices and clinical settings needs further validation.
In 2023, Sahrmann et al. [9] proposed a repeatable three-dimensional ultrasound automatic measurement system for skeletal muscles that utilizes force sensors for position control and ensures good contact between the probe and the skin. The Z-line model was used for calibration, with the three-dimensional coordinates computed as above; the final test yielded a three-dimensional reconstruction accuracy error of 0.94 mm (0.23%) and a volume error of 0.08 mL (0.23%). While this system achieves high precision, it requires precise force control mechanisms, increasing the complexity and cost of the setup. In 2024, Harindranath et al. [25] introduced a manual 3D ultrasound imaging technique aided by an inertial measurement unit (IMU). This method involves motion-constrained sector scanning, where the ultrasound probe and IMU are mounted on a bracket capable of rotating in a single degree of freedom. The calibration process involves 100 s of random 3D spatial sensor data from the rotating IMU, and an accuracy of better than 2 mm is achieved. Although this approach introduces a novel sensing modality, the calibration process is relatively lengthy and the accuracy may be limited by the precision and stability of the IMU over time.
In this paper, a three-layer, 24-line calibration fixture based on the N-line model and a novel ultrasound probe calibration method are proposed. The method adopts a homography transformation to characterize the relationship between the dimension-reduced two-dimensional points and their correspondences in the ultrasound image, which effectively eliminates the scale factor and thus avoids the accumulation of errors.

2. Proposed Method

2.1. Overview of Ultrasound Probe Calibration

The commonly used ultrasound probe calibration model, illustrated in Figure 1, includes three coordinate systems: the global coordinate system {world}, the coordinate system of the tracking device attached to the probe {probe}, and the image coordinate system {I}. In ultrasound images, the pixel coordinates $q(u, v)$ correspond to a global point $Q(x, y, z)$, and the relationship between the two coordinates is:

$$\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = {}^{world}T_{probe}\; {}^{probe}T_{I} \begin{bmatrix} s_x u \\ s_y v \\ 0 \\ 1 \end{bmatrix} \tag{1}$$

where $s_x$ and $s_y$ are the scale factors in the x and y directions and ${}^{world}T_{probe}$ and ${}^{probe}T_{I}$ are the homogeneous transformation matrices from {probe} to {world} and from {I} to {probe}, respectively.
The above model neglects the local transformations of ultrasound images and only considers scale transformation. The purpose of ultrasound calibration is to determine the transformation matrix ${}^{probe}T_{I}$ and the scale factors $s_x$ and $s_y$. The introduction of $s_x$ and $s_y$ increases the computational complexity; thus, some researchers [26] have first calculated $s_x$ and $s_y$ from correspondences of 3D and 2D distances and then computed the transformation matrix ${}^{probe}T_{I}$. However, this sequential approach may lead to the accumulation of errors.
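For concreteness, the following is a minimal sketch of how Equation (1) maps a pixel to a world point once the transforms and scale factors are known; all names are hypothetical and NumPy is assumed.

```python
import numpy as np

def pixel_to_world(u, v, sx, sy, T_world_probe, T_probe_image):
    """Apply Equation (1): scale the pixel, then chain the two homogeneous transforms.
    T_world_probe maps {probe} to {world}; T_probe_image maps {I} to {probe}."""
    p_image = np.array([sx * u, sy * v, 0.0, 1.0])  # scaled homogeneous pixel
    return (T_world_probe @ T_probe_image @ p_image)[:3]
```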
Considering the characteristics of ultrasound imaging, the calibration points in the {probe} coordinate system form a plane in three-dimensional space. Therefore, the mapping relationship between image coordinates and three-dimensional coordinates can be accurately described by a homography matrix. In this paper, a three-tiered N-line ultrasound calibration model was designed and a novel calibration method for the ultrasound probe based on Dimensionality Reduction and Homography Transformation (referred to as DRHT) was proposed, as illustrated in Figure 2. In the calibration process, the spatial positions of both the ultrasound probe and the calibration model with visual markers can be tracked using a stereo camera. The N-line model is encoded in binary for line recognition. With the acquired ultrasound images and the constraints of the N-line model, the homography matrix can be computed from the projections of the calibration points, thereby completing the calibration process.
By employing the DRHT method, which initially reduces the three-dimensional points to two-dimensional image points, the computational complexity in the reconstruction phase is significantly reduced. It is only necessary to reconstruct the four vertices of the ultrasound image into three-dimensional points, after which the remaining points can be calculated through coordinate interpolation.

2.2. The Three-Layer N-Line Calibration Model

The traditional N-line model [27] consists of only 3 line segments, as depicted in Figure 3 by lines $\overline{P_0P_1}$, $\overline{P_1P_2}$, and $\overline{P_2P_3}$. Assume the intersections of the ultrasound imaging plane with the N-line model are $Q_i$ and their projections in the ultrasound image are $q_i$ (i = 0, 1, 2). Based on the principle of similar triangles, the coordinates of point $Q_1$ can be directly calculated by Equation (2).

$$\frac{|q_1 q_0|}{|q_2 q_0|} = \frac{|P_1 Q_1|}{|P_1 P_2|} \tag{2}$$
This results in a low utilization rate of the calibration points of only 1/3. To increase the number of usable calibration points and the utilization rate, thereby improving calibration accuracy while reducing the number of calibration images required, this paper extends the N-line model to a novel three-layer 24-line model, as shown in Figure 3. Within each single calibration plane, there are 8 calibration line segments (the solid lines). Thin lines denote “0” and thick lines denote “1”. From left to right, the columns respectively encode 000, 001, 010, …, 111, corresponding to the decimal numbers 0, 1, 2, …, 7. This design facilitates identification: as the ultrasound probe may not capture all points in every frame, distinguishing lines by their thickness under this encoding ensures that partial sets of points can still be identified and sorted.
The dashed lines represent the intersection of the ultrasound imaging plane with the N-line model, with the intersection points labeled $q_0$–$q_{23}$. When all intersection points $q_0$–$q_{23}$ are identified, a total of 18 points ($Q_1$–$Q_6$, $Q_9$–$Q_{14}$, $Q_{17}$–$Q_{22}$) can be directly calculated using the expanded N-line model, achieving a utilization rate of 3/4 for the calibration points. Excluding the points at both ends of each layer, which cannot be calculated, the following relationship holds for each of the identified $q_1$–$q_6$, $q_9$–$q_{14}$, and $q_{17}$–$q_{22}$:

$$\frac{|q_i q_{i-1}|}{|q_{i+1} q_{i-1}|} = \frac{|P_i Q_i|}{|P_i P_{i+1}|}, \quad i = 1, 2, \ldots, 22 \tag{3}$$

In the above equation, the ratio $\frac{|q_i q_{i-1}|}{|q_{i+1} q_{i-1}|}$, measured from pixel distances on the ultrasound image, equals the corresponding ratio in three-dimensional space. This ratio is crucial for establishing the proportional relationship between the actual physical dimensions and the corresponding pixel measurements within the imaging plane. With this proportional relationship, the three-dimensional coordinate $Q_i$ can be determined as:

$$Q_i = P_i + \frac{|q_i q_{i-1}|}{|q_{i+1} q_{i-1}|}\,(P_{i+1} - P_i) \tag{4}$$
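As a minimal sketch of Equations (3) and (4) (function and variable names are hypothetical), the pixel-distance ratio measured in the image is applied directly to the known 3D wire endpoints:

```python
import numpy as np

def nline_point(q_prev, q_i, q_next, P_i, P_next):
    """Recover the 3D intersection Q_i on the segment from P_i to P_next.
    q_prev, q_i, q_next: 2D pixel coordinates of adjacent projections;
    P_i, P_next: known 3D endpoints of the oblique wire."""
    ratio = np.linalg.norm(q_i - q_prev) / np.linalg.norm(q_next - q_prev)  # Eq. (3)
    return P_i + ratio * (P_next - P_i)                                      # Eq. (4)
```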
As the imaging region of the ultrasound is sector-shaped, some points of the N-line model, such as $q_0$ and $q_7$ in Figure 3, may occasionally be lost, which would affect point identification. Consequently, for the practical fabrication of the model, an encoding scheme based on the thickness of the lines is employed to facilitate the identification of the calibration line segments. A thin line is defined as the binary digit “0”, and a thick line as the binary digit “1”. The sequence of lines, from left to right, corresponds to a binary series, which serves to identify the detected points.

2.3. Transformation of Calibration Point

From Equation (4), the intersections $Q_i$ in the calibration tool coordinate system have been calculated. However, the ultrasound calibration requires the coordinates in the ultrasound probe coordinate system. Thus, we use a stereo camera to transform the calibration points $Q_i$ from the calibration tool coordinate system to the ultrasound probe coordinate system, as shown in Figure 1. Let the coordinates of a calibration point in the calibration tool and ultrasound probe marker coordinate systems be $Q_i = [x_i\ y_i\ z_i]^T$ and $Q_{mi} = [x_{mi}\ y_{mi}\ z_{mi}]^T$, respectively. Then, the coordinate $Q_{mi}$ can be obtained by the following equation:

$$\begin{bmatrix} Q_{mi} \\ 1 \end{bmatrix} = \left({}^{camera}T_{probe}\right)^{-1}\, {}^{camera}T_{calib} \begin{bmatrix} Q_i \\ 1 \end{bmatrix} \tag{5}$$
where ${}^{camera}T_{calib}$ and ${}^{camera}T_{probe}$ are the poses of the calibration tool and the ultrasound probe in the camera frame, which can be obtained simultaneously using the stereo camera. With the coordinates $Q_{mi}$ in the ultrasound probe marker coordinate system known, we can establish a new plane coordinate system and calculate the homography matrix between the two planes.
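A one-line sketch of Equation (5), assuming the two tracked poses are given as 4 × 4 homogeneous matrices (names hypothetical):

```python
import numpy as np

def calib_to_probe(Q_i, T_camera_probe, T_camera_calib):
    """Transform a 3D calibration point from the calibration-tool frame
    into the probe-marker frame using the two camera-tracked poses."""
    Q_h = np.append(Q_i, 1.0)  # homogeneous coordinates
    return (np.linalg.inv(T_camera_probe) @ T_camera_calib @ Q_h)[:3]
```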

2.4. Establishment of Plane Coordinate System

Since ultrasound imaging is inherently planar, all points are theoretically coplanar within the ultrasound probe marker coordinate system. Therefore, a planar homography can be employed to represent the mapping from the two-dimensional image coordinates to the three-dimensional coordinates in the marker coordinate system. However, this requires an initial transformation from the three-dimensional marker coordinate system to a two-dimensional plane coordinate system. Here, we project the three-dimensional points onto a plane within the marker coordinate system to achieve dimensionality reduction.
A least squares fitting method is utilized to fit a plane to the three-dimensional coordinates in the marker coordinate system. The general form of the plane equation is:

$$z = ax + by + c$$

where a, b, and c are constants.
As each three-dimensional point in the marker coordinate system satisfies the above equation, the problem can be converted into a least squares problem:

$$AX = b$$

where

$$A = \begin{bmatrix} x_{m1} & y_{m1} & 1 \\ x_{m2} & y_{m2} & 1 \\ \vdots & \vdots & \vdots \\ x_{mn} & y_{mn} & 1 \end{bmatrix}, \quad X = \begin{bmatrix} a \\ b \\ c \end{bmatrix}, \quad b = \begin{bmatrix} z_{m1} \\ z_{m2} \\ \vdots \\ z_{mn} \end{bmatrix}$$

and n is the number of identified points. By substituting the three-dimensional point coordinates in the marker coordinate system, a, b, and c can be solved, and the normal vector of this plane is $\mathbf{z} = [a\ \ b\ \ {-1}]^T$.
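The plane fit reduces to a single call to a linear least squares solver. A minimal sketch (hypothetical names), returning the coefficients and the normal:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (n, 3) array of 3D points.
    Returns (a, b, c) and the (unnormalized) plane normal [a, b, -1]."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, c = coeffs
    return (a, b, c), np.array([a, b, -1.0])
```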
Then, a coordinate system is established within this plane by selecting two points on it:

$$l_1 = \begin{bmatrix} 0 & 0 & c \end{bmatrix}^T, \quad l_2 = \begin{bmatrix} 10 & 0 & 10a + c \end{bmatrix}^T$$

Given $\mathbf{x} = l_2 - l_1$, the axis $\mathbf{y}$ can be computed from the cross product $\mathbf{y} = \mathbf{z} \times \mathbf{x}$. The rotation matrix R for the new coordinate system can then be derived as:

$$R = \begin{bmatrix} \dfrac{\mathbf{x}}{|\mathbf{x}|} & \dfrac{\mathbf{y}}{|\mathbf{y}|} & \dfrac{\mathbf{z}}{|\mathbf{z}|} \end{bmatrix}^T$$

By arbitrarily designating a point as the origin, assumed to be point $I_1$, the translation vector can be defined as:

$$t = -R\,I_1$$

Finally, the homogeneous transformation matrix $M_p$ from the marker coordinate system to the established plane coordinate system can be represented as:

$$M_p = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$$
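Putting the last two subsections together, a sketch of how $M_p$ could be assembled from the fitted plane (hypothetical names; the axis construction follows the text above):

```python
import numpy as np

def build_M_p(a, b, c, origin):
    """Assemble the marker-to-plane transform M_p from the plane z = a*x + b*y + c.
    origin: the 3D point chosen as the plane-frame origin (I_1 in the text)."""
    l1 = np.array([0.0, 0.0, c])
    l2 = np.array([10.0, 0.0, 10.0 * a + c])
    x = l2 - l1                          # in-plane x axis
    z = np.array([a, b, -1.0])           # plane normal
    y = np.cross(z, x)                   # in-plane y axis
    R = np.vstack([x / np.linalg.norm(x),
                   y / np.linalg.norm(y),
                   z / np.linalg.norm(z)])   # rows are the plane axes
    M_p = np.eye(4)
    M_p[:3, :3] = R
    M_p[:3, 3] = -R @ origin             # the chosen origin maps to (0, 0, 0)
    return M_p
```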

2.5. Calculation of the Homography Matrix

With matrix $M_p$, all three-dimensional points are projected onto a two-dimensional plane. Let the points within the two-dimensional plane be denoted as $s_i = [x_{si}\ y_{si}]^T$; then

$$\begin{bmatrix} x_{si} \\ y_{si} \\ k_i \\ 1 \end{bmatrix} = M_p \begin{bmatrix} x_{mi} \\ y_{mi} \\ z_{mi} \\ 1 \end{bmatrix}$$

where $k_i$ is the z coordinate after projection onto the plane, which should ideally be 0.
Having projected the three-dimensional coordinates onto the two-dimensional plane, we now proceed to determine the homography matrix $M_h$ that maps the ultrasound image points $[x_{oi}\ y_{oi}]^T$ to their correspondences in the plane coordinate system. The mapping is modeled as:

$$\begin{bmatrix} x_{si} \\ y_{si} \\ 1 \end{bmatrix} = \alpha M_h \begin{bmatrix} x_{oi} \\ y_{oi} \\ 1 \end{bmatrix} = \alpha \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x_{oi} \\ y_{oi} \\ 1 \end{bmatrix}$$
Here, α represents the scale transformation factor. Upon expansion and subsequent elimination of α by the third row, we have:

$$\begin{aligned} -h_{11}x_{oi} - h_{12}y_{oi} - h_{13} + h_{31}x_{si}x_{oi} + h_{32}x_{si}y_{oi} + h_{33}x_{si} &= 0 \\ -h_{21}x_{oi} - h_{22}y_{oi} - h_{23} + h_{31}x_{oi}y_{si} + h_{32}y_{oi}y_{si} + h_{33}y_{si} &= 0 \end{aligned}$$
The problem can be formulated as a least squares minimization, leading to the construction of the homogeneous system $Ax = 0$, where:

$$A = \begin{bmatrix} -x_{o1} & -y_{o1} & -1 & 0 & 0 & 0 & x_{o1}x_{s1} & y_{o1}x_{s1} & x_{s1} \\ 0 & 0 & 0 & -x_{o1} & -y_{o1} & -1 & x_{o1}y_{s1} & y_{o1}y_{s1} & y_{s1} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ -x_{on} & -y_{on} & -1 & 0 & 0 & 0 & x_{on}x_{sn} & y_{on}x_{sn} & x_{sn} \\ 0 & 0 & 0 & -x_{on} & -y_{on} & -1 & x_{on}y_{sn} & y_{on}y_{sn} & y_{sn} \end{bmatrix}$$

$$x = \begin{bmatrix} h_{11} & h_{12} & h_{13} & h_{21} & h_{22} & h_{23} & h_{31} & h_{32} & h_{33} \end{bmatrix}^T$$
$Ax = 0$ is an overdetermined system of equations, which in general has no exact solution. Therefore, we estimate the solution through Singular Value Decomposition (SVD). If a vector x satisfies $Ax = 0$, it remains a solution when multiplied by any scalar k. We may therefore add the constraint $\|x\| = 1$ and formulate a constrained optimization problem:

$$\arg\min_x \|Ax\|, \quad \text{s.t.}\ \|x\| = 1$$
Let the SVD of matrix A be:

$$A = UDV^T$$

Due to the norm-preserving property of orthogonal matrices, we can directly eliminate U from the SVD, which simplifies the expression to:

$$\arg\min \|Ax\| = \arg\min \|UDV^Tx\| = \arg\min \|DV^Tx\|$$

By defining $y = V^Tx$ (with $\|y\| = \|x\| = 1$, since V is orthogonal), the optimization problem is equivalently transformed into:

$$\arg\min \|Ax\| = \arg\min \|Dy\|$$

Matrix D is a diagonal matrix composed of the singular values of matrix A. When the singular values are arranged in descending order, $\|Dy\|$ is minimized by $y = [0\ \cdots\ 0\ 1]^T$.
Given that $x = Vy$ and that the columns of V are the right singular vectors of A (the eigenvectors of $A^TA$), it follows that x is the last column of V. This completes the calibration of the ultrasound probe: the matrices $M_h$ and $M_p$ are saved as the calibration parameters and can be used directly in subsequent three-dimensional ultrasound reconstruction.
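The whole of this subsection condenses to a standard direct linear transform (DLT) estimate. A minimal sketch (hypothetical names), stacking the two equations per correspondence and taking the right singular vector of the smallest singular value:

```python
import numpy as np

def estimate_M_h(img_pts, plane_pts):
    """DLT estimate of the homography M_h mapping image points to in-plane points.
    img_pts, plane_pts: (n, 2) arrays of corresponding 2D coordinates, n >= 4."""
    rows = []
    for (xo, yo), (xs, ys) in zip(img_pts, plane_pts):
        rows.append([-xo, -yo, -1, 0, 0, 0, xo * xs, yo * xs, xs])
        rows.append([0, 0, 0, -xo, -yo, -1, xo * ys, yo * ys, ys])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)      # singular values come back in descending order
    return Vt[-1].reshape(3, 3)      # last right singular vector, reshaped to 3 x 3
```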

2.6. Ultrasonic Three-Dimensional Reconstruction

After completing the previous steps for ultrasound probe calibration, it is then possible to convert any two-dimensional point coordinate on the ultrasound image into a three-dimensional coordinate within the ultrasound probe marker coordinate system during subsequent processes, which is a fundamental feature required by all algorithms for three-dimensional ultrasound reconstruction. However, calibration methods such as the one illustrated in Equation (1) involve a computationally intensive process where each point must be individually multiplied by the calibration matrix to derive the three-dimensional coordinates. When handling a large series of ultrasound images, this results in a substantial computational load, which is not conducive to intraoperative use.
In contrast, the DRHT algorithm in this paper is centered around the concept of dimensionality reduction: three-dimensional points are projected onto two-dimensional points, followed by a homography transformation between two-dimensional images. Thus, the reconstruction can be viewed as a transformation of planar images, which enables the quick computation of the coordinates of all points. As depicted in Figure 4, for an image with dimensions of n by m pixels, the initial step involves calculating the three-dimensional coordinates of the three image vertices $I_{11}$, $I_{1n}$, and $I_{m1}$.
Let a two-dimensional image point be represented as $o_i = [x_{oi}\ y_{oi}]^T$ and its correspondence in the plane coordinate system as $s_i = [x_{si}\ y_{si}\ 0]^T$. According to the principles of homography transformation, the relationship between these coordinates can be expressed as:

$$\begin{bmatrix} kx_{si} \\ ky_{si} \\ k \end{bmatrix} = M_h \begin{bmatrix} x_{oi} \\ y_{oi} \\ 1 \end{bmatrix}$$

Utilizing the transformation matrix $M_p$ from the ultrasound marker coordinate system to the plane coordinate system, the corresponding three-dimensional point $m_i = [x_{mi}\ y_{mi}\ z_{mi}]^T$ in the ultrasound probe marker coordinate system is recovered as:

$$\begin{bmatrix} x_{mi} \\ y_{mi} \\ z_{mi} \\ 1 \end{bmatrix} = M_p^{-1} \begin{bmatrix} x_{si} \\ y_{si} \\ 0 \\ 1 \end{bmatrix}$$
After calculating the coordinates of the vertices $I_{11}$, $I_{1n}$, and $I_{m1}$, the coordinates of all other points can be obtained through interpolation based on these three points. To proceed, we calculate the per-pixel step vectors along the x and y directions of the three-dimensional image:

$$v_x = \frac{I_{1n} - I_{11}}{n - 1}$$

$$v_y = \frac{I_{m1} - I_{11}}{m - 1}$$

The coordinate of point $I_{ij}$ then follows as:

$$I_{ij} = I_{11} + (i-1)\,v_y + (j-1)\,v_x$$
Through the above steps, the three-dimensional coordinates of all two-dimensional points in the ultrasound image can be computed rapidly in the probe marker coordinate system and, via the tracked probe pose, in the camera coordinate system.
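A sketch of the fast reconstruction (hypothetical names; M_p_inv denotes the inverse of $M_p$): only three corners pass through $M_h$ and $M_p^{-1}$, and the full grid is then filled by broadcast interpolation.

```python
import numpy as np

def reconstruct_grid(M_h, M_p_inv, m, n):
    """Lift every pixel of an m x n image to 3D marker coordinates using only
    three corner reconstructions plus interpolation (pixel indices 1-based)."""
    def lift(xo, yo):
        s = M_h @ np.array([xo, yo, 1.0])
        s /= s[2]                                      # remove the scale factor k
        return (M_p_inv @ np.array([s[0], s[1], 0.0, 1.0]))[:3]
    I11, I1n, Im1 = lift(1, 1), lift(n, 1), lift(1, m)
    vx = (I1n - I11) / (n - 1)                         # per-pixel step along x
    vy = (Im1 - I11) / (m - 1)                         # per-pixel step along y
    i, j = np.mgrid[1:m + 1, 1:n + 1]                  # all (i, j) indices at once
    return I11 + (i - 1)[..., None] * vy + (j - 1)[..., None] * vx
```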

3. Experiments

3.1. Ultrasound Probe Calibration Scenario Setup

To validate the method described in this paper, an ultrasound probe calibration scenario was constructed, as shown in Figure 5. It includes a binocular camera DHST-1 (Beijing Dahua Wangda Technology Co., Ltd., Beijing, China) with a resolution of 1600 × 1200 pixels, a frame rate above 40 fps, and a positioning error of less than 0.3 mm. The ultrasound equipment is the DW360 (Dawei Medical Co., Ltd., Xuzhou, China), with a linear array probe selected and set to a detection depth of 9 cm. The laptop used for data processing is a DELL G7 7590 (Dell Inc., Round Rock, TX, USA), equipped with an Intel(R) Core(TM) i7-8750H processor (Intel Corporation, Santa Clara, CA, USA), an NVIDIA GeForce RTX 2060 graphics card (NVIDIA Corporation, Santa Clara, CA, USA), and 16 GB of RAM. An image acquisition card UB530 (TCHD Digital Video Technology Development (Beijing) Co., Ltd., Beijing, China) captures images from the ultrasound device and transmits them in real time to the workstation for immediate processing.
The calibration fixture was fabricated using a photo-curable resin material through 3D printing. The black parts of the visual markers were printed using black nylon and assembled afterward. The thin calibration lines were created with nylon threads of 0.2 mm in diameter, while the thick lines were wrapped by a 0.8 mm thick sleeve over the thin threads. To ensure the installation accuracy of the nylon threads, dumbbell-shaped notches were set on the installation columns. After assembly, the nylon threads were tightened and secured at both ends with screws.
The designed coordinates (x, y, z) of the line segment vertices in the calibration marker coordinate system are presented in Table 1. From top to bottom are the eight segment vertices of the first, second, and third layers, corresponding to $P_0$–$P_{23}$ in Figure 3.

3.2. Image Coordinate Extraction

After acquiring the ultrasound images through image capture, the process of interactive point selection is carried out on a software platform developed using Visual Studio 2019, QT 5.12.9, OpenGL 4.5, OpenCV 3.3, Plus 2.8.0, and Vtk 8.0.1, as depicted in Figure 6. Initially, the user identifies the point to be selected on the interface shown in Figure 6a, then clicks on the center of the projected point on the image to complete the selection, repeating this process until all points are selected. As illustrated in Figure 6b, the first and second rows of markers are complete, with all eight points recognizable, making the selection process relatively straightforward in this case. However, in Figure 6c, the first row contains only seven points and the second row contains only six points. Under these circumstances, it is necessary to rely on the size of the projection to discern the points and prevent any errors in selection.
After the point selection, all identified calibration points from the images are consolidated. Utilizing the method described in this paper, the matrices M p and M h are calculated and saved as the calibration results.

3.3. Error Determination

Evaluating the quality of the model presented in this paper requires a comprehensive and objective approach. Merely comparing the results with those from the literature may not yield an unbiased assessment. Factors such as the number of calibration points, the quality of the equipment (ultrasound and navigation devices), the precision of 3D printing, and the errors in point selection can significantly affect the evaluation of the results. Therefore, we use the same dataset for calibration with different methods and then compare their respective calibration errors.
In this paper, we compare the calibration accuracy with the Plus Toolkit [28], an open-source toolkit for data acquisition, preprocessing, and calibration in ultrasound-guided interventions. Continuously updated since 2011, it has become one of the most authoritative open-source libraries in the field of ultrasound-guided imaging. The function used is Compute Image To Probe Transform By Linear Least Squares Method, and its calibration model is shown in Equation (1).
The calibration error is determined as follows: the spatial coordinates of each calibration point are computed in two ways, first by projecting the 2D calibration points to 3D spatial coordinates $m_i = [x_{mi}\ y_{mi}\ z_{mi}]^T$ using the calibration result, and second directly from the N-line model, giving $m'_i = [x'_{mi}\ y'_{mi}\ z'_{mi}]^T$. The average distance between the two sets of coordinates is taken as the calibration error, denoted as:

$$e = \frac{1}{n}\sum_{i=1}^{n} \sqrt{(x_{mi} - x'_{mi})^2 + (y_{mi} - y'_{mi})^2 + (z_{mi} - z'_{mi})^2}$$
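The error metric itself is a one-liner given the two point sets (a sketch; names hypothetical):

```python
import numpy as np

def calibration_error(m_proj, m_nline):
    """Mean Euclidean distance between the projected points and the
    N-line ground-truth points, both passed as (n, 3) arrays."""
    return np.linalg.norm(m_proj - m_nline, axis=1).mean()
```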

3.4. Calibration Experiment

We conducted a total of three calibration experiments, collecting datasets consisting of 3, 9, and 14 groups, respectively. Each dataset included ultrasound images, the pose ${}^{camera}T_{calib}$ of the calibration fixture marker in the camera coordinate system, and the pose ${}^{camera}T_{probe}$ of the ultrasound probe marker. The calibration errors for the three experiments are presented in Table 2.
Analysis of the three sets of data revealed that the DRHT algorithm demonstrated an error of 0.86 mm, which was superior to the 1.05 mm error of the PLUS algorithm. The DRHT also showed an advantage in terms of variance. Consequently, for the same data, the DRHT outperforms the algorithms in the PLUS library in terms of accuracy. It is worth noting that the DRHT computation did not employ any nonlinear optimization; thus, there is potential for further improvement in accuracy if optimization algorithms such as the Levenberg–Marquardt (LM) method are utilized.
From another perspective, the calibration errors of the three experiments were not significantly different, with the error using three sets of data being nearly equivalent to that using fourteen sets of data. To further explore the robustness of the DRHT and to determine the minimum number of images required for calibration, we designed a second experiment.
In this experiment, we collected 20 sets of data and conducted 17 trials, using 1 to 17 sets of data for calibration and the remaining 3 sets for validation, with the validation data not used for calibration. To ensure the credibility of the experiment, both the calibration and validation data were randomly selected. The specific implementation involved reshuffling the 20 sets of data for each trial, using the first n sets for calibration in the nth trial, and the last 3 sets for validation. Ultimately, four metrics were measured: the calibration error and validation error for DRHT, as well as the calibration error and validation error for the PLUS library.
Figure 7 illustrates the results of the 17 calibration experiments plotted as curves. It can be observed that with only a single calibration image, the calibration error was less than 0.5 mm, yet the validation error approximated 1.2 mm, which means the calibration model was somewhat overfitted at this point. When more than four images were used, the calibration error increased and eventually stabilized. The average calibration error for DRHT was 0.8940 mm; for PLUS, it was 0.9212 mm. The validation error remained stable below 1.3 mm, with DRHT’s validation error at 1.0272 mm and PLUS’s at 1.0629 mm. In terms of both calibration and validation accuracy, DRHT outperformed PLUS. From another perspective, when both calibration methods use over four images, their calibration and validation errors fluctuate between 0.8 mm and 1.2 mm, showing similar trends. This is because manual selection of all calibration points introduces subjectivity, leading to result fluctuations. While automated calibration point selection could address this, it is beyond this paper’s scope, so will not be described here.
Despite the inherent randomness in calibration procedures and data, notable fluctuations in the validation error were observed, including instances where the error dipped below 0.8 mm in the 6th and 12th datasets. Nevertheless, a discernible trend emerged: as the quantity of calibration data increased, the variability in the calibration error diminished and eventually reached a stable state.
From the first experiment, it was evident that the DRHT model proposed in this paper was feasible in terms of validation accuracy. Even without using a nonlinear optimization algorithm, the results were still better than the calibration algorithm of the PLUS library. In the second experiment, a cross-validation method was employed to compute the calibration and validation errors for both algorithms. The results showed that with the three-tier 24-line calibration model proposed in this paper, only four images were needed to provide sufficient calibration information, thus completing the calibration. Compared to methods that required 100 images [25] or thousands of images [29] for calibration, this approach offered a significant advantage.

3.5. Three-Dimensional Reconstruction Experiment

In the three-dimensional reconstruction experiment, five groups of ultrasonic datasets were collected from fingers, spinal model bone, and 3D-printed parts, as shown in the first row of Figure 8. In each dataset, about 300 to 700 ultrasonic images were collected for ultrasonic three-dimensional reconstruction. The original images acquired were 1920 × 1080, and the area of interest was 700 × 508.
In the three-dimensional reconstruction experiment, the proposed reconstruction method and the PLUS method were used for ultrasonic three-dimensional reconstruction to evaluate their reconstruction performance. The PLUS reconstruction multiplies each image point by the calibration matrix obtained from PLUS. The third row of Figure 8 shows the 3D reconstruction results of the PLUS algorithm, and the fourth row shows the results of the algorithm in this paper. As can be seen from the figure, there is no significant difference between the reconstruction results of the two algorithms. Therefore, it can be preliminarily concluded that the dimensionality reduction process of the DRHT algorithm has no significant effect on the accuracy of 3D reconstruction.
The overall reconstruction time is shown in Figure 9. The average 3D reconstruction time for the DRHT method is 268 ms, while that for the PLUS method is 761 ms. The results show that the reconstruction time of the proposed method is about 1/3 of the PLUS method, indicating that the reconstruction efficiency of the proposed method is significantly superior to that of the PLUS method.

4. Conclusions

Three-dimensional ultrasound reconstruction has long been a focal point of research because ultrasound can detect tissues beneath the skin without radiation. However, the information provided to physicians by 2D ultrasound imaging is quite limited and lacks a global perspective, which is not conducive to guiding surgical robots. In contrast, reconstructing a series of two-dimensional ultrasound images into a three-dimensional image can greatly expand the field of view, providing more comprehensive information for surgical procedures. In orthopedic surgery, ultrasound 3D reconstruction offers real-time and radiation-free navigation, boosts surgical precision, supports minimally invasive techniques, reduces risks, and improves efficiency. For tumor resection, it aids in preoperative planning, enables intraoperative navigation and precise excision, and is useful for postoperative evaluation. To achieve this, precise calibration of the ultrasound probe is a prerequisite. The DRHT method proposed in this paper is a calibration technique for ultrasound probes that simplifies computations and enhances accuracy.
The contributions of this paper are as follows. Firstly, we propose a novel ultrasound probe calibration method based on dimensionality reduction and homography transformation. This method maps three-dimensional spatial points to two-dimensional points by fitting the ultrasound imaging plane and then uses a homography transformation to characterize the relationship between the reduced two-dimensional points and the ultrasound image. This approach eliminates the scale factor commonly found in traditional calibration models, avoiding the amplification of calibration errors caused by inaccurate scale factor estimation. The effectiveness of the model is verified by comparison with the calibration algorithms in the PLUS library. Secondly, we designed a three-layer, 24-line calibration fixture based on the N-line model, encoded in binary for easy identification. A single image can effectively identify 18 calibration points, and experiments confirmed that only four images are needed to complete the calibration, greatly simplifying the ultrasound calibration process and reducing calibration costs. Ultimately, the average calibration error of the DRHT algorithm is 0.95 mm, and the average validation error is 1.02 mm. Since the calibration data are currently collected manually, further improvements in accuracy are expected if a robotic arm is used for data collection.
In future work, we plan to adopt a nonlinear optimization algorithm to optimize the DRHT algorithm, further enhancing the model’s accuracy. We will also develop an automatic calibration point recognition method to achieve automatic calibration, reducing the impact of manual point selection on calibration errors.

Author Contributions

Conception and design of the study: X.J.; development of methodology: X.J.; acquisition of data: Y.Z.; analysis and interpretation of data: H.S.; drafting the manuscript: X.J. and L.H.; technical support and material support: X.Q. and W.L.; supervision and funding acquisition: W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of China (52375035), the Beijing Natural Science Foundation—Fengtai Innovation Joint Fund (L241072), the Beijing-Tianjin-Hebei Basic Research Cooperation Special Project (J230020), the Guangdong Basic and Applied Basic Research Foundation under Grant (2024A1515030011), and the Shenzhen Science and Technology Program under Grants (JSGG20220831100202004, JCYJ20220818101412026).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Thanks to Beijing Zhuzheng Robot Co., Ltd. for providing equipment and site support for this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Perfetti, D.C.; Kisinde, S.; Rogers-LaVanne, M.P.; Satin, A.M.; Lieberman, I.H. Robotic spine surgery: Past, present, and future. Spine 2022, 47, 909–921. [Google Scholar] [CrossRef] [PubMed]
  2. Liu, G.; Qi, X.; Li, M.; Gao, Y.; Hu, Y.; Hu, L.; Zhao, Y. An Automatic Cutting Plane Planning Method Based on Multi-Objective Optimization for Robot-Assisted Laminectomy Surgery. IEEE Robot. Autom. Lett. 2025, 10, 2343–2350. [Google Scholar] [CrossRef]
  3. Qi, X.; Meng, J.; Li, M.; Yang, Y.; Hu, Y.; Li, B.; Zhang, J.; Tian, W. An Automatic Path Planning Method of Pedicle Screw Placement Based on Preoperative CT Images. IEEE Trans. Med. Robot. Bionics 2022, 4, 403–413. [Google Scholar] [CrossRef]
  4. Guerra, F.; Guadagni, S.; Pesi, B.; Furbetta, N.; Di Franco, G.; Palmeri, M.; Morelli, L. Outcomes of robotic liver resections for colorectal liver metastases. A multi-institutional analysis of minimally invasive ultrasound-guided robotic surgery. Surg. Oncol. 2019, 28, 14–18. [Google Scholar] [CrossRef]
  5. Faoro, G.; Maglio, S.; Pane, S.; Iacovacci, V.; Menciassi, A. An artificial intelligence-aided robotic platform for ultrasound-guided transcarotid revascularization. IEEE Robot. Autom. Lett. 2023, 8, 2349–2356. [Google Scholar] [CrossRef]
  6. Pavone, M.; Seeliger, B.; Teodorico, E.; Goglia, M.; Taliento, C.; Bizzarri, N.; Querleu, D. Ultrasound-guided robotic surgical procedures: A systematic review. Surg. Endosc. 2024, 38, 2359–2370. [Google Scholar] [CrossRef]
  7. Groenhuis, V.; Nikolaev, A.; Nies, S.H.; Welleweerd, M.K.; de Jong, L.; Hansen, H.H.; Stramigioli, S. 3-D Ultrasound Elastography Reconstruction Using Acoustically Transparent Pressure Sensor on Robotic Arm. IEEE Trans. Med. Robot. Bionics 2020, 3, 265–268. [Google Scholar] [CrossRef]
  8. Davoodi, A.; Li, R.; Cai, Y.; Niu, K.; Borghesan, G.; Vander Poorten, E. A Comparative Study for Control of Semi-Automatic Robotic-Assisted Ultrasound System in Spine Surgery. In Proceedings of the 21st International Conference on Advanced Robotics (ICAR), Abu Dhabi, United Arab Emirates, 5–8 December 2023; pp. 303–310. [Google Scholar]
  9. Sahrmann, A.S.; Handsfield, G.G.; Gizzi, L.; Gerlach, J.; Verl, A.; Besier, T.F.; Röhrle, O. A system for reproducible 3D ultrasound measurements of skeletal muscles. IEEE Trans. Biomed. Eng. 2024, 71, 2022–2032. [Google Scholar] [CrossRef]
  10. Melvær, E.L.; Mørken, K.; Samset, E. A motion constrained cross-wire phantom for tracked 2D ultrasound calibration. Int. J. Comput. Assist. Radiol. Surg. 2012, 7, 611–620. [Google Scholar] [CrossRef]
  11. Mozaffari, M.H.; Lee, W.S. Freehand 3-D ultrasound imaging: A systematic review. Ultrasound Med. Biol. 2017, 43, 2099–2124. [Google Scholar] [CrossRef]
  12. Luo, M.; Yang, X.; Huang, X.; Huang, Y.; Zou, Y.; Hu, X.; Ni, D. Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound Reconstruction. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI, Strasbourg, France, 27 September–1 October 2021; pp. 201–210. [Google Scholar]
  13. Luo, M.; Yang, X.; Wang, H.; Du, L.; Ni, D. Deep Motion Network for Freehand 3D Ultrasound Reconstruction. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention-MICCAI, Singapore, 18–22 September 2022; pp. 290–299. [Google Scholar]
  14. Li, Q.; Shen, Z.; Li, Q.; Barratt, D.C.; Dowrick, T.; Clarkson, M.J.; Hu, Y. Long-term Dependency for 3D Reconstruction of Freehand Ultrasound Without External Tracker. IEEE Trans. Biomed. Eng. 2023, 71, 1033–1042. [Google Scholar] [CrossRef] [PubMed]
  15. Chen, X.; Chen, H.; Peng, Y.; Liu, L.; Huang, C. A Freehand 3D Ultrasound Reconstruction Method Based on Deep Learning. Electronics 2023, 12, 1527. [Google Scholar] [CrossRef]
  16. Guo, H.; Chao, H.; Xu, S.; Wood, B.J.; Wang, J.; Yan, P. Ultrasound volume reconstruction from freehand scans without tracking. IEEE Trans. Biomed. Eng. 2022, 70, 970–979. [Google Scholar] [CrossRef] [PubMed]
  17. Chen, X.; Chen, H.; Peng, Y.; Tao, D. Probe Sector Matching for Freehand 3D Ultrasound Reconstruction. Sensors 2020, 20, 3146. [Google Scholar] [CrossRef]
  18. Cai, Q.; Lu, J.Y.; Peng, C.; Prieto, J.C.; Rosenbaum, A.J.; Stringer, J.S.; Jiang, X. Spatial Calibration for 3D Freehand Ultrasound Via Independent General Motions. In Proceedings of the 2020 IEEE International Ultrasonics Symposium (IUS), Las Vegas, NV, USA, 7–11 September 2020; pp. 1–3. [Google Scholar]
  19. Iommi, D.; Hummel, J.; Figl, M.L. Evaluation of 3D ultrasound for image guidance. PLoS ONE 2020, 15, e0229441. [Google Scholar] [CrossRef]
  20. Shen, J.; Zemiti, N.; Dillenseger, J.L.; Poignet, P. Fast and Simple Automatic 3D Ultrasound Probe Calibration Based on 3D Printed Phantom and an Untracked Marker. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 878–882. [Google Scholar]
  21. Wen, T.; Wang, C.; Zhang, Y.; Zhou, S. A novel ultrasound probe spatial calibration method using a combined phantom and stylus. Ultrasound Med. Biol. 2020, 46, 2079–2089. [Google Scholar] [CrossRef]
  22. Meza, J.; Simarra, P.; Contreras-Ojeda, S.; Romero, L.A.; Contreras-Ortiz, S.H.; Cosío, F.A.; Marrugo, A.G. A Low-Cost Multi-Modal Medical Imaging System with Fringe Projection Profilometry and 3D Freehand Ultrasound. In Proceedings of the 15th International Symposium on Medical Information Processing and Analysis, Medellín, Colombia, 3 January 2020; pp. 14–26. [Google Scholar]
  23. Rong, J.; Lu, Y.; Qin, S.; Zeng, R. Geometric calibration of freehand ultrasound system with electromagnetic tracking. Acad. J. Comput. Inf. Sci. 2020, 3, 117–128. [Google Scholar]
  24. Meza, J.; Contreras-Ortiz, S.H.; Romero, L.A.; Marrugo, A.G. Three-dimensional multimodal medical imaging system based on freehand ultrasound and structured light. Opt. Eng. 2021, 60, 054106. [Google Scholar] [CrossRef]
  25. Harindranath, A.; Shah, K.; Devadass, D.; George, A.; Banerjee Krishnan, K.; Arora, M. IMU-Assisted Manual 3D-Ultrasound Imaging Using Motion-Constrained Swept-Fan Scans. Ultrason. Imaging 2024, 46, 164–177. [Google Scholar] [CrossRef]
  26. Zheng, X.; Lei, L.; Zhao, B.; Li, S.; Wu, Y.; Hu, Y. A Feasibility Study of Renal Puncture Localization Method Based on 2D Ultrasound-3D CT Registration. J. Integr. Technol. 2020, 9, 29–39. [Google Scholar]
  27. Pagoulatos, N.; Haynor, D.R.; Kim, Y. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer. Ultrasound Med. Biol. 2001, 27, 1219–1229. [Google Scholar] [CrossRef]
  28. Lasso, A.; Heffter, T.; Rankin, A.; Pinter, C.; Ungi, T.; Fichtinger, G. PLUS: Open-source toolkit for ultrasound-guided intervention systems. IEEE Trans. Biomed. Eng. 2014, 61, 2527–2537. [Google Scholar] [CrossRef]
  29. Li, R.; Cai, Y.; Niu, K.; Vander Poorten, E. Comparative Quantitative Analysis of Robotic Ultrasound Image Calibration Methods. In Proceedings of the 2021 20th International Conference on Advanced Robotics (ICAR), Ljubljana, Slovenia, 6–10 December 2021; pp. 511–516. [Google Scholar]
Figure 1. Coordinate system relationships for probe calibration.
Figure 2. DRHT calibration flow chart. A three-layer, 24-line calibration fixture is designed, and the poses of the ultrasound probe and calibration fixture can be obtained by visual markers. The 3D coordinates of the calibration points are calculated via the N-line model and then reduced into a 2D plane. Finally, the homography transformation between the reduced points and the ultrasound image plane is retrieved, thereby completing the calibration process.
Figure 3. Expanded N-line model.
Figure 4. Fast 3D reconstruction through image transformation.
Figure 5. Ultrasound probe calibration setup.
Figure 6. Selection of calibration points. (a) The projected points of the designed N-line model. (b) Recognition of N-line model projections, with all points recognizable. (c) Recognition of N-line model projections, with some points missing.
Figure 7. Results of 17 calibration experiments.
Figure 8. Images of the dataset models (first row), sequences of cropped ultrasound images (second row), ultrasonic three-dimensional reconstruction results from PLUS (third row), and results from our model (fourth row). Columns 1–5 show the images and results for the different objects.
Figure 9. The reconstruction time of the PLUS and proposed methods.
Table 1. Coordinates of the line segment vertices in the calibration marker coordinate system.

Layer  Coord.  Col. 0  Col. 1  Col. 2  Col. 3  Col. 4  Col. 5  Col. 6  Col. 7
1      x       29.5    29.5    29.5    29.5    29.5    29.5    29.5    29.5
1      y       18.5    23.5    28.5    33.5    38.5    43.5    48.5    53.5
1      z       −68.2   0.24    −68.2   0.24    −68.2   0.24    −68.2   0.24
2      x       44.5    44.5    44.5    44.5    44.5    44.5    44.5    44.5
2      y       18.5    23.5    28.5    33.5    38.5    43.5    48.5    53.5
2      z       −68.2   0.24    −68.2   0.24    −68.2   0.24    −68.2   0.24
3      x       59.5    59.5    59.5    59.5    59.5    59.5    59.5    59.5
3      y       18.5    23.5    28.5    33.5    38.5    43.5    48.5    53.5
3      z       −68.2   0.24    −68.2   0.24    −68.2   0.24    −68.2   0.24

Table 2. Calibration errors of the three groups of experiments.

Image Num        3               9               14
Error of PLUS    1.15 ± 0.88 mm  1.12 ± 0.82 mm  1.05 ± 0.63 mm
Error of DRHT    1.09 ± 0.77 mm  0.98 ± 0.76 mm  0.86 ± 0.54 mm


