Article

A Precise Calibration Method for the Robot-Assisted Percutaneous Puncture System

1
College of Mechanical and Electrical Engineering, Shaanxi University of Science & Technology, Xi’an 710021, China
2
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(23), 4857; https://doi.org/10.3390/electronics12234857
Submission received: 22 October 2023 / Revised: 14 November 2023 / Accepted: 27 November 2023 / Published: 1 December 2023
(This article belongs to the Section Computer Science & Engineering)

Abstract

The precision and stability of the Robot-Assisted Percutaneous Puncture (RAPP) system have become increasingly crucial with the widespread integration of robotic technology in the field of medicine. The accurate calibration of the RAPP system prior to surgery significantly influences target positioning performance. This study proposes a novel system calibration method that simultaneously addresses system hand–eye calibration and robot kinematic parameters calibration, thereby enhancing the surgery success rate and ensuring patient safety. Initially, a Closed-loop Hand–eye Calibration (CHC) method is employed to rapidly establish transformation relationships among system components. These CHC results are then integrated with nominal robot kinematic parameters to preliminarily determine the system calibration parameters. Subsequently, a hybrid algorithm, combining the regularized Levenberg–Marquardt (LM) algorithm and a particle filtering algorithm, is utilized to accurately estimate the system calibration parameters in stages. Numerical simulations and puncture experiments were conducted using the proposed system calibration method and other comparative methods. The experimental results revealed that, among several comparative methods, the approach presented in this paper yields the greatest improvement in the puncture accuracy of the RAPP system, demonstrating the accuracy and effectiveness of this method. In conclusion, this calibration method significantly contributes to enhancing the precision, operational capability, and safety of the RAPP system in practical applications.

1. Introduction

Percutaneous puncture surgery, as a minimally invasive medical technique, has found widespread acceptance in clinical settings [1,2]. Traditionally, this approach heavily relies on imaging technologies such as ultrasound, CT, or MRI to guide practitioners in manually manipulating the puncture needle and achieving the desired target position. While this method demands excellent hand–eye coordination and extensive operational experience, concerns about low accuracy and prolonged operation time may occur during puncture surgery [3]. The development of robot technology has led to the emergence of the RAPP system, which provides a more promising alternative for puncture surgery [4,5,6].
The RAPP system integrates cutting-edge visual tracking, 3D reconstruction, and motion control technology, enabling precise manipulation of surgical instruments and enhancing the accuracy of percutaneous puncture operations. Due to its outstanding positioning accuracy and motion stability, the RAPP system has gained popularity in medical interventions, specifically, in neurosurgery and orthopedics [7,8,9]. A typical RAPP configuration includes medical imaging equipment, a multi-degree-of-freedom serial robotic arm equipped with surgical instruments, an external measurement device, and a system control center. External measuring devices allow real-time tracking of surgical instruments and the patient’s body, assisting the doctor in determining the appropriate puncture location. The robotic arm, supervised by the system control center, is tasked with transporting the surgical instrument to the designated target puncture location. Obviously, the accuracy of the puncture surgery depends on the precision of the RAPP system. Puncture errors while using the RAPP system can be attributed to two factors: errors in robot kinematic parameters and hand–eye calibration. Within these factors, errors in kinematic parameters primarily stem from manufacturing and assembly tolerances, wear and deformation of the robot’s structure, and geometric irregularities of the robot (e.g., orthogonality and parallelism). These errors can lead to absolute positioning errors and trajectory-planning deviations of the robotic arm, affecting the subsequent accuracy of the RAPP system’s target positioning. Research has proven that errors in kinematic parameters are the primary cause of positioning errors [10]. Additionally, the hand–eye calibration determines the homogeneous transformation matrix between the robot and the external measurement device, as well as between the surgical instrument and the robot flange. 
Inaccuracies in hand–eye calibration can result in inconsistencies between the actual and observed positions of surgical instruments, which can then lead to errors in the RAPP system. Therefore, precise calibration of the robot’s kinematic parameters and a hand–eye calibration before surgery are crucial.
Considerable research has been conducted to address the calibration problems of robot systems. Various scholars have enhanced the absolute positioning accuracy of robot systems by employing different hand–eye calibration methods. Qin et al. [11] proposed a hand–eye, flange–tool, and robot–robot calibration method for the dual-robot collaborative system, effectively achieving high-precision calibration with minimal data using a closed-form dual quaternion method and an iterative method based on the LM algorithm. Li et al. [12] introduced an extended robot–world and hand–eye calibration method, establishing a mathematical model for rigid transformation and utilizing sparse bundle adjustment for optimization, reducing the average robot measurement error to 0.13 mm. Zheng et al. [13] proposed a multi-loop calibration method involving simple motion strategies, an optical tracking system (OTS), and a closed-form solution for three-dimensional position capture of the surgical tool. These methods improve the robot system’s positioning accuracy through hand–eye calibration, but they neglect the positioning error caused by inaccurate robot kinematic parameters, preventing the achievement of optimal absolute positioning accuracy for the robot system.
Previous studies have also addressed the calibration of robot kinematic parameters. Boby et al. [14] proposed a combination of geometric and parametric methods for kinematic identification of industrial robots, employing a divide-and-conquer strategy to alleviate the possibility of ill-conditioned Jacobian matrices in the identification process. Mao et al. [15] transformed the kinematic calibration problem into a separable nonlinear least-squares problem, enhancing the robustness of the system calibration process and reducing the average running time. Wang et al. [16] introduced a closed-loop kinematic calibration method for 6R robots, utilizing point and distance constraints with the aid of machine vision to improve absolute position accuracy. Xu et al. [17] presented a novel method based on the improved manta-ray foraging optimization (MRFO) algorithm, offering advantages such as high accuracy and fast convergence speed. Omodei et al. [18] employed the least-squares algorithm, linear iterative algorithm, and extended Kalman filter (EKF) algorithm to identify the kinematic parameters of SCARA robots, comparing the effectiveness of these three identification algorithms and finding the extended Kalman filter algorithm to be the most effective. While these methods successfully calibrate the robot’s kinematic parameters, they solely consider the kinematic parameters of the robot and do not account for the estimation of the calibration parameters of the system’s hand–eye coordination, rendering them inadequate for RAPP systems. Hence, it is crucial to explore new approaches capable of simultaneously calibrating hand–eye and robot kinematic parameters to enhance the accuracy and efficiency of the RAPP system.
This paper presents a novel method for calibrating the RAPP system. The calibration process begins with a closed-loop hand–eye calibration (CHC) method to achieve preliminary hand–eye calibration. Subsequently, an error model for the system is established to simultaneously estimate both the robot’s kinematic parameters and hand–eye calibration parameters. Additionally, a new parameter-identification method is proposed based on this error model. To improve noise robustness and prevent overfitting, an L1 regularization scheme is incorporated into the LM algorithm for initial parameter identification. The preliminary identification result is employed as the initial value, and further optimization of the parameters is performed using the PF algorithm. Finally, an analysis of the experimental results is conducted.
As for the remainder of this paper, Section 2 describes the system architecture, the CHC method, the establishment of the error model, and the parameter identification method. In Section 3, a series of experimental results are presented to evaluate the system calibration method. The conclusions are provided in Section 4.

2. Materials and Methods

2.1. System Description

The experimental prototype of the RAPP system is illustrated in Figure 1. It comprises several components: a specialized surgical instrument designed to secure the puncture needle; a six-degree-of-freedom (6-DoF) robotic arm (Universal Robots UR5e, Odense, Denmark) with a repeatability of ±0.03 mm for precise puncturing; an optical tracking system (Northern Digital Inc. Polaris Vega ST, Waterloo, ON, Canada) with a maximum refresh rate of 60 Hz, serving as an external measurement device to track the reflective markers; an abdominal biopsy phantom (Sun Nuclear Model 071B, Melbourne, FL, USA); and a system control center based on a workstation (Lenovo ThinkStation K-C2 equipped with Intel i7-12700, 2.10 GHz, and 16 GB RAM). Both the surgical instrument and biopsy phantom are equipped with reflective markers that facilitate their real-time spatial position tracking by the OTS. This tracking capability enables preoperative planning and intraoperative navigation. The surgical instrument is mounted on the flange at the end of the robotic arm and can be precisely guided to the desired puncture location. The system control center communicates with the robotic arm and OTS via TCP/IP protocol and RJ45 interface, respectively.

2.2. Closed-Loop Hand–Eye Calibration Method

The coordinate systems in the RAPP are established as illustrated in Figure 2, where C b represents the coordinate system of the robot base, C e denotes the coordinate system of the end-effector, C t is the coordinate system of the surgical instrument, and C o is the coordinate system of the OTS.

2.2.1. Motion Strategy

In the CHC method described in this paper, specific translational and rotational motions are employed to collect the necessary data for hand–eye calibration. The corresponding motion strategies are elaborated below.
In the robot’s base coordinate system C b , an initial pose d 0 is determined for the end-effector. Translational motion involves guiding the end-effector to move a distance ε along the X-axis and Z-axis of C b in a positive direction, commencing from the initial pose. This movement only changes the position of the end-effector coordinate system while maintaining its orientation. At the initial pose and after each translational motion, the OTS captures and records the position data of three reflective markers fixed on the surgical instrument. The data is represented as follows:
$$P_1 = \begin{bmatrix} P_0^a & P_0^b & P_0^c \\ P_x^a & P_x^b & P_x^c \\ P_z^a & P_z^b & P_z^c \end{bmatrix}.$$
Similarly, the process of rotational motion proceeds as follows: starting from the initial pose, the end-effector is directed to rotate an angle θ around its X, Y, and Z axes, respectively. In contrast to translational motion, rotational motion solely alters the orientation of the end-effector coordinate system without affecting its position. After each rotation around an axis, the OTS records the spatial positions of the three reflective markers on the surgical instrument, which can be described thus:
$$P_2 = \begin{bmatrix} P_{rx}^a & P_{rx}^b & P_{rx}^c \\ P_{ry}^a & P_{ry}^b & P_{ry}^c \\ P_{rz}^a & P_{rz}^b & P_{rz}^c \end{bmatrix}.$$
In P 1 and P 2 , the superscripts a , b , and c represent the three reflective markers p a , p b , and p c , respectively. The subscripts 0 , x , and z indicate the initial pose and the pose after the translation along the x and z axes, while the subscripts r x , r y , and r z denote the offset pose after the rotational motion around the x, y, and z axes, respectively. Throughout the motion process, it is crucial to ensure that the markers on the surgical instrument remain within the field of view of the OTS for real-time tracking and recording.

2.2.2. Calibration of the Robot’s Base Coordinate System

In order to solve the transformation relationship between the coordinate systems of each component in the RAPP system, as depicted in Figure 2, the first step is to solve the homogeneous transformation matrix T b o . By utilizing the data collected in the process of translation movement, denoted as P 1 , the unit vectors of three axes of C b in C o can be calculated using the following formula:
$$v_b^x = \frac{P_x^a - P_0^a}{\left\| P_x^a - P_0^a \right\|}, \quad v_b^z = \frac{P_z^a - P_0^a}{\left\| P_z^a - P_0^a \right\|}, \quad v_b^y = v_b^z \times v_b^x.$$
Here, $v_b^x$, $v_b^y$, and $v_b^z$ represent the unit vectors of the x, y, and z axes of $C_b$, respectively. In addition, the definition of the rotation matrix establishes the following transformation relationship:
$$v_b^x = R_b^o \, [1, 0, 0]^T, \quad v_b^y = R_b^o \, [0, 1, 0]^T, \quad v_b^z = R_b^o \, [0, 0, 1]^T.$$
The rotation matrix R b o can be obtained through the following calculation:
$$R_b^o = R_b^o I = \begin{bmatrix} v_b^x & v_b^y & v_b^z \end{bmatrix}.$$
By utilizing the data $P_2$ collected during the rotational motion, the position of the end-effector in $C_o$ can be calculated and denoted as $P_e^o$. Let the position of the end-effector in $C_b$ be represented as $P_e^b$. By the definition of the matrix transformation, the translation vector from $C_o$ to $C_b$ can be expressed as
$$t_b^o = P_e^o - R_b^o P_e^b.$$
Thus, the transformation matrix T b o can be defined as
$$T_b^o = \begin{bmatrix} R_b^o & t_b^o \\ 0_{1\times 3} & 1 \end{bmatrix}.$$
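The steps above (axis vectors from the translational data, then the rotation and translation of the base frame) can be sketched numerically. The snippet below is a minimal illustration with placeholder function names, not the authors' code; it recovers $R_b^o$ from a single marker's displacements and assembles the homogeneous matrix $T_b^o$.

```python
import numpy as np

def base_rotation_from_translations(p0, px, pz):
    """Estimate R_b^o from one marker's positions at the initial pose (p0)
    and after translations along the base X axis (px) and Z axis (pz)."""
    v_x = (px - p0) / np.linalg.norm(px - p0)   # unit vector of base X axis in C_o
    v_z = (pz - p0) / np.linalg.norm(pz - p0)   # unit vector of base Z axis in C_o
    v_y = np.cross(v_z, v_x)                    # completes a right-handed frame
    return np.column_stack([v_x, v_y, v_z])     # R_b^o = [v_x v_y v_z]

def assemble_T(R, t):
    """Assemble a 4x4 homogeneous transformation from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

In practice, the displacement vectors obtained from all three markers can be averaged before normalization to suppress OTS measurement noise.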

2.2.3. Calibration of the Surgical Instrument Coordinate System

The coordinate system C t is established using three reflective markers that are fixed on the surgical instrument [13]. For each pose of the rotational motion strategy (including the initial pose), the rotation matrix and translation vector from the surgical instrument coordinate system to the OTS coordinate system can be easily calculated based on the collected data, denoted as R t m o and t t m o , respectively. Here, m ∈ [1, 4] represents the index of different poses of the end-effector. Therefore, the transformation matrix from C t to C o under the m th pose can be given by
$$T_{t_m}^o = \begin{bmatrix} R_{t_m}^o & t_{t_m}^o \\ 0_{1\times 3} & 1 \end{bmatrix}.$$
Similarly, for the $m$-th pose, the transformation matrix $T_{e_m}^b$ between the end-effector coordinate system and the base coordinate system can be calculated using the robot forward kinematics with the nominal D-H parameters and joint angles [19].
Within the closed loop formed by coordinate systems C b , C e , C t , and C o , the transformation matrices between these coordinate systems satisfy the following equation:
$$T_t^e = \frac{1}{4} \sum_{m=1}^{4} \left( T_{e_m}^b \right)^{-1} \left( T_b^o \right)^{-1} T_{t_m}^o.$$
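As a concrete illustration of the closed-loop averaging above, the following sketch (with hypothetical helper names, not the authors' implementation) averages the four per-pose estimates of $T_t^e$:

```python
import numpy as np

def hand_eye_closed_loop(T_eb_list, T_bo, T_to_list):
    """Average the closed-loop estimate of T_t^e over the recorded poses:
    T_t^e = (1/4) * sum_m inv(T_{e_m}^b) @ inv(T_b^o) @ T_{t_m}^o."""
    acc = np.zeros((4, 4))
    for T_eb, T_to in zip(T_eb_list, T_to_list):
        acc += np.linalg.inv(T_eb) @ np.linalg.inv(T_bo) @ T_to
    return acc / len(T_eb_list)
```

Note that elementwise averaging of homogeneous matrices does not in general return an exact rotation block when the per-pose estimates are noisy; in practice the rotation part of the result can be re-orthonormalized, for example via an SVD projection onto SO(3).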

2.3. Kinematic and Error Models of RAPP

The calibration of the robot’s kinematic parameters relies on the robot kinematics model. Traditionally, the Denavit–Hartenberg (D-H) model [20,21] has been widely used for this purpose. However, misalignment may occur due to machining and assembly errors when adjacent joints are parallel, which can result in singularities. To address this problem, this study employs an improved version of the D-H model known as the modified Denavit–Hartenberg (MD-H) model [22,23] for identifying robot kinematic parameters. The coordinate transformation between two adjacent link coordinate systems is expressed by
$$T_i = \begin{bmatrix} c\theta_i c\beta_i - s\theta_i s\alpha_i s\beta_i & -s\theta_i c\alpha_i & c\theta_i s\beta_i + s\theta_i s\alpha_i c\beta_i & a_i c\theta_i \\ s\theta_i c\beta_i + c\theta_i s\alpha_i s\beta_i & c\theta_i c\alpha_i & s\theta_i s\beta_i - c\theta_i s\alpha_i c\beta_i & a_i s\theta_i \\ -c\alpha_i s\beta_i & s\alpha_i & c\alpha_i c\beta_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
where $c\theta_i$ and $s\theta_i$ denote $\cos\theta_i$ and $\sin\theta_i$, respectively; the same shorthand applies to $\alpha_i$ and $\beta_i$.
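The MD-H link transform above is the product $Rot_z(\theta_i)\,Trans_z(d_i)\,Trans_x(a_i)\,Rot_x(\alpha_i)\,Rot_y(\beta_i)$; a direct NumPy transcription (a sketch, not the authors' code) is:

```python
import numpy as np

def mdh_transform(theta, d, a, alpha, beta):
    """MD-H link transform with the extra beta rotation about the y axis:
    T = Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha) Rot_y(beta)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([
        [ct*cb - st*sa*sb, -st*ca, ct*sb + st*sa*cb, a*ct],
        [st*cb + ct*sa*sb,  ct*ca, st*sb - ct*sa*cb, a*st],
        [-ca*sb,            sa,    ca*cb,            d   ],
        [0.0,               0.0,   0.0,              1.0 ]])
```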
Thus, the transformation matrix from the surgical instrument coordinate system to the OTS one can be represented as
$$T = T_b^o \, T_1 T_2 T_3 T_4 T_5 T_6 \, T_t^e.$$
According to the data from reference [24], the nominal values of the MD-H parameters for the UR5e robot are listed in Table 1. However, deviations exist between the listed parameter values and the actual values due to machining and assembly errors. Likewise, the nominal values of the hand–eye calibration parameters obtained through the CHC method are subject to deviations arising from measurement inaccuracies. To compensate for the position and orientation errors of the surgical instrument, it is necessary to establish a mathematical mapping between the pose errors and the parameter errors.

2.3.1. Errors Caused by Deviations in Hand–Eye Calibration Parameters

The nominal values of $T_b^o$ and $T_t^e$ were determined using hand–eye calibration, allowing for the initial determination of the poses of coordinate systems $C_b$ and $C_t$ in the RAPP. Considering the errors inherent in the hand–eye calibration process, the position errors of $C_b$ along the X, Y, and Z axes are represented by $\Delta x_b$, $\Delta y_b$, and $\Delta z_b$, respectively, while the orientation errors of $C_b$ around the X, Y, and Z axes are represented by $\Delta rx_b$, $\Delta ry_b$, and $\Delta rz_b$, respectively. Therefore, the position and orientation errors of the surgical instrument in $C_o$ caused by the errors of the base coordinate system are represented by $\Delta d_b^o$ and $\Delta\varphi_b^o$, respectively, and are defined as follows:
$$\Delta d_b^o = R_b^o \left( [\Delta x_b, \Delta y_b, \Delta z_b]^T + \Delta rx_b \, (i \times t_t^b) + \Delta ry_b \, (j \times t_t^b) + \Delta rz_b \, (k \times t_t^b) \right),$$
$$\Delta \varphi_b^o = R_b^o \, [\Delta rx_b, \Delta ry_b, \Delta rz_b]^T.$$
Note that the unit vectors along the X, Y, and Z axes are represented by $i$, $j$, and $k$, respectively. Additionally, $t_t^b$ denotes the translation vector from $C_t$ to $C_b$, which is defined as $t_t^b = R_e^b t_t^e + t_e^b$. Based on Equations (12) and (13), the errors of the surgical instrument in $C_o$ caused by the error of $T_b^o$ can be abbreviated as
$$E_b^o = \begin{bmatrix} \Delta d_b^{o\,T} & \Delta \varphi_b^{o\,T} \end{bmatrix}^T = \begin{bmatrix} R_b^o & R_b^o \begin{bmatrix} i \times t_t^b & j \times t_t^b & k \times t_t^b \end{bmatrix} \\ 0_{3\times 3} & R_b^o \end{bmatrix} \begin{bmatrix} \Delta x_b \\ \Delta y_b \\ \Delta z_b \\ \Delta rx_b \\ \Delta ry_b \\ \Delta rz_b \end{bmatrix} = J_1 \Delta X_1.$$
Similarly, the position errors of $C_t$ are defined as $\Delta x_t$, $\Delta y_t$, and $\Delta z_t$, and the orientation errors are represented by $\Delta rx_t$, $\Delta ry_t$, and $\Delta rz_t$. Therefore, the errors of the surgical instrument caused by the errors of $T_t^e$ can be derived as
$$\Delta d_t^e = R_t^o \, [\Delta x_t, \Delta y_t, \Delta z_t]^T,$$
$$\Delta \varphi_t^e = R_t^o \, [\Delta rx_t, \Delta ry_t, \Delta rz_t]^T.$$
Then, the errors in both position and orientation are written in a concise form:
$$E_t^e = \begin{bmatrix} \Delta d_t^{e\,T} & \Delta \varphi_t^{e\,T} \end{bmatrix}^T = \begin{bmatrix} R_t^o & 0_{3\times 3} \\ 0_{3\times 3} & R_t^o \end{bmatrix} \begin{bmatrix} \Delta x_t \\ \Delta y_t \\ \Delta z_t \\ \Delta rx_t \\ \Delta ry_t \\ \Delta rz_t \end{bmatrix} = J_2 \Delta X_2.$$

2.3.2. Errors Caused by the Deviations of Robot Kinematic Parameters

Considering these errors, the transformation matrix between the end-effector frame and the base frame can be represented as
$$T_e^b + \Delta T_e^b = \prod_{i=1}^{6} \left( T_i + \Delta T_i \right),$$
where the total differential form of T i is
$$\Delta T_i = \frac{\partial T_i}{\partial \theta_i} \Delta\theta_i + \frac{\partial T_i}{\partial a_i} \Delta a_i + \frac{\partial T_i}{\partial d_i} \Delta d_i + \frac{\partial T_i}{\partial \alpha_i} \Delta\alpha_i + \frac{\partial T_i}{\partial \beta_i} \Delta\beta_i.$$
Therefore, after neglecting higher order terms, we can write the right-hand side of Equation (18) as follows:
$$\prod_{i=1}^{6} \left( T_i + \Delta T_i \right) = \prod_{i=1}^{6} T_i + \sum_{i=1}^{6} T_1 \cdots T_{i-1} \, \Delta T_i \, T_{i+1} \cdots T_6.$$
Meanwhile, the left-hand side of Equation (18) has the following equivalence relationship:
$$T_e^b + \Delta T_e^b = T_e^b \begin{bmatrix} 1 & -\Delta rz_e & \Delta ry_e & \Delta x_e \\ \Delta rz_e & 1 & -\Delta rx_e & \Delta y_e \\ -\Delta ry_e & \Delta rx_e & 1 & \Delta z_e \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
where $\Delta x_e$, $\Delta y_e$, and $\Delta z_e$ represent the position errors of the end-effector, while $\Delta rx_e$, $\Delta ry_e$, and $\Delta rz_e$ represent the orientation errors. Then, Equations (19)–(21) can be used to establish the relationship between the error of the end-effector and the errors of the robot’s kinematic parameters:
$$[\Delta x_e, \Delta y_e, \Delta z_e, \Delta rx_e, \Delta ry_e, \Delta rz_e]^T = H \, [\Delta\theta^T, \Delta a^T, \Delta d^T, \Delta\alpha^T, \Delta\beta^T]^T,$$
where $\Delta\theta$, $\Delta a$, $\Delta d$, $\Delta\alpha$, and $\Delta\beta$ represent the errors in the robot kinematic parameters, containing 26 parameters in total, i.e., $H \in \mathbb{R}^{6\times 26}$. Subsequently, the position and orientation errors of the surgical instrument caused by the errors in the robot kinematic parameters are derived as
$$\Delta d_e^b = R_e^o \left( [\Delta x_e, \Delta y_e, \Delta z_e]^T + \Delta rx_e \, (i \times t_t^e) + \Delta ry_e \, (j \times t_t^e) + \Delta rz_e \, (k \times t_t^e) \right),$$
$$\Delta \varphi_e^b = R_e^o \, [\Delta rx_e, \Delta ry_e, \Delta rz_e]^T.$$
The relationship between the errors of the surgical instrument and the errors of the robot’s kinematic parameters can be expressed by
$$E_e^b = \begin{bmatrix} \Delta d_e^{b\,T} & \Delta \varphi_e^{b\,T} \end{bmatrix}^T = \begin{bmatrix} R_e^o & R_e^o \begin{bmatrix} i \times t_t^e & j \times t_t^e & k \times t_t^e \end{bmatrix} \\ 0_{3\times 3} & R_e^o \end{bmatrix} H \, [\Delta\theta^T, \Delta a^T, \Delta d^T, \Delta\alpha^T, \Delta\beta^T]^T = J_3 \Delta X_3.$$
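The matrix $H$, which maps kinematic-parameter errors to end-effector pose errors, can also be approximated numerically, which is useful for checking an analytic derivation. Below is a minimal central-difference sketch; the forward-kinematics function `fk` is a placeholder assumption, not the paper's implementation.

```python
import numpy as np

def pose_error_jacobian(fk, params, eps=1e-7):
    """Approximate H = d(pose)/d(params) by central differences.
    `fk` maps a kinematic parameter vector to a 6-vector pose
    (x, y, z, rx, ry, rz) of the end-effector; `params` is the nominal vector."""
    p0 = np.asarray(params, dtype=float)
    n = p0.size
    H = np.zeros((6, n))
    for j in range(n):
        dp = np.zeros(n)
        dp[j] = eps
        # central difference in the j-th parameter
        H[:, j] = (fk(p0 + dp) - fk(p0 - dp)) / (2.0 * eps)
    return H
```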

2.3.3. Simplification of Error Model

Equation (25) reveals the involvement of 38 parameters in the identification process, resulting in a notably high condition number for the Jacobian matrix. A high condition number can adversely affect the efficiency and real-time performance of solving the identification problem [14]. Hence, a staged identification strategy is employed to estimate the hand–eye calibration parameters and the robot kinematic parameters separately.
In the first stage, the optimization of hand–eye calibration parameters is performed. By rearranging Equations (14) and (17), we can establish the relationship between the hand–eye calibration error and the surgical instrument pose error as follows:
$$E^{(1)} = J^{(1)} \Delta X^{(1)},$$
where $J^{(1)} = \begin{bmatrix} J_1 & J_2 \end{bmatrix}$ and $\Delta X^{(1)} = [\Delta X_1^T \ \ \Delta X_2^T]^T$.
After the estimation and update of the first stage, it can be assumed that the remaining surgical instrument pose error is attributable to the errors in the robot’s kinematic parameters. Consequently, the second stage exclusively focuses on updating these parameters. The error model for this stage can be represented as
$$E^{(2)} = J^{(2)} \Delta X^{(2)},$$
where $J^{(2)} = J_3$ and $\Delta X^{(2)} = \Delta X_3$.
Note that, for the RAPP, the kinematic parameters after calibration are considered accurate for an extended period. Therefore, in this case, it is sufficient to perform only the first stage to complete hand–eye calibration before the surgery.

2.4. Parameter Estimation Algorithm

The system calibration error model establishes a relationship between the system calibration parameters and the pose errors of the surgical instrument. The RAPP calibration process is equivalent to solving a nonlinear least-squares regression problem. The commonly employed approach is to linearize the problem by disregarding the higher-order error terms and estimate the parameters using the least-squares algorithm [25]. The Levenberg–Marquardt algorithm is extensively used in solving comparable problems due to its rapid convergence and efficiency. To improve the performance of the LM algorithm, a regularization term can be added to promote parameter sparsity, reduce overfitting risks, and enhance robustness to noise [26]. However, the regularized Levenberg–Marquardt (RLM) algorithm performs poorly in highly nonlinear systems with non-Gaussian noise due to the limitations of its linear approximation, and it tends to stagnate when searching for solutions near the optimum. In contrast, the particle filter (PF) algorithm demonstrates remarkable capabilities in handling non-Gaussian noise and highly nonlinear problems. Nevertheless, a drawback of the PF algorithm is its sensitivity to initial values. Consequently, the RLM algorithm is employed for pre-identification of the system calibration parameters, with the resulting values serving as the initial values for the PF algorithm.

2.4.1. Pre-Identification Based on RLM Algorithm

The established error model can be written in the form $E = J \Delta X$, where $E$ denotes the observation error vector, $J$ is the Jacobian matrix, and $\Delta X$ represents the corresponding parameter errors. Based on this model, the LM algorithm utilizes the Jacobian matrix and the observation error vector to calculate the current calibration parameter error, denoted as $\Delta X_l$. The system calibration parameters are then updated according to this parameter error:
$$\Delta X_l = \left( J_l^T J_l + \lambda_1 I \right)^{-1} J_l^T E_l.$$
Here, $J_l$ and $E_l$ correspond to the Jacobian matrix and error vector at the $l$-th iteration, respectively. By employing the updated parameters, we can calculate the new Jacobian matrix and error vector, and iteratively continue this process until convergence is achieved. On this basis, we propose an enhancement: an L1 regularization penalty is introduced to decrease the magnitudes of the decision parameters and promote model sparsity. Consequently, Equation (28) is transformed as follows:
$$\Delta X_l = \left( J_l^T J_l + \lambda_1 I \right)^{-1} \left( J_l^T E_l + \lambda_2 \frac{\Delta X_l}{\left\| \Delta X_l \right\|_2 + v} \right),$$
where $\lambda_1$ and $\lambda_2$ are the regularization constants, $I$ represents the identity matrix, and $v$ is an additive noise term. The system calibration parameters are updated iteratively until the termination condition is fulfilled; the resulting suboptimal calibration parameter values are denoted as $X_s$.
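One iteration of the regularized update can be sketched as follows. This is an illustrative reading of the equation above, not the authors' code; `lam1`, `lam2`, and `v` stand in for $\lambda_1$, $\lambda_2$, and $v$, and their default values are arbitrary placeholders.

```python
import numpy as np

def rlm_step(J, E, dX_prev, lam1=1e-3, lam2=1e-4, v=1e-8):
    """One regularized LM update: damped normal equations with an
    L1-style penalty term lam2 * dX / (||dX||_2 + v) added to the gradient."""
    n = J.shape[1]
    A = J.T @ J + lam1 * np.eye(n)              # damped Gauss-Newton matrix
    g = J.T @ E + lam2 * dX_prev / (np.linalg.norm(dX_prev) + v)
    return np.linalg.solve(A, g)                # parameter-error increment
```

In a full calibration loop, the returned increment updates the parameter vector, the Jacobian and error vector are recomputed, and the step repeats until the termination condition is met.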

2.4.2. Accurate Identification Based on PF Algorithm

Particle filtering is a filtering technique utilized for state estimation and parameter identification problems, based on Monte Carlo sampling and importance weighting. As an extension of Bayesian filtering, particle filtering can effectively handle nonlinear and non-Gaussian systems and measurement models.
In particle filtering, a collection of particles is used to approximate the posterior probability distribution of the system calibration parameters. Each particle represents a hypothetical value of the parameters. Through random sampling, importance weighting, and resampling steps, these particles progressively approximate the target distribution.
The system transition equations can be described as
$$X_k = X_{k-1} + U_k,$$
$$Y_k = T\!\left( X_s + X_k \right) - T\!\left( X_s \right).$$
Here, $X_k$ represents the system state value, $U_k$ represents the system noise, $Y_k \in \mathbb{R}^{4\times 4}$ is the transformation error matrix of the surgical instrument, and $T(\cdot)$ is the forward kinematics operator of the system defined in Equation (11). Initially, particles $X_0^i$ are generated in the state space using the prior probability $p(X_0)$. Next, the state value and transformation error matrix of the $i$-th particle at time $k$ can be calculated using the system transition equations:
$$X_k^i = X_{k-1}^i + U_k,$$
$$Y_k^i = T\!\left( X_s + X_k^i \right) - T\!\left( X_s \right).$$
Furthermore, the observation error vector of the $i$-th particle can be derived from the transformation error matrix:
$$E_k^i = \begin{bmatrix} Y_k^i(1,4) \\ Y_k^i(2,4) \\ Y_k^i(3,4) \\ \arctan\!\left( Y_k^i(3,2) \,/\, Y_k^i(3,3) \right) \\ \arctan\!\left( -Y_k^i(3,1) \Big/ \sqrt{Y_k^i(3,2)^2 + Y_k^i(3,3)^2} \right) \\ \arctan\!\left( Y_k^i(2,1) \,/\, Y_k^i(1,1) \right) \end{bmatrix}.$$
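The angle extraction above follows the usual Z-Y-X Euler convention. A small sketch (assuming the 1-based indices $Y(r,c)$ map to 0-based NumPy indexing) is:

```python
import numpy as np

def pose_error_from_matrix(Y):
    """Extract the 6-vector observation error (x, y, z, rx, ry, rz) from a
    4x4 transformation error matrix, following the Z-Y-X Euler convention."""
    rx = np.arctan2(Y[2, 1], Y[2, 2])
    ry = np.arctan2(-Y[2, 0], np.hypot(Y[2, 1], Y[2, 2]))
    rz = np.arctan2(Y[1, 0], Y[0, 0])
    return np.array([Y[0, 3], Y[1, 3], Y[2, 3], rx, ry, rz])
```

Using `arctan2` rather than a plain `arctan` quotient avoids division by zero and keeps the recovered angles in the correct quadrant.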
For each particle, the weight is determined by the probability density, as
$$\omega_k^i = \frac{1}{\sqrt{2\pi |R|}} \exp\!\left( -\frac{1}{2} \left( E_k^i \right)^T R^{-1} E_k^i \right).$$
Here, R is the covariance matrix of the measurement. After calculating the weights of all particles, they are normalized as
$$\omega_k^i = \frac{\omega_k^i}{\sum_{j=1}^{n} \omega_k^j}.$$
The current system state value can be calculated as
$$X_k = \sum_{i=1}^{n} \omega_k^i X_k^i.$$
Afterwards, the particles are resampled to avoid degeneracy, thus recursively constructing a new set of particles.
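The sampling, weighting, estimation, and resampling steps above can be combined into a compact particle-filter loop. The sketch below is generic: the observation function, noise scales, and particle counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pf_refine(observe_error, x0, n_particles=500, n_steps=50,
              sigma_u=1e-4, sigma_r=1e-3, rng=None):
    """Minimal particle-filter refinement. `observe_error(x)` returns the
    pose-error vector E for a candidate parameter-error vector x; `x0` is
    the RLM pre-identification result used to centre the particles."""
    rng = np.random.default_rng() if rng is None else rng
    d = x0.size
    particles = x0 + sigma_u * rng.standard_normal((n_particles, d))
    x_est = x0.copy()
    for _ in range(n_steps):
        # transition: random-walk perturbation (X_k = X_{k-1} + U_k)
        particles += sigma_u * rng.standard_normal((n_particles, d))
        # importance weights from a Gaussian likelihood of the pose error
        errs = np.array([observe_error(p) for p in particles])
        w = np.exp(-0.5 * np.sum(errs**2, axis=1) / sigma_r**2)
        w /= w.sum()
        x_est = w @ particles                 # weighted state estimate
        # resampling to avoid particle degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return x_est
```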
With the above method, the calibration parameters can be accurately identified. In the following section, sequential experiments are presented to verify the calibration of the RAPP system under the new method.

3. Experiments

The flow chart of the RAPP system calibration is shown in Figure 3. To validate the effectiveness and correctness of the RAPP calibration method, three experiments were conducted using the experimental system prototype: a hand–eye calibration experiment, a numerical simulation experiment for target puncture, and a robotic puncture experiment on a biomimetic model. In the first experiment, the hand–eye calibration results of the RAPP for different initial positions are displayed. Next, experimental verification was conducted based on the error model established in Section 2: the proposed algorithm was used to identify the system calibration parameters, and the parameter identification algorithm was evaluated through an analysis of parameter errors and robot positioning accuracy. Finally, puncture experiments were conducted on a biomimetic model based on the proposed calibration method.

3.1. Closed-Loop Hand–Eye Calibration Results

As shown in Figure 3, the first step in calibrating the RAPP system is to perform hand–eye calibration using the CHC method. By employing Equations (6)–(9), the homogeneous transformation matrices, T b o and T t e , can be calculated. These matrices define the relationship between the robot base and the OTS, as well as the relationship between the robot end-effector and the surgical instrument, respectively. This process aims to establish the spatial relationship among the system components and obtain accurate initial parameter values for subsequent parameter identification. Therefore, ensuring the correctness of the CHC method is a crucial prerequisite for precise system calibration. To verify its correctness and reliability, hand–eye calibration was conducted in four different initial poses within the robot’s workspace, and the results are presented in Table 2. The data in Table 2 clearly demonstrates that the target homogeneous transformation matrices obtained from hand–eye calibration across different initial poses are highly consistent, confirming the effectiveness of the CHC method.

3.2. Numerical Simulation Experiment for Target Puncture

To verify the effectiveness and accuracy of the proposed RLM-PF algorithm for identifying the calibration parameters of the RAPP, a numerical simulation study was conducted based on the CHC results. In puncture surgery, the accuracy of the puncture can be affected by deviations in both the position and orientation of the surgical instrument. Thus, Equations (38) and (39) are utilized in this experiment to calculate the position and orientation errors of the surgical instrument, respectively, assuming the absence of additional measurement noise:
$$e_{position} = \sqrt{(x_a - x_t)^2 + (y_a - y_t)^2 + (z_a - z_t)^2},$$
$$e_{orientation} = \sqrt{(rx_a - rx_t)^2 + (ry_a - ry_t)^2 + (rz_a - rz_t)^2},$$
where $(x_a, y_a, z_a, rx_a, ry_a, rz_a)$ represents the actual pose of the surgical instrument measured by the OTS, while $(x_t, y_t, z_t, rx_t, ry_t, rz_t)$ represents the theoretical pose of the surgical instrument obtained by substituting the calibration parameters into the forward kinematics. By comparing the position and orientation errors, the identification algorithm is analyzed and evaluated.
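Equations (38) and (39) amount to Euclidean norms over the position and orientation components of the two pose vectors; a direct sketch:

```python
import numpy as np

def position_error(pose_a, pose_t):
    """Euclidean distance between the actual and theoretical positions,
    where each pose is (x, y, z, rx, ry, rz)."""
    return float(np.linalg.norm(np.asarray(pose_a[:3]) - np.asarray(pose_t[:3])))

def orientation_error(pose_a, pose_t):
    """Root-sum-square difference of the orientation components (rad)."""
    return float(np.linalg.norm(np.asarray(pose_a[3:]) - np.asarray(pose_t[3:])))
```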
According to the error model, a total of 38 parameters need to be estimated in the RAPP. Since each measured pose of the surgical instrument provides three equations, at least 13 measurement poses are required for identification. To improve the accuracy of parameter identification, as many measurement poses as possible were selected. In this experiment, we chose 144 different uniformly distributed poses within the robot workspace as samples. Among them, 96 samples were selected as training data for parameter error identification, and the remaining 48 samples were used for comparison and validation of the identification results.
The system calibration parameters consist of the nominal kinematic parameters of the robot and the initial hand-eye calibration parameters obtained using the CHC method. Subsequently, the proposed parameter identification algorithm was employed to iteratively update the system calibration parameters 100 times, based on the error model. Figure 4 illustrates the relationship between the average position and orientation error and the number of iterations. The calibration results for the system calibration parameters are presented in Table 3 and Table 4. It is evident that the average position and orientation errors are greatly reduced after applying the identification algorithm, thereby confirming its effectiveness.
To further validate the effectiveness and superiority of the proposed system calibration method, we conducted comparative experiments using Zheng’s method [13], Boby’s method [14], and the most effective method confirmed in Omodei’s study [18]. These methods were employed to calibrate the RAPP system, and the position and orientation errors of the 48 validation samples were calculated and compared with those of the calibration method proposed in this paper. The comparison results, presented in Figure 5 and Figure 6, clearly demonstrate that our method outperforms the comparative methods in terms of both position and orientation errors. The larger errors of the other methods arise mainly because each of them calibrates only one class of error source in the RAPP system and cannot simultaneously eliminate the positioning errors caused by the robot kinematic parameters and the hand–eye calibration. Table 5 presents the maximum, minimum, and average values of the position and orientation errors. After calibration using Zheng’s, Boby’s, and Omodei’s methods, the average position errors were 0.1788 mm, 0.2556 mm, and 0.3572 mm, and the average orientation errors were 0.0064 rad, 0.0021 rad, and 0.0035 rad, respectively. By contrast, the proposed calibration method yields the most significant improvement, reducing the average position and orientation errors to 0.0418 mm and 0.0007 rad, respectively. The experimental results clearly demonstrate that the proposed system calibration method greatly enhances the absolute positioning accuracy of the RAPP system.

3.3. Robotic Puncture on Biomimetic Model

Actual robotic punctures were conducted to further validate the effectiveness of the proposed RAPP calibration method. As depicted in Figure 7a, a biomimetic model was utilized to simulate human soft tissue; it contained several black spheres of different diameters representing the locations of lesions within the human body. In addition, multiple reflective markers for surgical localization and navigation were affixed to the surface of the biomimetic model. Firstly, the biomimetic model was CT-scanned to obtain and visualize its CT image sequence. Subsequently, the black spheres within the biomimetic model were employed as puncture targets to validate the suitability of the proposed calibration method for target localization. Various positions and orientations of the puncture path were planned in the medical image coordinate system C_i, including the puncture target point P_i^t and the puncture starting point P_i^s (Figure 7b). The transformation matrix T_i^o between the medical image coordinate system and the OTS coordinate system, and the transformation matrix T_p^t between the needle tip coordinate system and the surgical instrument coordinate system, were obtained through the marker registration method [27] and the surgical instrument registration method [28], respectively. The robot target pose was then calculated using the proposed calibration method and the planned puncture path. Once a puncture was completed, the needle tip was kept stationary within the model, and another CT scan was performed. The scanned data were then used to reconstruct the 3D models of the biomimetic model and the puncture needle (Figure 7c). Finally, the position of the needle tip in C_i was determined through image threshold segmentation. The position error of the puncture experiment is defined as the distance between the needle tip and the planned puncture target point P_i^t.
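The coordinate chaining described above amounts to composing 4×4 homogeneous transforms and mapping planned points between frames. The following sketch shows this mechanism only; the transform T_i_o below is a hypothetical translation-only example, not a value measured in the experiment:

```python
def matmul4(A, B):
    """Compose two 4x4 homogeneous transforms (A then B applied to a point)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply4(T, p):
    """Map a 3D point p through the homogeneous transform T."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3))

# Hypothetical image->OTS transform (identity rotation, pure translation)
T_i_o = [[1, 0, 0, 10.0],
         [0, 1, 0, 20.0],
         [0, 0, 1, 30.0],
         [0, 0, 0, 1]]
P_i_t = (1.0, 2.0, 3.0)      # planned target point in the image frame C_i
print(apply4(T_i_o, P_i_t))  # the same target expressed in the OTS frame
```

In the actual system, T_i^o comes from marker registration and further compositions (e.g., with T_p^t and the robot kinematics) yield the robot target pose.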
During this experiment, 30 target punctures were performed using the same medical puncture needle (0.8 mm × 150 mm). The puncture errors obtained with the different calibration methods are presented in Figure 8, and Table 6 lists the average, maximum, and minimum values of the puncture error. It is evident that the system calibration method proposed in this paper significantly enhances puncture accuracy and is thus more effective than the other methods. The average puncture error is 0.4305 mm, which meets the requirements of clinical punctures [29,30].
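The summary statistics reported in Table 6 can be reproduced from the per-target errors as follows; the error values below are hypothetical placeholders, since the 30 individual measurements are not listed in the text:

```python
# Hypothetical per-target puncture errors in mm (illustrative only)
errors = [0.31, 0.52, 0.44, 0.68, 0.25, 0.47]

mean_err = sum(errors) / len(errors)
max_err, min_err = max(errors), min(errors)
print(f"mean={mean_err:.4f} mm, max={max_err:.4f} mm, min={min_err:.4f} mm")
```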

4. Conclusions

In this study, a calibration method for the RAPP system is proposed, aiming to enhance its precise positioning capability. Initially, the system calibration parameters are efficiently determined using the CHC method. Then, based on the established error model, a parameter estimation method combining the RLM and PF algorithms is applied to perform the system calibration. Several experiments were conducted to validate the effectiveness and robustness of the proposed calibration method. The experimental results demonstrate that the method significantly enhances the positioning accuracy of the RAPP system, thereby improving surgical efficiency and the success rate.
The calibration method proposed in this study is based on the MD-H model. Future research directions include applying this calibration algorithm to more flexible kinematic models and exploring the application of online calibration and compensation methods in the current version and subsequent generations of the RAPP system. These studies will contribute to further enhancing the performance and accuracy of the RAPP system.

Author Contributions

Conceptualization, J.L. and M.L.; methodology, J.L. and Q.Z.; validation, C.Q. and J.L.; investigation, J.L. and T.L.; writing—original draft preparation, J.L.; writing—review and editing, S.Z. and M.L.; supervision, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Key R&D Project of China (No. 2018YFA0704102, No. 2018YFA0704104), in part by the National Natural Science Foundation of China (No. 81827805), in part by Natural Science Foundation of Guangdong Province (No. 2023A1515010673), in part by Shenzhen Technology Innovation Commission (Nos. JCYJ20200109114610201, JCYJ20200109114812361, and JSGG20220831110400001), and in part by the Shenzhen Engineering Laboratory for Diagnosis & Treatment Key Technologies of Interventional Surgical Robots (XMHT20220104009).

Data Availability Statement

The data is unavailable due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Taylor, A.J.; Xu, S.; Wood, B.J.; Tse, Z.T.H. Origami Lesion-Targeting Device for CT-Guided Interventions. J. Imaging 2019, 5, 23.
2. Li, H.; Boiselle, P.M.; Shepard, J.; Trotman-Dickenson, B.; McLoud, T. Diagnostic accuracy and safety of CT-guided percutaneous needle aspiration biopsy of the lung: Comparison of small and large pulmonary nodules. AJR Am. J. Roentgenol. 1996, 167, 105–109.
3. Koethe, Y.; Xu, S.; Velusamy, G.; Wood, B.J.; Venkatesan, A.M. Accuracy and efficacy of percutaneous biopsy and ablation using robotic assistance under computed tomography guidance: A phantom study. Eur. Radiol. 2014, 24, 723–730.
4. Tacher, V.; de Baere, T. Robotic assistance in interventional radiology: Dream or reality? Eur. Radiol. 2020, 30, 925–926.
5. Perez, R.E.; Schwaitzberg, S.D. Robotic surgery: Finding value in 2019 and beyond. Ann. Laparosc. Endosc. Surg. 2019, 4, 51.
6. Yang, G.-Z.; Cambias, J.; Cleary, K.; Daimler, E.; Drake, J.; Dupont, P.E.; Hata, N.; Kazanzides, P.; Martel, S.; Patel, R.V.; et al. Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy. Sci. Robot. 2017, 2, eaam8638.
7. Faria, C.; Erlhagen, W.; Rito, M.; De Momi, E.; Ferrigno, G.; Bicho, E. Review of robotic technology for stereotactic neurosurgery. IEEE Rev. Biomed. Eng. 2015, 8, 125–137.
8. Beasley, R.A.; Howe, R.D. Increasing Accuracy in Image-Guided Robotic Surgery Through Tip Tracking and Model-Based Flexion Correction. IEEE Trans. Robot. 2009, 25, 292–302.
9. Keric, N.; Eum, D.J.; Afghanyar, F.; Rachwal-Czyzewicz, I.; Renovanz, M.; Conrad, J.; Wesp, D.M.; Kantelhardt, S.R.; Giese, A. Evaluation of surgical strategy of conventional vs. percutaneous robot-assisted spinal trans-pedicular instrumentation in spondylodiscitis. J. Robot. Surg. 2017, 11, 17–25.
10. Wu, L.; Yang, X.; Chen, K.; Ren, H. A Minimal POE-Based Model for Robotic Kinematic Calibration with Only Position Measurements. IEEE Trans. Autom. Sci. Eng. 2014, 12, 758–763.
11. Qin, Y.; Geng, P.; Lv, B.; Meng, Y.; Song, Z.; Han, J. Simultaneous Calibration of the Hand-Eye, Flange-Tool and Robot-Robot Relationship in Dual-Robot Collaboration Systems. Sensors 2022, 22, 1861.
12. Li, W.; Dong, M.; Lu, N.; Lou, X.; Sun, P. Simultaneous Robot–World and Hand–Eye Calibration without a Calibration Object. Sensors 2018, 18, 3949.
13. Zheng, L.; Zhang, Z.; Wang, Z.; Bao, K.; Yang, L.; Yan, B.; Yan, Z.; Ye, W.; Yang, R. A multiple closed-loops robotic calibration for accurate surgical puncture. Int. J. Med. Robot. Comput. Assist. Surg. 2021, 17, e2242.
14. Boby, R.A.; Klimchik, A. Combination of geometric and parametric approaches for kinematic identification of an industrial robot. Robot. Comput. Manuf. 2021, 71, 102142.
15. Mao, C.; Chen, Z.; Li, S.; Zhang, X. Separable Nonlinear Least Squares Algorithm for Robust Kinematic Calibration of Serial Robots. J. Intell. Robot. Syst. 2020, 101, 2.
16. Wang, R.; Wu, A.; Chen, X.; Wang, J. A point and distance constraint based 6R robot calibration method through machine vision. Robot. Comput. Manuf. 2020, 65, 101959.
17. Xu, X.; Bai, Y.; Zhao, M.; Yang, J.; Pang, F.; Ran, Y.; Tan, Z.; Luo, M. A Novel Calibration Method for Robot Kinematic Parameters Based on Improved Manta Ray Foraging Optimization Algorithm. IEEE Trans. Instrum. Meas. 2023, 72, 1–11.
18. Omodei, A.; Legnani, G.; Adamini, R. Three methodologies for the calibration of industrial manipulators: Experimental results on a SCARA robot. J. Robot. Syst. 2000, 17, 291–307.
19. Horn, B.K.P. Closed-form solution of absolute orientation using unit quaternions. J. Opt. Soc. Am. A 1987, 4, 629–642.
20. Lee, J.-W.; Park, G.-T.; Shin, J.-S.; Woo, J.-W. Industrial robot calibration method using Denavit–Hartenberg parameters. In Proceedings of the 17th International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 18–21 October 2017; pp. 1834–1837.
21. Li, Z.; Li, S.; Luo, X. An overview of calibration technology of industrial robots. IEEE/CAA J. Autom. Sin. 2021, 8, 23–36.
22. M’hiri, S.A.; Ben Romdhane, N.M.; Damak, T. New Forward Kinematic Model of Parallel Robot Par4. J. Intell. Robot. Syst. 2019, 96, 283–295.
23. Li, X.; Hu, H.; Ding, W. Two Error Models for Calibrating SCARA Robots based on the MDH Model. MATEC Web Conf. 2017, 95, 08008.
24. Kebria, P.M.; Al-Wais, S.; Abdi, H.; Nahavandi, S. Kinematic and dynamic modelling of UR5 manipulator. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 004229–004234.
25. Veitschegger, W.K.; Wu, C.-H. Robot calibration and compensation. IEEE J. Robot. Autom. 1988, 4, 643–656.
26. Li, Z.; Li, S.; Bamasag, O.O.; Alhothali, A.; Luo, X. Diversified Regularization Enhanced Training for Effective Manipulator Calibration. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 8778–8790.
27. Lin, Q.; Yang, R.; Cai, K.; Si, X.; Chen, X.; Wu, X. Real-time automatic registration in optical surgical navigation. Infrared Phys. Technol. 2016, 76, 375–385.
28. Yang, R.; Wang, Z.; Liu, S.; Wu, X. Design of an Accurate Near Infrared Optical Tracking System in Surgical Navigation. J. Light. Technol. 2012, 31, 223–231.
29. Liu, C.-Y.; Chen, K.-F.; Chen, P.-J. Treatment of liver cancer. Cold Spring Harb. Perspect. Med. 2015, 5, a021535.
30. Stevens, C.W.; Munden, R.F.; Forster, K.M.; Kelly, J.F.; Liao, Z.; Starkschall, G.; Tucker, S.; Komaki, R. Respiratory-driven lung tumor motion is independent of tumor size, tumor location, and pulmonary function. Int. J. Radiat. Oncol. Biol. Phys. 2001, 51, 62–68.
Figure 1. Illustration of the experimental prototype of RAPP.
Figure 2. The relationship between coordinate systems in the RAPP system.
Figure 3. Flow chart of the RAPP system calibration.
Figure 4. (a) The mean position error in the iteration process; (b) the mean orientation error in the iteration process.
Figure 5. Comparison of position errors based on various calibration methods [13,14,18].
Figure 6. Comparison of orientation errors based on various calibration methods [13,14,18].
Figure 7. Robotic puncture on biomimetic model: (a) experimental scene diagram; (b) planning puncture path; (c) 3D image of biomimetic model after puncture.
Figure 8. Puncture accuracy using different calibration methods. The red “×” signs are the outliers in the dataset [13,14,18].
Table 1. Nominal values of MD-H parameters for the UR5e robot.

| Joint i | θi (rad) | ai (mm) | di (mm) | αi (rad) | βi (rad) |
| 1 | 0 | 0 | 162.5 | π/2 | / |
| 2 | 0 | −425.0 | 0 | 0 | 0 |
| 3 | 0 | −392.2 | 0 | 0 | 0 |
| 4 | 0 | 0 | 133.3 | π/2 | / |
| 5 | 0 | 0 | 99.7 | −π/2 | / |
| 6 | 0 | 0 | 99.6 | 0 | / |
Table 2. Hand–eye calibration results under different initial poses.

Initial pose 1: (499.9931, 0.01193, 319.9837, 2.2762, 2.1025, 0.0172)
T_b^o = [0.0069 0.1977 0.9802 447.5726; 0.9982 0.0576 0.0050 516.6255; 0.0593 0.9786 0.1977 1745.9053; 0 0 0 1]
T_t^e = [0.0044 0.0052 0.9998 93.8660; 0.9465 0.3090 0.0008 21.8479; 0.3230 0.9464 0.0064 16.0774; 0 0 0 1]

Initial pose 2: (400.0114, 9.9917, 300.0114, 2.2760, 1.7999, 0.4001)
T_b^o = [0.0078 0.1964 0.9805 446.8286; 0.9984 0.0575 0.0033 516.9325; 0.0555 0.9788 0.1965 1746.4952; 0 0 0 1]
T_t^e = [0.0022 0.0097 0.9999 93.1943; 0.9460 0.3230 0.0041 21.3521; 0.3230 0.9464 0.0083 15.9634; 0 0 0 1]

Initial pose 3: (450.0013, 10.1078, 350.0061, 1.9999, 2.3001, 0.5001)
T_b^o = [0.0079 0.1971 0.9804 447.2125; 0.9983 0.0578 0.0035 516.1168; 0.0574 0.9787 0.1972 1747.1417; 0 0 0 1]
T_t^e = [0.0045 0.0073 0.9999 93.9635; 0.9461 0.3237 0.0019 21.2601; 0.3237 0.9461 0.0084 16.1022; 0 0 0 1]

Initial pose 4: (380.0051, 30.0017, 409.9930, 2.3999, 1.7999, 0.3998)
T_b^o = [0.0085 0.1975 0.9803 446.6953; 0.9982 0.0575 0.0035 515.8220; 0.0599 0.9786 0.1976 1746.0214; 0 0 0 1]
T_t^e = [0.0057 0.0063 0.9994 94.5844; 0.9465 0.3247 0.0014 22.0379; 0.3242 0.9459 0.0074 16.2610; 0 0 0 1]
Table 3. Robot kinematic parameters determined by identification.

| Joint i | θi (rad) | ai (mm) | di (mm) | αi (rad) | βi (rad) |
| 1 | 0.0000 | 0.4728 | 162.9719 | 1.5713 | 0 |
| 2 | 0.0017 | −425.4758 | 0.0233 | 0.0001 | −0.0018 |
| 3 | −0.0007 | −392.5043 | 0.0238 | 0.0087 | 0.0017 |
| 4 | 0.0012 | 0.0149 | 133.3226 | 1.5738 | 0 |
| 5 | −0.0027 | 0.0208 | 100.0251 | −1.5695 | 0 |
| 6 | −0.0129 | −0.4613 | 99.6832 | 0.0231 | 0 |
Table 4. Hand–eye calibration parameters determined by identification.

| | x (mm) | y (mm) | z (mm) | rx (rad) | ry (rad) | rz (rad) |
| E_b^o | 446.9202 | −517.6303 | −1746.4345 | −1.6432 | −0.0010 | 1.4245 |
| E_t^e | −94.3274 | 22.5562 | 16.0789 | 1.5816 | 0.3369 | 1.5668 |
Table 5. Statistics of positioning error after calibration.

| Method | Dimension | Mean | Maximum | Minimum |
| Zheng et al. [13] | Position (mm) | 0.1788 | 0.4197 | 0.0364 |
| | Orientation (rad) | 0.0064 | 0.0086 | 0.0041 |
| Boby et al. [14] | Position (mm) | 0.2556 | 0.4522 | 0.1328 |
| | Orientation (rad) | 0.0021 | 0.0042 | 0.0007 |
| Omodei et al. [18] | Position (mm) | 0.3572 | 0.6997 | 0.1535 |
| | Orientation (rad) | 0.0035 | 0.0084 | 0.0005 |
| Proposed | Position (mm) | 0.0418 | 0.1131 | 0.0080 |
| | Orientation (rad) | 0.0007 | 0.0013 | 0.0003 |
Table 6. Statistics of positioning error after compensation.

| Method | Mean puncture error (mm) | Maximum (mm) | Minimum (mm) |
| Zheng et al. [13] | 0.8427 | 1.1842 | 0.4063 |
| Boby et al. [14] | 0.6687 | 1.2274 | 0.4655 |
| Omodei et al. [18] | 0.8660 | 1.1101 | 0.6883 |
| Proposed | 0.4305 | 0.7237 | 0.2537 |
