Article

Oscillating Saw Calibration for Mandibular Osteotomy Robots

Cai Meng, Dingzhe Li, Weimin Yuan, Kai Wu and Hongbin Shen
1 Image Processing Center, Beijing University of Aeronautics and Astronautics, Beijing 100191, China
2 Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100083, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9773; https://doi.org/10.3390/app13179773
Submission received: 20 June 2023 / Revised: 20 August 2023 / Accepted: 23 August 2023 / Published: 29 August 2023
(This article belongs to the Special Issue Surgical Robotics Design and Clinical Applications)

Abstract

Accurate oscillating saw tool calibration is an important task for mandibular osteotomy robots to perform precise cutting operations. However, in contrast to traditional tool calibration, which calibrates only the tool center position (TCP) or the tool feed axis, both the position and the plane orientation of the saw must be carefully calibrated. Aiming at this problem, in this paper we propose a method for oscillating saw calibration that employs an optical stereo vision tracking system. First, hand–eye calibration is conducted to ascertain the spatial pose of the vision frame within the manipulator's base frame. Subsequently, employing a probe, the positions of the sawtooth points on the oscillating saw plane are captured within the vision frame. These positions are then transformed to the manipulator's end-effector frame using the positional elimination algorithm proposed in this paper. Finally, the pose of the oscillating saw plane within the manipulator's end-effector frame is derived from the positions of the three sawtooth points. The results show that the position errors of the points on the oscillating saw plane are within 0.25 mm and the variance of the plane normal direction is 1.93° in the five experiments. This approach enables accurate calibration of the oscillating saw plane's position and orientation within the manipulator's end-effector frame. Furthermore, it removes the need for the continual adjustments of the manipulator's joint angles required by the "six-point method". However, the approach hinges on the availability of precise 3D positioning equipment.

1. Introduction

Robot-assisted mandibular osteotomy refers to an operation in which a robot, rather than the doctor holding the oscillating saw, cuts the bone to a planned shape [1,2,3,4,5]. Before an osteotomy, doctors need to determine the planned osteotomy plane in the patient's CT image. They then map the osteotomy plane from the three-dimensional CT space onto the base frame of the manipulator. The manipulator controls the oscillating saw to perform the bone cutting procedure along the planned osteotomy plane [6]. As depicted in Figure 1, the oscillating saw cuts by moving laterally back and forth between points A and B on the planned osteotomy plane while simultaneously executing a feeding movement along the planned osteotomy plane. Figure 1b illustrates the force situation during the feeding motion when the oscillating saw plane (abbreviated as OSP) is not parallel to the planned osteotomy plane. The feeding direction of the OSP is oriented along the planned osteotomy plane. Consequently, during the feeding motion of the oscillating saw, a cutting seam is generated on the mandible's surface that is not parallel to the planned osteotomy plane. As the oscillating saw progressively advances with the feed, its surface experiences compressive forces from the bone on both sides of the real cutting seam. If the compression force is excessive, the oscillating saw may break. It is therefore imperative to keep the OSP aligned with the planned osteotomy plane during both lateral and feeding movements. Accordingly, in robot-assisted mandibular osteotomy, the pose of the OSP in the robot's end-effector frame must be calibrated accurately so that the oscillating saw can reach the cutting pose precisely and complete the cut.
Calibrating the OSP is an indispensable task in modern robotic surgical applications; it is a form of tool calibration that demands careful attention to plane attributes. Tool calibration requires selecting an appropriate tool center point (abbreviated to TCP) and obtaining its position (abbreviated to TCPP) and the tool center point frame (abbreviated to TCPF) in the robot end-effector frame [7].
Although there are many works in the literature on tool calibration, to the best of our knowledge, there is no published method for OSP calibration. There are two main reasons for this. Firstly, the existing literature mainly addresses tool calibration with point and line attributes, not tool calibration with plane attributes. Arc welding robots prioritize calibration accuracy for the TCPP over the TCPF [8], whereas drilling robots and medical puncture robots require high calibration accuracy for both the feed axis of the TCPF and the TCPP [9]. In contrast, mandibular osteotomy robots need to accurately calibrate the TCPP and every axis of the TCPF. Secondly, tool calibration methods are primarily classified into contact methods based on spatial constraints and non-contact methods based on 3D measuring equipment [9]. Contact-based calibration of the OSP involves notably intricate operational procedures. Conversely, non-contact calibration of the OSP, while operationally simple, may encounter difficulties in precisely positioning the sawtooth points on the plane. It therefore remains necessary to conduct dedicated research to advance OSP calibration.
In this paper, we introduce a method to address OSP calibration for mandibular osteotomy robots. The challenge of OSP calibration is reformulated as determining the positions of three sawtooth points (abbreviated to TSPP) within the manipulator's end-effector frame. The method consists of three steps. Firstly, head–eye calibration is conducted to obtain the pose of the 3D measuring equipment's vision frame in the robot base frame. Secondly, the TSPP in the vision frame are obtained with the help of a specially designed probe that can be tracked by the 3D measuring equipment. The probe is designed so that its tip aligns precisely with the corresponding sawtooth point on the OSP. By employing the method of position elimination, the TSPP in the manipulator's end-effector frame are then determined. This approach mitigates the errors associated with head–eye calibration compared to directly transforming frames to obtain the TSPP. Thirdly, based on the TSPP in the robot end-effector frame, the pose of the OSP in the robot end-effector frame can be obtained. Our experimental results show that the position errors of the sawtooth points on the OSP are within 0.25 mm and the variance of the plane normal direction is 1.93° in the five experiments.
Compared with other tool calibration methods, the OSP calibration method proposed in this paper has the following advantages:
  • The challenge of OSP calibration is reformulated as the task of TSPP calibration. Presently, there is a notable absence of scholarly literature addressing the calibration of tools with plane attributes. However, the methodology presented in this paper demonstrates a capability for precision calibration of both the TCPP and each axis of the TCPF.
  • A novel tool calibration methodology utilizing 3D measuring equipment is introduced. This approach provides a resolution to the intricate operational challenges inherent in contact-based methods and addresses the limitations of positional accuracy in non-contact methodologies when undertaking TSPP calibration.

2. Related Work

2.1. Contact Methods

Contact methods primarily employ spatial constraints to accomplish tool calibration. Among these, the widely adopted approach is the six-point method [7], wherein the operator manipulates the TCP to a designated tip point from four distinct orientations to calibrate the TCPP. Subsequently, the TCP is directed towards two additional predetermined points to establish the TCPF. Xiong et al. simplified the least squares formulation of TCPP calibration under the hypothesis that the TCPF is parallel to the robot end-effector frame [10]. In contact methods, the operator needs to judge whether the TCP coincides with the fixed tip point, which introduces human error into the calibration of the TCPP and TCPF. To mitigate this challenge, Luo introduced a compensation approach that adjusts the robot's pose to align the TCP with the tip point, estimating the coincidence error between the TCP and the designated point through visual analysis of the captured image [11]. Other contact methods utilize a spatial circle [8], a plane [12], or a sphere [13,14] to constrain the robot's TCP movement and improve accuracy. Nevertheless, contact methods need to repeatedly adjust the pose of the robot to touch task-specific points, which inevitably amplifies the intricacy of the calibration procedure.

2.2. Non-Contact Methods

Non-contact methods mainly use 3D measuring equipment to complete tool calibration, most commonly laser systems and binocular vision. Bai et al. used a laser tracking system to measure a target mounted on the end-effector of a robot [15]. Shen et al. used a scanner to reconstruct a sphere in order to calibrate the pose of the scanner frame in the robot end-effector frame [16]. Laser equipment has high precision, but it is expensive. At present, binocular vision is applied to automatic robot calibration [17,18,19] and is also widely used in tool calibration. The utilization of binocular vision for tool calibration can be divided into two categories: marker-based and marker-less. For marker-based approaches, Liu et al. used binocular vision to identify the positions of marker points on the tool to complete tool calibration [20]. For marker-less approaches, Hallenberg used segmentation and feature point detection to automatically identify the TCP [21], and Zhang et al. used the difference between two images to automatically identify the TCP [9]. Non-contact methods with a binocular camera can avoid the influence of human error; however, they place higher demands on the performance of automatic detection and pose estimation algorithms.

3. Method

In this paper, a method is proposed to solve OSP calibration for a mandibular osteotomy robot. Figure 2 shows the overall framework. Initially, head–eye calibration is employed to ascertain the pose of the vision frame within the manipulator’s base frame. The obtained outcome is subsequently used in the TSPP calibration procedure. The TSPP calibration method employs a specialized probe to determine the position of the sawtooth points within the vision frame. Subsequently, by means of the position elimination method, the positions of these sawtooth points within the manipulator’s end-effector frame are deduced. Ultimately, based on the positions of the three sawtooth points within the manipulator’s end-effector frame, the pose of the oscillating saw plane within the manipulator’s end-effector frame is established.

3.1. Head–Eye Calibration

Figure 3 shows a schematic diagram of head–eye calibration, where B is the manipulator's base frame, E is the manipulator's end-effector frame, O is the phantom target frame, and C is the vision frame.
The manipulator is moved to pose $m$ ($m = 1, 2, \ldots, M$). The homogeneous matrix of the manipulator's end-effector frame relative to the manipulator's base frame is recorded as ${}^{B}T_{E}^{m}$, and the homogeneous matrix of the phantom target frame relative to the vision frame is recorded as ${}^{C}T_{O}^{m}$.

${}^{C}T_{B} \, {}^{B}T_{E}^{m} = {}^{C}T_{O}^{m} \, {}^{O}T_{E} \qquad (1)$
Let $p$ and $q$ be two different groups of data selected from the $M$ groups of data.

${}^{O}T_{C}^{p} \, {}^{C}T_{B} \, {}^{B}T_{E}^{p} = {}^{O}T_{C}^{q} \, {}^{C}T_{B} \, {}^{B}T_{E}^{q} = {}^{O}T_{E} \qquad (2)$

${}^{C}T_{O}^{q}$ is the inverse matrix of ${}^{O}T_{C}^{q}$, and ${}^{E}T_{B}^{p}$ is the inverse matrix of ${}^{B}T_{E}^{p}$. Then,

${}^{C}T_{O}^{q} \, {}^{O}T_{C}^{p} \, {}^{C}T_{B} = {}^{C}T_{B} \, {}^{B}T_{E}^{q} \, {}^{E}T_{B}^{p} \qquad (3)$
Equation (1) can be solved in the form of $AX = ZB$ [22,23]. Equation (3) can be solved in the form of $AX = XB$ [24,25,26,27,28]. The $AX = XB$ formulation has high stability [23]. This article mainly introduces the solution of $AX = XB$.
In addressing the solution of $AX = XB$, two primary approaches have been prominently employed: the two-step method, involving sequential rotation and translation solutions, and the one-step method, which simultaneously resolves translation and rotation. Notable methodologies within the two-step paradigm encompass the linear least squares method introduced by Tsai [24], the method based on Lie groups and Lie algebras proposed by Park [25], and the head–eye calibration method designed to mitigate camera calibration errors put forth by Horaud [26]. Within the one-step methodology, significant contributions include the dual quaternion solution pioneered by Daniilidis [27], along with the solution equation established by Andreff [28], derived from the $UV + VW = t$ formulation in system theory. To mitigate noise-induced errors, it is often necessary to add nonlinear optimization after obtaining the initial solution through the above methods.
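For concreteness, the following sketch shows one way to solve $AX = XB$ with a two-step approach in the style of Park [25], using NumPy and SciPy. The A and B matrices are the relative motions on the two sides of Equation (3); the function name, data layout, and the omission of the final nonlinear refinement are our own simplifying assumptions, not the exact implementation used in this work.

```python
import numpy as np
from scipy.linalg import logm, inv, sqrtm

def solve_ax_xb(A_list, B_list):
    """Estimate X (4x4) from pairs A_i X = X B_i (Park-style two-step solution).

    A_list, B_list: lists of 4x4 homogeneous transforms built from pose pairs,
    as on the left- and right-hand sides of Equation (3).
    """
    # --- rotation: R_X = (M^T M)^(-1/2) M^T with M = sum(beta_i alpha_i^T) ---
    M = np.zeros((3, 3))
    for A, B in zip(A_list, B_list):
        alpha = np.real(logm(A[:3, :3]))   # so(3) log of the rotation part of A_i
        beta = np.real(logm(B[:3, :3]))    # so(3) log of the rotation part of B_i
        a = np.array([alpha[2, 1], alpha[0, 2], alpha[1, 0]])  # vee operator
        b = np.array([beta[2, 1], beta[0, 2], beta[1, 0]])
        M += np.outer(b, a)
    R_X = inv(np.real(sqrtm(M.T @ M))) @ M.T

    # --- translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked least squares ---
    C, d = [], []
    for A, B in zip(A_list, B_list):
        C.append(A[:3, :3] - np.eye(3))
        d.append(R_X @ B[:3, 3] - A[:3, 3])
    t_X, *_ = np.linalg.lstsq(np.vstack(C), np.hstack(d), rcond=None)

    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

In practice, the result of such a closed-form step would then be refined by the nonlinear optimization mentioned above.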

3.2. TSPP Calibration

TSPP calibration entails determining the positions of three specific sawtooth points in the manipulator's end-effector frame. As shown in Figure 4, these points, denoted p1, p2, and p3, are selected on the oscillating saw's plane and serve as the TSPP. Conventional tool position calibration methods often rely on the four-point method. Nonetheless, this approach requires continuous manipulator pose adjustments by the operator to ensure that the tool center point (TCP) coincides with a fixed spatial point. Clearly, manually adjusting the position of the spatially fixed point is more straightforward than guiding the manipulator so that the TCP aligns with such a point. Guided by this reasoning, this paper employs the aforementioned head–eye calibration outcome to deduce the TSPP, thereby mitigating operational intricacies.
This paper provides an original method and an improved method. The original method directly uses the head–eye calibration result to convert the TSPP from the vision frame to the manipulator's end-effector frame. However, head–eye calibration has a calibration error, so using its result directly is inappropriate. The improved method eliminates the influence of the head–eye calibration position error on TSPP calibration. In this paper, the improved method is referred to as the positional elimination technique.

3.2.1. Original Method

Figure 4 shows a schematic diagram of the TSPP calibration method based on head–eye calibration. $B$ is the manipulator's base frame, $E$ is the manipulator's end-effector frame, and $tool$ is the tool frame. The origin of the tool frame is set to the point currently being calibrated; for example, to calculate the position of p1 in the manipulator's end-effector frame, the origin of the tool frame should be set at p1. $C$ is the vision frame, and the 3D measuring equipment can obtain the probe tip position in the vision frame. The probe consists of two parts: a phantom target that can be detected by the vision system, and a probe tip that can coincide with a sawtooth point.
The manipulator is moved to pose $n$ ($n = 1, 2, \ldots, N$). ${}^{E}T_{B}^{n}$ denotes the $N$ groups of data of the transformation matrix from the manipulator's base frame to the manipulator's end-effector frame. ${}^{E}p_{tool}^{n}$ denotes the $N$ groups of results for the translation from the tool frame to the manipulator's end-effector frame. ${}^{C}p_{tool}^{n}$ denotes the $N$ groups of data of the translation from the tool frame to the vision frame, which can be obtained by the 3D measuring equipment when the probe tip coincides with the sawtooth point.

${}^{E}p_{tool}^{n} = \left( {}^{E}T_{B}^{n} \, {}^{B}T_{C} \right) {}^{C}p_{tool}^{n} \qquad (4)$

The final result ${}^{E}p_{tool}^{result}$ is the average of the $N$ groups of data.

${}^{E}p_{tool}^{result} = \frac{\sum_{n=1}^{N} {}^{E}p_{tool}^{n}}{N} \qquad (5)$
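The original method of Equations (4) and (5) amounts to transforming each probe reading into the end-effector frame and averaging. A minimal sketch, assuming each robot pose is stored as a 4 × 4 NumPy array and each probe reading as a 3-vector (the names are illustrative, not from the paper):

```python
import numpy as np

def tspp_original(T_BE_list, p_C_list, T_BC):
    """Average Equation (4) over all N poses, as in Equation (5).

    T_BE_list: list of 4x4 base -> end-effector poses (forward kinematics)
    p_C_list:  list of 3-vectors, sawtooth point measured in the vision frame C
    T_BC:      4x4 pose of the vision frame in the base frame (head-eye result)
    """
    pts = []
    for T_BE, p_C in zip(T_BE_list, p_C_list):
        T_EC = np.linalg.inv(T_BE) @ T_BC          # E <- C transform for this pose
        pts.append(T_EC[:3, :3] @ p_C + T_EC[:3, 3])
    return np.mean(pts, axis=0)                    # E_p_tool_result
```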

3.2.2. Improved Method

Expanding Equation (4) yields the following formula:

$\begin{pmatrix} {}^{E}p_{tool}^{n} \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{E}R_{B}^{n} & {}^{E}p_{B}^{n} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} {}^{B}R_{C} & {}^{B}p_{C} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} {}^{C}p_{tool}^{n} \\ 1 \end{pmatrix} \qquad (6)$

${}^{E}p_{tool}^{n} = {}^{E}R_{B}^{n} \, {}^{B}R_{C} \, {}^{C}p_{tool}^{n} + {}^{E}R_{B}^{n} \, {}^{B}p_{C} + {}^{E}p_{B}^{n} \qquad (7)$

${}^{E}R_{B}^{n}$ is the rotation matrix from the manipulator's base frame to the manipulator's end-effector frame, and ${}^{E}p_{B}^{n}$ is the corresponding translation vector. ${}^{B}R_{C}$ is the rotation matrix from the vision frame to the manipulator's base frame, and ${}^{B}p_{C}$ is the corresponding translation vector.
From Equation (7), it can be seen that the position of the origin point of the tool frame in the manipulator’s end-effector frame is related to the rotation calibration error and position calibration error of head–eye calibration. This paper improves the original method to eliminate the position error of head–eye calibration.
The following formula is obtained by transforming Equation (7):

${}^{B}R_{E}^{n} \, {}^{E}p_{tool}^{n} = {}^{B}R_{C} \, {}^{C}p_{tool}^{n} + {}^{B}p_{C} - {}^{B}p_{E}^{n} \qquad (8)$

Let $p$ and $q$ be two different groups of data selected from the $N$ groups of data.

$\left( {}^{B}R_{E}^{p} - {}^{B}R_{E}^{q} \right) {}^{E}p_{tool}^{result} = {}^{B}R_{C} \left( {}^{C}p_{tool}^{p} - {}^{C}p_{tool}^{q} \right) - {}^{B}p_{E}^{p} + {}^{B}p_{E}^{q} \qquad (9)$

${}^{E}p_{tool}^{result}$ can be obtained by the least squares method. Because Equation (9) contains three unknowns, a minimum of four data sets are required to derive a conclusive outcome; redundant data, however, yield results of greater accuracy.
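Equation (9) can be solved by stacking all pairs $(p, q)$ into one linear least-squares problem; note that the head–eye translation ${}^{B}p_{C}$ cancels out, which is the point of the positional elimination. The sketch below reflects our reading of this step; the variable names and the use of all pairwise combinations are assumptions, not code from the paper.

```python
import numpy as np
from itertools import combinations

def tspp_improved(R_BE_list, p_BE_list, p_C_list, R_BC):
    """Solve Equation (9) for E_p_tool by pairwise differencing.

    R_BE_list: rotations B_R_E^n of the end-effector in the base frame
    p_BE_list: translations B_p_E^n of the end-effector in the base frame
    p_C_list:  sawtooth point C_p_tool^n measured in the vision frame
    R_BC:      rotation of the vision frame in the base frame (head-eye result)
    """
    A_rows, b_rows = [], []
    for p, q in combinations(range(len(R_BE_list)), 2):
        A_rows.append(R_BE_list[p] - R_BE_list[q])
        b_rows.append(R_BC @ (p_C_list[p] - p_C_list[q])
                      - p_BE_list[p] + p_BE_list[q])
    A = np.vstack(A_rows)          # shape (3 * num_pairs, 3)
    b = np.hstack(b_rows)          # shape (3 * num_pairs,)
    E_p_tool, *_ = np.linalg.lstsq(A, b, rcond=None)
    return E_p_tool
```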

3.3. Oscillating Saw Calibration

Let ${}^{E}p_{1}$, ${}^{E}p_{2}$, and ${}^{E}p_{3}$ respectively represent the positions of points p1, p2, and p3 in the manipulator's end-effector frame, as obtained by TSPP calibration. Points p1, p2, and p3 can then be used to construct the oscillating saw frame.
Take the direction from p3 to p2 as the direction of the x-axis of the oscillating saw frame, and let $x$ represent the unit vector of the x-axis of the oscillating saw frame in the manipulator's end-effector frame.

$x = \mathrm{normalization}\left( {}^{E}p_{2} - {}^{E}p_{3} \right) \qquad (10)$
p4 lies on the line through p2 and p3, and the line through p1 and p4 is perpendicular to the line through p2 and p3. Let ${}^{E}p_{4}$ represent the position of p4 in the manipulator's end-effector frame; ${}^{E}p_{4}$ can be obtained from the following condition:

$\left( {}^{E}p_{1} - {}^{E}p_{4} \right) \cdot \left( {}^{E}p_{2} - {}^{E}p_{3} \right) = 0 \qquad (11)$
Take the direction from p4 to p1 as the direction of the z-axis of the oscillating saw frame, and let $z$ represent the unit vector of the z-axis of the oscillating saw frame in the manipulator's end-effector frame.

$z = \mathrm{normalization}\left( {}^{E}p_{1} - {}^{E}p_{4} \right) \qquad (12)$

Select p1 as the TCP.

${}^{E}T_{tool} = \begin{pmatrix} x & z \times x & z & {}^{E}p_{1} \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (13)$
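Equations (10)–(13) reduce to a few vector operations. A minimal sketch, assuming the three calibrated points are given as NumPy 3-vectors in the end-effector frame:

```python
import numpy as np

def saw_frame(p1, p2, p3):
    """Build E_T_tool from the three sawtooth points in the end-effector frame."""
    x = p2 - p3
    x /= np.linalg.norm(x)                     # Equation (10)
    # p4: foot of the perpendicular from p1 onto the line p3-p2, Equation (11)
    p4 = p3 + np.dot(p1 - p3, x) * x
    z = p1 - p4
    z /= np.linalg.norm(z)                     # Equation (12)
    y = np.cross(z, x)                         # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p1   # Equation (13)
    return T
```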

4. Results

To assess the robustness of the proposed tool calibration algorithm, a series of experiments were conducted, encompassing both simulation and real-world scenarios. The simulation experiment, elaborated upon in Section 4.1, primarily assesses the oscillating saw calibration method's performance across various parameters: how the type of head–eye calibration affects the TSPP error, the impact of the sample size on the TSPP error, and the algorithm's overall robustness. Notably, because the same position calibration method is applied to each of the three sawtooth points, the subsequent discussion treats the TSPP as a single point. The ensuing real-world experiment, detailed in Section 4.2, outlines the system composition and presents the experimental outcomes.

4.1. Simulation Experiment

4.1.1. Data Acquisition Method and Error Evaluation Standard

We randomly generate 100 groups of data ${}^{E}T_{B}^{k}$ ($k = 1, 2, \ldots, 100$) according to the DH parameters of the manipulator. The first 50 groups represent the poses of the manipulator required for head–eye calibration, and the last 50 groups represent the poses required for TSPP calibration. The true value of the transformation matrix of the target frame relative to the manipulator's end-effector frame during head–eye calibration is randomly generated as ${}^{E}T_{O}$. The true value of the transformation matrix of the vision frame relative to the manipulator's base frame is randomly generated as ${}^{B}T_{C}$. The true value of the transformation matrix of the tool frame relative to the manipulator's end-effector frame during TSPP calibration is randomly generated as ${}^{E}T_{tool}$. From these, the transformation matrix of the target frame relative to the vision frame in the head–eye calibration process can be generated as ${}^{C}T_{O}^{k}$ ($k = 1, 2, \ldots, 50$), and the transformation matrix of the tool frame relative to the vision frame in the TSPP calibration process can be generated as ${}^{C}T_{tool}^{k}$ ($k = 51, 52, \ldots, 100$).
Let $(r, t)$ represent the axis–angle vector and translation vector of ${}^{C}T_{O}^{k}$ ($k = 1, 2, \ldots, 50$) and ${}^{C}T_{tool}^{k}$ ($k = 51, 52, \ldots, 100$), and let $(r', t')$ represent the result after adding Gaussian noise:

$r' = r + d \, \Delta r, \qquad t' = t + d \, \Delta t \qquad (14)$

where $d$ is the noise gain, $\Delta r$ follows a Gaussian distribution with mean 0 and variance $(1^2, 1^2, 1^2)$, and $\Delta t$ follows a Gaussian distribution with mean 0 and variance $(20^2, 20^2, 20^2)$.
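A sketch of the noise model in Equation (14), under the assumption that each component of $r$ is perturbed with standard deviation 1 and each component of $t$ with standard deviation 20, scaled by the gain $d$; the units are not stated in the paper and the function name is illustrative.

```python
import numpy as np

def add_noise(r, t, d, rng=np.random.default_rng()):
    """Perturb an axis-angle/translation pair as in Equation (14)."""
    dr = rng.normal(0.0, 1.0, size=3)    # per-axis std 1 (variance 1^2)
    dt = rng.normal(0.0, 20.0, size=3)   # per-axis std 20 (variance 20^2)
    return r + d * dr, t + d * dt
```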
The error calculation method follows two principles. The first principle is that the error has clear physical significance. The second principle is that when there is no position error in the result of the head–eye calibration, the error between the original method and the improved method for TSPP calibration should be the same. To this end, this paper defines the error in the following way.
As shown in Figure 5, let poses $a$ and $b$ represent two different states during the process of obtaining the point position. ${}^{B}p_{tool}^{a}$ and ${}^{B}p_{tool}^{b}$ respectively represent the position of the tool frame origin relative to the manipulator's base frame in the two states. ${}^{C}p_{tool}^{a}$ and ${}^{C}p_{tool}^{b}$ respectively represent the position of the tool frame origin relative to the vision frame in the two states. ${}^{B}R_{E}^{a}$ and ${}^{B}R_{E}^{b}$ respectively represent the rotation matrix of the manipulator's end-effector frame relative to the manipulator's base frame in the two states. ${}^{B}p_{E}^{a}$ and ${}^{B}p_{E}^{b}$ respectively represent the position of the manipulator's end-effector frame relative to the manipulator's base frame in the two states. ${}^{B}R_{C}$ and ${}^{B}p_{C}$ respectively represent the rotation matrix and translation vector of the result obtained by head–eye calibration.

${}^{B}p_{tool}^{a} = {}^{B}R_{E}^{a} \, {}^{E}p_{tool} + {}^{B}p_{E}^{a} = {}^{B}R_{C} \, {}^{C}p_{tool}^{a} + {}^{B}p_{C}, \qquad {}^{B}p_{tool}^{b} = {}^{B}R_{E}^{b} \, {}^{E}p_{tool} + {}^{B}p_{E}^{b} = {}^{B}R_{C} \, {}^{C}p_{tool}^{b} + {}^{B}p_{C} \qquad (15)$
then
${}^{B}p_{tool}^{a} - {}^{B}p_{tool}^{b} = \left( {}^{B}R_{E}^{a} - {}^{B}R_{E}^{b} \right) {}^{E}p_{tool} + {}^{B}p_{E}^{a} - {}^{B}p_{E}^{b} = {}^{B}R_{C} \left( {}^{C}p_{tool}^{a} - {}^{C}p_{tool}^{b} \right) \qquad (16)$
Theoretically, if there is no rotation error in the head–eye calibration, the true value of ${}^{B}p_{tool}^{a} - {}^{B}p_{tool}^{b}$ can be obtained. ${}^{B}p_{tool}^{a} - {}^{B}p_{tool}^{b}$ can be computed from the calibrated position of the tool frame origin in the manipulator's end-effector frame and the forward kinematics of the manipulator. Let $t$ represent the number of data groups collected for the TSPP calibration process. The TSPP error can then be defined as:

$error = \frac{\displaystyle\sum_{i=1}^{t} \sum_{j=i+1}^{t} \left\| \left( {}^{B}R_{E}^{i} - {}^{B}R_{E}^{j} \right) {}^{E}p_{tool} + {}^{B}p_{E}^{i} - {}^{B}p_{E}^{j} - {}^{B}R_{C} \left( {}^{C}p_{tool}^{i} - {}^{C}p_{tool}^{j} \right) \right\|_{2}}{t \, (t - 1) / 2} \qquad (17)$
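The error metric of Equation (17) averages this residual over all sample pairs. A minimal sketch, using the same assumed variable names as in the earlier sketches:

```python
import numpy as np
from itertools import combinations

def tspp_error(E_p_tool, R_BE_list, p_BE_list, p_C_list, R_BC):
    """Mean pairwise residual of Equation (17)."""
    residuals = []
    for i, j in combinations(range(len(R_BE_list)), 2):
        lhs = (R_BE_list[i] - R_BE_list[j]) @ E_p_tool + p_BE_list[i] - p_BE_list[j]
        rhs = R_BC @ (p_C_list[i] - p_C_list[j])
        residuals.append(np.linalg.norm(lhs - rhs))
    return np.mean(residuals)   # mean over the t*(t-1)/2 pairs
```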

4.1.2. Influence of the Head–Eye Calibration Type on the TSPP Error

This paper analyzes the influence of different head–eye calibration methods on the TSPP error. Table 1 and Table 2 show a comparison of experimental results without optimization in the process of head–eye calibration and with optimization in the process of head–eye calibration. In the experiment, the number of head–eye calibration data groups is 50, the number of the data groups for the process of the TSPP calibration is 50, and the noise gain is d = 0.05.
It can be seen from Table 1 and Table 2 that no matter which head–eye calibration method is used to obtain the initial value, the best result is obtained as long as the nonlinear optimization step is added.

4.1.3. Influence of the Number of Samples on the TSPP Error

To analyze the influence of the number of collected data on the TSPP error, two experiments were conducted. Firstly, the influence of the number of head–eye calibration data on the TSPP error is explored. The number of acquisition data for head–eye calibration was changed from 5 to 50, in which the optimization process of head–eye calibration was added. The number of data for the TSPP was fixed at a value of 50, and the noise gain was d = 0.05. Secondly, the influence of the number of data for the TSPP calibration on the TSPP error is also explored. The number of data for the TSPP was changed from 5 to 50, in which the optimization process of head–eye calibration was added. The number of the acquisition data for head–eye calibration was fixed at a value of 50 and the noise gain was d = 0.05.
As can be seen from Figure 6, the error changes noticeably while the number of head–eye calibration data groups is below 40; beyond 40, the change is no longer obvious. In some cases, the optimization process fails to converge because of the data; for example, when the number of data groups collected for head–eye calibration is 25, the error does not converge. Figure 6 also shows that the error changes significantly when the number of data groups collected for the TSPP is below 40, and the change is not obvious beyond 40.

4.1.4. Influence of the Noise Amplitude on the TSPP Error

To verify the robustness of the algorithm, the influence of noise with different amplitudes on the TSPP error is explored. The number of data collected for head–eye calibration is 50, and the number of data collected for the TSPP is 50. Nonlinear optimization was added in the process of head–eye calibration and the noise amplitude d was changed from 0 to 0.25.
As can be seen from Figure 7, the improved method has a stronger anti-interference ability than the original method. When the TSPP error threshold is 0.5 mm, d should be less than 0.1.

4.2. Real Experiment

4.2.1. System Constitution

The experimental system was composed of a collaborative manipulator, an infrared binocular camera, an oscillating saw, and a host computer. The configuration of the system is depicted in Figure 8. The collaborative manipulator is a six-degree-of-freedom Aubo-I5 (AUBO Robotics, Beijing, China), and the oscillating saw is affixed to the manipulator's end flange. An NDI Polaris infrared binocular camera (Northern Digital, Waterloo, ON, Canada) serves as the 3D measuring equipment. The custom-designed navigation probe consists of four target balls affixed to a cylindrical structure, and the cylinder incorporates a small aperture on its end surface. As depicted in Figure 8, this aperture aligns with the sawtooth tip, enabling the derivation of the sawtooth tip's position within the NDI frame.

4.2.2. Pivot Calibration

Since the probe is self-designed, pivot calibration is required to move the origin of the probe frame to the small hole. Then, when the small hole coincides with a point on the oscillating saw, the position of that point in the NDI frame can be obtained.
In the pivot calibration [29] method, the small hole is held coincident with a fixed point in space while the probe is tilted, so that the target balls rotate about this point while remaining visible to the NDI camera. By collecting $l$ groups of data, the position of the small hole in the original probe frame can be obtained, and the origin of the probe frame can then be moved to the small hole, yielding a new probe frame.
In the $l$ groups of collected data, the position of the small hole in the NDI frame should be a fixed value; the error of pivot calibration is therefore defined as the dispersion of the small hole position in the NDI frame computed from the pivot calibration result. ${}^{C}T_{probe}^{i}$ ($i = 1, 2, \ldots, l$) represents the transformation matrix from the probe frame to the NDI frame. The result of pivot calibration is ${}^{probe}p_{hole}$, the position of the small hole in the probe frame. ${}^{C}p_{hole}$ is a fixed value representing the position of the small hole in the vision frame. The error of pivot calibration is defined in Equation (18). The results of the five experiments are shown in Table 3.

${}^{C}p_{hole} = \frac{\sum_{i=1}^{l} {}^{C}T_{probe}^{i} \, {}^{probe}p_{hole}}{l}, \qquad error = \frac{\sum_{i=1}^{l} \left\| {}^{C}T_{probe}^{i} \, {}^{probe}p_{hole} - {}^{C}p_{hole} \right\|_{2}}{l} \qquad (18)$
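Pivot calibration itself reduces to a linear least-squares problem: stacking ${}^{C}R_{probe}^{i}\,{}^{probe}p_{hole} + {}^{C}t_{probe}^{i} = {}^{C}p_{hole}$ over all $i$ and solving for the two unknown points. The sketch below also evaluates the error of Equation (18); the function and variable names are our own assumptions.

```python
import numpy as np

def pivot_calibration(T_Cprobe_list):
    """Estimate the small-hole position in the probe frame and the pivot error.

    T_Cprobe_list: list of 4x4 transforms from the probe frame to the NDI frame,
    collected while the hole stays on a fixed point and the probe is tilted.
    """
    # Stack R_i * probe_p_hole - C_p_hole = -t_i and solve for both unknowns.
    A = np.vstack([np.hstack([T[:3, :3], -np.eye(3)]) for T in T_Cprobe_list])
    b = np.hstack([-T[:3, 3] for T in T_Cprobe_list])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    probe_p_hole = x[:3]

    # Error of Equation (18): dispersion of the hole position in the NDI frame.
    pts = np.array([T[:3, :3] @ probe_p_hole + T[:3, 3] for T in T_Cprobe_list])
    C_p_hole = pts.mean(axis=0)
    error = np.mean(np.linalg.norm(pts - C_p_hole, axis=1))
    return probe_p_hole, C_p_hole, error
```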

4.2.3. Oscillating Saw Calibration Result

In each real experiment, 50 groups of data for head–eye calibration were collected, and 15 groups of data were, respectively, collected for each point on the oscillating saw. Table 4 shows the position errors of points p1, p2, and p3 on the oscillating saw in five real experiments.
The error calculation method of the TSPP was defined in the simulation experiment. As shown in Table 5, this paper analyzes the variance of the TCPP and the plane normal direction in five experiments.
The result with the smallest error is selected as the final result; this corresponds to the first experiment in Table 4.
${}^{E}p_{1} = \begin{pmatrix} 155.029 \\ 27.868 \\ 275.502 \end{pmatrix}, \qquad {}^{E}p_{2} = \begin{pmatrix} 149.768 \\ 32.089 \\ 275.215 \end{pmatrix}, \qquad {}^{E}p_{3} = \begin{pmatrix} 152.579 \\ 22.2 \\ 273.782 \end{pmatrix}$
The oscillating saw calibration result is shown below.
${}^{E}T_{tool} = \begin{pmatrix} 0.2708 & 0.2117 & 0.939 & 155.029 \\ 0.9527 & 0.1988 & 0.2299 & 27.868 \\ 0.1381 & 0.9569 & 0.2556 & 275.502 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
In this paper, the problem of oscillating saw calibration is transformed into the problem of TSPP calibration. The position errors of the points on the oscillating saw plane are within 0.25 mm, and the variance of the plane normal direction is 1.93° in the five experiments. Thus, the result of the oscillating saw calibration is considered to meet the requirements. Because the TSPP method proposed in this paper does not require constant adjustment of the manipulator's end-effector, the oscillating saw calibration method in this paper is more convenient.

5. Discussion

Oscillating saw calibration is a form of tool calibration that includes planar orientation information; here it is transformed into the problem of TSPP calibration. That is, each of the three sawtooth points on the oscillating saw is regarded as a TCP (together forming the TSPP), and from these points the frame of the oscillating saw can be determined.
To avoid the time-consuming and labor-intensive procedures of traditional tool calibration methods, an optical stereo vision system is employed. The optical stereo vision system must be accurately registered to the manipulator's base frame by head–eye calibration. In our simulation experiment, different head–eye calibration methods were tested, and the results showed that no matter which head–eye calibration method is used to obtain the initial value, the best result can be obtained as long as the nonlinear optimization process is employed.
To obtain a better TCP position, the data for the TSPP should be redundant and the collected samples should be evenly distributed in space, for example, 40 groups. The simulation experiments also demonstrate that the positional elimination method proposed in this paper yields higher precision than the direct transformation of frames for obtaining the TSPP.
In practice, 15 groups of data can be selected to reduce the operational effort. In the five physical experiments, the obtained average error of the TCP was 0.25 mm, and the angular error of the tool plane's normal vector was less than 1.93°. The experimental results meet the requirements for performing osteotomy surgery.
The method proposed in this paper uses an optical localization system to obtain the coordinates of three predefined sawtooth points on the oscillating saw. Traditional contact tool calibration methods [7,8,10,11,12,13,14] mainly calibrate the TCP of the tool and involve continual adjustments of the manipulator's pose to ensure that the TCP contacts a fixed point in space. The traditional contact method is cumbersome to operate, whereas the method proposed in this paper obviates the need for continuous adjustments of the manipulator's pose. Non-contact tool calibration methods [9,20,21] rely on precise localization of the TCP by the visual system; in this paper, a customized probe is employed to ensure the visual system's accuracy in localizing the TCP. In contrast to purely visual positioning techniques, probe-based positioning overcomes the challenges posed by the small dimensions of the sawtooth points and the difficulty of identifying reflective materials. To the best of our knowledge, this is the first paper presenting a tool calibration method capable of accurately calibrating both the tool position and orientation.
The limitations of this method are as follows: (1) It requires a high-precision visual measurement system, such as the NDI Vecra optical localization tracking system (Northern Digital) used in this experiment, which has an RMS error of 0.15 mm. (2) The oscillating saw must remain in its initial position during the calibration process; when measuring the coordinates of the sawtooth points with the optical probe, care must be taken not to make the tool swing or shift, although this is not difficult to achieve.

6. Conclusions

This paper presents an indirect calibration method for the oscillating saw of a bone-cutting robot using a stereo vision system. The calibration process involves establishing the alignment between the manipulator’s base frame and the visual system’s frame. Subsequently, three sawtooth points on the fan-shaped saw are measured using a custom optical probe, with one of these points serving as the TCP. This enables the determination of the TCP position of the oscillating saw in the manipulator’s end-effector frame, as well as the plane information of the tool. Finally, the pose transformation matrix between the tool and the manipulator’s end-effector frame is calculated.
To address the inherent calibration error associated with the head–eye calibration process, this study incorporates position elimination derived from the head–eye calibration outcome, thereby enhancing the oscillating saw calibration methodology. As a result, the position error attributed to head–eye calibration is effectively mitigated. The experimental results demonstrate that the calibration errors can be reduced to as low as 0.25 mm and 1.93°, which is sufficient for clinical applications.
Compared to traditional tool calibration methods, the approach proposed in this paper is not only less time consuming but also less labor intensive. However, it should be noted that a high-precision optical localization system and a specialized optical probe for the sawtooth points are required. Despite this requirement, the benefits of this method outweigh the limitations, making it a promising technique for calibrating oscillating saws in bone-cutting robots.

Author Contributions

Conceptualization, C.M.; methodology, C.M. and D.L.; writing—original draft preparation, W.Y.; data curation, K.W.; visualization, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Beijing NSFC project (Grant No.7202103).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shaofang, L.; Kahrs, L.A.; Werner, M.; Knapp, F.B.; Raczkowsky, J.; Schipper, J.; Ivanenko, M.; Worn, H.; Hering, P.; Klenzner, T. First Study on Laser Bone Ablation System at the Skull Base for Micro Surgery Based on Vision Navigation. In Proceedings of the 2007 Chinese Control Conference, Zhangjiajie, China, 26–31 July 2007; pp. 602–604. [Google Scholar] [CrossRef]
  2. Burgner, J.; Müller, M.; Raczkowsky, J.; Wörn, H. Ex vivo accuracy evaluation for robot assisted laser bone ablation. Int. J. Med. Robot. Comput. Assist. Surg. 2010, 6, 489–500. [Google Scholar] [CrossRef] [PubMed]
  3. Sotsuka, Y.; Nishimoto, S.; Tsumano, T.; Kawai, K.; Ishise, H.; Kakibuchi, M.; Shimokita, R.; Yamauchi, T.; Okihara, S.i. The dawn of computer-assisted robotic osteotomy with ytterbium-doped fiber laser. Lasers Med. Sci. 2014, 29, 1125–1129. [Google Scholar] [CrossRef] [PubMed]
  4. Baek, K.W.; Deibel, W.; Marinov, D.; Griessen, M.; Bruno, A.; Zeilhofer, H.F.; Cattin, P.; Juergens, P. Clinical applicability of robot-guided contact-free laser osteotomy in cranio-maxillo-facial surgery: In-vitro simulation and in vivo surgery in minipig mandibles. Br. J. Oral Maxillofac. Surg. 2015, 53, 976–981. [Google Scholar] [CrossRef] [PubMed]
  5. Ureel, M.; Augello, M.; Holzinger, D.; Wilken, T.; Berg, B.I.; Zeilhofer, H.F.; Millesi, G.; Juergens, P.; Mueller, A.A. Cold Ablation Robot-Guided Laser Osteotome (CARLO®): From Bench to Bedside. J. Clin. Med. 2021, 10, 450. [Google Scholar] [CrossRef] [PubMed]
  6. Zhang, C.; Liu, Y.; Zhang, Y.; Li, H. A hybrid feature-based patient-to-image registration method for robot-assisted long bone osteotomy. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1507–1516. [Google Scholar] [CrossRef] [PubMed]
  7. Nof, S.Y. Handbook of Industrial Robotics; John Wiley & Sons: Hoboken, NJ, USA, 1999. [Google Scholar]
  8. Mizuno, T.; Hara, R.; Nishi, H. Method for Automatically Setting a Tool Tip Point. U.S. Patent 4,979,127A, 18 December 1990. [Google Scholar]
  9. Zhang, L.; Li, C.; Fan, Y.; Zhang, X.; Zhao, J. Physician-friendly tool center point calibration method for robot-assisted puncture surgery. Sensors 2021, 21, 366. [Google Scholar] [CrossRef] [PubMed]
  10. Xiong, S.; Bo-Sheng, Y.E.; Jiang, M. Study of Robot Tool Coordinate Frame Calibration. Mach. Electron. 2012, 6, 60–63. [Google Scholar]
  11. Luo, R.C.; Hao, W. Automated Tool Coordinate Calibration System of an Industrial Robot. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar]
  12. Zhuang, H.; Motaghedi, S.H.; Roth, Z.S. Robot calibration with planar constraints. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), Detroit, MI, USA, 10–15 May 1999; Volume 1, pp. 805–810. [Google Scholar]
  13. Ge, J.; Gu, H.; Qi, L.; Li, Q. An automatic industrial robot cell calibration method. In Proceedings of the ISR/Robotik 2014, 41st International Symposium on Robotics, Munich, Germany, 2–3 June 2014; pp. 1–6. [Google Scholar]
  14. Gu, H.; Li, Q.; Li, J. Quick Robot Cell Calibration for Small Part Assembly. In Proceedings of the 14th IFToMM World Congress, Taipei, Taiwan, 25–30 October 2015. [Google Scholar]
  15. Ying, B.; Hanqi, Z.; Roth, Z.S. Experiment study of PUMA robot calibration using a laser tracking system. In Proceedings of the Soft Computing in Industrial Applications, Binghamton, NY, USA, 25 June 2003. [Google Scholar]
  16. Shen, C.D.; Amzajerdian, F.; Gao, C.Q. Coordinates calibration method in a robotic remanufacturing measurement system based on linear laser scanner. Proc. SPIE 2009, 7382, 73824G. [Google Scholar]
  17. Wang, Z.; Liu, Z.; Ma, Q.; Cheng, A.; Liu, Y.h.; Kim, S.; Deguet, A.; Reiter, A.; Kazanzides, P.; Taylor, R.H. Vision-based calibration of dual RCM-based robot arms in human–robot collaborative minimally invasive surgery. IEEE Robot. Autom. Lett. 2017, 3, 672–679. [Google Scholar] [CrossRef]
  18. Zhang, X.; Song, Y.; Yang, Y.; Pan, H. Stereo vision based autonomous robot calibration. Robot. Auton. Syst. 2017, 93, 43–51. [Google Scholar] [CrossRef]
  19. Sakakibara, S. An Accurate Automatic Calibration Method of Robot Arm Realized by Three Laser Displacement Sensors. J. Robot. Soc. Jpn. 1994, 12, 1043–1048. [Google Scholar] [CrossRef]
  20. Liu, C.; Ban, R.; Guo, Y. Calibration method of TCP based on stereo vision robot. Infrared Laser Eng. 2015, 44, 1912–1917. [Google Scholar]
  21. Hallenberg, J. Robot Tool Center Point Calibration Using Computer Vision. Master’s Thesis, Linköping University, Linköping, Sweden, 2007. [Google Scholar]
  22. Li, W.; Lü, N.; Dong, M. Simultaneous Robot-World/Hand-Eye Calibration Using Dual Quaternion. Robot 2018, 40, 301–308. [Google Scholar]
  23. Tabb, A.; Yousef, K.M.A. Solving the robot-world hand-eye(s) calibration problem with iterative methods. Mach. Vis. Appl. 2017, 28, 569–590. [Google Scholar] [CrossRef]
  24. Tsai, R.Y.; Lenz, R.K. A new technique for fully autonomous and efficient 3D robotics hand/eye calibration. IEEE Trans. Robot. Autom. 1989, 5, 345–358. [Google Scholar] [CrossRef]
  25. Park, F.C.; Martin, B.J. Robot sensor calibration: Solving AX = XB on the Euclidean group. IEEE Trans. Robot. Autom. 2002, 10, 717–721. [Google Scholar] [CrossRef]
  26. Horaud, R.; Dornaika, F. Hand-eye calibration. Int. J. Robot. Res. 1995, 14, 195–210. [Google Scholar] [CrossRef]
  27. Daniilidis, K. Hand-Eye Calibration Using Dual Quaternions. Int. J. Robot. Res. 1999, 18, 286–298. [Google Scholar] [CrossRef]
  28. Andreff, N.; Horaud, R.; Espiau, B. On-line Hand-Eye Calibration. In Proceedings of the Second International Conference on 3-D Digital Imaging and Modeling, Ottawa, ON, Canada, 8 October 1999. [Google Scholar]
  29. Zhe, M.; Zhu, D.; Meng, Q.H. Accuracy assessment of an N-ocular motion capture system for surgical tool tip tracking using pivot calibration. In Proceedings of the IEEE International Conference on Information & Automation, Ningbo, China, 1–3 August 2016. [Google Scholar]
Figure 1. Comparison of two situations: (a) The OSP is parallel to the planned osteotomy plane. (b) The OSP is not parallel to the planned osteotomy plane.
Figure 2. Overall framework.
Figure 3. The head–eye calibration.
Figure 4. The TSPP calibration method based on head–eye calibration.
Figure 5. Error calculation for the TSPP.
Figure 6. Influence of the number of samples on the TSPP error. (a) The influence of the number of head–eye calibration data on the TSPP error. (b) The influence of the number of data for the TSPP calibration on the TSPP error.
Figure 7. Relation between TSPP error and noise level.
Figure 8. Overview of the experiment platform.
Table 1. The TSPP error without optimization in the process of head–eye calibration.

Method          | Original Method (mm) | Improved Method (mm)
TSAI [24]       | 0.48                 | 0.36
PARK [25]       | 0.38                 | 0.24
HORAUD [26]     | 0.38                 | 0.24
DANIILIDIS [27] | 0.41                 | 0.26
ANDREFF [28]    | 10.82                | 0.28
Table 2. The TSPP error with optimization in the process of head–eye calibration.

Method          | Original Method (mm) | Improved Method (mm)
TSAI [24]       | 0.36                 | 0.24
PARK [25]       | 0.36                 | 0.23
HORAUD [26]     | 0.36                 | 0.23
DANIILIDIS [27] | 0.36                 | 0.23
ANDREFF [28]    | 0.36                 | 0.23
Table 3. Pivot calibration error.

Number of Times | Error (mm)
1               | 0.28
2               | 0.19
3               | 0.18
4               | 0.21
5               | 0.19
Table 4. The oscillating saw calibration error.

Time | Point | Original Method (mm) | Improved Method (mm)
1    | p1    | 0.27                 | 0.11
1    | p2    | 0.20                 | 0.11
1    | p3    | 0.26                 | 0.17
2    | p1    | 0.27                 | 0.11
2    | p2    | 0.20                 | 0.10
2    | p3    | 0.30                 | 0.20
3    | p1    | 0.28                 | 0.11
3    | p2    | 0.20                 | 0.10
3    | p3    | 0.29                 | 0.21
4    | p1    | 0.28                 | 0.11
4    | p2    | 0.19                 | 0.10
4    | p3    | 0.30                 | 0.20
5    | p1    | 0.28                 | 0.11
5    | p2    | 0.19                 | 0.10
5    | p3    | 0.30                 | 0.20
Table 5. The variance of the TCPP and the plane normal direction.

Variance of TCPP (mm) | Variance of Plane Normal Direction (°)
0.105295              | 1.93222
