Article

Kinematic and Joint Compliance Modeling Method to Improve Position Accuracy of a Robotic Vision System

1 School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2 China National Heavy Duty Truck Group Co., Ltd., No. 777 Hua’ao Road, Innovation Zone, Jinan 250101, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(8), 2559; https://doi.org/10.3390/s24082559
Submission received: 19 March 2024 / Revised: 8 April 2024 / Accepted: 12 April 2024 / Published: 16 April 2024
(This article belongs to the Section Sensors and Robotics)

Abstract

In the field of robotic automation, achieving high position accuracy in robotic vision systems (RVSs) is a pivotal challenge that directly impacts the efficiency and effectiveness of industrial applications. This study introduces a comprehensive modeling approach that integrates kinematic and joint compliance factors to significantly enhance the position accuracy of such systems. First, we develop a unified kinematic model that effectively reduces the complexity and error accumulation associated with the calibration of robotic systems. At the heart of our approach is the formulation of a joint compliance model that accounts for the intricacies of the joint connector, the external load, and the self-weight of the robotic links. By employing a novel 3D rotary laser sensor for precise error measurement and model calibration, our method offers a streamlined and efficient solution for the accurate integration of vision systems into robotic operations. The efficacy of the proposed models is validated through experiments conducted on a FANUC LR Mate 200iD robot, showing notable improvements in the position accuracy of the robotic vision system. Our findings contribute a framework for the calibration and error compensation of RVSs, holding significant potential for advancements in automated tasks requiring high precision.

1. Introduction

The advent of robotic vision systems (RVSs) has ushered in a new era of robotic capabilities, fundamentally transforming the scope and efficiency of automated tasks across various industries [1,2]. The intrinsic value of such systems lies in their ability to perform complex visual tasks with remarkable accuracy, from intricate assembly operations in manufacturing to delicate surgical procedures in medicine. High-precision RVSs enable robots to detect, recognize, and manipulate objects with a level of detail and accuracy previously unattainable, bridging the gap between robotic automation and tasks requiring human-like dexterity and visual acuity.
The position accuracy of an RVS is defined by the system’s capability to precisely locate a target within its field of view [3]. This accuracy is gauged by the system’s effectiveness in pinpointing an object’s position relative to its true location in the physical world. Enhancing this position accuracy necessitates robust modeling and calibration processes. These processes entail the acquisition of part point clouds via the vision system, followed by the transformation of these data points into a consistent representation of the target features within the Cartesian coordinate system. Therefore, meticulous modeling is paramount in improving the position accuracy of an RVS. Current methodologies for RVS modeling are bifurcated into kinematic and compliance models, each addressing different aspects of system behavior and contributing to the overall precision of the system [4].
The kinematic model primarily involves the identification of structural parameters of basic robotic movements [5] and the “hand eye” parameters [6,7] between the robot and camera coordinate systems. The D-H model proposed by Denavit and Hartenberg has become widely applied in kinematic modeling within the industrial robotics field, serving as a standard for an extended period [8,9]. Subsequent improvements to this model, such as those introduced by Hayati, incorporate angles between adjacent parallel joints [10]. Du Guanglong presents a novel approach for online robotic kinematic calibration, which, by integrating an Unscented Kalman Filter and an Iterative Particle Filter, achieves precise identification of robotic kinematic parameters without necessitating operational halts [11]. Joubair introduces a kinematic calibration approach utilizing distance and sphere constraints, significantly enhancing the position accuracy of a six-axis serial industrial robot within a specific target workspace [12]. The most common method for hand–eye calibration is based on estimating transformation matrices. Through reprojection error minimization, Koide directly utilizes calibration pattern images, without the need for explicit camera pose estimation [13]. Hua Jiang successfully demonstrates a robot hand–eye calibration approach utilizing an optimized neural network to accurately model the complex, nonlinear relationship between camera and robotic coordinates [14].
The robot compliance model focuses on the elasticity of the materials constituting the robot, addressing deformation errors under applied forces [15,16]. Due to its own weight and external loads, the robot experiences structural deformation in its links and joints [17]. Current research on the joint stiffness modeling of six-degree-of-freedom robots mainly analyzes deformations in joints 2 and 3, establishing simplified linear torsion spring models [18]. Abele utilized a linear torsion spring model and the Jacobian matrix to create a static compliance model for robots, enhancing tool path accuracy [19]. Dumas et al. developed a stiffness model for serial robots based on translational and rotational errors, experimentally validated on the Kuka KR240 robot, identifying compliance coefficients for its six joints [20]. Kozlov used a CAD virtual experimental environment and finite element numerical analysis to derive the compliance matrices describing robot stiffness, achieving a comprehensive robot compliance model [21]. However, this model includes many redundant parameters, which complicates its calibration. Further, Klimchik et al. investigated the static calibration of heavily loaded industrial robots, employing the Virtual Joint Modeling method to establish a robot compliance model [22]. Du Liang developed an approach for calibrating compliance errors in robots by statistically analyzing the effects of gravity and elastostatic forces on individual joints; using single-joint rotations and laser tracking measurements, the study identified significant compliance errors and compensated for them, markedly improving robot accuracy and operational efficiency [23]. Tepper proposed a cost- and time-efficient approach for setting up a compliance model for industrial robots, utilizing an optimal design of experiments for variance-minimal Bayesian inference of gear stiffness parameters [24].
Overall, kinematic models aim to enhance RVS geometric accuracy and mature solutions are already available, while compliance models lack a unified consensus among researchers. There are four approaches to the compliance analysis of robots: Finite Element Analysis [25], the Virtual Joint Model [26], the nonlinear transmission model (NTM) [27], and the Rigid–Flexible Coupling Model (RFCM) [28]. Firstly, Finite Element Analysis simulates system parts to estimate deformation errors under different configurations but struggles to predict errors arising from real part shapes, material properties, and manufacturing assembly. Secondly, the Virtual Joint Model introduces excessive redundant parameters by adding 6-DOF springs at the joints, complicating identification. Thirdly, the NTM corrects nonlinear errors of connectors in joint space, typically modeling end-effector errors with high-order harmonic functions, which suffer from poor interpretability and model under-fitting. Finally, the RFCM considers the mechanical impact of robot self-weight and loads, but cannot effectively capture the mass and centroid errors of robot parts.
As stated above, model calibration is indispensable for a newly installed or worn RVS. This study introduces a model-based kinematic and joint compliance approach, enhancing calibration and error compensation for precise positioning upon integrating vision systems into robots. Our contributions are as follows:
(1)
We design a 3D rotary laser sensor mountable on robot grippers to create a representative RVS, propose an error measurement method based on vision measurement, and demonstrate its ability to improve the position accuracy of the RVS.
(2)
Existing RVS modeling methods separate the calibration of the robot body from that of the hand–eye system, leading to internal error accumulation. We propose a unified kinematic model that trims redundant structural parameters.
(3)
We introduce a joint compliance model that combines the NTM and RFCM, comprehensively considering the effects of connector friction, external loads, and link self-weight on joint compliance. The specific optimizations are as follows:
(i)
We propose an extended NTM, using second-order Fourier functions to fit the joint-space errors of the first three joints while fixing the last three based on the Pieper Criterion in robotics, thereby addressing the under-fitting issue of the NTM.
(ii)
To resolve the issue of model hyper-parameters caused by unknown external load configurations, we approximate the load’s position and direction using the hand–eye transformation matrix of the RVS.
(iii)
Through mechanical analysis, we simplify the link self-weight model.
To address these issues and propose kinematic and joint compliance modeling and calibration methods, this paper focuses on the establishment and parameter identification of RVS models. The rest of the paper is organized as follows: Section 2 introduces a 3D vision sensor to build the RVS, and establishes a unified kinematic model. Section 3 establishes the joint compliance model for the RVS, including an extended nonlinear transmission model and compliance models for the loads’ and links’ self-weights. Section 4 proposes a parameter identification process for kinematic and joint compliance models. Section 5 completes two accuracy verification experiments on the FANUC LR Mate 200iD robot. Section 6 concludes the paper.

2. Method Framework and Unified Kinematic Model

2.1. Overview of Kinematic and Joint Compliance Model

Figure 1 presents an overview of the robotic vision system in our study. A 3D rotary laser sensor is designed in Figure 1a, which is integrated into the gripper of a robot in Figure 1b. Moreover, Figure 2 provides a visual exposition of the methodological architecture underpinning the proposed kinematic and joint compliance model. The flowchart presents a structured sequence of operations beginning with a unified kinematic model that integrates both robot body modeling and hand–eye system calibration. This integration is critical to reducing internal error accumulation and serves as the basis for further error compensation strategies, ensuring a strong foundation for the precision of robotic movements and vision system coordination.
The subsequent step in the process involves the joint compliance model which addresses the elasticity of robot materials and corrects for deformation errors. This model accounts for joint connector elasticity, external load compliance, and the self-weight of robotic links, which are imperative for the robot’s operational accuracy under varying loads and conditions. The framework illustrates how these compliance factors are systematically incorporated into the model, providing a method to compensate for the various forces and torques affecting the robot’s joints. The symbols used in this paper are presented below.

2.2. Geometric Model of a 3D Rotary Laser Sensor

The calibration of industrial robot models requires reliable error observation methods. One mainstream approach is measuring the error of robot motion in configuration space (C-space) using 3D stereo vision sensors. Depending on the installation position of the vision sensor, there are two types: eye-in-hand and eye-to-hand. The choice between these two depends on the practical application requirements, with little difference in the underlying mathematical principles.
First, a geometric model of the rotary laser sensor is established. The camera imaging process includes both a distortion model and a geometric model. The geometric model of the camera is based on the pinhole imaging principle, mathematically expressed in Equation (1).
$$f_x X_c = (u - u_0) Z_c, \qquad f_y Y_c = (v - v_0) Z_c \tag{1}$$
where $f_x$ and $f_y$ represent the camera’s focal lengths on the image plane, and $u_0$ and $v_0$ are the pixel coordinates of the intersection point between the camera’s optical axis and the image plane. $(u, v)$ and $(X_c, Y_c, Z_c)$ express the spatial point in the image plane coordinate system and the camera coordinate system, respectively. The distortion model can be expressed as Equation (2).
$$\begin{cases} u' = u\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 u v + p_2 (r^2 + 2 u^2) \\ v' = v\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_2 u v + p_1 (r^2 + 2 v^2) \\ r = \sqrt{(u - u_0)^2 + (v - v_0)^2} \end{cases} \tag{2}$$
where $u'$ and $v'$ denote the distorted image coordinates, $k_1, k_2, k_3$ are the radial distortion coefficients, and $p_1, p_2$ are the tangential distortion coefficients. Radial distortion, governed by Snell’s law, captures deviations from the optical axis, where light refracts to different positions and causes magnification discrepancies between actual and ideal imaging. The tangential distortion coefficients compensate for errors caused by the nonparallel alignment of the lens and the imaging plane.
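To make the imaging model concrete, the sketch below applies the pinhole projection of Equation (1) together with the distortion polynomial of Equation (2) to a camera-frame point. It is a minimal illustration rather than the sensor’s actual processing pipeline: it works in normalized image coordinates (the equivalent form common in calibration toolboxes, whereas Equation (2) is written in pixel offsets), and it reuses the intrinsic values reported later in Section 5.1 purely as example numbers.

```python
# Intrinsics taken from Section 5.1, used here only for illustration
fx, fy, u0, v0 = 3423.018, 3420.949, 605.260, 580.506
k1, k2, k3 = -0.163, 0.951, -0.874      # radial distortion coefficients
p1, p2 = 0.000107, -0.00119             # tangential distortion coefficients

def project(Xc, Yc, Zc):
    """Pinhole projection (Eq. 1) followed by the distortion model (Eq. 2),
    applied in normalized image coordinates."""
    # Ideal (undistorted) normalized coordinates
    x, y = Xc / Zc, Yc / Zc
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # Distorted normalized coordinates
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + 2 * p2 * x * y + p1 * (r2 + 2 * y * y)
    # Back to pixel coordinates
    return fx * xd + u0, fy * yd + v0

u, v = project(0.01, -0.02, 0.125)  # a point 125 mm in front of the camera
print(f"pixel: ({u:.2f}, {v:.2f})")
```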
Subsequently, the structure of the 3D rotary laser sensor is designed in Figure 1a, and its schematic representation is illustrated in Figure 3. A line laser hits the mirror and then the object surface; the reflected light is captured by a fixed camera. Through the rotation of the mirror, multiple laser stripes are produced in the image. The reflected light planes can be expressed as Equation (3).
$$\mathbf{n}_i \begin{bmatrix} X_c & Y_c & Z_c \end{bmatrix}^T + b_i = 0 \tag{3}$$
where $\mathbf{n}_i = [n_{xi}\ n_{yi}\ n_{zi}]$ is the normal vector of the $i$-th light plane, and $b_i$ is the bias. Then, Equations (1) and (3) are combined in matrix form, as shown in Equation (4).
$$\begin{bmatrix} f_x & 0 & u_0 - u \\ 0 & f_y & v_0 - v \\ n_{xi} & n_{yi} & n_{zi} \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ b_i \end{bmatrix} = \mathbf{0} \tag{4}$$
Finally, the point cloud in camera frame {Cam} can be further calculated by 3D reconstruction Equation (5).
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = -\begin{bmatrix} f_x & 0 & u_0 - u \\ 0 & f_y & v_0 - v \\ n_{xi} & n_{yi} & n_{zi} \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 0 \\ b_i \end{bmatrix} \tag{5}$$
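The reconstruction of Equation (5) amounts to solving a 3 × 3 linear system per pixel: two rows from the pinhole constraints of Equation (1) and one row from the light plane of Equation (3). A minimal sketch follows; the plane parameters and the sample pixel are hypothetical.

```python
import numpy as np

def reconstruct_point(u, v, n_i, b_i, fx, fy, u0, v0):
    """3D reconstruction of Eq. (5): intersect the camera ray through
    pixel (u, v) with the i-th reflected light plane n_i . X + b_i = 0."""
    A = np.array([
        [fx, 0.0, u0 - u],   # f_x * Xc - (u - u0) * Zc = 0  (Eq. 1)
        [0.0, fy, v0 - v],   # f_y * Yc - (v - v0) * Zc = 0
        n_i,                 # n_i . [Xc Yc Zc]^T + b_i = 0   (Eq. 3)
    ])
    b = np.array([0.0, 0.0, -b_i])
    return np.linalg.solve(A, b)  # [Xc, Yc, Zc]

# Example with an assumed light plane and the intrinsics of Section 5.1
P = reconstruct_point(700.0, 600.0, n_i=np.array([0.3, 0.0, 0.954]),
                      b_i=-120.0, fx=3423.018, fy=3420.949,
                      u0=605.260, v0=580.506)
print(P)  # point within the sensor's 115-135 mm working distance
```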

2.3. Unified Kinematic Model

Methods of robot body modeling and hand–eye calibration have already been extensively studied individually. However, this fragmented approach introduces redundant kinematic parameters, leading to the accumulation of system errors.
This section builds upon the serial robot MDH (Modified Denavit–Hartenberg) model by introducing camera and user frames at the beginning and end of the robot, respectively. It further establishes a unified kinematic model for RVS that conforms to a physically expressive model with low redundancy. Then, a full parameter optimization solution is performed on the system’s kinematic model.
The initial model of the measurement system is illustrated in Figure 1b. The transformation from the camera frame {Cam} to the user frame {User} is defined as T C U , which satisfies Equation (6).
$$q = T_C^U\, p \tag{6}$$
where $p$ and $q$ are the coordinates of the same target described in {Cam} and {User}, respectively.
In order to separate the robotic model from the system, $T_C^U$ can be divided into three parts. In Equation (7), $T_B^U$ is the transformation matrix from the robot base frame {Base} to {User}, $T_G^B$ is the transformation matrix from the gripper frame {Gripper} to {Base}, and $T_C^G$ is the transformation matrix from {Cam} to {Gripper}.
$$T_C^U = T_B^U\, T_G^B\, T_C^G = T_1^U \left( \prod_{i=1}^{\text{Joints}} T_{i+1}^{\,i} \right) T_C^{\,\text{Joints}+1} \tag{7}$$
where Joints is the number of robotic joints.
To reduce degrees of freedom, $T_B^U$ and $T_C^G$ are modeled with Euler angles, and $T_G^B$ is constructed by the MDH model of the robot. The rotation and translation relationships are shown in Equations (8)–(10). In addition, the parameters of the end joint are deleted because the 3D laser scanner is rigidly connected to the robot. Similarly, two redundant parameters near the robotic base, $\theta_1$ and $d_1$, are deleted.
$$T_B^U = \mathrm{Trans}(x_0, y_0, z_0)\, \mathrm{Rot}_z(r_0)\, \mathrm{Rot}_y(w_0)\, \mathrm{Rot}_x(p_0) \tag{8}$$

$$T_C^G = \mathrm{Trans}(x_e, y_e, z_e)\, \mathrm{Rot}_z(r_e)\, \mathrm{Rot}_y(w_e)\, \mathrm{Rot}_x(p_e) \tag{9}$$

$$T_G^B = \mathrm{Trans}(a_1, 0, 0)\, \mathrm{Rot}_x(\alpha_1) \prod_{i=2}^{\text{Joints}-1} \mathrm{Rot}_z(\theta_i)\, \mathrm{Trans}(a_i, 0, d_i)\, \mathrm{Rot}_x(\alpha_i) \tag{10}$$
where $\theta, d, a, \alpha$ represent the joint angle, link offset, link distance, and twist angle, respectively, and $x, y, z, r, w, p$ denote three translations and three rotations in different directions. In summary, the established unified kinematic model includes 30 parameters ($\theta_1 = d_1 = 0$): $\eta_{Ki} = (x_0, y_0, z_0, r_0, w_0, p_0, \theta_i, d_i, a_i, \alpha_i, x_e, y_e, z_e, r_e, w_e, p_e)^T,\ i = 1, 2, \ldots, 5$.
Based on $\eta_{Ki}$, the kinematic coordinates of the robot’s position (defined as the origin of the frame {Cam}) can be obtained. The unified kinematic model can then be expressed as $q_{Ki} = q(\eta_{Ki})$ through the transformation $T_C^U$.
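As a summary of Equations (7)–(10), the following sketch assembles $T_C^U$ from the 30 unified parameters. It is a schematic re-implementation under our reading of the model; in particular, the bare $\mathrm{Rot}_z$ applied for the last joint (whose MDH parameters are absorbed into $T_C^G$) and the placeholder MDH values are assumptions for illustration.

```python
import numpy as np

def rot(axis, angle):
    """Homogeneous rotation about x, y, or z."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    i, j = {"x": (1, 2), "y": (2, 0), "z": (0, 1)}[axis]
    T[i, i] = T[j, j] = c
    T[i, j], T[j, i] = -s, s
    return T

def trans(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def unified_fk(base6, mdh, tool6, joint_angles):
    """Unified kinematic model of Eqs. (7)-(10).
    base6/tool6: (x, y, z, r, w, p) Euler parameters of T_B^U and T_C^G;
    mdh: rows (theta_offset, d, a, alpha) for joints 1..5, with
    theta_1 = d_1 = 0 already removed; joint_angles: six measured angles."""
    x, y, z, r, w, p = base6
    T = trans(x, y, z) @ rot("z", r) @ rot("y", w) @ rot("x", p)   # T_B^U
    # Joint 1 keeps only a_1 and alpha_1 (theta_1 = d_1 = 0)
    T = T @ rot("z", joint_angles[0]) @ trans(mdh[0][2], 0, 0) @ rot("x", mdh[0][3])
    for (th_off, d, a, alpha), q in zip(mdh[1:], joint_angles[1:]):
        T = T @ rot("z", q + th_off) @ trans(a, 0, d) @ rot("x", alpha)
    T = T @ rot("z", joint_angles[5])  # bare joint-6 rotation (assumption):
                                       # its MDH parameters live in T_C^G
    x, y, z, r, w, p = tool6
    return T @ trans(x, y, z) @ rot("z", r) @ rot("y", w) @ rot("x", p)  # T_C^G

# Example with hypothetical MDH values, all joint angles zero
mdh = [(0, 0, 0.05, np.pi / 2)] + [(0, 0.1, 0.2, 0.0)] * 4
T = unified_fk((0,) * 6, mdh, (0,) * 6, np.zeros(6))
print(T[:3, 3])  # position of the {Cam} origin in {User}
```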

3. Joint Compliance Model

The unified kinematic model merely describes the geometric error of the RVS, but is unable to cope with elastic deformation. In general, global compliance models for the various parts of the robot are established based on the elasticity theory used for nonrigid models. However, numerous redundant parameters would be introduced, causing under-fitting in calibration. According to [28], 90% of the elastic deformation in serial robotic arms concentrates at the joint connections. Therefore, a simplified form based on Hooke’s law of joint torsional elasticity is proposed in Equation (11).
$$\delta\boldsymbol{\theta} = \lambda_\theta\, \boldsymbol{\tau} \tag{11}$$
where $\lambda_\theta$ is the compliance matrix of joint torsion, $\boldsymbol{\theta}$ is the vector of concatenated joint angles $\theta$, and $\boldsymbol{\tau}$ is the vector of concatenated joint torques $\tau$.
Analyzing the impact of each component from the perspective of joint compliance is of great significance. In quasi-static conditions, joint deformation is primarily influenced by three types of forces: frictional forces, externally applied loads, and the self-weight of the robot links. Based on the principle of superposition in mechanics, the effects originating from the torque output by each of the robot’s drive motors can be decomposed into joint frictional torques caused by connectors $\tau_C$, end-effector load torques $\tau_F$, and link gravitational torques $\tau_{\text{Link}}$, as indicated in Equation (12).
$$\boldsymbol{\tau} = \sum_{i=1}^{\text{Joints}} \tau_C^{\,i} + \tau_F + \sum_{i=1}^{\text{Joints}} \tau_{\text{Link}}^{\,i} \tag{12}$$
By combining Equations (11) and (12), the joint compliance error can be derived as Equation (13).
$$\delta\boldsymbol{\theta} = \sum_{i=1}^{\text{Joints}} \delta\theta_C^{\,i} + \delta\theta_F + \sum_{i=1}^{\text{Joints}} \delta\theta_{\text{Link}}^{\,i} \tag{13}$$
The joint frictional forces, originating from the relative rotation at joint connections, are influenced by several factors including motor encoders, gear reducers, and couplings. These forces are complex and challenging to explain through linear models. Consequently, this paper introduces a nonlinear transmission model to address the angle errors caused by frictional forces. Moreover, joint angle errors resulting from end-effector load forces and the gravitational forces of the robot links can be predicted using the theory of robot statics. Based on these predictions, a linear compliance model is established.

3.1. Extended Nonlinear Transmission Model

An integrated joint module encompasses a motor, a magnetic encoder, and a harmonic reducer. The encoder, attached to the motor’s shaft end, gauges the motor angle, while the motor shaft drives the harmonic reducer’s wave generator, producing output at reduced rotational speed through the flexible wheel. This setup introduces an error chain from the encoder through the harmonic reducer (as shown in Figure 4), with both components affecting the joint module’s angular measurement precision. For magnetic encoders, manufacturing considerations include positioning inaccuracies in the magnetic plate’s pole distribution and in the relative positioning of the Hall sensor and the magnetic plate, which are critical for the encoder’s signal accuracy. Imperfections such as tilt and eccentricity introduce periodic, low-frequency errors, necessitating harmonic analysis for error correction.
In addition, the harmonic reducer operates on elastic deformation principles, with a setup comprising rigid and flexible gears and a wave generator [27]. The wave generator, by driving the flexible gear and inducing deformation waves, achieves dynamic transmission through periodic elastic deformation, engaging with the fixed rigid gear. Transmission errors, including radial and run-out errors from the wave generator, geometric and motion eccentricity from gear machining and installation, and backlash between gears, manifest as periodic errors with distinct low- and high-frequency components. These characteristics justify employing a second-order Fourier analysis for detailed error assessment and mitigation in the manufacturing process [29].
Since the angular error of a joint module is caused by the elastic deformation of its components and has continuous and integrable characteristics over a cycle, the angular error of the joint module can be represented as a signal with a $2\pi$ period, approximated by the Fourier series $\varphi(\theta)$. As shown in Equation (14), the angular error of the connector $\delta\theta_C$, caused by the periodic change in angle, is transmitted to the robot’s tool center point (TCP) through the robot’s motion Jacobian matrix $J(\theta)$, and can ultimately be approximated as the positioning error $\delta q_C$ induced by frictional forces.
$$\delta q_C = J(\theta)\, \delta\theta_C = J(\theta)\, \varphi(\theta) \tag{14}$$
In the equation, $\theta$ refers to the angle value measured by the robot’s magnetic encoder. Following the decomposition into a Fourier series, $\varphi(\hat{\boldsymbol{\theta}})$ conforms to Equation (15).
$$\varphi(\hat{\boldsymbol{\theta}}) = \begin{bmatrix} \varphi(\hat{\theta}_1) & \varphi(\hat{\theta}_2) & \cdots & \varphi(\hat{\theta}_{\text{Joints}}) \end{bmatrix}^T \tag{15}$$

where

$$\varphi(\hat{\theta}_j) = k_0^{\,j} + \sum_{i=1}^{N} k_{ai}^{\,j} \cos(i w \hat{\theta}_j) + \sum_{i=1}^{N} k_{bi}^{\,j} \sin(i w \hat{\theta}_j) \tag{16}$$
In the equation, $N$ represents the order of the Fourier series, $k$ is the amplitude of the $i$-th order component signal, and $j$ denotes the joint index. For simplicity, $w$ is set to 1.
To solve for $k$ in Equation (16), taking a six-degree-of-freedom industrial robot as an example, it is necessary to calculate the error $\varphi(\hat{\theta}_j)$ for each joint of the robot through inverse kinematics. Since inverse kinematics involves mapping from Cartesian space to joint space, to ensure the inverse kinematics solving process is a square $\mathbb{R}^3 \to \mathbb{R}^3$ mapping, this paper restricts three degrees of freedom in the joint space based on the Pieper Criterion (the axes of the last three joints of a six-degree-of-freedom robot always intersect at a single point, which has a minor impact on positional errors); that is, $\varphi(\hat{\theta}_4) = \varphi(\hat{\theta}_5) = \varphi(\hat{\theta}_6) = 0$ and $N = 2$.
After simplification, $\varphi(\theta)$ can be expressed as Equation (17), and the parameters of the extended NTM are $\eta_C = (k_0^{1:3}, k_{a1}^{1:3}, k_{b1}^{1:3}, k_{a2}^{1:3}, k_{b2}^{1:3})$.
$$\varphi(\hat{\theta}^{1:3}) = k_0^{1:3} + k_{a1}^{1:3} \cos\hat{\theta} + k_{b1}^{1:3} \sin\hat{\theta} + k_{a2}^{1:3} \cos 2\hat{\theta} + k_{b2}^{1:3} \sin 2\hat{\theta} \tag{17}$$
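Evaluating Equation (17) is a direct Fourier synthesis. The sketch below does this for joints 1–3; the per-joint grouping of the 15 coefficients and the random example values are illustrative choices, not the identified parameters.

```python
import numpy as np

def ntm_error(theta_123, eta_C):
    """Second-order Fourier model of Eq. (17) for joints 1-3.
    theta_123: measured encoder angles of J1-J3 (rad);
    eta_C: 15 parameters, here arranged (k0, ka1, kb1, ka2, kb2) per joint."""
    k = np.asarray(eta_C).reshape(3, 5)   # one row per joint
    th = np.asarray(theta_123)
    return (k[:, 0]
            + k[:, 1] * np.cos(th) + k[:, 2] * np.sin(th)
            + k[:, 3] * np.cos(2 * th) + k[:, 4] * np.sin(2 * th))

# Example: evaluate the joint-space error at one configuration
phi = ntm_error([0.4, -1.1, 0.7],
                np.random.default_rng(0).normal(0, 1e-4, 15))
print(phi)  # angular errors of J1-J3; J4-J6 are fixed to zero (Pieper)
```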

3.2. Compliance Error Caused by the External Load

After modeling the nonlinearities of the transmission system, this section focuses on modeling the linear part of joint deformation. Based on the theory of linear torsional springs, a fixed stiffness coefficient is assigned to each joint of the robot. To establish the mathematical relationship between the forces and deformations experienced by the robot’s end effector in various configurations of the robot and the external load, it is first necessary to understand the forces and deformations at each of the robot’s joints. Figure 5 shows the mechanical relationship.
In the robot’s base frame {Base}, assume that the wrench applied at the origin of the camera frame {Cam} is $[F_C^B\ \ M_C^B]^T$, and the wrench applied at the origin of the end actuated joint frame {Gripper} is $[F_G^B\ \ M_G^B]^T$. For the six-degree-of-freedom (6-DOF) serial industrial robot in this paper, we set $G = 6$. The relationship between them is given by the force-and-torque transformation $H_B^{C6}$, as shown in Equation (18).
$$\begin{bmatrix} F_6^B \\ M_6^B \end{bmatrix} = H_B^{C6} \begin{bmatrix} F_C^B \\ M_C^B \end{bmatrix} \tag{18}$$
The transformation of forces under frame conversion, $H_B^{C6}$, is given in Equation (19).
$$H_B^{C6} = \begin{bmatrix} E_3 & 0 \\ [t_{C6}]\, R_{C6} + [\Delta_{C6}] & E_3 \end{bmatrix} \tag{19}$$
where $t_{C6}$ and $R_{C6}$ are the translation and rotation components of $T_C^G$ in Equation (9); specifically, $t_{C6} = (x_e, y_e, z_e)^T$ and $R_{C6} = \mathrm{Rot}_z(r_e)\, \mathrm{Rot}_y(w_e)\, \mathrm{Rot}_x(p_e)$. $\Delta_{C6}$ is the error from the load center of gravity to the origin of {Cam}. We assume that $\Delta_{C6}$ is equal to zero, because it mainly affects the torque of the sixth joint with a relatively short lever arm, and has minimal impact on the other joints.
The notation $[t]$ in Equation (19) represents the antisymmetric matrix of the three-dimensional vector $t$; when $t = (t_x, t_y, t_z)^T$, $[t]$ can be expressed as Equation (20).
$$[t] = \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix} \tag{20}$$
For the high-precision positioning requirements of an RVS in noncontact scenarios, the gravity of the externally applied load acts vertically downward. The force acting on the center of gravity of the externally applied load can be described in the frame {Cam} as $[F_C^B\ \ M_C^B]^T = [0\ \ 0\ \ m_L g\ \ 0\ \ 0\ \ 0]^T$.
Based on the definition of the robot kinematic Jacobian matrix $J(\theta)$ and its relationship with the static-force Jacobian matrix $J_F(\theta)$, the error caused by the load $\delta q_F$ and the load torque $\tau_F$ can be derived, as shown in Equation (21).
$$\delta q_F = J(\theta)\, \delta\theta_F, \qquad \tau_F = J^T(\theta) \begin{bmatrix} F_6^B \\ M_6^B \end{bmatrix} = J_F(\theta) \begin{bmatrix} F_6^B \\ M_6^B \end{bmatrix} \tag{21}$$
Based on linear torsion spring theory, the compliance coefficient corresponding to each joint is denoted as $\lambda_\theta$. According to the linear torque–torsion relationship, the relationship between the torsional torque of each joint and its elastic deformation is obtained as Equation (22).
$$\delta\boldsymbol{\theta} = \lambda_\theta\, \boldsymbol{\tau} = \mathrm{diag}(\lambda_1, \ldots, \lambda_{\text{Joints}})\, \boldsymbol{\tau} \tag{22}$$
where $\delta\boldsymbol{\theta} = [\delta\theta_1\ \ \delta\theta_2\ \ \cdots\ \ \delta\theta_{\text{Joints}}]^T$ represents the elastic deformation of each joint under torsional torque, $\lambda_\theta$ denotes the compliance coefficients of the joints, and $\boldsymbol{\tau} = [\tau_1\ \ \tau_2\ \ \cdots\ \ \tau_{\text{Joints}}]^T$ is the torsional torque corresponding to each joint.
By combining Equations (19), (21), and (22), the pose error under the base frame {Base} of the 6-DOF industrial robot studied in this paper can be expressed as Equation (23).
$$\delta q_F = J(\theta)\, \lambda_\theta\, J^T(\theta)\, H_B^{C6} \begin{bmatrix} F_C^B \\ 0 \end{bmatrix} \tag{23}$$
For an RVS without external contact forces, the direction of the load aligns with the direction of gravity; that is, $F_C^B / \|F_C^B\| = [0\ \ 0\ \ 1]^T = e_3$. By merging the load weight $\|F_C^B\|$ into the compliance matrix $\lambda_\theta$ to form $\lambda'_\theta = \|F_C^B\|\, \lambda_\theta$, Equation (24) can be derived. The parameters of the external load compliance model are $\eta_F = (\lambda'_1, \lambda'_2, \lambda'_3, \lambda'_4, \lambda'_5, \lambda'_6)$.
$$\delta q_F = J(\theta)\, \|F_C^B\|\, \lambda_\theta\, J^T(\theta)\, H_B^{C6} \begin{bmatrix} F_C^B / \|F_C^B\| \\ 0 \end{bmatrix} = J(\theta)\, \lambda'_\theta\, J^T(\theta)\, H_B^{C6} \begin{bmatrix} e_3 \\ 0 \end{bmatrix} \tag{24}$$
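The load compliance error of Equation (24) can be composed from the pieces above: the skew matrix of Equation (20), the wrench transformation of Equation (19) with $\Delta_{C6} = 0$, and the Jacobian mappings of Equations (21) and (22). A minimal sketch, assuming a full 6 × 6 kinematic Jacobian is available:

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix [t] of Eq. (20)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def load_compliance_error(J, lam_scaled, t_C6, R_C6):
    """Pose error of Eq. (24): dq_F = J lam' J^T H_B^{C6} [e3; 0].
    J: 6x6 kinematic Jacobian at the current configuration;
    lam_scaled: six joint compliances already scaled by the load weight;
    t_C6, R_C6: translation/rotation of T_C^G (Delta_C6 assumed zero)."""
    H = np.block([[np.eye(3), np.zeros((3, 3))],
                  [skew(t_C6) @ R_C6, np.eye(3)]])        # Eq. (19)
    e3_wrench = np.array([0, 0, 1, 0, 0, 0], dtype=float)  # unit gravity load
    tau = J.T @ H @ e3_wrench                              # Eq. (21)
    return J @ (lam_scaled * tau)                          # Eqs. (22), (24)
```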

3.3. Compliance Error Caused by Weight of Robot Link

Furthermore, the effect of the self-weight of the links on joint deformation is crucial. Regardless of the robot’s position or posture, the joint torque generated by the self-weight of the links always exists, and the compliance error caused by this factor is always coupled with geometric parameter errors. The body structure of most industrial serial robots is similar to that of the FANUC LR Mate 200iD robot, allowing for a generalized analysis method. The centroid of each link forms a cantilever structure relative to the origin of the robot’s base frame {Base}, generating torque. For a particular joint, its compliance error accumulates unidirectionally; for example, joint 3 is affected by the self-weight of links 3, 4, 5, and 6. Torque that does not change with the robot’s posture can be represented by the kinematic model. Therefore, two scenarios can be disregarded: (1) the lever arm between the link and gravity is zero; (2) the rate of change in the lever arm between the link and gravity is zero.
As shown in Figure 5, the rotation center axis (z1) of link 1 is parallel to the gravity vector, so the torque effect of its self-weight on the joint can be ignored. The rotation center axes (z2, z3) of links 2 and 3 are approximately perpendicular to the gravity vector, and their self-weight effects cannot be ignored. When link 4 rotates around its rotation center axis (z4), the lever arm does not change, so its effect on joint 4 can be ignored. The frame origins of links 5 and 6 coincide (L5 = 0 mm), and link 6 is rigidly connected to the load, so the self-weight effects of links 5 and 6 are already included in Equation (22). Therefore, only the deformations produced by the self-weight of links 2, 3, and 4 on joints 2 and 3 need to be considered.
Define the distance from the centroid G2 of link 2 to its rotation center axis (z2) as A2, and the distance from the centroid to the rotation center axis (z3) of link 3 as A3. Consider links 3 and 4 as a whole, with their corresponding centroid as G3. The lever arms exerted by each link on the joint are shown in Table 1.
By summing the torque components experienced by each joint as listed in Table 1, Equation (25) can be derived in matrix form.
$$\tau_{\text{Link23}} = \begin{bmatrix} \sin\theta_2 & \sin(\theta_2 + \theta_3) & 0 \\ 0 & 0 & \sin(\theta_2 + \theta_3) \end{bmatrix} \begin{bmatrix} G_2 A_2 + G_3 L_2 \\ G_3 A_3 \\ G_3 A_3 \end{bmatrix} = N_{23}(\theta) \begin{bmatrix} s_1 \\ s_2 \\ s_2 \end{bmatrix} \tag{25}$$
By substituting Equation (21) into Equation (25), the end-effector position error caused by the self-weight of the links can be obtained as Equation (26).
$$\delta q_{\text{Link23}} = J_{23}(\theta) \begin{bmatrix} \lambda_2 & 0 \\ 0 & \lambda_3 \end{bmatrix} N_{23}(\theta) \begin{bmatrix} s_1 \\ s_2 \\ s_2 \end{bmatrix} = J_{23}(\theta)\, \lambda'_{23}\, N_{23}(\theta) \begin{bmatrix} s'_1 \\ s'_2 \\ s'_2 \end{bmatrix} \tag{26}$$
where $s'_1 = s_1 \|F_C^B\|^{-1}$, $s'_2 = s_2 \|F_C^B\|^{-1}$, and $\lambda'_{23} = \|F_C^B\|\, \mathrm{diag}(\lambda_2, \lambda_3)$. The parameters of the robot link self-weight compliance model are $\eta_{\text{Link}} = (s'_1, s'_2)$.
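A corresponding sketch of Equations (25) and (26) follows; the $\sin(\theta_2 + \theta_3)$ terms reflect our reconstruction of $N_{23}(\theta)$, and `J23` stands for the Jacobian columns of joints 2 and 3.

```python
import numpy as np

def link_weight_error(J23, lam23_scaled, theta2, theta3, s1, s2):
    """Eqs. (25)-(26): position error from the self-weight of links 2-4,
    acting on joints 2 and 3.
    J23: Jacobian columns for J2/J3 (e.g. 3x2 for position);
    lam23_scaled: compliances of J2/J3 scaled by the load weight;
    s1, s2: identified lumped gravity parameters."""
    N23 = np.array([[np.sin(theta2), np.sin(theta2 + theta3), 0.0],
                    [0.0, 0.0, np.sin(theta2 + theta3)]])
    tau_link = N23 @ np.array([s1, s2, s2])     # joint torques, Eq. (25)
    return J23 @ (lam23_scaled * tau_link)      # position error, Eq. (26)
```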
In summary, combining Equations (7), (17), (24), and (26), the kinematic and joint compliance model can be expressed as Equation (27) in differential form.
$$\delta q = \delta q_{Ki} + \delta q_C + \delta q_F + \delta q_{\text{Link23}} \tag{27}$$

4. Parameter Identification

The models proposed in Section 2 and Section 3 establish the mathematical relationship between the parameters to be identified and the RVS across different poses. However, it is difficult to identify all parameters simultaneously, especially since the extended NTM in Section 3.1 is highly nonlinear. Therefore, parameter identification is performed in multiple stages, as shown in Figure 6.
The process begins with a measurement dataset to calibrate the unified kinematic model as the foundation for identifying the parameters of the joint compliance model. This step is crucial as it establishes the kinematic parameters that are the bedrock for subsequent procedures. Next, the process splits into two branches. In the first branch, the inverse kinematics are computed, which leads to the determination of the residual error of joint angles. This error is then addressed through Fourier fitting, which helps to fine-tune the extended nonlinear transmission model (NTM). In parallel, the second branch is focused on calibrating the whole kinematic and joint compliance model, which integrates the kinematics of the robot with the mechanical effects of transmission, load, and the robot’s own links. Finally, an accurate model is obtained, which represents a synergy of all the calibrated parameters and models.
The parameter Jacobian matrix $J(\eta)$ is necessary for identification, based on the differential kinematics model. Each measurement point contributes three scalar equations, which can be represented as Equation (28).
$$\begin{bmatrix} \Delta q_1 \\ \Delta q_2 \\ \vdots \\ \Delta q_n \end{bmatrix} = \begin{bmatrix} \dfrac{\partial q_1}{\partial \eta_1} & \dfrac{\partial q_1}{\partial \eta_2} & \cdots & \dfrac{\partial q_1}{\partial \eta_m} \\ \dfrac{\partial q_2}{\partial \eta_1} & \dfrac{\partial q_2}{\partial \eta_2} & \cdots & \dfrac{\partial q_2}{\partial \eta_m} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial q_n}{\partial \eta_1} & \dfrac{\partial q_n}{\partial \eta_2} & \cdots & \dfrac{\partial q_n}{\partial \eta_m} \end{bmatrix} \begin{bmatrix} \Delta\eta_1 \\ \Delta\eta_2 \\ \vdots \\ \Delta\eta_m \end{bmatrix} = J(\eta)\, \Delta\eta \tag{28}$$
where $\Delta q$ is the position error that can be physically measured in the frame {User}, which can be substituted by the measurement error $\Delta p$ in the frame {Cam}, and $\Delta\eta$ denotes the errors associated with the model parameters.
The identification of $\eta$ is a nonlinear estimation problem, which can be solved by the Levenberg–Marquardt (L-M) algorithm. The iterative gradient descent procedure is summarized as follows:
(1)
Calculate the parameter Jacobian matrix $J(\eta)$.
(2)
Calculate the parameter update vector $\Delta\eta_k$:

$$\Delta\eta_k = \left( J^T(\eta_k)\, J(\eta_k) \right)^{-1} J^T(\eta_k)\, \Delta q$$
(3)
Update: $\eta_{k+1} = \eta_k + \zeta\, \Delta\eta_k$, $k = k + 1$
where ζ is the descent rate of each iteration and equals 0.005.
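The iteration can be sketched as follows. Note that the update in step (2) is the Gauss–Newton normal-equation step applied with the fixed rate $\zeta$; a full Levenberg–Marquardt variant would additionally add a damping term $\mu I$ to $J^T J$, which is omitted here for brevity.

```python
import numpy as np

def identify(eta0, residual_fn, jacobian_fn, zeta=0.005, iters=500):
    """Sketch of the identification loop in steps (1)-(3), assuming
    residual_fn returns the stacked position errors Delta_q and
    jacobian_fn returns the parameter Jacobian J(eta) of Eq. (28)."""
    eta = np.asarray(eta0, dtype=float)
    for _ in range(iters):
        dq = residual_fn(eta)             # measured minus modeled positions
        J = jacobian_fn(eta)              # parameter Jacobian, Eq. (28)
        d_eta = np.linalg.solve(J.T @ J, J.T @ dq)  # normal-equation step
        eta = eta + zeta * d_eta          # damped update, zeta = 0.005
        if np.linalg.norm(zeta * d_eta) < 1e-10:    # convergence check
            break
    return eta
```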
According to the steps in Figure 6, combined with Equation (28) and the L-M algorithm, the parameter identification is performed in three stages:
Stage 1: Calibrate the unified kinematic model and identify the 30 unified kinematic parameters $\eta_{Ki}$. After calibration, the residual errors of each measurement point are recorded as $(\Delta r_x, \Delta r_y, \Delta r_z)$.
Stage 2: Calibrate the extended NTM. First, the residual errors of the unified kinematic model are projected into the joint space, based on the numerical inverse kinematics in Equation (29). Then, the parameters of the extended NTM $\eta_C$ are identified by discrete Fourier transform.
$$\begin{bmatrix} \Delta r_x \\ \Delta r_y \\ \Delta r_z \end{bmatrix} = \begin{bmatrix} \dfrac{\partial q_x}{\partial \theta_1} & \dfrac{\partial q_x}{\partial \theta_2} & \dfrac{\partial q_x}{\partial \theta_3} \\ \dfrac{\partial q_y}{\partial \theta_1} & \dfrac{\partial q_y}{\partial \theta_2} & \dfrac{\partial q_y}{\partial \theta_3} \\ \dfrac{\partial q_z}{\partial \theta_1} & \dfrac{\partial q_z}{\partial \theta_2} & \dfrac{\partial q_z}{\partial \theta_3} \end{bmatrix} \begin{bmatrix} \Delta\theta_1 \\ \Delta\theta_2 \\ \Delta\theta_3 \end{bmatrix} = J(\theta)\, \Delta\theta \tag{29}$$
where $\Delta\theta_1, \Delta\theta_2, \Delta\theta_3$ are the angular residual errors of the point in joint space (J1–J3), and $J(\theta)$ is the numerical Jacobian matrix of the joint angles.
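A sketch of this Stage 2 projection and fitting is given below. For clarity, the Fourier coefficients are obtained here by linear least squares on the $N = 2$ basis, a common alternative to the discrete Fourier transform used in the paper; both recover the coefficients of Equation (16).

```python
import numpy as np

def residual_to_joint_space(J3, dr):
    """Eq. (29): map the Cartesian residual dr = (dr_x, dr_y, dr_z) of one
    measurement point into angular residuals of J1-J3, using the 3x3
    Jacobian block of the first three joints."""
    return np.linalg.solve(J3, dr)   # (delta_theta_1, _2, _3)

def fit_fourier(theta_j, dtheta_j):
    """Least-squares fit of the N = 2 basis to one joint's angular
    residuals, giving (k0, ka1, kb1, ka2, kb2) of Eq. (16)."""
    B = np.column_stack([np.ones_like(theta_j),
                         np.cos(theta_j), np.sin(theta_j),
                         np.cos(2 * theta_j), np.sin(2 * theta_j)])
    k, *_ = np.linalg.lstsq(B, dtheta_j, rcond=None)
    return k
```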
Stage 3: All 53 parameters of the kinematic and joint compliance model $\eta = (\eta_{Ki}, \eta_C, \eta_F, \eta_{\text{Link}})^T$ are identified to obtain the optimal position accuracy.

5. Calibration Experiment of Kinematic and Joint Compliance Model

To assess the improvement in system position accuracy provided by our proposed method, we first calculated the geometric parameters of the 3D rotary laser sensor based on Zhang’s calibration method, and completed the local accuracy measurement verification of the four ceramic sphere calibrators. Then, we set up two experiments to validate the position accuracy of our model, as shown in Figure 7.
The first experiment was conducted with a calibrator composed of four ceramic balls, in order to assess the overall performance of the RVS. These four balls were measured by a Coordinate Measuring Machine (Hexagon Leitz PMM-Xi) with a precision of 0.5 μm, as shown in Table 2. The robot, carrying the rotary laser sensor, scanned the point cloud of the standard ceramic balls from different poses and calculated the centers of the spheres. A set of data points under the camera frame {Cam} was thus produced, corresponding to the four data points under the ceramic ball frame {Ceramic}.
The second experiment was conducted with a laser tracker to eliminate the influence of the 3D rotary laser sensor. We mounted the reflector of the laser tracker on the side panel of the sensor. Then, the robot was controlled to move to 7000 array positions. The array points under the laser tracker’s frame {LT} were obtained, which correspond to the origin under the reflector’s frame {Reflector}.

5.1. Calibration of 3D Rotary Laser Sensor

In our previous work [30], the principle and feasibility of geometric feature measurement using the 3D rotary laser sensor were already validated and will not be elaborated upon here. Therefore, this section will focus on verifying the accuracy of the sensor in measuring ceramic spheres.
In the experimental setup, the 3D rotary laser sensor employed an industrial camera with model number MV-CA013-20GM, offering a resolution of 1280 × 1024 pixels. The camera was capable of capturing images at a frame rate of 90 Hz, with a 16 mm focal length providing a horizontal field of view (HFOV) of 21.7° and a vertical field of view (VFOV) of 17.5°. The sensor’s laser component had an output power of 50 mW at a wavelength of 405 nm. A rotating mirror, integral to the sensor, rotated through an angle of 20°, enabling the sensor to create 120 distinct light planes for data acquisition. The sensor’s optimal working distance was set between 115 and 135 mm, with a baseline distance of 69 mm. The system offered high resolutions, with a horizontal resolution of 0.5 mm and an even finer vertical resolution of 0.045 mm, ensuring detailed and precise data collection for the study.
As shown in Figure 8, a backlight chessboard was used to calibrate the 3D rotary sensor based on Zhang’s algorithm. The intrinsic parameters of the camera utilized in the 3D rotary laser sensor were carefully calibrated to ensure the accuracy of the experimental data. The focal lengths along the x and y axes were determined to be $f_x = 3423.018$ and $f_y = 3420.949$, respectively. The principal point of the camera, where the optical axis intersects the sensor, was located at $(u_0, v_0) = (605.260, 580.506)$. The radial distortion coefficients $(k_1, k_2, k_3)$, critical for correcting lens distortions, were found to be −0.163, 0.951, and −0.874. Additionally, the tangential distortion coefficients $(p_1, p_2)$ were measured as 0.000107 and −0.00119, which are imperative for correcting decentering distortion in the camera lens system. According to the calibration result of the 3D rotary sensor, the re-projection error of the camera is 0.06 pixels, and the average fitting error of the 120 light planes is 0.004 mm.
Then, the robot moved the 3D rotary sensor to measure 200 positions around the upper part of the four ceramic balls from different directions, following the layout method in [12]. All positions were simulated in WeldPro and distributed as evenly as possible, as shown in Figure 9. Based on 372 rounds of cyclic measurements of the surface, the positions of the ball centers were optimized using the Spherical Regression Algorithm. The resulting radius error distribution is summarized in Figure 10: the RMS (Root Mean Square) of the shape accuracy is 0.0114 mm, and the standard deviation reaches 0.0049 mm.
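The Spherical Regression Algorithm is not detailed here; one standard linear least-squares formulation fits the sphere center $c$ and radius $r$ by rewriting $\|p - c\|^2 = r^2$ as a linear system in $c$ and $r^2 - \|c\|^2$. A sketch with synthetic data follows; the sphere center, radius, and noise level are hypothetical.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: solve |p|^2 = 2 c.p + (r^2 - |c|^2)
    for the center c and radius r."""
    P = np.asarray(points, dtype=float)
    A = np.column_stack([2 * P, np.ones(len(P))])
    b = (P ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    r = np.sqrt(x[3] + center @ center)
    return center, r

# Example: noisy points on the upper part of a sphere (values hypothetical)
rng = np.random.default_rng(1)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([10.0, -5.0, 120.0]) + 12.7 * dirs + rng.normal(0, 0.005, (500, 3))
c, r = fit_sphere(pts)
print(c, r)  # recovered center and radius
```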

5.2. Accuracy Verification Based on 3D Rotary Laser Sensor

From the first experiment shown in Figure 7a, we validated the position accuracy of the RVS by measuring the four ceramic spheres with the 3D rotary laser sensor in Figure 8. According to the 200 measured points of ceramic spheres from different directions, 100 random positions were used for system calibration, and the remaining 100 for evaluation.
After the parameter identification process in Section 4, the kinematic and joint compliant model calibration results of RVS were calculated, including two frame transformations listed in Table 3, and the MDH parameters and joint compliance parameters in Table 4.
The results depicted in Figure 11, alongside the data presented in Table 5, provide a quantitative and visual assessment of our calibration method’s performance. Figure 11a shows a scatter plot of position errors before and after calibration at different points, with a noticeable reduction in error after calibration. Figure 11b illustrates a histogram of the frequency of position errors, comparing our method with three others: MDH [31], NTM [27], and RFCM [17]. Our method demonstrates fewer occurrences of higher errors, with the majority of errors concentrated toward the lower end of the scale. Figure 11c is a 3D bar chart, offering a visual comparison of error frequencies for the various calibration methods in different directions. Our method consistently shows lower frequency counts for larger errors, further emphasizing its precision. Figure 11d is a violin plot providing a visual summary of the error distribution for each method. It reveals that our method has a tighter distribution of errors, suggesting a higher level of precision and reliability: the majority of data points cluster near the lower end of the error scale, and the spread is narrower, indicating fewer outliers and less variation in measurement error.
Table 5 shows that our method has achieved a reduction in the Root Mean Square (RMS) error, outperforming other methods. Specifically, our method exhibits a 25.7% RMS improvement over the MDH method and surpasses the NTM and RFCM methods by 18.2% and 17.1%, respectively. When measuring with the 3D rotary laser sensor, the reductions in standard deviation and the average and maximum errors corroborate the superior calibration performance of our method, suggesting that it is not only more accurate on average but also more consistent and reliable across a range of measurements.

5.3. Accuracy Verification Based on a Laser Tracker

To eliminate the influence of measurement errors from the 3D rotary laser sensor, we used a laser tracker as the accuracy verification device, for which we designed the measurement scheme shown in Figure 7b. We affixed the ball seat of the laser tracker onto the 3D rotary laser sensor to measure positions but not attitude data. Subsequently, the joint angles from the FANUC robot register and the positions of the reflector were collected.
The automated measurement process is illustrated in Figure 12. We utilized an Industrial PC (IPC) to control and collect path data from the FANUC robot and the laser tracker (API T3). Communication with the robot’s string registers via the IPC allows for the control of the robot to execute pre-defined simulation paths and read the six angles from the robot joint encoders. When the robot moves to position Pt., it sends a signal to the IPC. A 6.0 s data-reading pause is reserved to ensure the robot stabilizes. The laser tracker is set to Stable Point Mode, automatically locating the reflector mounted on the 3D rotary laser sensor when the robot stops steadily. The advantage of this scheme is that it allows for direct evaluation of the kinematic and joint compliance model without introducing measurement errors from the 3D rotary laser sensor. Meanwhile, due to the single-point tracking lacking the reflector’s attitude, it is necessary to modify the kinematic model: the r e , w e , p e of T C G in Equation (9) is set to 0.
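The measurement loop itself can be summarized in code form. The `robot` and `tracker` objects below are hypothetical wrappers standing in for the FANUC string-register protocol and the API T3 interface; they are not vendor APIs, and the sketch only mirrors the sequence of Figure 12.

```python
import time

def run_measurement(robot, tracker, path_points, settle_s=6.0):
    """Abstract sketch of the automated loop in Figure 12, assuming
    hypothetical `robot` and `tracker` wrapper objects."""
    samples = []
    for pt in path_points:
        robot.move_to(pt)                    # execute pre-planned pose
        robot.wait_for_reach_signal()        # robot notifies the IPC
        time.sleep(settle_s)                 # 6.0 s stabilization pause
        joints = robot.read_joint_angles()   # six encoder angles
        position = tracker.measure_stable_point()  # Stable Point Mode
        samples.append((joints, position))
    return samples
```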
Then, the laser tracker is used to measure 7000 points in a spatial array as reference data, as shown in Figure 13. Following the layout design in references [32,33], these 7000 points are arranged in seven robot orientations, with each pose set up in a 10³ configuration. The final distribution of the measurement points across the six joint spaces of the robot is depicted in Figure 14, with the least active joint reaching an approximate range of 50°.
As a result, 6998 effective position points were measured, of which 100 random points were selected for calibrating our model. Following the calibration procedure described in Section 4, T B U and T C G in the unified kinematic model are identified in Table 6. Meanwhile, the identification results of the MDH and joint compliant parameters are presented in Table 7.
Comparing the results of the two experiments in Table 4 and Table 7, a significant difference can be observed in the 15 parameters $k^{1:3}$ related to the extended NTM, while the differences in $\lambda_\theta$, $s_1$, and $s_2$ are smaller. We speculate that the smaller data volume in the second experiment leads to larger errors in $k^{1:3}$ from the Fourier analysis, with part of the transmission model error being transferred to $\lambda_\theta$, $s_1$, and $s_2$.
The accuracy of the kinematic and joint compliance modeling method is evaluated based on the 6998 points. The experimental results are depicted in Figure 15, demonstrating effectiveness in enhancing position accuracy. In Figure 15a, the scatter plot contrasts position errors at various points before and after calibration. It is clear that the position error decreases significantly, demonstrating the effectiveness of the calibration process. Figure 15b is a histogram that compares the frequency of position errors between different calibration methods. The method labeled as our method appears to achieve a higher concentration of lower magnitude errors, indicating superior performance in reducing the position error when compared to MDH, NTM, and RFCM. Figure 15c presents a 3D histogram, further emphasizing the distribution of position errors in different directions, and our method exhibits a distribution skewed towards lower errors, reinforcing its efficacy. Lastly, Figure 15d showcases a violin plot for the error distribution of each method, giving insight into the density distribution of the errors. The narrow and peaked distribution of our method suggests a tighter clustering of data points around a lower median error, while other methods display broader distributions, indicative of a wider range of error magnitudes. Overall, the data across these visuals collectively suggest that our model consistently outperforms the alternative methods in minimizing position errors.
According to Table 8, our proposed method achieved a reduction in RMS of 19.5% compared to the MDH method, and reductions of 13.8% and 9.1% compared to the NTM and RFCM, respectively. Furthermore, it exhibits optimal performance in terms of the distribution of errors and extreme values.

6. Discussion and Conclusions

In this paper, it is evident that the kinematic and joint compliance modeling method effectively enhances the position accuracy of robotic vision systems. Through an innovative integration of a 3D rotary laser sensor, unified kinematic modeling, joint compliance modeling, and a three-stage parameter identification process, this research offers a comprehensive solution to the challenges of system position accuracy. The two validation experiments—employing both a laser tracker and a 3D vision measurement system to assess the accuracy of the proposed models—robustly demonstrate the method’s superiority over existing models. Compared to the Modified Denavit–Hartenberg (MDH), nonlinear transmission model (NTM), and Rigid–Flexible Coupling Model (RFCM) methods, our approach proves its efficacy in enhancing the position accuracy of robotic vision systems. The following discussion reviews the key aspects of the proposed model, evaluates its complexity relative to previous methods, and emphasizes its potential benefits for industrial manufacturing cycles that require the highest precision.
The integration of kinematic and joint compliance factors increases model complexity. This complexity is a direct consequence of our comprehensive approach to modeling, which accounts for factors often overlooked in simpler models, such as the elasticity of robotic joints and the impact of external loads and links. While the proposed model demands greater computational resources and a more involved calibration process, it delivers substantial improvements in position accuracy, by at least 9.1% according to the experimental results. These improvements are critical in applications where precision is paramount, outweighing the drawbacks associated with increased model complexity.
The practical implications of our model are particularly significant in high-precision industrial manufacturing cycles. The advanced accuracy offered by our modeling approach can lead to remarkable enhancements in product quality and a notable reduction in waste and rework. In industries such as industrial manufacturing, automotive assembly, and aerospace engineering, where the cost of inaccuracies can be exceptionally high, the potential savings and efficiency gains are substantial. Furthermore, the adaptability of our model to various robotic systems and its scalability across different manufacturing tasks underscore its versatility and broad applicability.
Future research will focus on streamlining the model’s complexity and enhancing its usability, striving to simplify the calibration process and reduce computational requirements without compromising the accuracy benefits. Furthermore, exploring the model’s application across a broader range of industrial scenarios will be crucial in fully realizing its potential to revolutionize precision in robotic automation.

Author Contributions

Conceptualization, J.X. and G.J.; methodology, F.Y.; validation, F.Y., J.X. and G.J.; formal analysis, F.Y. and Y.W.; investigation, F.Y.; data curation, G.J.; writing—original draft preparation, F.Y.; writing—review and editing, G.J., Y.W., X.C., and J.X.; visualization, F.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (52175478, 52205533).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

The authors declare that they consent to participate in this paper.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to sincerely thank all other members of the research team for their contributions to this research. All individuals included in this section have consented to the acknowledgement.

Conflicts of Interest

Author Guangpeng Jia was employed by the company China National Heavy Duty Truck Group Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Nomenclature

Joints   Number of robot joints
$q$   Position of the end-effector
$\theta$   Joint angle
$d$   Link offset
$a$   Link distance
$\alpha$   Twist angle
$\delta_i$   Error of parameter $i$
$\tau$   Joint torque
$A_i$   Distance from the gravity center of link $i$ to rotation axis $i$
$L_i$   Distance from rotation axis $i+1$ to rotation axis $i$
$\boldsymbol{\theta}$   Joint angles $\theta$ concatenated over the Joints joints
$\boldsymbol{\tau}$   Joint torques $\tau$ concatenated over the Joints joints
$\lambda$   Joints × Joints compliance matrix
$H_a^{bc}$   $6 \times 6$ force-and-torque transformation from frame $b$ to frame $c$, expressed in frame $a$
$\mathrm{Rot}_i$   $SE(3)$ homogeneous rotation transformation around axis $i$
$\mathrm{Trans}$   $SE(3)$ homogeneous translation transformation
$T_a^b$   $SE(3)$ transformation of frame $a$ expressed in frame $b$
$J$   Jacobian matrix
$\eta$   Parameter vector
$\varphi$   Fourier series
$\{\cdot\}$   Reference frame
$[\cdot]$   Antisymmetric matrix of a vector
$\|\cdot\|$   Magnitude of a vector

References

  1. Sergiyenko, O.; Alaniz-Plata, R.; Flores-Fuentes, W.; Rodríguez-Quiñonez, J.C.; Miranda-Vega, J.E.; Sepulveda-Valdez, C.; Núñez-López, J.A.; Kolendovska, M.; Kartashov, V.; Tyrsa, V. Multi-view 3D data fusion and patching to reduce Shannon entropy in Robotic Vision. Opt. Lasers Eng. 2024, 177, 108132. [Google Scholar] [CrossRef]
  2. Ivanov, M.; Sergyienko, O.; Tyrsa, V.; Lindner, L.; Flores-Fuentes, W.; Rodríguez-Quiñonez, J.C.; Hernandez, W.; Mercorelli, P. Influence of data clouds fusion from 3D real-time vision system on robotic group dead reckoning in unknown terrain. IEEE/CAA J. Autom. Sin. 2020, 7, 368–385. [Google Scholar] [CrossRef]
  3. Švaco, M.; Šekoranja, B.; Šuligoj, F.; Jerbić, B. Calibration of an industrial robot using a stereo vision system. Procedia Eng. 2014, 69, 459–463. [Google Scholar] [CrossRef]
  4. Johnston, G.L.H.; Orekhov, A.L.; Simaan, N. Kinematic Modeling and Compliance Modulation of Redundant Manipulators Under Bracing Constraints. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4709–4716. [Google Scholar]
  5. Zhenhua, W.; Hui, X.; Guodong, C.; Rongchuan, S.; Sun, L. A distance error based industrial robot kinematic calibration method. Ind. Robot. Int. J. 2014, 41, 439–446. [Google Scholar] [CrossRef]
  6. An, Y.; Wang, X.; Zhu, X.; Jiang, S.; Ma, X.; Cui, J.; Qu, Z. Application of combinatorial optimization algorithm in industrial robot hand eye calibration. Measurement 2022, 202, 111815. [Google Scholar] [CrossRef]
  7. Jiang, J.; Luo, X.; Luo, Q.; Qiao, L.; Li, M. An overview of hand-eye calibration. Int. J. Adv. Manuf. Technol. 2022, 119, 77–97. [Google Scholar] [CrossRef]
  8. Hayat, A.A.; Chittawadigi, R.G.; Udai, A.D.; Saha, S.K. Identification of Denavit-Hartenberg parameters of an industrial robot. In Proceedings of the AIR’13: Proceedings of Conference on Advances in Robotics, Ropar, India, 5–8 July 2013; pp. 1–6. [Google Scholar]
  9. Lee, J.W.; Park, G.T.; Shin, J.S.; Woo, J.W. Industrial robot calibration method using Denavit–Hartenberg parameters. In Proceedings of the 2017 17th International Conference on Control, Automation and Systems (ICCAS), IEEE, Jeju, Republic of Korea, 18–21 October 2017; pp. 1834–1837. [Google Scholar]
  10. Hayati, S.; Mirmirani, M. Improving the absolute positioning accuracy of robot manipulators. J. Robot. Syst. 1985, 2, 397–413. [Google Scholar] [CrossRef]
  11. Du, G.; Liang, Y.; Li, C.; Liu, P.X.; Li, D. Online robot kinematic calibration using hybrid filter with multiple sensors. IEEE Trans. Instrum. Meas. 2020, 69, 7092–7107. [Google Scholar] [CrossRef]
  12. Joubair, A.; Bonev, I.A. Kinematic calibration of a six-axis serial robot using distance and sphere constraints. Int. J. Adv. Manuf. Technol. 2015, 77, 515–523. [Google Scholar] [CrossRef]
  13. Koide, K.; Menegatti, E. General hand-eye calibration based on reprojection error minimization. IEEE Robot. Autom. Lett. 2019, 4, 1021–1028. [Google Scholar] [CrossRef]
  14. Hua, J.; Zeng, L. Hand-eye calibration algorithm based on an optimized neural network. Actuators 2021, 10, 85. [Google Scholar] [CrossRef]
  15. Ibaraki, S.; Theissen, N.A.; Archenti, A.; Alam, M.; Alex, N.; Theissen, E. Evaluation of kinematic and compliance calibration of serial articulated industrial manipulators. Int. J. Autom. Technol. 2021, 15, 567–580. [Google Scholar] [CrossRef]
  16. Cho, Y.; Do, H.M.; Cheong, J. Screw based kinematic calibration method for robot manipulators with joint compliance using circular point analysis. Robot. Comput.-Integr. Manuf. 2019, 60, 63–76. [Google Scholar] [CrossRef]
  17. Deng, K.; Gao, D.; Ma, S.; Zhao, C.; Lu, Y. Elasto-geometrical error and gravity model calibration of an industrial robot using the same optimized configuration set. Robot. Comput.-Integr. Manuf. 2023, 83, 102558. [Google Scholar] [CrossRef]
  18. Lim, H.K.; Kim, D.H.; Kim, S.R.; Kang, H.J. A practical approach to enhance positioning accuracy for industrial robots. In Proceedings of the 2009 ICCAS-SICE, Fukuoka, Japan, 18–21 August 2009; pp. 2268–2273. [Google Scholar]
  19. Abele, E.; Rothenbücher, S.; Weigold, M. Cartesian compliance model for industrial robots using virtual joints. Prod. Eng. 2008, 2, 339–343. [Google Scholar] [CrossRef]
  20. Dumas, C.; Caro, S.; Garnier, S.; Furet, B. Joint stiffness identification of six-revolute industrial serial robots. Robot. Comput.-Integr. Manuf. 2011, 27, 881–888. [Google Scholar] [CrossRef]
  21. Kozlov, P.; Klimchik, A. Simulation Study on Robot Calibration Approaches. In Proceedings of the 19th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2022), Lisbon, Portugal, 14–16 July 2022; pp. 516–523. [Google Scholar]
  22. Klimchik, A.; Pashkevich, A. Serial vs. quasi-serial manipulators: Comparison analysis of elasto-static behaviors. Mech. Mach. Theory 2017, 107, 46–70. [Google Scholar] [CrossRef]
  23. Du, L.; Zhang, T.; Dai, X. Compliance error calibration for robot based on statistical properties of single joint. J. Mech. Sci. Technol. 2019, 33, 1861–1868. [Google Scholar] [CrossRef]
  24. Tepper, C.; Matei, A.; Zarges, J.; Ulbrich, S.; Weigold, M. Optimal design for compliance modeling of industrial robots with bayesian inference of stiffnesses. Prod. Eng. 2023, 17, 643–651. [Google Scholar] [CrossRef]
  25. Koehler, M.; Okamura, A.M.; Duriez, C. Stiffness control of deformable robots using finite element modeling. IEEE Robot. Autom. Lett. 2019, 4, 469–476. [Google Scholar] [CrossRef]
  26. Wang, X.; Sun, S.; Zhang, P.; Wu, M.; Zhao, C.; Zhang, D.; Meng, X. Model-based kinematic and non-kinematic calibration of a 7R 6-DOF robot with non-spherical wrist. Mech. Mach. Theory 2022, 178, 105086. [Google Scholar] [CrossRef]
  27. Xie, C.; Xu, H. Fault Diagnosis of Industrial Robots Based on Phase Difference Correction Method. J. Circuits Syst. Comput. 2023, 32, 2350013. [Google Scholar] [CrossRef]
  28. Judd, R.; Knasinski, A. A technique to calibrate industrial robots with experimental verification. In Proceedings of the 1987 IEEE International Conference on Robotics and Automation, Raleigh, NC, USA, 31 March–3 April 1987; Volume 4. [Google Scholar]
  29. Nubiola, A.; Bonev, I.A. Absolute calibration of an ABB IRB 1600 robot using a laser tracker. Robot. Comput.-Integr. Manuf. 2013, 29, 236–245. [Google Scholar] [CrossRef]
  30. Yu, C.; Xi, J. Simultaneous and on-line calibration of a robot-based inspecting system. Robot. Comput.-Integr. Manuf. 2018, 49, 349–360. [Google Scholar] [CrossRef]
  31. Li, X.; Li, W.; Yin, X.; Ma, X.; Zhao, J. Camera-mirror binocular vision-based method for evaluating the performance of industrial robots. IEEE Trans. Instrum. Meas. 2021, 70, 1–14. [Google Scholar] [CrossRef]
  32. Toquica, J.S.; Motta, J.M.S.T. A methodology for industrial robot calibration based on measurement sub-regions. Int. J. Adv. Manuf. Technol. 2022, 119, 1199–1216. [Google Scholar] [CrossRef]
  33. Zhao, G.; Zhang, P.; Ma, G.; Xiao, W. System identification of the nonlinear residual errors of an industrial robot using massive measurements. Robot. Comput.-Integr. Manuf. 2019, 59, 104–114. [Google Scholar] [CrossRef]
Figure 1. Overview of robotic vision system: (a) Design of the 3D rotary laser sensor. (b) Unified kinematic model.
Figure 2. Modeling process of robotic vision system.
Figure 3. Schematic diagram of the 3D rotary laser sensor.
Figure 4. Basic composition of integrated joint modules.
Figure 5. Mechanical effects of external load and robot links.
Figure 6. Process of parameter identification.
Figure 7. Experiment setups: (a) calibration platform based on the 3D rotary laser sensor; (b) calibration platform based on a laser tracker.
Figure 8. Geometric parameter calibration of the 3D rotary laser sensor.
Figure 9. Measurement poses around ceramic balls.
Figure 10. The radius error of the measured ceramic balls (4 balls × 50 poses × 372 rounds).
Figure 11. Results of first experiment: (a) position errors before and after calibration; (b) count of position errors; (c) count of errors in different directions; (d) error distribution.
Figure 12. Process of automated measurement.
Figure 13. Measurement points: (a) path planning for robots; (b) results in the laser tracker frame (the colors correspond to the orientation of the robot).
Figure 14. Distribution of the measurement points in the joint space: (a) Joint 1–Joint 2–Joint 3; (b) Joint 4–Joint 5–Joint 6.
Figure 15. Results of second experiment: (a) position errors before and after calibration; (b) count of position errors; (c) count of errors in different directions; (d) error distribution.
Table 1. Torques applied by the links.

Torque | Link 1 | Link 2       | Link 3~Link 4                    | Link 5~Link 6
τ1     | 0      | 0            | 0                                | 0
τ2     | 0      | G2A2 sin(θ2) | G3L2 sin(θ2) + G3A3 sin(θ2 − θ3) |
τ3     | 0      | 0            | G3A3 sin(θ2 − θ3)                |
τ4     | 0      | 0            | 0                                |
τ5     |        |              |                                  |
τ6     |        |              |                                  |
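For illustration, the gravity-torque expressions recoverable from Table 1 can be evaluated directly. The following is a minimal Python sketch, assuming G2 and G3 denote the weights of link 2 and the link 3–4 group, A2 and A3 their centroid offsets, and L2 the link-2 length; all numeric defaults are placeholders, not values from this work.

```python
import numpy as np

# Gravity torques on the joints due to the link weights, following Table 1.
# Placeholder values: G2, G3 are link weights [N]; A2, A3 centroid offsets [m];
# L2 is the link-2 length [m]. None of these numbers come from the paper.
def gravity_torques(theta2, theta3, G2=30.0, G3=20.0, A2=0.15, A3=0.10, L2=0.33):
    tau = np.zeros(6)  # tau[0] is joint 1; Table 1 lists it as zero
    # Joint 2: contribution of link 2 plus that of the link 3-4 group.
    tau[1] = (G2 * A2 * np.sin(theta2)
              + G3 * (L2 * np.sin(theta2) + A3 * np.sin(theta2 - theta3)))
    # Joint 3: contribution of the link 3-4 group only.
    tau[2] = G3 * A3 * np.sin(theta2 - theta3)
    return tau

print(gravity_torques(np.deg2rad(45.0), np.deg2rad(10.0)))
```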
Table 2. Ceramic balls measured by CMM (mm).

Ceramic Ball | x        | y       | z        | r
1            | 185.328  | 408.400 | 2303.870 | 14.999
2            | 240.396  | 377.943 | 2640.910 | 15.002
3            | −196.154 | 289.749 | 2727.350 | 15.001
4            | −246.181 | 344.340 | 2379.780 | 14.998
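A radius error such as that shown in Figure 10 can be obtained by fitting a sphere to each scanned point cloud and comparing the fitted radius against the CMM values in Table 2. Below is a minimal least-squares sphere-fit sketch in Python; the synthetic point cloud around ball 1 is purely for demonstration and does not reproduce the paper's data.

```python
import numpy as np

# Algebraic least-squares sphere fit: ||p - c||^2 = r^2 expands to a linear
# system in (2c, r^2 - ||c||^2), solved with one lstsq call.
def fit_sphere(points):
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic noisy points on ball 1 of Table 2 (r = 14.999 mm).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = (np.array([185.328, 408.400, 2303.870])
       + 14.999 * dirs + rng.normal(0.0, 0.01, (500, 3)))
c, r = fit_sphere(pts)
print("radius error [mm]:", r - 14.999)
```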
Table 3. Calibration results of T_B^U and T_C^G based on the 3D rotary laser sensor.

Transformation                  | Trans (x, y, z)/mm         | Rot (r, w, p)/deg
T_B^U: {Ceramic}→{Base} (User1) | 500.580, 298.611, 2638.660 | −1.815, −170.153, 98.596
T_C^G: {Gripper}→{Cam}          | 7.096, 0.355, −348.800     | 0.825, −24.750, 179.425
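The Trans/Rot rows of Tables 3 and 6 define 4 × 4 homogeneous transforms. As an illustration of how such entries are used, here is a minimal Python sketch; it assumes the (r, w, p) angles can be treated as fixed-axis x–y–z rotations in degrees, which may differ from the paper's exact Euler convention.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Build a homogeneous transform from a Trans/Rot row of Table 3.
def make_T(trans, rot_deg):
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", rot_deg, degrees=True).as_matrix()
    T[:3, 3] = trans
    return T

T_U_B = make_T([500.580, 298.611, 2638.660], [-1.815, -170.153, 98.596])  # {Ceramic}->{Base}
T_G_C = make_T([7.096, 0.355, -348.800], [0.825, -24.750, 179.425])       # {Gripper}->{Cam}

# A point measured in the camera frame maps into the gripper frame through
# the inverse of the {Gripper}->{Cam} transform, then through the robot's
# forward kinematics into {Base}.
p_cam = np.array([0.0, 0.0, 300.0, 1.0])  # homogeneous point, mm
p_gripper = np.linalg.inv(T_G_C) @ p_cam
print(p_gripper[:3])
```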
Table 4. Calibration results based on the 3D rotary laser sensor.

MDH parameters:

Joint | θ/deg   | d/mm     | a/mm    | α/deg
1     | –       | –        | 49.780  | −89.973
2     | −90.066 | −0.0206  | 330.341 | 179.937
3     | 0.0372  | 0.0106   | 34.969  | −90.001
4     | 0.167   | −335.305 | −0.0286 | 90.037
5     | 179.720 | −0.601   | −0.0766 | 90.103
6     | –       | –        | –       | –

Joint compliance parameters (×10−3):

k_0^{1:3}    | 0.0690, 0.250, −0.164
k_{a1}^{1:3} | −0.245, −0.261, −0.695
k_{a2}^{1:3} | −0.177, −0.272, −0.0487
k_{b1}^{1:3} | −0.333, −0.196, −0.610
k_{b2}^{1:3} | −0.0437, −0.199, −0.00706
λ_θ [mm]     | diag(−1.159, 0.0640, 0.0582, 0.183, 0.852, −0.158)
s_1, s_2 [mm] | 319.836, 1833.495
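The MDH parameters of Tables 4 and 7 enter the kinematic model through the standard modified-DH link transform. The following is a minimal forward-kinematics sketch in Python under that convention (Craig form); blank table cells are treated as zero here, and the joint compliance corrections are omitted, so this is not the paper's full calibrated model.

```python
import numpy as np

# Modified-DH link transform (Craig convention):
# T = Rot_x(alpha) * Trans_x(a) * Rot_z(theta) * Trans_z(d)
def mdh_transform(alpha, a, theta, d):
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0, 1.0],
    ])

# Rows: (theta offset/deg, d/mm, a/mm, alpha/deg) from Table 4; blanks -> 0.
MDH = [
    (0.0,      0.0,      49.780,  -89.973),
    (-90.066, -0.0206,   330.341, 179.937),
    (0.0372,   0.0106,   34.969,  -90.001),
    (0.167,   -335.305,  -0.0286,  90.037),
    (179.720, -0.601,    -0.0766,  90.103),
    (0.0,      0.0,       0.0,     0.0),
]

def forward_kinematics(q_deg):
    T = np.eye(4)
    for (off, d, a, alpha), q in zip(MDH, q_deg):
        T = T @ mdh_transform(np.deg2rad(alpha), a, np.deg2rad(q + off), d)
    return T

print(forward_kinematics([0, 0, 0, 0, 0, 0])[:3, 3])  # flange position [mm]
```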
Table 5. Statistical comparison based on the first experiment.

Method | Model Size | RMS (mm) | Std. (mm) | Avg (mm) | Max (mm)
MDH    | 30         | 0.0773   | 0.0485    | 0.0602   | 0.1947
NTM    | 45         | 0.0702   | 0.0428    | 0.0557   | 0.1752
RFCM   | 39         | 0.0692   | 0.0438    | 0.0537   | 0.1755
Ours   | 53         | 0.0574   | 0.0350    | 0.0455   | 0.1699
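The statistics in Tables 5 and 8 summarize per-pose position errors. For reference, here is a minimal sketch of the conventional definitions, assuming the error at each pose is the Euclidean distance between measured and model-predicted positions and that Std. is taken about the mean error:

```python
import numpy as np

def error_stats(p_meas, p_pred):
    """RMS/Std/Avg/Max of per-pose Euclidean position errors (same units as input)."""
    e = np.linalg.norm(np.asarray(p_meas) - np.asarray(p_pred), axis=1)
    return {"RMS": np.sqrt(np.mean(e**2)), "Std": np.std(e),
            "Avg": np.mean(e), "Max": np.max(e)}
```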
Table 6. Calibration results of T_B^U and T_C^G based on the laser tracker.

Transformation               | Trans (x, y, z)/mm             | Rot (r, w, p)/deg
T_B^U: {LT}→{Base} (User2)   | −1105.029, −955.794, −116.344  | 21.846, −0.521, 0.166
T_C^G: {Gripper}→{Reflector} | 44.200, −60.458, 104.009       | –
Table 7. Calibration results based on the laser tracker.

MDH parameters:

Joint | θ/deg    | d/mm     | a/mm    | α/deg
1     | –        | –        | 50.202  | −89.979
2     | −89.876  | −0.0121  | 330.049 | 179.950
3     | −0.0293  | 0.2361   | 35.343  | −89.973
4     | 0.720    | −335.152 | −0.232  | 90.016
5     | −179.415 | −0.0786  | 0.125   | 90.012
6     | –        | –        | –       | –

Joint compliance parameters (×10−3):

k_0^{1:3}    | 0.158, 0.403, −0.0281
k_{a1}^{1:3} | −0.120, −0.0430, −0.503
k_{a2}^{1:3} | −0.0512, −0.0553, 0.144
k_{b1}^{1:3} | −0.206, 0.0217, −0.418
k_{b2}^{1:3} | 0.0819, 0.0188, 0.183
λ_θ [mm]     | diag(−1.289, 0.0668, 0.0522, 0.138, 0.732, −0.117)
s_1, s_2 [mm] | 307.383, 1841.103
Table 8. Statistical comparison based on the second experiment.

Method | RMS (mm) | Std. (mm) | Avg (mm) | Max (mm)
MDH    | 0.0473   | 0.0282    | 0.0380   | 0.1319
NTM    | 0.0446   | 0.0266    | 0.0358   | 0.1132
RFCM   | 0.0424   | 0.0255    | 0.0339   | 0.1099
Ours   | 0.0381   | 0.0255    | 0.0308   | 0.0955