Article

Reprojection Error Analysis and Algorithm Optimization of Hand–Eye Calibration for Manipulator System

Gang Peng, Zhenyu Ren, Qiang Gao and Zhun Fan

1 School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
2 Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, Wuhan 430074, China
3 College of Engineering, Shantou University, Shantou 515063, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(1), 113; https://doi.org/10.3390/s24010113
Submission received: 27 October 2023 / Revised: 19 December 2023 / Accepted: 22 December 2023 / Published: 25 December 2023

Abstract

The Euclidean distance error of calibration results cannot be calculated during the hand–eye calibration process of a manipulator because the true values of the hand–eye conversion matrix cannot be obtained. In this study, a new method for error analysis and algorithm optimization is presented. An error analysis is carried out using the prior knowledge that the location of the augmented reality (AR) markers is fixed during the calibration process. The coordinates of the AR marker center point are reprojected onto the pixel coordinate system and then compared with the true pixel coordinates of the AR marker center point obtained by corner detection or manual labeling; the Euclidean distance between the two coordinates serves as the basis for the error analysis. We then fine-tune the results of the hand–eye calibration algorithm to obtain the smallest reprojection error, thereby obtaining higher-precision calibration results. The experimental results show that, compared with the Tsai–Lenz algorithm, the optimized algorithm in this study reduces the average reprojection error by 44.43% and the average visual positioning error by 50.63%. Therefore, the proposed optimization method can significantly improve the accuracy of hand–eye calibration results.

1. Introduction

With the development of robotics and sensor technology [1,2,3,4], robotic grasping has gradually become a significant function for robots. It involves identifying and locating a target object through a visual sensor. Accordingly, many sensor calibration methods [5,6] have emerged, including hand–eye calibration algorithms [7]. To ensure the accuracy of visual information and to achieve coordinated hand–eye motion, it is essential to analyze the manipulator's hand–eye calibration problem and to improve its accuracy. Hand–eye calibration establishes a transformation matrix between the pixel coordinate system of the camera and the spatial coordinate system of the manipulator. By transforming pixel coordinates into the coordinate system of the manipulator, the robot can calculate the motor movements necessary to reach the target position and control the manipulator. Hand–eye calibration can be divided into eye-to-hand and eye-in-hand calibration, depending on the camera installation position. This study focuses on experiments using an eye-in-hand installation.
Traditional calibration methods build calibration models based on preset imaging scenes and select optimal algorithms, including reference-based, active vision, and self-calibration methods, to calculate camera parameters from scene geometric constraints. In reference-object-based camera calibration, the corner points of the target image are extracted as control points, a system of equations relating pixel and spatial coordinates is established, and an optimization algorithm is used to calculate the parameters, with the shape, size, and other information of the reference object known. Reference objects are divided into one-dimensional straight lines [8], two-dimensional plane calibration plates, and three-dimensional solid blocks, according to the spatial dimension. Because of their simple production and controllable accuracy, flat calibration plates are often used as targets in industrial applications instead of calibration blocks. Commonly used calibration-plate patterns include checkerboards, solid circles [9], and concentric rings. Recently, various templates have been proposed for this purpose [10]. The Zhang (Zhengyou) calibration method [11], based on a chessboard calibration board, is a classic representative of this type of method, offering strong imaging constraints, a simple calibration process, and high algorithm robustness; however, the production and maintenance costs of high-precision reference objects are high, and the method cannot be applied in situations where a reference object cannot be carried. Camera calibration methods based on active vision obtain multiple images by accurately controlling special movements, such as pure rotation and translation of the camera or target, and use controllable quantitative motion constraints to determine the internal and external parameters of the camera; they form an important branch of self-calibration methods. Typical methods include calibration based on pure rotational motion [12], three orthogonal translational motions, planar orthogonal motion, the infinite-plane homography matrix, hand–eye calibration [13], and self-calibration based on projective reconstruction. Active visual calibration can linearly solve the internal camera parameters with strong algorithm robustness; however, strict requirements on the control equipment limit its use and promotion. The camera self-calibration method requires neither a reference object nor precisely controlled movement. It uses only the geometric consistency constraints of corresponding points across multiple image frames [14,15] to solve the basic camera matrix and does not rely on scene structure and motion information; approaches include directly solving the Kruppa equation, methods based on the absolute conic and absolute quadric [16,17], Pollefeys's modulus-constrained calibration [18,19], and hierarchical stepwise calibration under variable internal parameters [20].
The focus of this study is the hand–eye calibration algorithm. Shiu [21] and Tsai [7] formulated the hand–eye calibration problem in 1989, and scholars at home and abroad have since conducted considerable research. For example, Chen et al. proposed a noise-tolerant algorithm for robot sensor calibration using a planar disk in any three-dimensional orientation [22]. Li et al. proposed a hand–eye calibration method for line laser sensors based on three-dimensional reconstruction [23]. Zhang et al. proposed a calibration method for a hand–eye system with rotation and translation coupling [24]. Solutions to the hand–eye calibration problem can be divided into two categories according to the order in which the calibration matrix is solved. The first solves the rotation and translation parts of the matrix simultaneously. A typical algorithm is Andreff's closed-loop solution of the hand–eye calibration equation based on the matrix direct (Kronecker) product, developed for the calibration of small-scale moving measurement scenarios [25]. Tabb proposed a hand–eye calibration algorithm based on iterative optimization and solved it using a nonlinear optimizer [26]. Jiang et al. proposed a method to calibrate the hand–eye of an EOD robot by solving the AXB = YCZD problem [27]. The second category solves the rotation matrix of the calibration matrix first and then the translation vector. The most common method is that of Tsai and Lenz [7], which introduces the axis-angle representation to describe the rotational motion. Liu et al. proposed a hand–eye calibration method for a robot vision measurement system [28]. Zou et al. performed hand–eye calibration of arc welding robots and laser vision sensors through semi-definite programming [29]. Deng et al. proposed a hand–eye calibration method based on a monocular robot [30]. However, the above algorithms still leave room for optimization.
In the hand–eye calibration process of a real manipulator, the actual error of the calibration result cannot be calculated because the true value of the hand–eye transformation matrix cannot be obtained. Therefore, a new error analysis method is required, and this study proposes a reprojection error analysis method. The contribution of this study is to identify a source of error shared by the existing calibration algorithms: they ignore the calculation errors in the transformation matrix $T_m^c$ of the augmented reality (AR) marker coordinate system relative to the camera coordinate system, making it difficult to obtain high-precision calibration results. The method proposed in this study innovatively utilizes the prior knowledge that the position of the AR marker is fixed during the calibration process to conduct error analysis and algorithm optimization. First, the coordinates of the center point of the AR marker are reprojected onto the pixel coordinate system and compared with the real pixel coordinates of the center point obtained by corner detection or manual labeling; the Euclidean distance between the two coordinates serves as the basis for the error analysis. Finally, the results of the hand–eye calibration algorithm are fine-tuned to minimize the reprojection error, thereby obtaining high-precision calibration results.

2. Coordinate System Definitions and Hand–Eye Calibration Equation

2.1. Coordinate System Definitions

In the robot arm hand–eye vision grasping system, the visual information obtained through the camera is described in the camera coordinate system, whereas the reference coordinate system for robot arm motion planning is the robot arm base coordinate system. Therefore, visual grasping with a robot arm usually requires converting the visual information into the base coordinate system for description. Let $T_e^b$ be the transformation matrix of the robot arm end coordinate system relative to the base coordinate system and $T_c^e$ be the transformation matrix of the camera coordinate system relative to the robot arm end coordinate system. Then, the transformation matrix of the camera coordinate system relative to the base coordinate system of the robotic arm is $T_c^b = T_e^b \cdot T_c^e$. Here, $T_e^b$ can be calculated from the joint angles of the robotic arm using the forward kinematics equation, whereas $T_c^e$ is solved through the hand–eye calibration algorithm.
To solve the above hand–eye calibration problem, it is necessary to use a calibration object, whose pose relative to the camera coordinate system must be calculated in real time. AR markers were used in this study. The camera of the robotic arm visual grasping system was installed in an eye-in-hand manner, and the origins of the manipulator base, manipulator end, camera, and AR marker coordinate systems are defined as $O_b$, $O_e$, $O_c$, and $O_m$, respectively. The coordinate systems and their transformation relationships during the hand–eye calibration process are shown in Figure 1.
Let $T_m^c$ be the transformation matrix of the AR marker coordinate system relative to the camera coordinate system. According to Figure 1, the transformation relationship between the AR marker coordinate system and the base coordinate system of the manipulator, $T_m^b$, is

$$T_m^b = T_e^b \cdot T_c^e \cdot T_m^c \tag{1}$$
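As a minimal numerical sketch of Equation (1), the transformation chain can be composed as a product of 4 × 4 homogeneous matrices. The pose values below are hypothetical placeholders, not data from this paper:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses (identity rotations for brevity):
T_e_b = make_T(np.eye(3), [0.45, 0.00, 0.30])    # end frame in base frame, from forward kinematics
T_c_e = make_T(np.eye(3), [-0.07, 0.00, 0.035])  # camera frame in end frame: the hand-eye matrix
T_m_c = make_T(np.eye(3), [0.00, 0.00, 0.50])    # marker frame in camera frame, from AR detection

# Equation (1): pose of the AR marker in the manipulator base frame.
T_m_b = T_e_b @ T_c_e @ T_m_c
print(T_m_b[:3, 3])  # translation of the marker center in the base frame
```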

2.2. Hand–Eye Calibration Equation

In Equation (1), $T_c^e$ is fixed; it is the hand–eye transformation matrix to be solved. If the position of the AR marker relative to the base coordinate system of the manipulator is constant, $T_m^b$ is also fixed. For a given state $S_i\,(i \in \mathbb{N})$ of the manipulator, $(T_m^c)_i$ can be calculated from the size of the AR marker, its corner coordinates, and the internal parameters of the camera. Therefore, for a given state $S_i$, the above equation can be expressed as

$$T_m^b = (T_e^b)_i \cdot T_c^e \cdot (T_m^c)_i, \quad i \in \mathbb{N} \tag{2}$$
There are two fixed unknown matrices, $T_m^b$ and $T_c^e$, in the above equation. To solve for them, the manipulator must be moved to two different states while the position of the AR marker remains unchanged. From these two states, the following equations are obtained:

$$T_m^b = (T_e^b)_1 \cdot T_c^e \cdot (T_m^c)_1, \qquad T_m^b = (T_e^b)_2 \cdot T_c^e \cdot (T_m^c)_2 \tag{3}$$
According to the above equations, the following can be obtained:

$$(T_e^b)_1 \cdot T_c^e \cdot (T_m^c)_1 = T_m^b = (T_e^b)_2 \cdot T_c^e \cdot (T_m^c)_2 \tag{4}$$

The above equation can be converted to

$$\underbrace{(T_e^b)_2^{-1} \cdot (T_e^b)_1}_{A} \cdot \underbrace{T_c^e}_{X} = \underbrace{T_c^e}_{X} \cdot \underbrace{(T_m^c)_2 \cdot (T_m^c)_1^{-1}}_{B} \tag{5}$$
Furthermore, the problem of solving the hand–eye transformation matrix $T_c^e$ is transformed into the problem of solving the homogeneous equation $AX = XB$, where $A$, $X$, and $B$ are 4 × 4 homogeneous transformation matrices.

To further solve the homogeneous equation $AX = XB$, the homogeneous transformation matrices in Equation (5) are expressed in terms of a rotation matrix and a translation vector as follows:
$$\underbrace{\begin{bmatrix} R_A & t_A \\ 0 & 1 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}}_{X} = \underbrace{\begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}}_{X} \underbrace{\begin{bmatrix} R_B & t_B \\ 0 & 1 \end{bmatrix}}_{B} \tag{6}$$
By expanding the above formula, the equations to be solved can be obtained as

$$\begin{cases} R_A \cdot R = R \cdot R_B \\ (R_A - I)\,t = R \cdot t_B - t_A \end{cases} \tag{7}$$
In the above equations, $R_A$, $R_B$, $t_A$, and $t_B$ can be measured, and $I$ is the identity matrix. There are many methods for obtaining the rotation matrix $R$ and translation vector $t$ from this equation set, such as the Tsai–Lenz algorithm [7] and the Horaud [31], Andreff [25], and Daniilidis [32] algorithms. Subsequently, we conducted simulation experiments and developed an error analysis method to analyze and optimize these four hand–eye calibration algorithms.
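All four solvers are available through OpenCV's calibrateHandEye function. The sketch below feeds them synthetic, noise-free data generated from a hypothetical ground-truth hand–eye matrix $X$ and a fixed marker pose, so the Tsai–Lenz solution should recover $X$ almost exactly; the pose values and the number of states (22, matching the simulation experiment below) are assumptions for illustration:

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def make_T(R, t):
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t
    return T

rng = np.random.default_rng(0)
# Hypothetical ground truth: hand-eye matrix X = T_c^e and fixed marker pose T_m^b.
X = make_T(Rotation.from_euler("xyz", [0.0, 0.0, np.pi / 2]).as_matrix(), [-0.07, 0.0, 0.035])
T_m_b = make_T(np.eye(3), [0.45, 0.0, 0.0])

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(22):  # one entry per robot state
    T_e_b = make_T(Rotation.from_euler("xyz", rng.uniform(-1.0, 1.0, 3)).as_matrix(),
                   rng.uniform(-0.3, 0.3, 3))
    # Marker observation consistent with Equation (2): T_m^c = X^-1 (T_e^b)^-1 T_m^b.
    T_m_c = np.linalg.inv(X) @ np.linalg.inv(T_e_b) @ T_m_b
    R_g2b.append(T_e_b[:3, :3]); t_g2b.append(T_e_b[:3, 3])
    R_t2c.append(T_m_c[:3, :3]); t_t2c.append(T_m_c[:3, 3])

R_c2g, t_c2g = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(t_c2g.ravel())  # close to X's translation (-0.07, 0, 0.035)
```

Replacing the method argument with cv2.CALIB_HAND_EYE_HORAUD, cv2.CALIB_HAND_EYE_ANDREFF, or cv2.CALIB_HAND_EYE_DANIILIDIS selects the other three solvers compared in this paper.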

3. Reprojection Error Analysis Method of Calibration Algorithms

3.1. Hand–Eye Calibration Algorithm Simulation Experiments

In this study, the ROS and Gazebo simulation platforms were used to build the simulation environment shown in Figure 2 to test the performance of the four hand–eye calibration algorithms above [7,25,31,32]. With reference to the manipulator employed in the practical experiment, the simulation used the standard DH parameter method to conduct the kinematic modeling of the 7-degree-of-freedom manipulator. The link coordinate system of the manipulator was established as illustrated in Figure 3, and the respective DH parameters are listed in Table 1. The resolution of the RGB camera in the test environment was 640 × 480 pixels, and Gaussian noise with a mean of 0 and a variance of 0.07 was added to the collected images. The internal camera parameters are those given in Equations (19) and (20), i.e., the parameters used in the real experiment. In the simulation environment, the size of the AR marker was 10 cm × 10 cm. The true values of $T_c^e$ and $T_m^b$ for the simulation experiments are listed in Table 2.
In the simulation experiment, 22 different states of the robotic arm were manually set to ensure that the center point information of the AR mark could be detected in each state and that the joint angles of the robotic arm changed significantly between each state. Some of the collected images and AR marker center point detection results are shown in Figure 4. The calculation results for each calibration algorithm are listed in Table 3.
To quantitatively evaluate the performance of each calibration algorithm, the translation error $err_t$ of the hand–eye transformation matrix is defined as the two-norm of the difference between the calculated value $t_c$ of the translation vector and the true value $t_r$, i.e., the Euclidean distance

$$err_t = \left\| t_c - t_r \right\|_2 \tag{8}$$
Similarly, the rotation matrix $R$ is first converted to Euler angles, expressed in vector form as $E = (roll, pitch, yaw)^T$; the rotation error $err_R$ of the hand–eye conversion matrix is then defined as

$$err_R = \left\| E_c - E_r \right\|_2 \tag{9}$$
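A compact sketch of both metrics, assuming 4 × 4 numpy homogeneous matrices and an xyz Euler convention (the paper does not state which convention it uses):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def calibration_errors(T_est, T_true):
    """Translation error (Eq. 8) and rotation error (Eq. 9) between an estimated
    and a true hand-eye transformation matrix, both 4x4 homogeneous matrices."""
    err_t = np.linalg.norm(T_est[:3, 3] - T_true[:3, 3])         # ||t_c - t_r||_2
    E_est = Rotation.from_matrix(T_est[:3, :3]).as_euler("xyz")  # (roll, pitch, yaw)
    E_true = Rotation.from_matrix(T_true[:3, :3]).as_euler("xyz")
    err_R = np.linalg.norm(E_est - E_true)                       # ||E_c - E_r||_2
    return err_t, err_R
```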
According to the real values of the parameters in the simulation environment, the statistical results of the translation and rotation errors of the hand–eye transformation matrix calculated by each calibration algorithm are shown in Figure 5. It can be observed from the statistical figure that the translation error of the hand–eye conversion matrix calculated by the Tsai–Lenz and Andreff algorithms is significantly lower than that of the other two algorithms; however, the rotation error of the hand–eye conversion matrix calculated by the Tsai–Lenz algorithm is slightly higher than that of the other algorithms. Overall, the calibration accuracies of the Tsai–Lenz and Andreff algorithms are relatively high in the simulation environment.

3.2. Heuristic Error Analysis

As shown in Figure 5, each algorithm leaves a certain amount of room for optimization. The following presents a heuristic error analysis of the simulation results. During the eye-in-hand calibration process, the position of the calibration object is fixed relative to the base coordinate system of the manipulator; therefore, in theory, $T_m^b$ should be a fixed value. According to the coordinate transformation relationship shown in Figure 1, after the hand–eye transformation matrix $T_c^e$ has been calculated, $T_m^b$ can be obtained from

$$T_m^b = T_e^b \cdot T_c^e \cdot T_m^c \tag{10}$$
During hand–eye calibration, multiple datasets must be collected; hence, the set $C = \{(T_m^b)_i\},\ i \in \mathbb{N}$ can be calculated. Because the $T_m^b$ calculated by the above formula is affected by the parameter errors in $T_c^e$, a simple conjecture can be made: the smaller the fluctuation of the data in set $C$, the smaller the errors in $T_c^e$.
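A minimal sketch of this conjecture check, computing the set C from the collected states and summarizing the spread of its translation components (the statistics chosen here mirror the box plots below but are otherwise an assumption):

```python
import numpy as np

def marker_pose_spread(T_e_b_list, T_c_e, T_m_c_list):
    """Compute (T_m^b)_i for every collected state via Equation (10) and report
    the spread of the translation components; per the conjecture, a small spread
    would suggest small errors in T_c^e."""
    ts = np.array([(T_e_b @ T_c_e @ T_m_c)[:3, 3]
                   for T_e_b, T_m_c in zip(T_e_b_list, T_m_c_list)])
    return {"std": ts.std(axis=0), "median": np.median(ts, axis=0), "mean": ts.mean(axis=0)}
```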
To facilitate the analysis of the degree of data fluctuation, box plots were used to visualize the fluctuation range of each parameter of the translation vector $t_i$ in $(T_m^b)_i$; the results are shown in Figure 6. The red dotted line in the figure represents the real value, whereas the orange solid and green dotted lines represent the median and average of the data calculated by each algorithm, respectively. The left and right sides of each rectangle are the lower and upper quartiles, the extent of the I-shaped area reflects the fluctuation range of the data, and the open circles represent outliers.
The data fluctuation range corresponding to the Tsai–Lenz algorithm is small, which is in line with expectations. However, the Andreff algorithm shows a large fluctuation range, which is inconsistent with expectations. Therefore, it is unreasonable to judge the error of the hand–eye conversion matrix $T_c^e$ by the fluctuation degree of the data in set $C$, because the above conjecture ignores the influence of the error in $T_m^c$ on the calculation of $T_m^b$.

3.3. Reprojection Error Analysis

The heuristic error analysis shows that both $T_c^e$ and $T_m^c$ may contain errors; therefore, it is unreasonable to calculate $T_m^b$ using Equation (10). Because the position of the AR marker is fixed during the calibration process, the following error analysis assumes that $T_m^b$ is known and fixed.
According to the coordinate transformation relationship shown in Figure 1, the pose of the AR marker in the camera coordinate system can be expressed as

$$T_m^c = T_e^c \cdot T_b^e \cdot T_m^b \tag{11}$$
If the AR marker coordinate system $O_m$ is defined as the world coordinate system, then $T_w^c = T_m^c$. According to the pinhole camera imaging model, the relationship among the coordinates $(X_w, Y_w, Z_w)$ in the AR marker (world) coordinate system, the pixel coordinates $(u, v)$, and the z-axis coordinate $Z_c$ in the camera coordinate system is

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M\,T_w^c \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M\,T_m^c \begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix} = M\,T_e^c\,T_b^e\,T_m^b \begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix} \tag{12}$$
Based on the above definition, the coordinates of the origin $O_m$ of the AR marker coordinate system in the world coordinate system are $(X_w, Y_w, Z_w) = (X_m, Y_m, Z_m) = (0, 0, 0)$, and the pixel coordinates $(u, v)$ of the AR marker center point can be calculated using the following formula:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M\,T_e^c\,T_b^e\,T_m^b \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \tag{13}$$
In the above formula, $M$ comprises the intrinsic parameters of the camera, $T_e^c$ is the hand–eye conversion matrix to be calibrated, $T_b^e$ can be calculated from the forward kinematics equation of the manipulator, and $T_m^b$ is known and fixed.
Because the translation part of the homogeneous transformation matrix $T_m^b$ gives the coordinates of the AR marker center point in the base coordinate system of the manipulator, Equation (13) in effect remaps the coordinates of the AR marker center point into the pixel coordinate system. For a given position $P_i\,(i \in \mathbb{N})$ reached by the manipulator during the calibration process, the AR marker image captured by the camera is denoted $img_i$, and the reprojected pixel coordinates of the AR marker center point in $img_i$ are denoted $q_i = (u_i, v_i)^T$; then,

$$q_i = (u_i, v_i)^T = proj\!\left(T_e^c, (T_b^e)_i, T_m^b\right), \quad i \in \mathbb{N} \tag{14}$$
Because the real pixel coordinates $Q_i = (U_i, V_i)^T$ of the AR marker center point in $img_i$ can be obtained by corner detection or manual labeling, the reprojection error $err\_proj_i$ corresponding to $img_i$ can be defined as the Euclidean distance between the real pixel coordinates of the AR marker center point and the reprojected coordinates:

$$err\_proj_i = \left\| Q_i - q_i \right\|_2 = \left\| (U_i, V_i)^T - proj\!\left(T_e^c, (T_b^e)_i, T_m^b\right) \right\|_2, \quad i \in \mathbb{N} \tag{15}$$
If the manipulator moves through $N$ positions during the hand–eye calibration process, the average reprojection error can be defined as

$$err\_proj_{avg} = \frac{1}{N} \sum_{i=1}^{N} err\_proj_i \tag{16}$$
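The reprojection machinery of Equations (13)–(16) reduces to a few lines; the sketch below assumes a 3 × 3 intrinsic matrix and ignores lens distortion for brevity:

```python
import numpy as np

def proj(K, T_e_c, T_b_e, T_m_b):
    """Reproject the AR marker center (origin of the marker frame) into the
    pixel plane, following Equation (13). K is the 3x3 camera intrinsic matrix."""
    p_cam = (T_e_c @ T_b_e @ T_m_b) @ np.array([0.0, 0.0, 0.0, 1.0])  # center in camera frame
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]  # (u, v) after dividing out Z_c

def avg_reproj_error(K, T_e_c, T_b_e_list, T_m_b, Q_list):
    """Average reprojection error over N states, Equations (15) and (16);
    Q_list holds the detected pixel coordinates of the marker center."""
    errs = [np.linalg.norm(Q - proj(K, T_e_c, T_b_e, T_m_b))
            for Q, T_b_e in zip(Q_list, T_b_e_list)]
    return float(np.mean(errs))
```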
According to Equations (15) and (16), the reprojection error of each group of simulation experimental data was calculated; the results are shown in Figure 7, where the horizontal line represents the average reprojection error of each calibration algorithm. The average reprojection error corresponding to the Tsai–Lenz and Andreff algorithms was small, and the fluctuation in the reprojection error across the data groups was relatively small. Moreover, from the previous analysis, the Euclidean distance errors of these two algorithms were also relatively small, indicating that the reprojection error reflects the accuracy of the calibration results to a certain extent: generally, the smaller the reprojection error, the higher the accuracy of the calibration results. Because the true value of the hand–eye transformation matrix cannot be obtained during the hand–eye calibration of a real manipulator, the Euclidean distance error of the calibration results cannot be calculated, and the reprojection error can instead be used as the evaluation standard for calibration accuracy.

3.4. Optimizing Calibration Algorithm by Minimizing Reprojection Error Analysis

From the statistical results of the Euclidean distance error of each algorithm in Figure 5, even when the hand–eye calibration is carried out in a simulation environment, the translation errors of the hand–eye conversion matrix calculated by the different calibration algorithms differ considerably, and all exceed 2 mm, which indicates that each calibration algorithm still has considerable room for optimization.
In the hand–eye calibration process in the simulation environment, the only error source is the pose calculation error of the AR marker. However, when the four common algorithms above are used for calibration, the calculated $T_m^c$ is treated as error-free, which introduces a certain error into the calibration results of each algorithm. In other words, the conventional hand–eye calibration algorithms emphasize generality and do not use the prior knowledge that the position of the AR marker is fixed during the calibration process; therefore, it is difficult for them to obtain high-precision calibration results. The definition of the reprojection error of the hand–eye calibration results makes full use of this prior knowledge: according to the previous analysis, the smaller the average reprojection error, the higher the accuracy of the calibration results, so the calibration results can be improved by minimizing the reprojection error.
Based on the above analysis, the following exploratory experiments were carried out by controlling the variables to test whether a smaller average reprojection error can be obtained by adjusting the parameters in the calibrated hand–eye transformation matrix. In the experiment, the $T_m^b$ used to calculate the reprojection error takes the real values in Table 2, and the translation parameters calibrated by the Tsai–Lenz algorithm in Table 3 are taken as the initial values. The three translation parameters x, y, and z were adjusted individually with a step length of 0.001 m, and the adjusted hand–eye transformation matrix was substituted into Equations (15) and (16) to calculate the average reprojection error for each group of samples in the simulation experiment. The step size is selected based on the order of magnitude of the translation parameters: if the step size is too large, the search may skip over the optimal solution; if it is too small, the computational overhead increases, especially in high-dimensional parameter spaces. The experimental results are shown in Figure 8, where the purple dotted line marks the parameter value of the minimum point and the black dotted line marks the corresponding average reprojection error.
As shown in Figure 8, the average reprojection error can be reduced to a certain extent by adjusting each translation parameter separately. The translation parameters calibrated by the Tsai–Lenz algorithm are used as the initial values, which reduces the parameter search space and helps locate the translation parameters corresponding to the lowest point of the reprojection error. However, although the average reprojection error is minimized when z = 0.03858 m, this value deviates from the real value $z_r$ = 0.0345 m (Table 2). Therefore, higher-accuracy translation parameters cannot be guaranteed by adjusting x, y, and z individually.
Next, we used 0.001 m as the step length and adjusted the x, y, and z parameters simultaneously. The resulting variation of the reprojection error is shown in Figure 9, where the colors of the data points reflect the magnitude of the reprojection errors. The translation matrix that minimizes the average reprojection error is $t_m = (-0.07059, 0.00015, 0.03558)^T$, with a corresponding average reprojection error $err\_proj_{avg} = 0.69861$; the corresponding translation error, calculated using Equation (8), is $err_t = 0.00165$. By comparison, Figures 5 and 7 show that the calibration result of the Tsai–Lenz algorithm has a translation error $err_t = 0.0022$ and an average reprojection error $err\_proj_{avg} = 2.96867$. Hence, there exists a set of translation parameters $t_b = (x_b, y_b, z_b)$ that minimizes the average reprojection error and whose translation error $err_t$ is smaller than that of the parameters calibrated using the Tsai–Lenz algorithm. In other words, the accuracy of the hand–eye calibration results can be improved by simultaneously adjusting the three parameters x, y, and z to minimize the reprojection error.
During the calibration process, once a position $P_i$ of the manipulator is determined, the four quantities $U_i$, $V_i$, $(T_b^e)_i$, and $T_m^b$ in Equation (15) are determined accordingly. Therefore, $T_e^c = (T_c^e)^{-1}$ determines the size of the reprojection error $err\_proj_i$. According to Equation (16), once the $N$ positions are determined, the magnitude of the average reprojection error $err\_proj_{avg}$ is uniquely determined by $T_c^e$, and the mapping function from the hand–eye transformation matrix $T_c^e$ to the average reprojection error can be defined as

$$err\_proj_{avg} = f\!\left(T_c^e\right) \tag{17}$$
Furthermore, $T_c^e$ is uniquely determined by the translation parameters $t = (x, y, z)$ and the rotation parameters $r = (roll, pitch, yaw)$; hence, the mapping function $F$ from the translation and rotation parameters to the average reprojection error can be defined as follows:

$$err\_proj_{avg} = F(t, r) = F(x, y, z, roll, pitch, yaw) \tag{18}$$
Based on the above definitions, this study transforms the optimization of the hand–eye conversion matrix into finding the minimum of the objective function $F$, optimizing the hand–eye conversion matrix from the perspective of minimizing the reprojection error.
Next, with a step size of 0.0001 m, the three parameters x, y, and z were adjusted to search for the translation parameters that minimize the function $F$. The optimal translation parameters are $t_b = (-0.07058, 0.00039, 0.03483)$, and the corresponding average reprojection error $err\_proj_{avg}$ is 0.36084, which is 87.845% lower than that of the Tsai–Lenz algorithm. The translation error $err_t$ is 0.0007836 m, which is 64.382% lower than that of the Tsai–Lenz algorithm.
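One way to implement this translation-only refinement is a fixed-step grid search around the Tsai–Lenz initial value, reusing avg_reproj_error from the sketch above; the ±2 mm search radius is an assumption, since the paper specifies only the 0.001 m and 0.0001 m step sizes:

```python
import numpy as np
from itertools import product

def refine_translation(K, T_c_e_init, T_b_e_list, T_m_b, Q_list,
                       radius=0.002, step=0.0001):
    """Grid-search the translation part of the hand-eye matrix T_c^e around its
    initial value, minimizing the average reprojection error (objective F of
    Equation (18) with the rotation parameters held fixed)."""
    best_T, best_err = T_c_e_init.copy(), np.inf
    offsets = np.arange(-radius, radius + step / 2, step)
    for dx, dy, dz in product(offsets, offsets, offsets):
        T = T_c_e_init.copy()
        T[:3, 3] += [dx, dy, dz]
        err = avg_reproj_error(K, np.linalg.inv(T), T_b_e_list, T_m_b, Q_list)
        if err < best_err:
            best_T, best_err = T, err
    return best_T, best_err
```

In practice, a coarse pass with the 0.001 m step followed by a fine pass with the 0.0001 m step, as described above, keeps the number of evaluated grid points manageable.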
In the above optimization process, only the translation parameters of the hand–eye conversion matrix were adjusted. Numerous simulation calibration experiments indicated that the translation parameters have a greater influence on the accuracy of the calibration results than the rotation parameters, and restricting the search to the translation parameters keeps the parameter space small. Generally, adjusting only the translation parameters yields satisfactory calibration results. If the translation and rotation parameters are adjusted simultaneously, higher-precision calibration results can in theory be obtained, but the process is highly time consuming.

4. Hand–Eye Calibration Algorithm Experiment

4.1. Calibration Process and Results

To calculate the spatial posture of the end of the robotic arm in real time, the standard DH parameter method is used to carry out the kinematic modeling of the 7-degree-of-freedom robotic arm, and the robotic arm link coordinate system is established as shown in Figure 3. The corresponding DH parameters are listed in Table 1. In the table, $i$ represents the joint index, $\alpha_{i-1}$ denotes the rotation angle of link $i$ relative to link $i-1$, $a_{i-1}$ is the length of the previous link, $d_i$ indicates the offset distance of the $i$-th joint, and $\theta_i$ represents the rotational angle of each joint. Specifically, $d_{bs} = 0.3705$ m, $d_{se} = 0.3300$ m, $d_{ew} = 0.3200$ m, and $d_{wf} = 0.2255$ m. In the experimental environment, the size of the AR marker was 10 cm × 10 cm. The RGB camera internal parameters are as follows:
$$(f_x, f_y, u_0, v_0) = (1373.72,\ 1374.10,\ 965.286,\ 554.510) \tag{19}$$

$$(k_1, k_2, p_1, p_2) = (0.142,\ 0.288,\ 0.002,\ 0.000) \tag{20}$$
With the internal parameters of the camera already calibrated, the hand–eye calibration experiment was performed in a real environment. The experimental configuration is shown in Figure 10a, with the AR marker placed on the workbench in front of the robotic arm. To obtain the real position of the AR marker, an auxiliary calibration tool is installed at the end of the manipulator, and the manipulator is manually controlled to align the tip of the calibration tool with the center point of the AR marker (Figure 10b). Finally, the forward kinematics equation of the manipulator is used to calculate the translation matrix of the AR marker center point relative to the base coordinate system of the manipulator: $t_r = (0.53514, 0.00406, 0.25409)^T$ m.
According to the experimental and error analysis results in the simulation environment, the optimized hand–eye calibration process is as follows.
• The real translation matrix $t_r$ of the AR marker coordinate system relative to the base coordinate system of the manipulator is obtained using the auxiliary calibration tool, and the position of the AR marker then remains unchanged.
• The manipulator is moved to 20 different states in which the corner information of the AR marker can be detected, and the corresponding 20 groups of coordinate-system transformation data are collected and recorded.
• The mean value $(T_m^b)_{avg}$ of $T_m^b$ is calculated from the coordinate transformation data of each group, and the translation part of $(T_m^b)_{avg}$ is replaced by $t_r$ to obtain $(T_m^b)_{proj}$, the reference used for calculating the reprojection error.
• The Tsai–Lenz algorithm is used to compute the initial hand–eye transformation matrix $(T_c^e)_{init}$, whose translation parameters are then automatically adjusted to minimize the average reprojection error, yielding the optimized hand–eye transformation matrix $(T_c^e)_{optimized}$ (a code sketch of this workflow is given after the list).
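A hypothetical end-to-end sketch of this workflow, reusing make_T, avg_reproj_error, and refine_translation from the earlier sketches; note that the elementwise averaging of $T_m^b$ below is a rough stand-in, since the paper does not state how the mean of the homogeneous matrices is formed:

```python
import numpy as np
import cv2

def optimized_hand_eye(K, T_e_b_list, T_m_c_list, Q_list, t_r):
    # Step 4 (initialization): Tsai-Lenz estimate of T_c^e.
    R0, t0 = cv2.calibrateHandEye(
        [T[:3, :3] for T in T_e_b_list], [T[:3, 3] for T in T_e_b_list],
        [T[:3, :3] for T in T_m_c_list], [T[:3, 3] for T in T_m_c_list],
        method=cv2.CALIB_HAND_EYE_TSAI)
    T_c_e = make_T(R0, t0.ravel())

    # Step 3: average (T_m^b)_i over all states (rough elementwise mean), then
    # overwrite the translation with the measured ground truth t_r to obtain
    # the reference used for the reprojection error.
    T_m_b = np.mean([T_e_b @ T_c_e @ T_m_c
                     for T_e_b, T_m_c in zip(T_e_b_list, T_m_c_list)], axis=0)
    T_m_b[:3, 3] = t_r

    # Step 4 (refinement): minimize the average reprojection error over the
    # translation parameters only.
    T_b_e_list = [np.linalg.inv(T) for T in T_e_b_list]
    return refine_translation(K, T_c_e, T_b_e_list, T_m_b, Q_list)
```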
Because it is difficult to obtain the true value of the rotation parameters of $T_m^b$ in a real environment, their average value was used when calculating the reprojection error in the above calibration process. Following this process, a hand–eye calibration experiment was carried out in a real environment, and the hand–eye conversion matrix $T_c^e$ was calculated using the four traditional algorithms and the optimized algorithm described above. The results are summarized in Table 4. In terms of calibration efficiency, because the calibration formula involves only elementary arithmetic and matrix operations, the computational requirements are modest; on a current mainstream CPU (Intel i7-10750H), the calibration takes less than 1 ms.

4.2. Reprojection Error Analysis

To evaluate the performance difference between the traditional algorithms and the optimized algorithm in a real environment, the reprojection errors corresponding to each hand–eye transformation matrix in Table 4 were calculated using the coordinate transformation data of each group. The results are shown in Figure 11, where the horizontal line indicates the average reprojection error for each method. Because the difference between the hand–eye conversion matrices calculated by the Tsai–Lenz and Horaud algorithms is very small, only the reprojection error corresponding to the Tsai–Lenz algorithm is shown. In the real environment, except for the Andreff algorithm, the traditional algorithms perform similarly. Notably, the average reprojection error of the optimized algorithm is 44.43% lower than that of the Tsai–Lenz algorithm. Furthermore, the variability of the connecting lines between the discrete points in the graph serves as an indicator of robustness: the fluctuation of the yellow line, representing the optimized algorithm, is notably smaller than that of the other algorithms, underscoring its heightened robustness.

4.3. Visual Positioning Error Analysis

In a real scene, the calibration results of each algorithm were used to test the accuracy of the manipulator's visual positioning. During the test, the AR marker was moved several times, and the state of the manipulator was adjusted to ensure that the corner information of the AR marker could be detected. The position $P_c$ of the AR marker center point in the base coordinate system of the manipulator was then calculated using Equation (2). Finally, the manipulator was manually controlled, and the auxiliary calibration tool was used to obtain the real position $P_r$ of the AR marker center point.

To quantitatively evaluate the positioning accuracy of the manipulator in the real scene, the visual positioning error was defined as the two-norm of the difference between the calculated value $P_c$ and the real value $P_r$ of the AR marker center point position.
In the actual test process, 10 datasets were collected, and the hand–eye conversion matrix calculated by each algorithm was used for the visual positioning of the manipulator. The error statistics are shown in Figure 12. The horizontal line represents the average visual positioning error for each algorithm. The optimized hand–eye calibration method can significantly reduce the visual positioning error of the manipulator. Compared with the traditional Tsai–Lenz algorithm, the average visual positioning error was reduced by 50.63%.

5. Summary

To ensure the precision of visual information obtained by a robot, this study delves into the hand–eye calibration algorithm of the manipulator. Commonly used hand–eye calibration algorithms are tested in a simulated environment, and an error analysis is conducted on the results. Subsequently, the reprojection error of the hand–eye calibration results is defined, and an optimization method for hand–eye calibration is proposed based on the eye-in-hand manipulator. Experimental validation is performed in a real environment using the manipulator visual grasping system. The results indicate a significant enhancement in calibration accuracy: compared with the Tsai–Lenz algorithm, the proposed optimization algorithm reduces the average reprojection error by 44.43% and the average visual positioning error by 50.63%. Thus, the proposed optimization method markedly enhances the precision of hand–eye calibration results, elevating the overall performance of hand–eye calibration algorithms.

Author Contributions

Methodology, G.P. and Z.R.; software, Q.G.; validation, Q.G.; investigation, Z.R. and Q.G.; data curation, Z.R. and Q.G.; project administration, G.P.; structural optimization and writing improvement, Z.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Hubei Province Core Technology for Bridging Development Gaps Project (HBSNYT202213), the Hubei Province Unveiling Science and Technology Project (2021BEC008), and the Hubei Province Natural Science Foundation of China (No. 2019CFB526).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Macenski, S.; Foote, T.; Gerkey, B.; Lalancette, C.; Woodall, W. Robot Operating System 2: Design, architecture, and uses in the wild. Sci. Robot. 2022, 7, eabm6074. [Google Scholar] [CrossRef] [PubMed]
  2. Yang, L.; Yu, J.; Guo, Y.; Chen, S.; Tan, K.; Li, S. An electrode-grounded droplet-based electricity generator (EG-DEG) for liquid motion monitoring. Adv. Funct. Mater. 2023, 33, 202302147. [Google Scholar] [CrossRef]
  3. Suomalainen, M.; Karayiannidis, Y.; Kyrki, V. A survey of robot manipulation in contact. Robot. Auton. Syst. 2022, 156, 104224. [Google Scholar]
  4. Lin, W.; Liang, P.; Luo, G.; Zhao, Z.; Zhang, C. Research of Online Hand–Eye Calibration Method Based on ChArUco Board. Sensors 2022, 22, 3805. [Google Scholar] [CrossRef] [PubMed]
  5. Wang, D.; Wang, L.; Wu, J.; Ye, H. An experimental study on the dynamics calibration of a 3-DOF parallel tool head. IEEE/ASME Trans. Mechatron. 2019, 24, 2931–2941. [Google Scholar] [CrossRef]
  6. Zhang, J.; Li, X.; Dai, X. Camera calibration method based on 3D board. J. Southeast Univ. (Nat. Sci. Ed.) 2011, 43, 543–548. [Google Scholar] [CrossRef]
  7. Tsai, R.Y.; Lenz, R.K. A new technique for fully autonomous and efficient 3 d robotics hand/eye calibration. IEEE Trans. Robot. Autom. 1989, 5, 345–358. [Google Scholar] [CrossRef]
  8. Ge, P.; Wang, Y.; Wang, H.; Li, G.; Zhang, M. Multivision Sensor Extrinsic Calibration Method with Non-Overlapping Fields of View Using Encoded 1D Target. IEEE Sens. J. 2022, 22, 13519–13528. [Google Scholar] [CrossRef]
  9. Bu, L.; Huo, H.; Liu, X.; Bu, F. Concentric circle grids for camera calibration with considering lens distortion. Opt. Lasers Eng. 2021, 140, 106527. [Google Scholar] [CrossRef]
  10. Chen, X.; Song, X.; Wu, J.; Xiao, Y.; Wang, Y.; Wang, Y. Camera calibration with global LBP-coded phase-shifting wedge grating arrays. Opt. Lasers Eng. 2021, 136, 106314. [Google Scholar] [CrossRef]
  11. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  12. Hartley, R.I. Self-calibration of stationary cameras. Int. J. Comput. Vis. 1997, 22, 5–23. [Google Scholar] [CrossRef]
  13. Li, M. Camera calibration of a head-eye system for active vision. In Proceedings of the Computer Vision—ECCV’94: Third European Conference on Computer Vision, Stockholm, Sweden, 2–6 May 1994; Springer: Berlin/Heidelberg, Germany, 1994; Volume I, p. 3. [Google Scholar]
  14. Hartley, R.I. Projective reconstruction and invariants from multiple images. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 1036–1041. [Google Scholar] [CrossRef]
  15. Pollefeys, M.; Van Gool, L.; Oosterlinck, A. The modulus constraint: A new constraint self-calibration. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; Volume 1. [Google Scholar]
  16. Triggs, B. Autocalibration and the absolute quadric. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997. [Google Scholar]
  17. Heyden, A.; Astrom, K. Flexible calibration: Minimal cases for auto-calibration. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1. [Google Scholar]
  18. Pollefeys, M.; Koch, R.; Van Gool, L. Self-calibration and metric reconstruction inspite of varying and unknown intrinsic camera parameters. Int. J. Comput. Vis. 1999, 32, 7–25. [Google Scholar] [CrossRef]
  19. Pollefeys, M. Self-Calibration and Metric 3D Reconstruction from Uncalibrated Image Sequences. Ph.D. Thesis, Katholieke Universiteit Leuven, Leuven, Belgium, 1999. [Google Scholar]
  20. Hartley, R.I.; Hayman, E.; de Agapito, L.; Reid, I. Camera calibration and the search for infinity. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1. [Google Scholar]
  21. Shiu, Y.C.; Ahmad, S. Calibration of wrist mounted robotic sensors by solving homogeneous transform equations of the form AX = XB. IEEE J. Robot. Autom. 1989, 5, 16–29. [Google Scholar] [CrossRef]
  22. Chen, W.; Du, J.; Xiong, W.; Wang, Y.; Chia, S.; Liu, B.; Cheng, J.; Gu, Y. A Noise-Tolerant Algorithm for Robot-Sensor Calibration Using a Planar Disk of Arbitrary 3-D Orientation. IEEE Trans. Autom. Sci. Eng. 2018, 15, 251–263. [Google Scholar] [CrossRef]
  23. Li, M.; Du, Z.; Ma, X.; Dong, W.; Gao, Y. A robot hand-eye calibration method of line laser sensor based on 3D reconstruction. Robot. Comput.-Integr. Manuf. 2021, 71, 102136. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Qiu, Z.; Zhang, X. Calibration method for hand-eye system with rotation and translation couplings. Appl. Opt. 2019, 58, 5375–5387. [Google Scholar] [CrossRef]
  25. Andreff, N.; Horaud, R.; Espiau, B. On-line hand-eye calibration. In Proceedings of the Second International Conference on 3-D Digital Imaging and Modeling (Cat. No. PR00062), Ottawa, ON, Canada, 8 October 1999. [Google Scholar]
  26. Tabb, A.; Yousef, K.M.A. Solving the robot-world hand-eye (s) calibration problem with iterative methods. Mach. Vis. Appl. 2017, 28, 569–590. [Google Scholar] [CrossRef]
  27. Jiang, J.; Luo, X.; Xu, S.; Luo, Q.; Li, M. Hand-Eye Calibration of EOD Robot by Solving the AXB = YCZD Problem. IEEE Access 2022, 10, 3415–3429. [Google Scholar] [CrossRef]
  28. Liu, Y.; Wang, Q.; Li, Y. A method for hand-eye calibration of a robot vision measuring system. In Proceedings of the 2015 10th Asian Control Conference (ASCC), Kota Kinabalu, Malaysia, 31 May–3 June 2015; pp. 1–6. [Google Scholar] [CrossRef]
  29. Zou, Y.; Chen, X. Hand–eye calibration of arc welding robot and laser vision sensor through semidefinite programming. Ind. Robot 2019, 45, 597–610. [Google Scholar] [CrossRef]
  30. Deng, S.; Mei, F.; Yang, L.; Liang, C.; Jiang, Y.; Yu, G.; Chen, Y. Research on the Hand-eye calibration Method Based on Monocular Robot. In Proceedings of the 2021 International Conference on Mechanical Engineering, Intelligent Manufacturing and Automation Technology (MEMAT), Guilin, China, 15–17 January 2021. [Google Scholar] [CrossRef]
  31. Horaud, R.; Dornaika, F. Hand-eye calibration. Int. J. Robot. Res. 1995, 14, 195–210. [Google Scholar] [CrossRef]
  32. Daniilidis, K. Hand-eye calibration using dual quaternions. Int. J. Robot. Res. 1999, 18, 286–298. [Google Scholar] [CrossRef]
Figure 1. Coordinate system and transformation relationship diagram in the process of hand–eye calibration. The coordinate systems for the manipulator base, manipulator end, camera, and AR marker are defined as $O_b$, $O_e$, $O_c$, and $O_m$. In the coordinate systems in the figure, the red arrow represents the x-axis, the green arrow the y-axis, and the blue arrow the z-axis.
Figure 2. Hand–eye calibration simulation experimental environment built on the ROS and Gazebo simulation platforms. The resolution of the RGB camera in the test environment is 640 × 480 pixels, and the size of the AR marker is 10 cm × 10 cm.
Figure 3. Establishment of the coordinate system of the 7-DOF manipulator in this article. In the coordinate systems in the figure, the red arrow represents the x-axis, the green arrow the y-axis, and the blue arrow the z-axis.
Figure 4. AR marker images and corner detection results collected in the simulation environment.
Figure 5. Euclidean distance error statistics of each calibration algorithm (according to the real values of the parameters in the simulation environment in Table 2). The translation error in the hand–eye conversion matrix computed by the Tsai–Lenz and Andreff algorithms is notably lower than that of the other two algorithms; however, the rotation error of the Tsai–Lenz algorithm is marginally higher than that of the other algorithms.
Figure 6. Visualization of the fluctuation range of the parameters in the translation matrix of $(T_m^b)_i$. The true value is represented by the red dotted line, whereas the orange solid line and green dotted line correspond to the median and average values derived from each algorithm, respectively. The left and right sides of each rectangle signify the lower and upper quartiles, the extent of the I-shaped area conveys the fluctuation range of the data, and outliers are denoted by open circles.
Figure 7. Reprojection error comparison (simulation environment). The average reprojection error associated with the Tsai–Lenz and Andreff algorithms is minimal, and the variation in the reprojection error across the datasets is relatively modest. This observation aligns with the earlier analysis depicted in Figure 5.
Figure 8. Change curves of the reprojection error when adjusting x, y, and z individually.
Figure 9. Variation of the reprojection error when adjusting the x, y, and z parameters simultaneously.
Figure 10. Hand–eye calibration experiment configuration and AR marker center point acquisition. (a) Hand–eye calibration experiment configuration. (b) AR marker center point acquisition.
Figure 11. Comparison of reprojection errors. In the real environment, excluding the Andreff algorithm, the traditional algorithms demonstrate similar performance. Notably, the optimized algorithm achieves a 44.43% reduction in the average reprojection error compared with the Tsai–Lenz algorithm, showcasing its enhanced efficacy in practical scenarios.
Figure 12. Visual positioning error comparison. The optimized hand–eye calibration method reduces the average visual positioning error by 50.63% compared with the traditional Tsai–Lenz algorithm, substantially enhancing the manipulator's visual positioning accuracy in real-world scenarios.
Table 1. DH parameter table of the 7-DOF robot arm in this article.

| $i$ | $\alpha_{i-1}$ (deg) | $a_{i-1}$ (m) | $d_i$ (m) | $\theta_i$ (deg) | $\theta_i^{min}$ (deg) | $\theta_i^{max}$ (deg) |
|---|---|---|---|---|---|---|
| 1 | −90 | 0 | $d_{bs}$ | $\theta_1$ | −180 | 180 |
| 2 | 90 | 0 | 0 | $\theta_2$ | −110 | 110 |
| 3 | 90 | 0 | $d_{se}$ | $\theta_3$ | −180 | 180 |
| 4 | −90 | 0 | 0 | $\theta_4$ | −107 | 107 |
| 5 | −90 | 0 | $d_{ew}$ | $\theta_5$ | −180 | 180 |
| 6 | 90 | 0 | 0 | $\theta_6$ | −110 | 110 |
| 7 | 0 | 0 | $d_{wf}$ | $\theta_7$ | −180 | 180 |
Table 2. True values of the parameters in the simulation environment.

| | Translation Matrix $t_r$ (m) | Rotation Matrix $R_r$ | Euler Angles $E_r$ (rad) |
|---|---|---|---|
| $T_c^e$ | (0.0700, 0.00000, 0.03450) | [0 1 0; 1 0 0; 0 0 1] | (0, 0, π/2) |
| $T_m^b$ | (0.45000, 0.00000, 0.00100) | [0 1 0; 1 0 0; 0 0 1] | (π/2, 0, π/2) |
Table 3. Hand–eye transformation matrix calculated by the calibration algorithms in the simulation environment.

| Calibration Algorithm | Translation Matrix of $T_c^e$ | Rotation Matrix of $T_c^e$ |
|---|---|---|
| Tsai–Lenz [7] | (0.07058, 0.00085, 0.00325) | [0.00203 0.99999 0.00337; 0.99999 0.00204 0.00317; 0.00318 0.00336 0.99999] |
| Horaud [31] | (0.07160, 0.00116, 0.00315) | [0.00190 0.99999 0.00326; 0.99999 0.00191 0.00272; 0.00273 0.00326 0.99999] |
| Andreff [25] | (0.07205, 0.00032, 0.00343) | [0.00185 0.99999 0.00323; 0.99999 0.00186 0.00250; 0.00251 0.00322 0.99999] |
| Daniilidis [32] | (0.07198, 0.00116, 0.00315) | [0.00161 0.99999 0.00371; 0.99999 0.00162 0.00215; 0.00216 0.00371 0.99999] |
Table 4. Hand–eye transformation matrix from the calibration algorithms in a real environment.

| Calibration Algorithm | Translation Matrix of $T_c^e$ | Rotation Matrix of $T_c^e$ |
|---|---|---|
| Tsai–Lenz [7] | (0.03740, 0.09599, 0.04845) | [0.99981 0.01313 0.01423; 0.01346 0.99963 0.02330; 0.01392 0.02348 0.99962] |
| Horaud [31] | (0.03740, 0.09599, 0.04845) | [0.99981 0.01315 0.01422; 0.01348 0.99963 0.02330; 0.01391 0.02349 0.99962] |
| Andreff [25] | (0.03658, 0.09808, 0.06191) | [0.99981 0.01258 0.01464; 0.01292 0.99965 0.02307; 0.01434 0.02325 0.99962] |
| Daniilidis [32] | (0.03746, 0.09580, 0.04834) | [0.99981 0.01179 0.01497; 0.01214 0.99965 0.02313; 0.01469 0.02330 0.99962] |
| Optimized | (0.03540, 0.09349, 0.05795) | [0.99981 0.01313 0.01423; 0.01346 0.99963 0.02330; 0.01392 0.02348 0.99962] |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
