**1. Introduction**

The topic of robot calibration is well-established, yet end-users still identify it as a significant factor that negatively impacts robot usability and utility [1]. Calibration is followed by registration of the robot frame to the world frame so that accurate encoder angles can be obtained from inverse kinematics and fed to the robot's controller. Both procedures have a profound impact on robot performance and, as pointed out in [2], "it is impossible to distinguish the end-effector error contributed either by" incorrect model parameters or by an inaccurate registration transformation.

Various methods of calibrating a robot's kinematic chain have been developed (e.g., [3–5]). Many of these methods rely on intrinsic kinematic models (e.g., [6–9]), which minimize complicated, nonlinear error functions (unless only linearized error models are considered, which may trade accuracy for mathematical simplicity) in at least *N*-dimensional space, where *N* is the number of controllable joints in the serial kinematic chain. Calibrations based on extended modeling (i.e., beyond rigid kinematics) include compensating for thermal effects [10], elastostatic errors [11], and higher-order errors [12]. Likewise, examples of non-kinematics-based calibrations can be seen in [13,14]. There are also compensation techniques that can handle both kinematic and non-kinematic errors, but they require continuous calculation and application of corrections during on-line operations [15–17], or dynamically selected, pre-calculated hand–eye calibrations from a table [18].

Robot calibration procedures depend on theoretical models of the mechanical system's forward kinematics. For a serial, open-chain robot, the Product of Exponentials (POE) model, based on screw theory, is considered one of the most versatile because it handles the singularities present in the popular Denavit–Hartenberg (DH) parameter model [19]. For robots with revolute joints only, each joint is parametrized by a three-dimensional (3D) unit vector indicating the axis of rotation and a 3D vector of any point on the axis line. Calibration procedures for such models rely on Circle Point Analysis (CPA) applied to 3D data acquired with a laser tracker or other sensor: the robot is positioned in a zero-reference configuration (i.e., where all joint angles are set to zero), and then each joint is rotated one by one while all other joints are kept fixed at zero [20,21]. Unfortunately, POE-based models do not explicitly include the zero offsets of encoder angles. Accurate estimation of the zero offsets is critical because the largest contribution to the robot positioning error (97%) comes from incorrect zero offsets [22]. Performing the zero-offset calibration within CPA causes errors in the registration transformation and in the individual offsets to accumulate. This may lead to inconsistent calibration results: for some poses, the calibration process reduced the robot pose error seven-fold, while for others it actually increased the error twofold [23].

**Citation:** Franaszek, M.; Marvel, J.A. Using Full Pose Measurement for Serial Robot Calibration. *Appl. Sci.* **2022**, *12*, 3680. https://doi.org/10.3390/app12073680

Academic Editors: Luis Gracia and Carlos Perez-Vidal

Received: 4 March 2022; Accepted: 31 March 2022; Published: 6 April 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

A desired outcome of calibration is the reduction of error in the full pose of the robot end-effector, i.e., in both its position and orientation. However, the two components belong to different spaces: position is a vector in 3D space whose components have length units, like millimeters, while the orientation matrix is parameterized by three angles in degrees. This causes a fundamental scaling problem when a full pose error is minimized (as discussed in [24], ad hoc scaling factors put more weight on either the linear or the angular part of the pose error and push the optimizer towards different solutions). This may become a problem in commercial applications where not only the position but also the orientation of the end-effector is important. For example, in automated drilling, the parallelism between the spindle axis and the normal of the drilled plate surface should be below 0.2° [25]. The small orientation error of 0.05° required for automated riveting, drilling, and spot welding was achieved by applying online pose corrections in [26]. Automated fiber placement is another industrial application where the orientation of the robot end-effector is important [27].

The approach that we introduce in this paper avoids the pitfall of minimizing an unbalanced 6D error. First, the link twists are determined in a CPA-like procedure from 3D data. Then, using full 6D poses measured by the sensor, the encoder zero offsets are determined in a separate minimization. The error function used in this minimization depends neither on the linear DH parameters (link lengths and linear offsets) nor on the position components of the noisy 6D poses acquired by the sensor. Once the twists and zero offsets are known, they are inserted into another error function, which depends only on the position components of the sensor data. The remaining linear DH parameters are determined by minimizing this second error function. For comparison, robot calibration based on 3D sensor data only is also performed. The obtained results clearly show that the orientation errors of the end-effector are smaller when the orientation part of the 6D data is used, while the position errors are comparable for both methods.

#### **2. Background**

The frame *FTCP* associated with the robot's Tool Center Point (TCP) coordinate system can be expressed as a 4 × 4 homogeneous transformation consisting of a 3 × 3 rotation matrix *R* and a 3 × 1 translation vector *t*:

$$F\_{TCP} = \begin{bmatrix} \mathbf{R}\_{3 \times 3} & \mathbf{t}\_{3 \times 1} \\ \mathbf{0}\_{1 \times 3} & 1 \end{bmatrix} \tag{1}$$

For a serial, open-chain collaborative robot arm with *N* revolute joints, the frame *FTCP* in the robot's base coordinate system can be determined using a forward kinematic model:

$$F\_{TCP}(\theta\_k) = F\_1 F\_2 \dots F\_N F\_T \tag{2}$$

where

$$
\theta\_k = [\theta\_{1,k}, \theta\_{2,k}, \dots, \theta\_{n,k}, \dots, \theta\_{N,k}] \tag{3}
$$

and *θn*,*<sup>k</sup>* is the encoder angle of the *n*-th revolute joint for the *k*-th robot configuration, *F<sup>n</sup>* is the homogeneous transformation associated with the *n*-th joint, *n* = 1, ... , *N*, and *F<sup>T</sup>* is a transformation from the robot's flange frame to the TCP. Using the nominal DH parameters, the rotation component of each *Fn* can be written as

$$\mathcal{R}\_{n,k}(\theta\_{n,k}, \epsilon\_n, \alpha\_n) = \mathcal{R}\_z(\theta\_{n,k} + \epsilon\_n)\mathcal{R}\_x(\alpha\_n) \tag{4}$$

where *R<sup>z</sup>* and *R<sup>x</sup>* are rotations around the *z* and *x* axis, respectively. The two angular DH parameters in (4) are *α<sup>n</sup>* (the link twist) and *ε<sup>n</sup>* (the zero offset angle for the *n*-th encoder). The translation component of *Fn* can be expressed as

$$\mathbf{t}\_n = \begin{bmatrix} r\_n \cos(\theta\_{n,k} + \epsilon\_n) & r\_n \sin(\theta\_{n,k} + \epsilon\_n) & d\_n \end{bmatrix}^T,\tag{5}$$

where *rn* and *dn* are two linear DH parameters (link length and offset), and [...] *<sup>T</sup>* denotes the vector transpose.
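As an illustration, the forward kinematic model of (2)–(5) can be sketched in a few lines of numpy. This is a hedged Python sketch, not the Matlab code used in the paper; the function names `dh_frame` and `forward_kinematics` are ours.

```python
import numpy as np

def dh_frame(theta, eps, alpha, r, d):
    """Homogeneous transform F_n for one revolute joint (distal DH variant):
    rotation R_z(theta + eps) R_x(alpha) as in Eq. (4), translation
    t_n = [r cos(theta + eps), r sin(theta + eps), d]^T as in Eq. (5)."""
    ct, st = np.cos(theta + eps), np.sin(theta + eps)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, r * ct],
        [st,  ct * ca, -ct * sa, r * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(theta_k, eps, alpha, r, d, F_T=np.eye(4)):
    """F_TCP(theta_k) = F_1 F_2 ... F_N F_T as in Eq. (2)."""
    F = np.eye(4)
    for n in range(len(theta_k)):
        F = F @ dh_frame(theta_k[n], eps[n], alpha[n], r[n], d[n])
    return F @ F_T
```

Note that the rotation block of each `dh_frame` depends only on the angular parameters, which is the property exploited in (6).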

From (2) and (4) it can be seen that the rotation part *RTCP* of the *FTCP* frame depends only on the rotation components

$$R\_{TCP}(\theta\_k) = R\_{1,k} R\_{2,k} \dots R\_{N,k} R\_T. \tag{6}$$

This is a general property of serial chain manipulators with revolute joints, and is not dependent on a particular kinematic model (here, we use the DH model for illustration purposes only). In the remainder of this paper, we use the notation

$$R\_k = R\_{TCP}(\theta\_k, \alpha, \epsilon) \tag{7}$$

where *α* = [*α*1, *α*2,..., *αN*] and *ε* = [*ε*1, *ε*2,..., *εN*] are the vectors of the angular DH parameters. Note that the positional component *t<sup>k</sup>* of the *FTCP*(*θk*) frame depends on the joint angles and all four vectors of the DH parameters:

$$t\_k = t\_{TCP}(\theta\_k, \alpha, \epsilon, r, d). \tag{8}$$

#### **3. Determination of Link Twist**

To ensure that the forward kinematics correctly predicts the tool pose in the robot coordinate frame, the DH parameters must be determined first, during the robot calibration process. Once calibrated, they remain fixed during robot on-line operations. Calibration may be performed by installing a spherically-mounted retro-reflector (SMR) at the robot's TCP and tracking it with a laser tracker. Of the four vectors of DH parameters (*α*, *ε*, *r*, *d*), the twist angles *α* can be determined independently of the other DH parameters by using the 3D data acquired for the CPA procedure. The twist angle *α<sup>n</sup>* is defined as the angle between two consecutive joint axes of rotation, *u<sup>n</sup>* and *un*+<sup>1</sup> (the last twist *α<sup>N</sup>* is, by definition, set to zero). If *Cn*,*<sup>K</sup>* denotes a set of *K* 3D points *t<sup>k</sup>* calculated in (8) and acquired for the *n*-th joint in the CPA procedure, then these points are distributed along an arc (a section of a circle) on a plane in 3D space. Thus, for each joint *n* = 1, ..., *N*, a unit vector *c<sup>n</sup>* normal to the fitted plane can be calculated. While the exact locations of the *Cn*,*<sup>K</sup>* points depend on all DH parameters (*ε*, *α*, *r*, *d*), the vector *c<sup>n</sup>* is parallel to the axis of rotation *u<sup>n</sup>* and, therefore, *α<sup>n</sup>* can be determined from the scalar product of two consecutive axes, *α<sup>n</sup>* = arccos (*c<sup>n</sup>* · *cn*+1). If *Bn*,*<sup>K</sup>* is the set of 3D points measured by the laser tracker which corresponds to *Cn*,*K*, then a unit vector *b<sup>n</sup>* normal to the plane fitted to *Bn*,*<sup>K</sup>* can be calculated, and the angle between two consecutive normals *b<sup>n</sup>* and *bn*+<sup>1</sup> is used as the estimate of *αn*.

To correctly estimate the twist angles *αn*, two important steps must be followed. First, since arccos() is an even function, the sign of the estimated angle must be set equal to the sign of the default (i.e., theoretical) twist angle, *sign*(*α*0,*n*). Second, the plane-fitting procedure provides only a normal to the plane; its particular direction (up or down) depends on the bounding box containing the points. To remove this ambiguity, the fitted normal *b***˜** *<sup>n</sup>* must obey the right-hand rule together with the acquired 3D points *Bn*,*K*, which are located on a section of a circle. Thus, the corrected estimate of the twist angle *α*˜ *<sup>n</sup>* is determined as

$$\tilde{\alpha}\_n = \operatorname{sign}(\alpha\_{0,n}) \arccos \left( \tilde{b}\_n \cdot \tilde{b}\_{n+1} \right). \tag{9}$$
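The plane-fitting and sign-correction steps described above can be sketched in numpy as follows. This is a sketch under the assumption that the arc points are ordered by increasing joint angle; the orientation heuristic (the turning direction of consecutive chords) is one of several ways to enforce the right-hand rule and is our choice, not necessarily the authors'.

```python
import numpy as np

def circle_plane_normal(points):
    """Fit a plane to ordered 3D arc points (SVD) and return the unit normal
    oriented by the right-hand rule along the direction of travel."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)   # smallest singular vector = normal
    n = Vt[-1]
    # consecutive chord vectors turn about the travel normal (right-hand rule)
    chords = np.diff(points, axis=0)
    sense = np.sum(np.cross(chords[:-1], chords[1:]) @ n)
    return n if sense > 0 else -n

def estimated_twist(points_n, points_n1, alpha0_n):
    """alpha_n = sign(alpha0_n) * arccos(b_n . b_{n+1}) as in Eq. (9)."""
    b_n = circle_plane_normal(points_n)
    b_n1 = circle_plane_normal(points_n1)
    cosang = np.clip(b_n @ b_n1, -1.0, 1.0)
    return np.sign(alpha0_n) * np.arccos(cosang)
```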

#### **4. Robot Calibration Based on 3D Measurements**

Once the twist angles *α***˜** are estimated, they can be inserted in (4) and the remaining DH parameters (*ε*, *r*, *d*) can be found in a traditional calibration procedure using 3D data. Given *K* configurations of the arm (i.e., "poses") defined by *θ<sup>k</sup>* (*k* = 1, . . . , *K*), the SMR is moved to *K* positions *g<sup>k</sup>* = [*xk*, *yk*, *zk*] in 3D Cartesian space. Since *g<sup>k</sup>* and *t<sup>k</sup>* in (8) are determined in different coordinate frames, the error function *Errpos* used in the calibration process is based on relative distances between pairs of 3D points to avoid a dependence on registration. For convenience, the whole set of *K* points can be divided into two halves and then

$$Err\_{pos}(\alpha, \epsilon, r, d) = \frac{1}{K/2} \sum\_{k=1}^{K/2} \left( L\_k(\alpha, \epsilon, r, d) - l\_k \right)^2 \tag{10}$$

where

$$L\_k(\alpha, \epsilon, r, d) = \|t\_k - t\_{k + K/2}\| \tag{11}$$

and *tk*, *tk*+*K*/2 are determined in (8), ‖·‖ is the Euclidean norm, and *l<sup>k</sup>* = ‖*g<sup>k</sup>* − *gk*+*K*/2‖ is the distance between the two corresponding points measured by the laser tracker.

Thus, the fitted DH parameters *ε***˜**, *r***˜**, *d***˜** can be estimated by minimizing *Errpos*

$$(\tilde{\epsilon}, \tilde{r}, \tilde{d}) = \arg\min\_{\epsilon, r, d} Err\_{pos}(\tilde{\alpha}, \epsilon, r, d), \tag{12}$$

provided that the vector of link twist angles *α***˜** is known. The actual dimension of the search space is 3*N* − 2 since the distance between two points in (11) does not depend on *d*<sup>1</sup> and *ε*<sup>1</sup> (these two parameters may have arbitrary values, which only affect the registration transformation between the robot and the sensor). In the remainder of this paper, we call this procedure Method 1.
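A minimal sketch of the registration-free error function (10)–(11) is given below; the forward-kinematics callback `fk` is a placeholder for the model position *t<sup>k</sup>* in (8), and the function name is ours.

```python
import numpy as np

def err_pos(params, thetas, g, fk):
    """Registration-free position error, Eqs. (10)-(11): compare model
    distances L_k = ||t_k - t_{k+K/2}|| with measured l_k = ||g_k - g_{k+K/2}||.
    `fk(theta, params)` returns the model TCP position t_k for one pose;
    `g` holds the K sensor positions in the sensor frame."""
    K = len(thetas)
    half = K // 2
    t = np.array([fk(th, params) for th in thetas])
    L = np.linalg.norm(t[:half] - t[half:2 * half], axis=1)
    l = np.linalg.norm(g[:half] - g[half:2 * half], axis=1)
    return np.mean((L - l) ** 2)
```

Because only pairwise distances enter the residual, any rigid transformation between the robot and sensor frames leaves the error unchanged, which is exactly the decoupling from registration described above.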

### **5. Calibration Based on 6D Measurements**

Full 6D poses (i.e., both position and orientation of the end-effector) have been used for robot calibration in different procedures [14,28,29]. The approach we propose calculates the zero offsets *ε***˜** in a separate minimization based on the orientation components of the 6D poses and the twist angles *α***˜** determined earlier.

In the remainder of this paper, we assume that, for each robot configuration defined by *θk*, there is a corresponding 3 × 3 rotation matrix *G<sup>k</sup>* provided by an external sensor. Both *G<sup>k</sup>* and *R<sup>k</sup>* in (7) are determined in different coordinate frames. If **Ω** denotes the rotation component of registration matrix then, for each *k*-th robot orientation *R<sup>k</sup>* in (7) and the corresponding *G<sup>k</sup>* measured with the external sensor, the following relation holds:

$$G\_k = N\_k \, \Omega \, R\_k \tag{13}$$

where N*<sup>k</sup>* is a small, random rotation accounting for noise in the orientation part of the 6D data acquired by the sensor. For a pair of orientations *G<sup>k</sup>* and *G<sup>k</sup>*′ (where *k*′ = *k* + *K*/2), the matrix *D<sup>k</sup>* can be defined as

$$D\_k = G\_k R\_k^{-1} R\_{k'} G\_{k'}^{-1} = N\_k N\_{k'}^{-1} \tag{14}$$

and its angle of rotation *ψ<sup>k</sup>* ∈ [0°, 180°] is calculated as

$$\psi\_k = \arccos\left(\frac{1}{2}(\operatorname{trace}(\mathbf{D}\_k) - 1)\right). \tag{15}$$

The matrix *D<sup>k</sup>* and its angle *ψ<sup>k</sup>* depend on the measured joint angle vectors *θ<sup>k</sup>* and *θ<sup>k</sup>*′, the twist angles *α***˜** estimated earlier, and all zero offsets *ε<sup>n</sup>* for *n* = 2, ... , *N*, which can be obtained by minimizing the error function *Errrot*

$$(\tilde{\epsilon}\_{2}, \dots, \tilde{\epsilon}\_{N}) = \arg\min\_{\epsilon\_{2}, \dots, \epsilon\_{N}} Err\_{rot}(\tilde{\alpha}, \epsilon\_{2}, \dots, \epsilon\_{N}), \tag{16}$$

where

$$Err\_{rot}(\tilde{\alpha}, \epsilon\_2, \dots, \epsilon\_N) = \frac{1}{K/2} \sum\_{k=1}^{K/2} \psi\_k(\tilde{\alpha}, \epsilon\_2, \dots, \epsilon\_N). \tag{17}$$
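Equations (14), (15), and (17) translate directly into code; note that *R*<sup>−1</sup> = *R<sup>T</sup>* for rotation matrices. A numpy sketch (function names are ours):

```python
import numpy as np

def rotation_angle(D):
    """Angle of rotation of a 3x3 rotation matrix, Eq. (15)."""
    c = np.clip((np.trace(D) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(c)

def err_rot(G, R):
    """Mean relative-rotation error, Eqs. (14) and (17): pair pose k with
    pose k' = k + K/2, form D_k = G_k R_k^T R_k' G_k'^T (the registration
    rotation Omega cancels), and average the rotation angle psi_k."""
    K = len(G)
    half = K // 2
    psi = []
    for k in range(half):
        kp = k + half
        D = G[k] @ R[k].T @ R[kp] @ G[kp].T
        psi.append(rotation_angle(D))
    return np.mean(psi)
```

Since Ω cancels inside *D<sup>k</sup>*, minimizing `err_rot` over the zero offsets requires no knowledge of the registration, mirroring the decoupling used for *Errpos*.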

Once the zero offsets *ε*˜ are estimated, they can be inserted in (12) and the remaining linear DH parameters *r* and (*d*2,..., *dN*) can be found by minimizing *Errpos*(*α*˜, *ε*˜, *r*, *d*) in (10). In the remainder of this paper, we call this procedure Method 2.

To show a scaling problem when both position and rotation errors are simultaneously minimized, robot calibration was attempted by minimizing the following error function:

$$Err\_{full}(\alpha, \epsilon, r, d) = Err\_{pos} + w \, Err\_{rot}, \tag{18}$$

where *Errpos* is defined in (10), *Errrot* in (17), and the positive scaling factor *w* ensures correct dimensionality of *Errfull*. In the remainder of this paper, we call this procedure Method 3.

#### **6. Registering Robot Frame**

When all robot model parameters are known, i.e., the estimated *ε*˜2, ... , *ε*˜*N*, *α***˜**, *r***˜**, ˜*d*2, ... , ˜*dN*, and arbitrary values are assigned to *d*<sup>1</sup> and *ε*1, then a registration transformation (rotation **Ω** and translation *τ*) between the coordinate systems of the robot and the laser tracker can be determined.

There are many registration techniques; one of the most commonly used was developed in [30] and is based on 3D data. For calibration Method 1 described in Section 4, where only 3D data acquired by the sensor are available, there is only one possible registration transformation (**Ω**, *τ*). When 6D data are available, the registration transformation can be calculated in two ways. In the first (which we name Registration 1), **Ω**<sup>1</sup> is calculated using only the 3D positional parts of the full poses, as in [30]. In the second (named hereafter Registration 2), the rotation matrix **Ω**<sup>2</sup> is calculated as the properly averaged mean rotation [31] of the individual matrices *GkR*−<sup>1</sup> *<sup>k</sup>* in (13). Once **Ω**1,2 are known, the translation vectors *τ*1,2 can be determined as

$$\tau\_i = \bar{g}\_s - \Omega\_i \, \bar{t}\_r, \quad i = 1, 2 \tag{19}$$

where *t<sup>r</sup>* and *g<sup>s</sup>* are the centroids of the collected 3D positions in the robot and the external sensor frame, respectively.
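Registration 2 together with Eq. (19) can be sketched as below. As the mean rotation we substitute the common chordal mean (projecting the element-wise average onto SO(3) with an SVD); [31] may use a different averaging method, so this is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np

def registration_from_6d(G, R, t_robot, g_sensor):
    """Registration 2 sketch: average the individual estimates G_k R_k^T of
    Omega via the chordal mean (SVD projection onto SO(3)), then recover the
    translation from the two centroids as in Eq. (19)."""
    M = sum(Gk @ Rk.T for Gk, Rk in zip(G, R)) / len(G)
    U, _, Vt = np.linalg.svd(M)
    # det correction keeps the result a proper rotation (det = +1)
    Omega = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    tau = g_sensor.mean(axis=0) - Omega @ t_robot.mean(axis=0)
    return Omega, tau
```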

#### **7. Simulation**

All calculations were performed in Matlab. The built-in nonlinear least-squares (NLS) optimizer *lsqnonlin* with default input parameters was used to minimize the error functions *Errpos* in (10), *Errrot* in (17), and *Errfull* in (18). As the starting point for all optimizations, the default DH parameters were used.

To test the proposed calibration method, a kinematic model of a 7 degrees-of-freedom (DoF) industrial robot arm, KUKA LWR 4+, was used. The robot's default DH parameters (*ε*0, *α*0, *r*0, *d*0) are provided in Table 1 (all angular parameters are in degrees and all linear in millimeters). Ground truth (GT) parameters used in the simulations were defined as a sum of the defaults and deviations, for example *ε<sup>GT</sup>* = *ε*<sup>0</sup> + Δ*ε*. Deviations from the default DH parameters are provided in Table 2. Two sets of arbitrarily chosen deviations were used in the simulations: small deviations (Δ*α*1, Δ*r*1, Δ*d*1) and large deviations (Δ*α*2, Δ*r*2, Δ*d*2). The GT parameters were used to generate noisy sensor data *G<sup>k</sup>* from (7) and *g<sup>k</sup>* from (8)

$$G\_k = N\_k \, \Omega \, R\_k(\theta\_k, \alpha\_{GT}, \epsilon\_{GT}) \tag{20}$$

and

$$g\_k = \Omega \, t\_k(\theta\_k, \alpha\_{GT}, \epsilon\_{GT}, r\_{GT}, d\_{GT}) + \tau + \zeta\_k \tag{21}$$

where (**Ω**, *τ*) is arbitrarily selected transformation between robot and sensor frame, *ζ<sup>k</sup>* is 3D positional Gaussian noise with standard deviation *σp*, and *ξ<sup>k</sup>* is 3D angular Gaussian noise with standard deviation *σa*, which was used to generate small random rotations.

$$N\_k = R\_{ZYX}(\xi\_k). \tag{22}$$
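The noise model of (20)–(22) can be reproduced as follows. This is a sketch: the Z-Y-X Euler convention for *R<sub>ZYX</sub>* is taken from (22), and the function names are ours.

```python
import numpy as np

def small_random_rotation(sigma_a_deg, rng):
    """N_k = R_ZYX(xi_k), Eq. (22): xi_k is a 3D vector of Gaussian angles
    (in degrees) applied as Z-Y-X Euler rotations."""
    ax = np.deg2rad(rng.normal(0.0, sigma_a_deg, 3))
    cz, sz = np.cos(ax[0]), np.sin(ax[0])
    cy, sy = np.cos(ax[1]), np.sin(ax[1])
    cx, sx = np.cos(ax[2]), np.sin(ax[2])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def noisy_pose(R_k, t_k, Omega, tau, sigma_a, sigma_p, rng):
    """Simulated sensor pose, Eqs. (20)-(21): G_k = N_k Omega R_k and
    g_k = Omega t_k + tau + zeta_k with Gaussian positional noise zeta_k."""
    G_k = small_random_rotation(sigma_a, rng) @ Omega @ R_k
    g_k = Omega @ t_k + tau + rng.normal(0.0, sigma_p, 3)
    return G_k, g_k
```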


**Table 1.** Default DH parameters.

**Table 2.** Deviations of DH parameters.


In Figure 1a, examples of histograms of the *x* component of the vectors *ξ<sup>k</sup>* are shown (the histograms of the *y* and *z* components look similar). In Figure 1b, histograms of the corresponding angles of rotation *β* of the small random rotations N*<sup>k</sup>* are plotted. Note that the histograms of *ξ<sup>k</sup>* are well approximated by a Gaussian distribution, while the non-symmetric histograms of *β* are well approximated by a Fisher–Bingham–Kent (FBK) distribution [32]. Similar histograms of angles were observed for experimental data acquired with a marker-based pose measuring system; see Figures 1 and 3 in [33].

The tool transformation *F<sup>T</sup>* needed in (7) and (8) was chosen arbitrarily, with the caveat that the TCP is not located on the last axis of rotation, so that the 3D data acquired for the CPA procedure lie on a circle.

For each *n*-th joint, *K<sup>α</sup>* = 40 vectors of encoder angles were created such that all their components were zero except *θn*,*<sup>k</sup>*

$$\theta\_{n,k} = \theta\_{min,n} + k \delta\_{\theta}, \quad k = 1, \dots, K\_{\alpha} \tag{23}$$

where *θmin*,*<sup>n</sup>* and *δθ* were chosen such that all *θn*,*<sup>k</sup>* were within the valid range of the *n*-th encoder angles. These angles were then inserted in (21) to generate the 3D sensor data from which the twist angles *α***˜** were estimated as described in Section 3. In order to estimate the remaining DH parameters *ε***˜**, *r***˜**, *d***˜** and calculate the registration transformation (**Ω**, *τ*), another set of *K* = 100 joint angle vectors *θ<sup>k</sup>* was selected in such a way that the corresponding poses *FTCP*(*θk*) in (1) were randomly scattered in the workspace accessible to the robot arm. In computer simulations, this is the only restriction on the selection of tool poses, but additional limitations may arise in lab experiments due to the use of a line-of-sight sensor for pose acquisition.

In addition, a separate batch of *J* = 50 joint angle vectors *θ<sup>j</sup>* was selected for the evaluation of the calibration and registration procedures. These test poses were used neither in calibration nor in registration. To test the performance of all three procedures, the robot kinematic model *FTCP*(*θj*) in (1) was used with the parameters *ε***˜**, *α***˜**, *r***˜**, *d***˜** estimated by Methods 1, 2, and 3. For Method 1, the registration transformation (**Ω**, *τ*) was calculated using 3D data. For Methods 2 and 3, both registrations (**Ω**1, *τ*1) and (**Ω**2, *τ*2) were calculated, as described in Section 6. For each tested arm configuration *θ<sup>j</sup>* and selected *m*-th noise levels (*σa*, *σp*)*m*, the corresponding rotation *G<sup>j</sup>* and position *g<sup>j</sup>* were calculated in (20) and (21) to simulate noisy 6D measurements acquired by the sensor. Then, the mean of the *J* angles *ηj* of the rotations *Q<sup>j</sup>* and the mean of the *J* position errors *qj* were calculated, where

$$Q\_{j}(\eta\_{j}) = G\_{j}^{-1} \Omega R\_{j}, \quad q\_{j} = \left\| \Omega t\_{j} + \tau - g\_{j} \right\|, \tag{24}$$

where the transformation (**Ω**, *τ*) was the one appropriate for each of the three calibration procedures and (for Methods 2 and 3) for the appropriate Registration 1 or 2. The two calculated means (of *ηj* and of *qj*) were used as metrics to gauge the performance of the tested procedures.

These steps were repeated for each of the selected noise levels *m* = 1, ... , *Mn* and for both sets of GT parameters corresponding to the two deviation vectors: small (Δ*α*1, Δ*r*1, Δ*d*1) and large (Δ*α*2, Δ*r*2, Δ*d*2), as shown in Table 2. *Mn* = 16 noise levels were equally spaced between zero and 0.15 (degrees for *σ<sup>a</sup>* and millimeters for *σp*). In order to estimate the variability of the calculated metrics, all the above calculations were repeated for *Nrep* = 25 different realizations of noise (i.e., different sequences of pseudo-random numbers). Thus, for each *i*-th instance of noise and each *m*-th pair of noise levels (*σa*, *σp*)*m*, the end-effector errors were calculated: *vm*,*<sup>i</sup>* as the mean of *qj* for the positional error and *ρm*,*<sup>i</sup>* as the mean of *ηj* for the angular error. As the final results, the averages and standard deviations over all *Nrep* repeats were stored for each *m*-th noise level:

$$\overline{\rho}\_{m} = \frac{1}{N\_{rep}} \sum\_{i=1}^{N\_{rep}} \rho\_{m,i}, \quad \delta \rho\_{m}^{2} = \frac{1}{N\_{rep}} \sum\_{i=1}^{N\_{rep}} \left(\rho\_{m,i} - \overline{\rho}\_{m}\right)^{2} \tag{25}$$

and similarly for the positional errors *vm* and *δv*<sup>2</sup>*<sup>m</sup>*.

To test the performance of the three error functions *Errpos*, *Errrot*, and *Errfull* used in calibration, for a few randomly selected noise repeats and strengths, the minimization was restarted from 300 randomly scattered initial points (i.e., starting DH parameters) and the final optimized parameters were analyzed. In addition, for Method 3, the minimization of *Errfull* was repeated for a few scaling factors *w* in (18).

In all simulations performed in this study, the distal variant of the DH parameters was used [34]. Alternatively, the proximal variant could be used, which would affect the homogeneous matrix *FTCP* derived from it. However, not every kinematic model is suitable for describing every robot: a well-known example is a robot with two consecutive joint axes that are parallel to each other. In such a case, the DH model is not continuous and must be replaced by another model, e.g., POE [20], and the parameters specific to the chosen model must be determined. Whichever kinematic model is selected, it is important to use it consistently throughout the calibration process, along with other basic definitions (like the use of a right-handed or left-handed coordinate system). With all procedural steps clearly defined and consistently followed, there is no ambiguity in the calibration process.

**Figure 1.** Characteristics of simulated small random rotations N*<sup>k</sup>* in (22): (**a**) histograms of the *x* component of the angle vectors *ξk*; (**b**) histograms of the angle of rotation *β* of the rotation matrix N*k*. Blue lines correspond to weak noise with *σ<sup>a</sup>* = 0.05° and black lines correspond to strong noise with *σ<sup>a</sup>* = 0.15°.

#### **8. Results**

The fitted DH parameters revealed different amounts of variation under different simulated conditions. The twist angles *α***˜** estimated from the 3D data generated for the CPA procedure showed moderate variations: the largest absolute deviation *δαmax* from the GT value over all *N* joints and all simulated conditions (*Mn* noise levels, *Nrep* repeats, and both deviations Δ*α*1,2 from the default values *α*0) was 0.3°. The zero offsets *ε***˜** revealed larger deviations: the largest absolute deviation was *δεmax* = 1.18°. The largest link offset deviation was *δdmax* = 4.7 mm and the largest link length deviation was *δrmax* = 4.2 mm. Such large differences between the fitted and the GT parameters were observed mostly for large noise levels *σ<sup>p</sup>* and *σa*.

Figure 2 shows an example of robot end-effector errors at the *J* = 50 test poses. Position errors *qj* and orientation errors *η<sup>j</sup>* were calculated in (24) for robot DH parameters calibrated with Method 1 and Method 2. The presented errors were calculated for simulated sensor poses perturbed by the *i* = 14 noise realization (selected arbitrarily from the *Nrep* repeats) and the *m* = 7 noise levels (*σa*, *σp*)*m*. These (*qj*, *η<sup>j</sup>*)*m*,*<sup>i</sup>* errors were then used to calculate (*vm*,*i*, *ρm*,*i*) and then the mean errors *vm* and *ρ<sup>m</sup>* in (25) and the corresponding standard deviations *δvm* and *δρ<sup>m</sup>* for each *m*-th noise level. These means and standard deviations were then used to create the plots in the remaining Figures 3–6.

**Figure 2.** Robot end-effector errors calculated at *J* = 50 test poses for fixed sensor noise *σ<sup>p</sup>* = 0.055 mm, *σ<sup>a</sup>* = 0.055° and one arbitrarily selected noise realization: (**a**) positional errors *qj*; (**b**) orientation errors *ηj*. The robot was calibrated with Method 1 (black lines) and Method 2 (blue lines).

Figure 3 shows the outcomes of the two registration transformations (**Ω**1, *τ*1) and (**Ω**2, *τ*2) described in Section 6. In both cases, the robot was calibrated with Method 2. The GT parameters used in the simulation of the 6D data, i.e., the end-effector poses and the noisy poses as measured by the sensor, were obtained by modifying the default DH parameters with the deviations shown in Table 2. For both registrations, the mean errors were calculated at the same values of sensor noise (*σ<sup>p</sup>* in Figure 3a,c and *σ<sup>a</sup>* in Figure 3b,d). In each subplot, the two graphs are slightly shifted horizontally for better visualization only. Error bars *δvm* in Figure 3a,c and *δρ<sup>m</sup>* in Figure 3b,d are the corresponding standard deviations calculated in (25) from the *Nrep* repeated simulations of noisy sensor data.

Figure 4 shows the outcomes of the two registration procedures applied after the robot was calibrated using Method 3 and the error function *Errfull* defined in (18) with the scaling factor *w* = 1. The presented results were obtained for 6D data generated with GT values of the DH parameters deviating from their default values by (Δ*α*2, Δ*r*2, Δ*d*2), as shown in Table 2.

**Figure 3.** Comparison of two registration procedures for the robot calibrated with Method 2 and data generated using: (**a**,**b**) small deviations from the default DH parameters (Δ*α*1, Δ*r*1, Δ*d*1); (**c**,**d**) large deviations (Δ*α*2, Δ*r*2, Δ*d*2). Dependence of the mean positional error *v* of the robot end-effector on the positional noise *σp* in the sensor 6D data in (**a**,**c**); dependence of the mean orientation error *ρ* of the robot end-effector on the angular noise *σa* in the sensor 6D data in (**b**,**d**).

**Figure 4.** Comparison of two registration procedures for robot calibrated with Method 3 and data generated using large deviations from the default DH parameters: (**a**) dependence of the mean positional error *v* of robot end-effector on positional noise *σp* in sensor 6D data; (**b**) dependence of the mean orientation error *ρ* of robot end-effector on angular noise *σa* in sensor 6D data.

**Figure 5.** Comparison of three calibration methods: (**a**) dependence of the mean positional error *v* of robot end-effector on positional noise *σp* —Registration 1 was used in Method 2 (blue line); (**b**) dependence of the mean orientation error *ρ* of robot end-effector on noise (*σ* = *σ<sup>p</sup>* for Method 1 and *σ* = *σ<sup>a</sup>* for Method 2)—Registration 2 was used in Method 2 (red line); (**c**) dependence of error *v* on positional noise *σp*—Registration 1 was used in both methods; (**d**) dependence of error *ρ* on noise *σa*—Registration 2 was used in both methods.

**Figure 6.** Comparison of robot calibrations using two different scaling factors *w* in error function *Errf ull* in Method 3: (**a**) dependence of the mean positional error *v* of robot end-effector on positional noise *σp* in sensor 6D data—Registration 1 was used; (**b**) dependence of the mean orientation error *ρ* of robot end-effector on angular noise *σa* in sensor 6D data—Registration 2 was used. Data generated using large deviations from the default DH parameters.

Figure 5 shows the outcomes of the three calibration procedures: Method 1 based on 3D sensor data, and Methods 2 and 3 based on 6D sensor data (in Figure 5b,d, noise *σ* = *σ<sup>p</sup>* in mm for Method 1 and *σ* = *σ<sup>a</sup>* in degrees for Methods 2 and 3). Two different registration procedures were used in the robot calibration with Methods 2 and 3. For the positional error *v*, Registration 1 was used (the blue line in Figure 5a,c is the same as in Figure 3c for Method 2, and the blue line with triangle markers in Figure 5c is the same as the blue line in Figure 4a for Method 3). For the angular error *ρ*, Registration 2 was used (the red line in Figure 5b,d is the same as in Figure 3d for Method 2, and the red line with triangle markers in Figure 5d is the same as the red line in Figure 4b). Error bars *δvm* in Figure 5a,c and *δρ<sup>m</sup>* in Figure 5b,d are the corresponding standard deviations calculated in (25) from the *Nrep* repeated simulations of noisy sensor data. On each subplot, the two graphs are slightly shifted horizontally for better visualization. The robot GT parameters used in the simulation of the 6D data were obtained by modifying the default DH parameters with the large deviations (Δ*α*2, Δ*r*2, Δ*d*2) shown in Table 2. Similar results for *v* and *ρ* were obtained when the small deviations (Δ*α*1, Δ*r*1, Δ*d*1) were used in the simulations.

Figure 6 shows the outcome of robot calibration with Method 3 for two different values of the scaling factor *w* in *Err<sub>full</sub>* in (18). The results for Method 3 presented in Figures 4 and 5c,d were obtained for *w* = 1.

For each of the selected cases in which the minimization of the error function was repeated from 300 different starting points, all initial DH parameters led to the same solution. The fitted DH parameters depended on the noise strengths, the choice of error function, and the GT values of the DH parameters.

#### **9. Discussion**

In this study, an open-chain robotic manipulator with *N* revolute joints was calibrated using three different methods and two different sets of data: 3D positions only, and full 6D poses. All three methods share the same strategy for determining the link twists *α̃*. Then, in Method 1, the error function *Err<sub>pos</sub>* in (10) was minimized, and the remaining DH parameters *θ̃*, *r̃*, *d̃* were found by using 3D data only. In Method 2, a search for the zero offsets *θ̃* was performed separately by minimizing *Err<sub>rot</sub>* in (17), which depends only on the orientation part of the full 6D data. Once the zero offsets were known, the remaining DH parameters *r̃*, *d̃* were found by minimizing *Err<sub>pos</sub>*(*r*, *d*) in (10) using only the positional part of the 6D data. Such an approach reduces the dimensionality of the search space compared with the minimization of *Err<sub>pos</sub>* in Method 1. In addition, by using the angles *ψ<sub>k</sub>* of the relative rotations *D<sub>k</sub>* in the error function *Err<sub>rot</sub>* in (17) and the relative distances *L<sub>k</sub>* between pairs of 3D points in the error function *Err<sub>pos</sub>* in (10), the proposed strategy decouples robot calibration from the registration of the robot frame to the world frame. Different calibration strategies yielded different sets of fitted DH parameters which, in turn, led to different end-effector errors. This is expected, as an optimizer using different error functions and different sensor data usually converges to different solutions for the same kinematic model. It should be noted that Methods 1 and 2 are equally valid, and it is a matter of practicality which one is more useful.
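The decoupling of calibration from registration can be seen directly in code: an error built from relative distances between pairs of TCP points is invariant under any rigid transformation of the measured data, so no registration is needed before fitting. The sketch below (our own illustration; function and variable names are not from the paper) demonstrates this invariance:

```python
import numpy as np

def err_pos_relative(model_points, sensor_points):
    """Registration-free positional error: compare pairwise distances
    between predicted TCP positions and measured 3D positions.
    Pairwise distances are invariant under any rigid registration."""
    K = len(model_points)
    err = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            L_model = np.linalg.norm(model_points[i] - model_points[j])
            L_meas = np.linalg.norm(sensor_points[i] - sensor_points[j])
            err += (L_model - L_meas) ** 2
    return err

# A rigid transform of the measured points leaves the error unchanged:
rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 3))
c, s = np.cos(0.7), np.sin(0.7)
Rz = np.array([[c, -s, 0.0],
               [s,  c, 0.0],
               [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([1.0, -2.0, 0.5])
print(err_pos_relative(pts, moved))  # ~0 (up to floating-point error)
```

Because the error depends only on relative distances, the fit of the DH parameters is unaffected by how the robot frame is later registered to the world frame.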

In Method 2, two different approaches to registration were used. The rotation **Ω**<sub>1</sub> from the first approach minimizes the distances between the sensor's 3D positions and the robot's TCP points for *K* robot arm configurations [30]. The rotation **Ω**<sub>2</sub> is calculated as the mean rotation **Ω** of the *K* relative rotations *G<sub>k</sub>R<sub>k</sub>*<sup>−1</sup> and, thus, minimizes the angular distances between the orientations of the TCP frame and the orientations provided by the sensor. Therefore, one may expect **Ω**<sub>2</sub> to be better than **Ω**<sub>1</sub> at aligning robot orientations with sensor orientations. Indeed, the end-effector angular errors *ρ* shown in Figure 3b,d are smaller for **Ω**<sub>2</sub> in Registration 2 (red line) than for **Ω**<sub>1</sub> in Registration 1 (blue line).
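One common way to compute such a mean rotation is the chordal L2 mean: average the rotation matrices and project the result back onto SO(3) with an SVD. This is a sketch under our own assumptions, not necessarily the authors' implementation:

```python
import numpy as np

def mean_rotation(rotations):
    """Chordal L2 mean of a list of 3x3 rotation matrices:
    average the matrices, then project back onto SO(3) via SVD."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    # Flip the smallest singular direction if needed to keep det = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

Given sensor orientations *G<sub>k</sub>* and model TCP orientations *R<sub>k</sub>*, the registration rotation would then be obtained as `mean_rotation([G @ R.T for G, R in zip(G_list, R_list)])` (list names ours).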

When it comes to the positional errors *v*, the situation is exactly the opposite. Both translation vectors *τ*<sub>1</sub> and *τ*<sub>2</sub> are calculated in (19). Since **Ω**<sub>2</sub> does not depend on positional data, the transformation (**Ω**<sub>2</sub>, *τ*<sub>2</sub>) does not minimize (in the least-squares sense) the distances between the sensor 3D positions and the robot TCP points for *K* robot arm configurations. The transformation (**Ω**<sub>1</sub>, *τ*<sub>1</sub>) does minimize these distances, and is therefore expected to better align the sensor 3D positions with the robot TCP. Indeed, the end-effector position errors *v* shown in Figure 3a,c are smaller for **Ω**<sub>1</sub> in Registration 1 (blue line) than for **Ω**<sub>2</sub> in Registration 2 (red line).
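The distance-minimizing registration is the classic least-squares rigid alignment of two point sets, commonly computed with the Kabsch construction. The sketch below (function and variable names are ours) recovers the rotation and translation from corresponding point sets:

```python
import numpy as np

def register_points(src, dst):
    """Least-squares rigid registration (Kabsch): find Omega, tau
    minimizing sum_k || Omega @ src[k] + tau - dst[k] ||^2."""
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Omega = Vt.T @ D @ U.T
    tau = c_dst - Omega @ c_src
    return Omega, tau
```

With noiseless, non-degenerate correspondences this recovers the ground-truth rigid transformation exactly; with noisy data it returns the least-squares optimum.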

Analysis of the plots in Figure 3 suggests the optimal strategy: instead of choosing either the first (**Ω**<sub>1</sub>, *τ*<sub>1</sub>) or the second (**Ω**<sub>2</sub>, *τ*<sub>2</sub>) registration, take the best part of both. Use **Ω**<sub>2</sub> to transform the orientations *R<sub>k</sub>* of the robot end-effector and use (**Ω**<sub>1</sub>, *τ*<sub>1</sub>) to transform the robot TCP *t<sub>k</sub>*. The outcome of this strategy is displayed in Figure 5a,b: note the blue line for the positional errors *v* and the red line for the angular errors *ρ*, indicating the use of different registrations in Method 2.
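In code, this hybrid strategy amounts to transforming the two pose components with different registrations (a minimal sketch; names are ours):

```python
import numpy as np

def apply_hybrid_registration(R_k, t_k, Omega1, tau1, Omega2):
    """Transform one robot pose with the hybrid strategy:
    the position-optimal registration (Omega1, tau1) moves the TCP point,
    while the orientation-optimal rotation Omega2 moves the TCP frame."""
    t_out = Omega1 @ t_k + tau1
    R_out = Omega2 @ R_k
    return R_out, t_out

# Trivial check: identity registrations leave the pose unchanged.
I3 = np.eye(3)
R_out, t_out = apply_hybrid_registration(I3, np.zeros(3), I3, np.zeros(3), I3)
```

The resulting pose mixes the two registrations deliberately: each component is mapped by the transformation that minimizes its own error.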

Another advantage of using **Ω**<sub>2</sub> rather than **Ω**<sub>1</sub> to transform the TCP orientations is the much smaller dispersion of the orientation errors *ρ* across different noise realizations. The error bars in Figure 5b are much smaller for Method 2 (which uses **Ω**<sub>2</sub>) than for Method 1. This implies that orientations from the world coordinate system can be fed into an inverse kinematic solver more consistently and accurately.

It may appear counterintuitive that the mean position errors *v*(*σ<sub>p</sub>*) and the mean orientation errors *ρ*(*σ<sub>a</sub>*) calculated for the same *m*-th pair of noise strengths (*σ<sub>p</sub>*, *σ<sub>a</sub>*)<sub>*m*</sub> but different GT values of the DH parameters are almost the same, as Figure 3 shows. However, this should not be a surprise, since we used an NLS optimizer with the exact error function. The scale of the deviation from the default DH parameters may become an issue when the calibration is performed using approximated, linearized errors and the Jacobian is calculated at the default DH values.

Results of robot calibration obtained with Method 3 clearly reveal the consequences of the scaling problem that arises when simultaneous minimization of both position and orientation errors in one optimization is attempted, as demonstrated in Figure 6. While the mean orientation errors *ρ* are almost equal for the two selected values of *w*, the corresponding position errors *v* differ substantially. This method, like Method 2, uses 6D data and, therefore, two registration procedures are available. In Method 3, similarly to Method 2, smaller position errors are obtained when Registration 1 is applied to the position data, and smaller orientation errors are observed when Registration 2 is applied to the orientation data, as the results in Figure 4 clearly indicate. Although both Methods 2 and 3 share the possibility of using different registrations for the position and rotation components of a full pose, a direct comparison between the two methods clearly points to Method 2 as the better procedure, as demonstrated by the results shown in Figure 5c,d. Thus, the use of Method 3 is discouraged.

The calibration strategy outlined in this paper was tested on a kinematic model of a serial open-chain robot with revolute joints only. One may ask whether the strategy can be applied to a more complex kinematic model in which a serial chain has both revolute and prismatic joints. Acquisition of full 6D poses enables the calculation of the two registrations defined in (19): one minimizes a position error and the other minimizes an orientation error. Therefore, as long as full 6D poses are acquired, the outlined calibration strategy could in principle be used for robots with a mixture of revolute and prismatic joints. However, the presence of prismatic joints complicates the error function *Err<sub>pos</sub>* in (10) by increasing the number of search variables, and further study is required to verify whether the strategy is also beneficial for such robots.

The simulation results presented in this paper raise an important, practical question about the characteristics of the 6D pose measuring sensors used for robot calibration. Commercially available sensors allow quick acquisition of many repeated measurements, which enables the noise in the recorded data to be substantially reduced by calculating mean poses. The mean position error *v* of the robot end-effector calculated by Method 2 increases with the sensor position noise *σ<sub>p</sub>*, as Figure 5a shows. If the three-sigma rule is followed and the approximate relation *v* ≈ 4*σ<sub>p</sub>* holds, then the upper bound *σ̂<sub>p</sub>* for the sensor position noise should satisfy 12*σ̂<sub>p</sub>* < *tol<sub>p</sub>*, where *tol<sub>p</sub>* is the acceptable robot position tolerance. For orientation data, due to the strongly non-symmetric FBK-like distribution of the angles *β* (which account for the deviation of noisy, instantaneous rotations from the mean rotation), the three-sigma rule can be replaced by calculating the quantile *β̂*<sub>997</sub> of the angles *β* at the 0.997 level. Assuming the mean orientation error *ρ* of the robot end-effector is four times larger than the sensor's orientation noise (as shown in Figure 5b for Method 2), the upper bound for the sensor orientation noise should satisfy 4*β̂*<sub>997</sub> < *tol<sub>a</sub>*, where *tol<sub>a</sub>* is the acceptable robot orientation tolerance. For a different robot, the dependence of the end-effector error on sensor noise may differ from that shown in Figure 5a,b; the estimates for the upper bounds of the position noise *σ̂<sub>p</sub>* and the orientation noise *β̂*<sub>997</sub> then need to be updated.
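The two tolerance checks can be written down directly. In the sketch below, the factors 12 and 4 follow the approximate relations quoted above, the synthetic angles are for illustration only, and all names are ours:

```python
import numpy as np

def sensor_noise_bounds_ok(sigma_p, beta_angles, tol_p, tol_a):
    """Check the suggested upper bounds on sensor noise:
    position:    12 * sigma_p    < tol_p  (three-sigma rule with v ~ 4*sigma_p)
    orientation: 4 * beta_0.997  < tol_a  (quantile replaces three-sigma for
                                           the skewed distribution of beta)."""
    beta_997 = np.quantile(beta_angles, 0.997)
    return (12.0 * sigma_p < tol_p) and (4.0 * beta_997 < tol_a)

# Illustrative numbers only: synthetic deviation angles beta.
rng = np.random.default_rng(2)
beta = np.abs(rng.normal(0.0, 0.01, size=5000))
ok = sensor_noise_bounds_ok(sigma_p=0.02, beta_angles=beta,
                            tol_p=0.5, tol_a=0.5)
```

Such a check could be run once per candidate sensor, using noise statistics estimated from repeated static measurements.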

The proposed calibration strategy reduces both the position and orientation errors of the robot end-effector. The recommended procedure for serial robot calibration consists of: (1) acquiring the full 6D poses; (2) getting the link twists in a CPA-like procedure; (3) getting the encoder zero offsets using orientation data only; (4) getting the link lengths and offsets using position data only. Then, two separate registrations are used to transform the position and orientation components of a pose from the world frame to the robot frame. In summary, the dilemma of having only the position or only the orientation error of the robot's end-effector minimized can be avoided, and a pose with both components optimized can be fed into an inverse kinematic solver.

**Author Contributions:** Conceptualization, M.F. and J.A.M.; methodology, M.F. and J.A.M.; software, M.F.; validation, M.F. and J.A.M.; formal analysis, M.F. and J.A.M.; investigation, M.F.; writing—original draft preparation, M.F.; writing—review and editing, J.A.M.; visualization, M.F.; project administration, J.A.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest. Certain commercial equipment, instruments, or software are identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the equipment or software identified are necessarily the best available for the purpose.

#### **References**

