**Featured Application: A fast and low-cost process for automated identification of the positions of workcell components, including robots. Suitable for rapid deployment of robotic applications without the need for prior simulation or CAD modeling.**

**Abstract:** In this paper, a new method for the calibration of robotic cell components is presented and demonstrated by identifying an industrial robotic manipulator's base and end-effector frames in a workplace. It is based on a mathematical approach using the Jacobian matrix. In addition, the presented method makes it possible to identify other kinematic parameters of a robot. The Universal Robots UR3 was chosen to prove the working principle in both simulations and an experiment, with a simple, repeatable, low-cost solution for such a task: image analysis to detect tag markers. The results showing the accuracy of the system are included and discussed.

**Keywords:** robotic manipulator; identification; calibration; Jacobian; workcell layout detection

### **1. Introduction**

For robotic arms there has always been a trade-off between the repeatability and the absolute accuracy of a robot's positioning in 3D space, as examined by Abderrahim [1] or by Young [2]. Many manufacturers of industrial robots present only the repeatability parameter in their datasheets, as it is far better than the absolute positioning accuracy. The general problem of robot accuracy is described with experiments by Salmani [3].

The absolute positioning of a robot describes how accurately the robot can move to a position with respect to a frame. To achieve better results, parameter identification and robot calibration are performed. Identification is the process in which a real robot's kinematic (and possibly dynamic) characteristics are compared with its mathematical model. It includes the determination of error values that are afterwards applied in the control system, which improves the robot's total pose accuracy using a software solution, without the need to adjust the robot's hardware. A generally suggested method for robot calibration is the use of a laser tracker. The methodology identifies the error parameters of a robot's kinematic structure, as described by Nubiola [4]. The precision may be increased even further, as Wu showed in [5] by filtering errors in measurements and finding optimal measurement configurations. In [6], Nguyen added a neural network to compensate for non-geometric errors after the parameter identification was performed.

Unfortunately, these solutions are very expensive because of the price of a laser tracker. One may rent a laser tracker if needed, but this is also a time-consuming process, since precise experiments, measurements and evaluations must be repeated after every error made during the process; otherwise, the final results may be incorrect. Therefore, the wide deployment of laser trackers is ineffective for many manufacturers.

There are other methods of robot calibration that avoid the use of a laser tracker. In [7], Joubair proposed a method using the planes of a very precisely made granite cube, but acquiring such a cube is not easy in general. Filion [8] and Moller [9] used additional equipment, in their case a portable photogrammetry system. In [10] a new methodology was introduced by Lembono, who suggested using three flat planes in a robot's workplace with a 2D laser range finder that intersects the planes, but the simulation was not verified by an experiment. A very different approach was taken by Marie [11], who presented an elasto-geometrical calibration method based on finite element theory and fuzzy logic modeling.

On the other hand, the very precise results that the methodologies above aim to achieve are not always necessary, and small deviations in a robot's kinematic structure that arise during its manufacturing process are acceptable to the users of the robot. The problem they may face is determining the workplace coordinate system (base frame) in which the robot is deployed and, possibly, the offset of the tool center point when a tool is attached to the robot's mounting flange, since both are needed to position the robot absolutely in a world frame.

For such applications, the typical way to calibrate multiple robots is to use point markers attached to every robot, as described by Gan [12]. However, one important condition is that the robots need to be close enough together to approach each other with the point markers and perform the calibration. Additionally, there are a few optical methods using a camera to improve a robot's accuracy. Arai [13] used two cameras placed in specific positions to track an LED tag mounted on a robot; the method we propose allows us to put the camera in any place and any orientation that provides good visibility. Motta in [14] and Watanabe in [15] attached a camera to a robot and performed the identification process, but this cannot be used for other robots or to track the positions of other components at the workplace at the same time. Van Albada describes in [16] the identification process for a single robot. Santolaria presented in [17] the use of on-board measurement sensors mounted on a robot.

To avoid these restrictions, we propose a solution based on the OpenCV libraries [18] for Aruco tag detection by a camera, which adds the benefits of simplicity, repeatability and low price to the calibration process. The outcomes may be used in offline robot programming, in reconfigurable robotic workplaces and for the tracking of components, with as many tag markers and robots as needed, provided that visibility for one or more cameras is ensured.

Methods for 2D camera calibration have already been presented, and they can be divided into two main approaches: eye-on-hand calibration, wherein the camera is mounted on the robot and a calibration plate is static, and the eye-on-base method, with the calibration marker mounted on the robot and static cameras around it [19]. There are also Robot Operating System (ROS) packages [20,21] providing tools for 2D or 3D camera calibration using these two methods. However, ROS is an advanced universal platform that may be difficult for some researchers to utilize. Our approach combines both the eye-on-hand and eye-on-base calibration processes, avoids using ROS and can be applied not only to localize the base of a robot, but also to localize other devices or objects in the workplace that are either static or of known kinematic structure (multiple robots) in relation to a chosen world frame.

### **2. Materials and Methods**

When an image with a tag is obtained, the OpenCV library's algorithm assigns a coordinate frame to the tag and can calculate the transformation from the camera to the tag. If tags are placed on all important components of a cell, such as manipulated objects or pallets, the transformations between them may be calculated as well. If an industrial robot is deployed in a workplace, we can attach an end-effector with Aruco tags to it, perform a trajectory with transformation measurements and, using mathematical identification methods, calculate the precise position of its base, no matter where it is.

#### *2.1. Geometric Model of a Robot*

For such an identification, a geometric model of the robot that is as precise as possible needs to be determined. The Universal Robots UR3 was chosen for demonstrating the function of the proposed solution. Its geometric model used for all calculations is based on the modified Denavit–Hartenberg notation (MDH) described in [22] by Craig. Our geometric model consists of 9 coordinate systems. The "b" frame is the reference coordinate system (world frame); later in our measurements it is represented by a tag marker placed on a rod. The "0" frame represents the base frame of the robot. The frames from "1" to "6" represent the joints; the 6th frame's position corresponds to the mounting flange. The "e" frame stands for the tool offset, in this case a measuring point that was tracked by the sensor. The scheme of the model is illustrated in Figure 1. The MDH parameters are noted in Table 1; *oi* stands for the offset of the *i*th joint in this position.

**Table 1.** MDH parameters of the UR3 robot.


**Figure 1.** UR3 with coordinate frames according to MDH.

For a vector *q* = [*q*1, *q*2, *q*3, *q*4, *q*5, *q*6] *<sup>T</sup>* representing the joint variables, a homogeneous transformation matrix *Tbe*(*q*) gives the position and orientation *P* of the UR3's end-effector tool frame "e" with respect to the base frame "b" of the workplace.

$$P = T_{be}(q) \tag{1}$$

$$T_{be}(q) = A_{b0}\, A_{01}(q_1)\, A_{12}(q_2)\, A_{23}(q_3)\, A_{34}(q_4)\, A_{45}(q_5)\, A_{56}(q_6)\, A_{6e} \tag{2}$$

According to [22], the matrix *Ai*−1,*<sup>i</sup>* in MDH notation is obtained by multiplying the rotation matrix *Rx* about the *x* axis, the translation matrix *Tx* along the *x* axis, the rotation matrix *Rz* about the *z* axis and the translation matrix *Tz* along the *z* axis.

$$A_{i-1,i} = R_x(\alpha_{i-1})\, T_x(a_{i-1})\, R_z(\theta_i)\, T_z(d_i) \tag{3}$$
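Equation (3) can be sketched directly in code. The following minimal Python function (the function name and array layout are our own illustrative choices, not taken from the authors' scripts) expands the product *Rx*(α)·*Tx*(a)·*Rz*(θ)·*Tz*(d) into its well-known closed form:

```python
import numpy as np

def mdh_transform(alpha, a, theta, d):
    """Link transform A_{i-1,i} = Rx(alpha) Tx(a) Rz(theta) Tz(d), Eq. (3).

    Closed form of the product of the four elementary matrices
    (modified Denavit-Hartenberg convention, Craig [22]).
    """
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,       a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,     0.0,     1.0],
    ])
```

Multiplying eight such matrices, as in Equation (2), then gives the full chain from the base frame to the end-effector.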

The geometric model of the UR3 is mathematically expressed by the transformation matrix *Tbe*(*q*) noted in Equation (2). Matrix *Ab*<sup>0</sup> is the displacement between the reference "b" frame and the "0" frame; the orientation difference is represented by the rotational matrix *R*<sup>0</sup>. Matrix *A*6*<sup>e</sup>* is the displacement between the mounting flange and the "e" frame of the end-effector. The objective of this study is to determine the 12 parameters of the *Ab*<sup>0</sup> matrix, i.e., to find the base frame "0" of a robot in a workplace. To achieve this, it is also necessary to identify the displacement of the end-effector (*xe*, *ye*, and *ze*) during the calculations; however, the rotational part of *A*6*<sup>e</sup>* can be chosen freely, because the transform is static (there is no joint variable between the 6th frame of the robot and the end-effector "e" frame). For simplicity, we chose *Re* as the identity matrix.

$$A_{b0} = \begin{bmatrix} & & & x_0 \\ & R_0 & & y_0 \\ & & & z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix};\quad A_{6e} = \begin{bmatrix} 1 & 0 & 0 & x_e \\ 0 & 1 & 0 & y_e \\ 0 & 0 & 1 & z_e \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{4}$$

$$R\_0 = \begin{bmatrix} r\_{11} & r\_{12} & r\_{13} \\ r\_{21} & r\_{22} & r\_{23} \\ r\_{31} & r\_{32} & r\_{33} \end{bmatrix} \tag{5}$$

In general, geometric models are idealized and very difficult to make comply with real conditions due to manufacturing inaccuracy and environmental conditions. However, error values can be estimated and included in the mathematical models. Finding the relation between the theoretical and real models is the crucial task of robot calibration. To find such a relation, the geometric parameters of a device have to be identified. Robot identification is a process wherein error parameter values are determined using the results of a test measurement. In the following simulation, the UR3 robot performed a trajectory as described in Section 3. The obtained end-effector position data *Pc* were compared with the robot's position *P*(*q*) computed from the joint values *qi*.

The parameters *x*0, *y*0, *z*0, *r*11, *r*12, *r*13, *r*21, *r*22, *r*23, *r*31, *r*32, *r*33, *xe*, *ye* and *ze* were chosen to be identified. The end-effector frame is identified because the Aruco tags may be placed on a low-cost 3D-printed end-effector, whose real transformation might differ from the designed CAD model. On the other hand, we wanted to keep the model as simple as possible, so we avoided MDH parameter identification between the particular joints and links of the robot, which would amount to a full robot calibration. We assume that this simple identification process may compensate for the small errors between the links.

With the transformation matrix *Tbe*(*q*) defined earlier, the position vector of the end-effector *P* is given by the 4th column of *Tbe* with respect to the reference frame. Under ideal conditions, we can consider these coordinates equal to the coordinate values calculated using the position sensor, as Equation (6) shows for a specific *q*, where *T'be* denotes the 1st to 3rd elements of the 4th column of the transformation matrix.

$$P_c = T'_{be}(q) = P(q) \tag{6}$$

#### *2.2. Identification with the Jacobian Method*

The most common method for parameter identification is the application of a Jacobian, which is also described, for example, in [6,16]. This iterative method utilizes the Jacobian matrix obtained by taking the partial derivatives of the position vector (1st to 3rd elements of the 4th column) of *Tbe* with respect to the parameters in *X*, i.e., the parameters that are going to be identified. Symbolically, the Jacobian is expressed as the 3×15 matrix *Ji* in Equation (7), where *pi* stands for a parameter of *X*.

$$J_i = \begin{bmatrix} \frac{\partial T_{be_x}(q,X)}{\partial p_1} & \dots & \frac{\partial T_{be_x}(q,X)}{\partial p_n} \\ \frac{\partial T_{be_y}(q,X)}{\partial p_1} & \dots & \frac{\partial T_{be_y}(q,X)}{\partial p_n} \\ \frac{\partial T_{be_z}(q,X)}{\partial p_1} & \dots & \frac{\partial T_{be_z}(q,X)}{\partial p_n} \end{bmatrix} \tag{7}$$

For every measured point *i*, the matrix *Ji* is determined. By stacking the matrices for all measured points, a 3*n*×15 matrix *J* is obtained, where *n* is the number of measured points.

$$J = \begin{bmatrix} J_1 \\ J_2 \\ \vdots \\ J_n \end{bmatrix} \tag{8}$$
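The construction of Equations (7) and (8) can be illustrated with a short numerical sketch. Here the partial derivatives are approximated by central differences rather than derived symbolically; `fk`, `stacked_jacobian` and the argument layout are our own illustrative naming, not the authors' implementation:

```python
import numpy as np

def stacked_jacobian(fk, X, qs, eps=1e-7):
    """Build the 3n x m identification Jacobian of Eqs. (7)-(8).

    fk(q, X) must return the end-effector position (3-vector) for the
    joint vector q and parameter vector X; qs is the list of n measured
    joint configurations. Central differences approximate each partial
    derivative dT_be/dp_k.
    """
    m = len(X)
    rows = []
    for q in qs:
        Ji = np.zeros((3, m))          # one 3 x m block per point, Eq. (7)
        for k in range(m):
            dX = np.zeros(m)
            dX[k] = eps
            Ji[:, k] = (fk(q, X + dX) - fk(q, X - dX)) / (2 * eps)
        rows.append(Ji)
    return np.vstack(rows)             # stacked 3n x m matrix, Eq. (8)
```

In the paper's case *m* = 15 (the twelve elements of *Ab*0 plus *xe*, *ye*, *ze*); the sketch works for any parameter count.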

As the first step of every iteration, a position vector *Ym* is calculated using the values *Xj*, where *j* represents the iteration step. For the first iteration, the guessed values *X*<sup>0</sup> are used. The vector *q* contains the joint variables measured by the robot's joint position sensors.

$$Y_m = T'_{be}(q, X_j) \tag{9}$$

The next step is to compare and calculate the difference between the position measured by the camera, *Yc*, and the previously calculated position *Ym*, so that Δ*Y* is determined. *Yc* contains the *n* measured points stacked into a 3*n*×1 vector, with three measured coordinates *x*, *y* and *z* per point.

$$\Delta Y = Y_c - Y_m \tag{10}$$

The key equation of this method is Equation (11). When a position changes, the Jacobian matrix changes too; therefore, Δ*x* can be interpreted as the change of the parameters in *X*.

$$\Delta Y = J\, \Delta x \tag{11}$$

By using matrix operations in Equations (12) and (13), the values of Δ*x* are determined.

$$(J^T J)^{-1} J^T \Delta Y = (J^T J)^{-1} J^T J\, \Delta x \tag{12}$$

$$\Delta x = (J^T J)^{-1} J^T \Delta Y \tag{13}$$

At the end of each iteration, the computed values Δ*x* are added to obtain *Xj*+1, as in Equation (14). A convergence check follows to decide whether another iteration step is needed.

$$X\_{j+1} = X\_j + \Delta x \tag{14}$$
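Equations (9)–(14) together form a Gauss–Newton loop. The following self-contained Python sketch shows one possible implementation with a numerically approximated Jacobian; all names are illustrative, and the authors' actual calculations are in their Matlab scripts [26]:

```python
import numpy as np

def identify(fk, X0, qs, Yc, tol=1e-10, max_iter=50, eps=1e-7):
    """Iterate Eqs. (9)-(14) to identify the parameter vector X.

    fk(q, X) returns the modeled end-effector position (3-vector) for one
    joint configuration q; Yc is the 3n-vector of measured positions
    stacked in the same order as qs.
    """
    X = np.asarray(X0, dtype=float)
    for _ in range(max_iter):
        Ym = np.concatenate([fk(q, X) for q in qs])        # Eq. (9)
        dY = Yc - Ym                                       # Eq. (10)
        # 3n x m Jacobian of Eq. (8), here by central differences.
        J = np.zeros((Ym.size, X.size))
        for k in range(X.size):
            step = np.zeros(X.size)
            step[k] = eps
            J[:, k] = (np.concatenate([fk(q, X + step) for q in qs])
                       - np.concatenate([fk(q, X - step) for q in qs])) / (2 * eps)
        dX = np.linalg.solve(J.T @ J, J.T @ dY)            # Eqs. (12)-(13)
        X = X + dX                                         # Eq. (14)
        if np.linalg.norm(dX) < tol:                       # convergence check
            break
    return X
```

For the sketch to converge, the guessed starting values *X*<sup>0</sup> must be close enough to the real parameters that the linearization of Equation (11) holds, which is the same requirement as in the paper's method.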

### **3. Simulations and Experiment**

Two types of simulations and one experiment were performed to verify the proposed method of parameter identification. Simulation A was calculated only with the absolute positions of the end-effector coordinates determined by the CoppeliaSim software with the built-in UR3 model, as shown in Figure 2. The robot moved along the same path, consisting of 250 points, in both simulations and the experiment. The robot stopped at each pose and the measurements were taken. Such a path was chosen to obtain joint coordinates that were as different as possible; on the other hand, because the experiment was performed with cameras, we needed to guarantee the visibility of the Aruco tags, which were used for simulation B and the experiment.

**Figure 2.** Simulated path of the UR3 in CoppeliaSim, simulation A.

#### *3.1. Simulation A*

The robot was moved along a defined path with fixed points to stop at. Once it stopped, the joint coordinates and the position vector of the end-effector related to the world base frame were acquired. With these two sets of input values, the identification was performed using the methods described in the previous section. The results may be seen in Section 3.5.

#### *3.2. Simulation B*

For simulation B and the experiment, cameras and the OpenCV libraries [18] were applied for image processing to detect the Aruco tags. Based on the previous research by Oscadal [23], we used a 3D gridboard with tags, which improves the reliability and accuracy of detection in comparison with basic 2D tags. The gridboard represents a coordinate frame; in our case it was the base frame "b" and the end-effector frame "e", as shown in Figures 3 and 4. The OpenCV library algorithm can calculate the transformation from a camera to a tag. In real-time measurements, we used Equation (15), owing to which the position of the camera was not important. Matrix *Tcb* is the transformation from the camera to the base; matrix *Tce* is the transformation from the camera to the end-effector. The "c" frame is the camera frame. Nevertheless, the position of a camera may be saved for later operations.

$$T_{be} = T_{cb}^{-1}\, T_{ce} \tag{15}$$
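The camera-independence of Equation (15) can be demonstrated with a few lines of numpy. The helper names below are our own; the point is that composing the two camera-relative measurements cancels the camera pose exactly:

```python
import numpy as np

def inv_tf(T):
    """Invert a 4x4 homogeneous rigid transform analytically:
    [R p; 0 1]^-1 = [R^T  -R^T p; 0 1]."""
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

def base_to_effector(T_cb, T_ce):
    """Eq. (15): T_be = T_cb^-1 T_ce. The (unknown) camera pose that is
    implicit in both measurements cancels out in the product."""
    return inv_tf(T_cb) @ T_ce
```

Whatever pose the camera takes, as long as it sees both gridboards, the resulting *Tbe* is the same; this is what lets the method place cameras anywhere with good visibility.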

In simulation B (Figure 3) we deployed the image analysis in CoppeliaSim with a single camera with a resolution of 1280 × 720 px. The virtual camera was self-calibrated, and the detection parameters of the OpenCV library for finding the Aruco tags were set similarly to [23]. The dimensions of the tags were 70 × 70 mm with a 6 × 6 bit matrix.

**Figure 3.** Simulation B with tags and a camera.

**Figure 4.** Experiment A setup; point of view of camera 2.

#### *3.3. Experiment A*

Three independent cameras were placed around the UR3 robot to observe its trajectory and to calculate transformations in a laboratory during a real experiment. Intel RealSense D435i cameras with 1280 × 720 px resolution were used, and even though they are depth cameras, only the simple 2D RGB pictures were analyzed. The specifications of the cameras are shown in Table 2. The cameras were self-calibrated following the methodology used in [23].

As stated above, the robot went through 250 positions on the path. To reduce detection inaccuracy in reality, 10 camera frames were taken for every position, which gave us 2500 measured points per camera. In total, 7500 pictures from the three cameras were analyzed during the calibration process.
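Averaging the repeated detections per pose is a simple noise-suppression step and can be sketched in a few lines. The function name and the assumed data layout (consecutive rows belong to the same pose) are our own illustrative choices:

```python
import numpy as np

def average_per_pose(detections, frames_per_pose=10):
    """Average repeated tag detections per robot pose to suppress noise.

    detections: array of shape (n_poses * frames_per_pose, 3), assumed to
    be grouped so that consecutive rows belong to the same pose.
    Returns an (n_poses, 3) array of mean positions.
    """
    d = np.asarray(detections, dtype=float)
    return d.reshape(-1, frames_per_pose, 3).mean(axis=1)
```

With zero-mean detection noise, the standard deviation of each averaged point drops roughly by a factor of the square root of the number of frames per pose.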

**Table 2.** Specifications of Intel RealSense D435i camera [24].


#### *3.4. Experiment B*

To observe the impact of the placement of the world base frame "b," another experiment was performed, as shown in Figure 5. Note the difference in the position and rotation of the base frame. Only one camera (the same type, resolution and calibration) was used in this case, with the robot following a similar trajectory as in the previous experiment A. The robot went through 200 poses; at each pose, five images were taken and analyzed, so 1000 measurements were made in total.

**Figure 5.** Experiment B setup.

#### *3.5. Results*

The calibration results of the simulations are presented in Table 3. The data calculated based on simulation A (without tag detection, only end-effector position tracking) show high-precision identification with a very small difference in comparison to the expected results (the difference Δ is shown in brackets). Such an accuracy could be achieved in reality using, for example, a laser tracker. For simulation B, when the tags were detected using the simulated camera, the error was higher (a maximum of 2.59 mm for *x*0), which gave us an idea of how accurate the system might be when environmental noise is reduced while the camera parameters are kept.

Table 4 presents the results of experiment A. The best values were obtained when the results of all three cameras were combined and analyzed together. The error for the base frame (*x*0, *y*0, and *z*0) was at most 7.61 mm, in the *z*<sup>0</sup> direction. We performed other measurements following the same strategy; they all provided similar results, which made it clear that the position of a camera has an influence on the detection accuracy, which is supported by the results of Krajnik's research [25].

Table 5 presents the results of experiment B, proving the possibility of placing the base frame freely with respect to the robot.

The trajectories are compared in Figures 6–10. The measured (camera) paths consist of plotted points obtained by the camera or the end-effector position sensor; the robot path is a self-check plot of the end-effector path after the identified parameters *X* were applied in the transformation matrix. The points of origin are the points where the world base frame was determined for every point on the path. In Figures 11 and 12 we can observe the error values in box plots.


**Table 3.** Results of simulations for the *X* vector after identification.

**Table 4.** Results of experiment A for the *X* vector after identification.


**Table 5.** Results of experiment B for the *X* vector after identification.


**Figure 6.** Measured path in comparison with the real robot path of simulation A.

**Figure 7.** Measured path in comparison with the real robot path of simulation B.

**Figure 8.** Paths measured by cameras in comparison with the real robot path of experiment A.

**Figure 9.** Paths measured by cameras in comparison with the real robot path of experiment A, xy plane.

**Figure 10.** Path measured by a camera in comparison with the real robot path of experiment B.

**Figure 11.** Errors for the identified values based on simulation B.

**Figure 12.** Errors for the identified values based on experiment A.

### **4. Conclusions**

The process of identification of a robot's base frame in a workplace using Aruco markers and OpenCV was presented and verified in this study. This approach may be used for several robots and other components of the workplace at the same time, which brings the main advantage of fast evaluation and later recalibration. The typical scenario is placing all components in their positions, placing markers on them and on other points of interest, running a robot's path while measuring the end-effector position by a camera and evaluating the results, thereby obtaining the coordinates of the robot's base and the coordinates of the points of interest (manipulated object, pallet, etc.). As the end-effector we used a 3D-printed gridboard, which might be replaced by a cube with tags carried by a gripper.

When observing the results provided in Section 3.5, one can see a gap between the accuracy of the simulated workplace and the experiments. As mentioned above, simulation A demonstrates the possible accuracy of this method when all conditions are close to ideal. Therefore, there are some methods and topics for further study that would help to minimize the errors. The DH parameters of the UR3 were based on its datasheet, but when the robot is manufactured, it is calibrated and the modified DH parameters are saved in the control unit. These parameters may be retrieved and applied in the identification calculations. In general, the UR3 is not as precise as other industrial robots; depending on demands, a different manipulator should provide more accurate results.

Nevertheless, the end-effector was 3D printed and assembled from three parts; more accuracy may be achieved using better manufacturing methods. In some cases, if the end-effector was manufactured precisely beforehand, only the identification of a base frame might be enough (instead of identification of the base frame and end-effector offset, as was presented).

However, the biggest issue seems to be the detection of the tags, as the results differed for every camera in the experiment, as shown in Table 4. The positioning of a camera (its distance from a tag) seems to have a big influence on the outcome. This topic was researched in Krajnik's work [25]. Additionally, adjusting OpenCV's detection algorithms and filtering leads to better detection; more on this topic is discussed in [23]. A user may seriously consider the use of a self-calibrated camera with a resolution higher than the 1280 × 720 px we used. It is important to note that the presented accuracy in simulation B and the real experiments was achieved with cameras that we calibrated ourselves. There is no doubt that better-calibrated hardware may achieve more accurate results.

Another question to focus on is which trajectory and how many measured points are necessary to provide satisfying results; we tested the system with only one path of 250 points.

Once this camera-based method is well optimized for a task with the available equipment, and the accuracy is acceptable, it will stand as a sufficient, easy-to-deploy and low-cost solution for integrators and researchers. They will be able to quickly place components and robots, tag them and obtain their position coordinates based on a prepared universal measurement. In addition, if the detection accuracy is acceptable, even the current system may serve in the manufacturing process as a continuous safety check that all required components, including robots, are in the places where they should be.

In addition, this method will be used for the identification of reference coordinate systems and kinematic parameters of experimental custom manipulators, the design of which is a point of interest for the Research Centre of Advanced Mechatronic Systems project.

To make this calibration method easier to follow, the Matlab scripts with the calculations and the raw input data obtained by the simulations and experiments may be found on the GitHub page of the Department of Robotics, VSB-Technical University of Ostrava [26]. The calibration methodology and the Supplementary Materials provided may serve engineers who have no previous experience with the process; they can use cameras or other sensors, such as laser trackers.

**Supplementary Materials:** The following are available online at http://www.mdpi.com/2076-3417/10/21/7679/ s1.

**Author Contributions:** Conceptualization, D.H. and Z.B.; methodology, D.H. and A.V.; software, P.O.; validation, D.H., P.O. and T.S.; formal analysis, T.S.; investigation, D.H.; resources, M.V.; data curation, T.S.; writing—original draft preparation, D.H.; writing—review and editing D.H and T.S.; visualization, M.V.; supervision, A.V.; project administration, Z.B.; funding acquisition, Z.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the European Regional Development Fund in the Research Centre of Advanced Mechatronic Systems project, project number CZ.02.1.01/0.0/0.0/16\_019/0000867 within the Operational Programme Research, Development and Education. This article has been also supported by specific research project SP2020/141 and financed by the state budget of the Czech Republic.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
