**1. Introduction**

Accurate pose estimation is a key technology for mapping, localization, and navigation of mobile robots [1], as it provides the robot's position and attitude in real time. Sensors used to obtain robot pose estimates include LiDAR, cameras, wheel encoders, IMUs, etc. Depending on the sensors the robot is equipped with, SLAM technology is divided into visual SLAM and laser SLAM. Although the sensors used in visual SLAM are low-cost and the images they capture are rich in information, weak- or no-light conditions severely affect the normal operation of the camera. Moreover, because the image information is so rich, the algorithms demand high processor performance. At present, mainstream mobile robots are still dominated by laser sensors [2], such as the unmanned delivery vehicle of JD and the "Prime" unmanned delivery vehicle of Amazon.

**Citation:** Wu, Q.; Meng, Q.; Tian, Y.; Zhou, Z.; Luo, C.; Mao, W.; Zeng, P.; Zhang, B.; Luo, Y. A Method of Calibration for the Distortion of LiDAR Integrating IMU and Odometer. *Sensors* **2022**, *22*, 6716. https://doi.org/10.3390/s22176716

Academic Editors: Yong Liu and Xingxing Zuo

Received: 25 July 2022; Accepted: 2 September 2022; Published: 5 September 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

A 2D LiDAR estimates the pose of the sensor by scan-matching two adjacent frames of laser data [3]. However, relying only on 2D laser SLAM to estimate the robot's pose has many limitations. The frequency at which the system outputs pose estimates is low, and as the running time grows, a large cumulative error builds up, eventually degrading the robot's localization and map construction. The Cartographer algorithm [4] was developed using a SICK LiDAR whose frame rate can exceed 100 Hz; at that rate the motion distortion can be ignored, so no distortion-correction module is included. However, the frame rate of most LiDARs is around 10 Hz. Without distortion correction, distortion errors appear in the LiDAR data that are hard to eliminate through loop-closure detection, back-end optimization, and similar techniques, so research on this issue has great practical significance. Much work has been conducted at home and abroad in recent years on removing motion distortion and false matches from LiDAR data. Yoon et al. [5] proposed unsupervised parameter learning in a Gaussian variational inference setting, which combines classical trajectory estimation for mobile robots with deep learning on rich sensor data to learn a complete estimator via a deep network. However, it requires a large amount of computation; the captured laser data cannot support feature extraction or matching when the environment lacks clear structure; and its real-time performance and robustness are poor. It is therefore only suitable for small, clearly structured indoor scenes rather than open, large outdoor scenes. Hyeong et al. [6] proposed an ICP (Iterative Closest Point) outlier-rejection scheme that compares the laser data of the scanned environment, selects matching points, and rejects the parts that do not match. The ICP algorithm must be supplied with an initial value, and its matching accuracy depends directly on the accuracy of that initial value. However, laser acquisition is usually accompanied by the motion of the robot; especially when the laser frame rate is low, the captured laser data suffer motion distortion, and the error with respect to the real environment grows over time. Xue et al. [7] proposed fusing an IMU, a wheel encoder, and LiDAR simultaneously to estimate the ego-motion of a moving vehicle; however, this method offers no countermeasure for discontinuous laser scanning. Bazet and Cherfaoui [8] proposed a method for correcting errors caused by time-stamp errors during sensor data acquisition, but the scheme assumes a fixed laser scanning angle, and its quadratic-interpolation assumption is too simplistic for complex outdoor environments. Hong et al. [9] proposed enhancing the ICP algorithm by updating the speed estimate: the LiDAR's speed is estimated through ICP iterations and used to compensate for scan distortion due to motion. Although it takes the robot's motion into consideration, its assumption of uniform motion is too idealized; for low-frame-rate LiDAR, the uniform-motion assumption does not hold.

To address the above problems, this paper proposes an improved estimated-speed scan-matching algorithm that integrates an IMU and an odometer, referred to here as Iao\_ICP (ICP that integrates IMU and Odometer). The main contributions of this paper are as follows: (1) The algorithm uses linear interpolation to obtain the pose of the LiDAR, which solves the alignment problem of discontinuous laser scan data. (2) The data fused from the IMU and the odometer provide a better initial value for ICP, and the estimated speed of the LiDAR is introduced as the iteration value of the ICP method to realize the termination condition of LiDAR data compensation.

The rest of the paper is organized as follows: Firstly, the causes of motion distortion in the traditional ICP algorithm are analyzed. Secondly, the incremental information of the wheel odometer and the angular velocity information of the IMU are integrated into the pose estimation. Finally, through data sets and physical experiments, the effectiveness of the proposed algorithm in removing motion distortion and improving the accuracy of map construction is demonstrated.

#### **2. Causes of LiDAR Motion Distortion**

A mechanical LiDAR is driven by an internal motor that rotates the radar ranging core 360° clockwise to obtain data about the surrounding environment. Each frame of laser data is assembled from the measurements of a certain number of discrete laser beams, so the laser data of a frame are not obtained instantaneously. The data distortion of the LiDAR is related to the motion state of the robot carrying it: when laser scanning is accompanied by the motion of the robot, the laser data at each angle are not acquired at the same instant. When the scanning frequency of the LiDAR is relatively low, the motion distortion of the laser frame caused by the motion of the robot cannot be ignored [10].

The rotation frequency of current domestic LiDARs is about 5–10 Hz. When the robot carrying the LiDAR is stationary, the measurement data contain no motion-induced error, but in a SLAM system the robot is usually moving. Take the environment shown in Figure 1 as an example. The distance data of each laser beam are collected in different poses, as shown for points A and B. Suppose the robot moves at a constant speed; the solid curved arrow indicates that the LiDAR rotates clockwise, and the solid long straight arrow indicates that the LiDAR moves from point A to point B along the X direction. Then, without motion-distortion correction during this period, the LiDAR data will contain a motion-distortion error of Δ*x*.
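As a back-of-the-envelope check on this Δ*x* error, the following sketch (in Python, with assumed example parameters: a 10 Hz scanner, one beam per degree, and a robot moving at 1 m/s along *x*) computes the per-beam displacement that accrues when all beams are naively referenced to the first pose:

```python
import numpy as np

# Assumed example parameters: a 10 Hz scanner (0.1 s per revolution),
# one beam per degree, robot moving at 1.0 m/s along the x-axis.
SCAN_PERIOD = 0.1   # s, one full 360-degree revolution
N_BEAMS = 360       # beams per frame
ROBOT_SPEED = 1.0   # m/s, constant velocity along x

# Time at which each beam is fired, relative to the start of the frame.
beam_times = np.arange(N_BEAMS) * SCAN_PERIOD / N_BEAMS

# Displacement of the sensor origin at each beam's firing time.  If all
# beams are naively referenced to the pose at point A (the first beam),
# this is exactly the per-beam distortion error along x.
delta_x = ROBOT_SPEED * beam_times

print(f"distortion of the last beam: {delta_x[-1]:.4f} m")
```

For this assumed configuration the last beam is displaced by roughly 0.1 m, which is far from negligible at typical indoor mapping scales.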

**Figure 1.** The acquisition process of one frame of LiDAR data.

As described above, while the robot acquires one frame of LiDAR data, the first laser beams are obtained near point A and the last near point B. However, when a typical LiDAR driver packages the data, it assumes that all laser beams of a frame are obtained instantaneously in the same pose, that is, that all beams are measured from point A. In reality the pose changes during the scan, and each laser point is generated in a different reference pose, which ultimately distorts the collected representation of the environment. As Figure 2 shows, the left picture is the actual environment; in the right picture, the dotted line is the ground truth and the solid line is the LiDAR data with motion distortion.

**Figure 2.** LiDAR motion distortion.

#### **3. Principle of ICP Algorithm**

The ICP algorithm [11] was first developed by Besl and McKay in 1992. ICP is essentially an optimal registration method based on least squares. It first matches each point of the target laser data with the closest point of the reference laser data and finds the rotation matrix *R* and translation vector *p* that convert one into the other. Afterward, the laser matching is iteratively optimized by repeatedly generating pairs of closest points until the convergence accuracy required for correct registration is met. The ICP algorithm first needs an initial pose, and the chosen initial value has an important impact on the final registration result: if it is chosen poorly, the algorithm may fall into a local optimum instead of the global minimum.

Given *X* = {*x*1, *x*2, ··· , *xNx*} as a frame of laser data, *P* = {*p*1, *p*2, ··· , *pNp*} as the laser data of the adjacent frame, and *T* = {*T*1, *T*2, ··· , *Ti*} as the transformation matrices between adjacent frames, *xi* and *pi* denote the coordinates of the laser points, *Nx* and *Np* denote the numbers of laser points, and *i* denotes the frame index of the laser data. This paper defines the minimizing objective function (1), which transforms the coordinates of *P* so as to align it with *X* as closely as possible [11].

$$E(R, p) = \frac{1}{N\_p} \sum\_{i=1}^{N\_p} \left\| x\_i - Rp\_i - p \right\|^2 \tag{1}$$

The resulting transformation matrix *T* can be described as (2):

$$T = \begin{bmatrix} R & p \\ 0 & 1 \end{bmatrix} \tag{2}$$
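As an illustration of Equation (2), the snippet below (Python with NumPy; the angle and translation are arbitrary example values, not from the paper) assembles the homogeneous matrix *T* from a planar rotation *R* and translation *p* and applies it to a laser point:

```python
import numpy as np

# Assemble the homogeneous transform of Equation (2) from a planar
# rotation R (angle theta) and translation p; values are arbitrary
# examples, not from the paper.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
p = np.array([0.5, -0.2])

T = np.eye(3)
T[:2, :2] = R       # rotation block
T[:2, 2] = p        # translation column

# Applying T to a homogeneous 2D laser point rotates it by theta and
# then shifts it by p.
point = np.array([1.0, 0.0, 1.0])
print(T @ point)
```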

The processing steps for the given objective function are as follows:

Step 1: Compute the mean values of the LiDAR data *X* and *P*:

$$u\_x = \frac{1}{N\_x} \sum\_{i=1}^{N\_x} x\_i, \qquad u\_p = \frac{1}{N\_p} \sum\_{i=1}^{N\_p} p\_i$$

Step 2: Remove the translation from the LiDAR data *X* and *P* so that the laser points are distributed around their mean values:

$$x'\_i = x\_i - u\_x, \qquad p'\_i = p\_i - u\_p$$

Step 3: Define the matrix *H* and apply SVD to it, where *H* is the matrix to be decomposed, *U* and *V* are the two orthogonal matrices of the decomposition, and *σ*1, *σ*2, and *σ*3 are the three singular values:

$$H = \sum\_{i=1}^{N\_p} x'\_i {p'\_i}^T = U \begin{bmatrix} \sigma\_1 & 0 & 0 \\ 0 & \sigma\_2 & 0 \\ 0 & 0 & \sigma\_3 \end{bmatrix} V^T$$

Step 4: Calculate the solution of the objective function:

$$R = UV^T, \qquad p = u\_x - Ru\_p$$
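Steps 1–4 admit a direct implementation once the point pairs are fixed. The sketch below (Python with NumPy; the test transform is an arbitrary example) computes *R* and *p* in closed form for two paired 2D point sets, including the usual determinant check against a reflection, which the derivation above does not discuss:

```python
import numpy as np

def best_fit_transform(X, P):
    """Closed-form solution of Steps 1-4: the R, p minimizing
    sum_i || x_i - R p_i - p ||^2 for paired N x 2 point sets."""
    u_x, u_p = X.mean(axis=0), P.mean(axis=0)   # Step 1: mean values
    Xc, Pc = X - u_x, P - u_p                   # Step 2: remove translation
    H = Xc.T @ Pc                               # Step 3: H = sum x_i' p_i'^T
    U, S, Vt = np.linalg.svd(H)
    R = U @ Vt                                  # Step 4: R = U V^T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        U[:, -1] *= -1
        R = U @ Vt
    p = u_x - R @ u_p                           # p = u_x - R u_p
    return R, p

# Sanity check against a known transform (arbitrary example values).
rng = np.random.default_rng(0)
P = rng.uniform(-1.0, 1.0, size=(50, 2))
theta = np.deg2rad(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
p_true = np.array([0.3, -0.1])
X = P @ R_true.T + p_true                       # x_i = R_true p_i + p_true
R_est, p_est = best_fit_transform(X, P)
print(np.allclose(R_est, R_true), np.allclose(p_est, p_true))
```

Note that this closed form assumes the correspondences are already known; the iterative re-pairing of closest points described next is what relaxes that assumption.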

Since the ICP algorithm uses the closest point as the corresponding point, the initial result may differ from the real environment; by repeating the process, however, the results converge toward the true environment. The LiDAR scan data for frame *i*, namely *X*, are shown in Figure 3a, and the scan data for frame *i* + 1, namely *P*, in Figure 3b. The first step of the ICP iteration is shown in Figure 3c; the closest point pairs between *X* and *P* are found as Figure 3d shows. The first match estimates a transformation *T*1 and updates *P* via *pi* ← *T*1 *pi*, as shown in Figure 3e. After many iterations, *X* and *P* are matched, as Figure 3f shows. The final pose estimate is obtained from the composed transformation *T* = *TnTn*−1 ··· *T*2*T*1, namely:

$$x\_i = T\_n T\_{n-1} \cdots T\_2 T\_1 p\_i = T p\_i \tag{3}$$

**Figure 3.** The principle of the ICP algorithm. (**a**) Frame *i*; (**b**) frame *i* + 1; (**c**) start matching; (**d**) find adjacent points; (**e**) first match; (**f**) after multiple iterations of matching.
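The iterative loop of Figure 3 can be sketched as follows (Python with NumPy; a minimal point-to-point ICP with brute-force nearest-neighbour matching, not the paper's Iao\_ICP). Each iteration re-pairs closest points, solves Steps 1–4 for *Tk*, and accumulates *T* = *Tn* ··· *T*1 as in Equation (3):

```python
import numpy as np

def best_fit(X, P):
    """Closed-form R, p for paired point sets (Steps 1-4)."""
    u_x, u_p = X.mean(axis=0), P.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - u_x).T @ (P - u_p))
    R = U @ Vt
    if np.linalg.det(R) < 0:                    # reflection guard
        U[:, -1] *= -1
        R = U @ Vt
    return R, u_x - R @ u_p

def icp(X, P, n_iters=50, tol=1e-10):
    """Minimal point-to-point ICP: re-pair closest points, solve for
    T_k, and accumulate T = T_n ... T_2 T_1 (Equation (3))."""
    T_total, P_cur, prev_err = np.eye(3), P.copy(), np.inf
    for _ in range(n_iters):
        # Brute-force nearest neighbour in X for every point of P.
        d2 = ((P_cur[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        matches = X[d2.argmin(axis=1)]
        R, p = best_fit(matches, P_cur)
        P_cur = P_cur @ R.T + p                 # p_i <- T_k p_i
        T_k = np.eye(3)
        T_k[:2, :2], T_k[:2, 2] = R, p
        T_total = T_k @ T_total                 # T = T_k ... T_1
        err = d2.min(axis=1).mean()
        if abs(prev_err - err) < tol:           # matching has converged
            break
        prev_err = err
    return T_total

# Demo: recover a small known transform on a grid "scan" (example values).
xs = np.linspace(-1.0, 1.0, 6)
P_demo = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
theta = np.deg2rad(2.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
X_demo = P_demo @ R_true.T + np.array([0.05, -0.03])
T_hat = icp(X_demo, P_demo)
aligned = P_demo @ T_hat[:2, :2].T + T_hat[:2, 2]
print(np.abs(aligned - X_demo).max() < 1e-6)
```

The brute-force matching is O(*NxNp*) per iteration; a k-d tree is the usual substitute in practice, but it would obscure the correspondence between this sketch and the figure.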

#### **4. Estimation Speed Scan Matching Algorithm Based on IMU and Odometer**

A wheeled odometer and an IMU are introduced to compensate for the motion distortion of laser data caused by the robot's movement. The wheel odometer directly measures displacement and angle increments, while the IMU directly measures angular velocity and linear acceleration [12]; integrating the latter also yields displacement and angle information. Under ideal conditions, the wheel odometer or IMU provides high-precision local pose estimation, because the pose update frequency of these sensors is high (above 200 Hz) and can accurately reflect the motion of the robot in real time [13]. Moreover, these two types of sensors are completely decoupled from the robot state estimation, which prevents the introduction of errors. However, during the actual movement of the robot, the wheels slip and errors accumulate, so the odometer angle data obtained with the encoder alone deviate, and the error grows with running time and travel distance. On the other hand, although the IMU measures angular velocity with high accuracy, its linear-acceleration accuracy is poor, and the local accuracy of the doubly integrated displacement is therefore also poor, leading to a certain deviation in the obtained displacement data. Therefore, this paper proposes the Iao\_ICP algorithm, whose framework is shown in Figure 4. First, the information of the IMU and the odometer is fused, and the pose of the LiDAR is obtained by linear interpolation to remove most of the motion distortion. Then, scan matching of the LiDAR data is conducted using the ICP method. The data fused from the IMU and odometer provide a better initial value for ICP, and the estimated speed is introduced as a termination condition for the ICP iteration [14]. The matching result is taken as the correct value, from which the error of the odometer is obtained. This error is distributed evenly over each point, and the position of each laser point is corrected again, further refining the pose of the laser points.
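The linear-interpolation step described above might look like the following sketch (Python with NumPy). This is an assumed form for illustration only, not the paper's exact implementation: fused IMU/odometer poses (*x*, *y*, *θ*) bracketing the scan are interpolated at each beam's timestamp, and every laser point is re-projected into the frame of the first beam:

```python
import numpy as np

def deskew(points, beam_times, pose_times, poses):
    """Illustrative linear-interpolation de-skew (assumed form).
    points:      N x 2 laser points in the sensor frame
    beam_times:  N timestamps, one per laser point
    pose_times:  M timestamps of fused IMU/odometer poses
    poses:       M x 3 array of (x, y, theta)
    Returns the points re-projected into the frame of the first beam."""
    # Linearly interpolate a pose for every beam timestamp.
    x = np.interp(beam_times, pose_times, poses[:, 0])
    y = np.interp(beam_times, pose_times, poses[:, 1])
    th = np.interp(beam_times, pose_times, poses[:, 2])

    # Sensor frame at each beam's own pose -> world frame.
    c, s = np.cos(th), np.sin(th)
    wx = c * points[:, 0] - s * points[:, 1] + x
    wy = s * points[:, 0] + c * points[:, 1] + y

    # World frame -> frame of the first beam's pose.
    c0, s0 = np.cos(th[0]), np.sin(th[0])
    dx, dy = wx - x[0], wy - y[0]
    return np.stack([c0 * dx + s0 * dy, -s0 * dx + c0 * dy], axis=1)
```

When the pose stream reports no motion, the function leaves the scan unchanged; a real implementation would also need to handle angle wrap-around in *θ*, which `np.interp` does not.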

**Figure 4.** The architecture diagram of the Iao\_ICP algorithm.
