*4.1. Pose Estimation with Fusion of IMU and Odometer*

The chassis control system of the mobile robot reads both the IMU data and the odometer data. Each time the IMU data are read, the odometer data can be obtained at the same time, so the problem of time synchronization can be ignored: the IMU pose queue and the odometer pose queue remain strictly aligned and can be fused directly into a new pose queue. However, the update frequency of a low-cost LiDAR is generally only 5–10 Hz, so the fused pose queue cannot maintain strict alignment with the pose queue of the laser frames. Although the pose of a laser frame cannot be read directly from the fused pose queue, since the two queues are not strictly aligned, it can be obtained by linearly interpolating the fused pose queue. The detailed steps for obtaining the estimated pose by fusing the IMU and odometer data with linear interpolation are as follows:

Step 1: The start time of the current laser frame, the end time of the current laser frame, and the time interval between two laser beams are known. The odometer data and the IMU data are stored in queues in the same chronological order, with the head of each queue holding the earliest data, so the oldest and latest timestamps of the odometer and IMU data are available. First, generate the new queue by fusing the odometer and IMU data within these timestamps. The fusion expressions are shown below:

$$\begin{cases} \text{Odom\_Imu\_List}[i].x = \text{OdomList}[i].x \\ \text{Odom\_Imu\_List}[i].y = \text{OdomList}[i].y \\ \text{Odom\_Imu\_List}[i].\theta = \text{ImuList}[i].\theta \end{cases} \tag{4}$$

In the formula, *Odom*\_*Imu*\_*List*[*i*] is the fused pose data at time *t<sub>i</sub>*, *OdomList*[*i*] is the odometer pose data at time *t<sub>i</sub>*, *ImuList*[*i*] is the IMU pose data at time *t<sub>i</sub>*, and *x*, *y*, and *θ* are the X-axis, Y-axis, and angle components of the pose data, respectively.
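For illustration, the fusion rule of Formula (4) can be sketched in Python as below; the `Pose2D` record and the function names are our own, and both queues are assumed to be strictly time-aligned, as stated above.

```python
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float      # X-axis position
    y: float      # Y-axis position
    theta: float  # heading angle (rad)

def fuse_odom_imu(odom_list, imu_list):
    """Formula (4): take x, y from the odometer and theta from the IMU.

    Assumes the two queues are strictly time-aligned, entry for entry.
    """
    return [Pose2D(o.x, o.y, m.theta) for o, m in zip(odom_list, imu_list)]
```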

Step 2: Solve the emission pose corresponding to each laser beam in the current frame of laser data, that is, solve the robot pose at the times {*t<sub>s</sub>*, *t<sub>s</sub>* + Δ*t*, ···, *t<sub>s</sub>* + *i*Δ*t*, ···, *t<sub>e</sub>*}. Because of the high update frequency of the odometer and IMU data, it is reasonable to assume that the robot moves at a uniform speed between two adjacent fused pose updates. On this assumption, linear interpolation can be used, as shown in Figure 5.

**Figure 5.** Linear interpolation of laser pose.

Suppose the fused pose queue contains poses at times *l* and *k* but not at time *s*, where *l* < *s* < *k*. Then the robot poses *p<sub>s</sub>*, *p<sub>m</sub>*, *p<sub>e</sub>* corresponding to the three moments *t<sub>s</sub>*, *t<sub>m</sub>*, *t<sub>e</sub>* (*t<sub>s</sub>* < *t<sub>m</sub>* < *t<sub>e</sub>*) can be solved. The pose of the first laser beam can be calculated with Formula (5); the emission poses of the last laser beam and of the laser beam at the middle time are obtained in the same way.

$$\begin{cases} p_l = \text{Odom\_Imu\_List}[l] \\ p_k = \text{Odom\_Imu\_List}[k] \\ p_s = p_l + \frac{p_k - p_l}{k - l}(s - l) \end{cases} \tag{5}$$
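A sketch of the interpolation in Formula (5), reusing the hypothetical `Pose2D` record from above and assuming the fused queue can be indexed by the timestamps (or indices) *l* and *k*:

```python
def interp_pose(fused, l, k, s):
    """Formula (5): linearly interpolate the fused pose queue at time s,
    with l < s < k and fused poses available at l and k but not at s."""
    p_l, p_k = fused[l], fused[k]
    a = (s - l) / (k - l)  # interpolation weight in [0, 1]
    return Pose2D(
        p_l.x + a * (p_k.x - p_l.x),
        p_l.y + a * (p_k.y - p_l.y),
        p_l.theta + a * (p_k.theta - p_l.theta),  # assumes no angle wrap between l and k
    )
```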

Step 3: Following the method in Step 2, *p<sub>m</sub>* and *p<sub>e</sub>* can be solved. It is further assumed that the robot performs uniformly accelerated motion during one frame of laser data; thus, the pose of the robot is a quadratic function of time, as Figure 6 shows. Using the known robot poses *p<sub>s</sub>*, *p<sub>m</sub>*, *p<sub>e</sub>*, a quadratic function *P*(*t*) = *At*<sup>2</sup> + *Bt* + *C* (*t<sub>s</sub>* < *t* < *t<sub>e</sub>*) can be obtained by interpolation, where *A*, *B*, *C* are the coefficients of the quadratic function. Substituting every time in {*t<sub>s</sub>*, *t<sub>s</sub>* + Δ*t*, ···, *t<sub>s</sub>* + *i*Δ*t*, ···, *t<sub>e</sub>*} into this curve, the pose of each laser point in the global coordinate system $\{p_{t_s}, p_{t_s+\Delta t}, \cdots, p_{t_s+i\Delta t}, \cdots, p_{t_e}\}$ can be obtained.

**Figure 6.** Pose function graph.
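A possible implementation of the quadratic interpolation in Step 3 is sketched below: it fits *P*(*t*) = *At*<sup>2</sup> + *Bt* + *C* per pose component through the three known poses. The names are illustrative, not the authors' code.

```python
import numpy as np

def fit_quadratic(ts, tm, te, ps, pm, pe):
    """Fit P(t) = A t^2 + B t + C per pose component (x, y, theta) through
    three known poses ps, pm, pe at times ts < tm < te. Returns a function
    that evaluates the interpolated pose at any t in [ts, te]."""
    M = np.array([[ts**2, ts, 1.0],
                  [tm**2, tm, 1.0],
                  [te**2, te, 1.0]])
    # Solve the 3x3 system once per component; columns are x, y, theta.
    coeffs = np.linalg.solve(M, np.array([[ps.x, ps.y, ps.theta],
                                          [pm.x, pm.y, pm.theta],
                                          [pe.x, pe.y, pe.theta]]))
    def P(t):
        A, B, C = coeffs  # rows: quadratic, linear, constant coefficients
        v = A * t**2 + B * t + C
        return Pose2D(v[0], v[1], v[2])
    return P
```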

Step 4: The pose of each laser point in the global coordinate system (in array form) is converted into a pose transformation matrix. Then, the coordinates *x<sub>i</sub>* in the LiDAR coordinate system are converted into coordinates in the global coordinate system, as Formula (6) shows.

$$\mathbf{x}'_i = V2T(p_i)\,\mathbf{x}_i \tag{6}$$

In Formula (6), the function *V*2*T*(*p<sub>i</sub>*) is taken as a whole: it converts the pose *p<sub>i</sub>* from array form into a pose transformation matrix. Left-multiplying the coordinates *x<sub>i</sub>* in the LiDAR coordinate system by the corresponding matrix translates them into the coordinates *x′<sub>i</sub>* in the global coordinate system, because *p<sub>i</sub>* is a pose in the global coordinate system.
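For the 2D case, the *V*2*T* conversion and the transform in Formula (6) can be sketched with 3 × 3 homogeneous matrices as follows (names are illustrative):

```python
import numpy as np

def v2t(p):
    """V2T: convert a pose array (x, y, theta) to a 3x3 homogeneous transform."""
    c, s = np.cos(p.theta), np.sin(p.theta)
    return np.array([[c, -s, p.x],
                     [s,  c, p.y],
                     [0., 0., 1.]])

def to_global(p_i, x_i):
    """Formula (6): x'_i = V2T(p_i) x_i, mapping a 2D point (x, y) from the
    LiDAR frame to the global frame."""
    xh = np.array([x_i[0], x_i[1], 1.0])  # homogeneous coordinates
    return (v2t(p_i) @ xh)[:2]
```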

Step 5: According to the coordinates *x′<sub>i</sub>* of the scanning point corresponding to each laser beam in the global coordinate system, the laser data (range and angle) of the laser scan point can be recovered with Formula (7).

$$\begin{cases} \mathbf{x}'_i = \left(p_x,\ p_y\right) \\ \text{range} = \sqrt{p_x \cdot p_x + p_y \cdot p_y} \\ \text{angle} = \operatorname{atan2}\left(p_y,\ p_x\right) \end{cases} \tag{7}$$

For the first equation above, *p<sub>x</sub>* and *p<sub>y</sub>* are the coordinates of the *i*th laser point on the X- and Y-axis, respectively.

For the second equation above, with the coordinates *p<sub>x</sub>* and *p<sub>y</sub>* of laser point *x<sub>i</sub>* known, the distance of point *x<sub>i</sub>* from the origin of the laser coordinate system can be found according to the Pythagorean theorem.

For the third equation above, with *p<sub>x</sub>* and *p<sub>y</sub>* known, the angle between point *x<sub>i</sub>* and the X-axis can be solved with the inverse trigonometric function atan2. The specific implementation process of the algorithm is shown in Algorithm 1.

#### **Algorithm 1: A Pose Estimation Algorithm Based on IMU and Odometer**

Input: odometer pose queue *OdomList*[*i*], IMU pose queue *ImuList*[*i*], and laser point queue *x<sub>i</sub>*

Output: laser pose queue *X<sub>n</sub>*

1: **for** *i* = 1:*n* **do**

2: *Odom*\_*Imu*\_*List*[*i*].*x* = *OdomList*[*i*].*x*;

*Odom*\_*Imu*\_*List*[*i*].*y* = *OdomList*[*i*].*y*;

*Odom*\_*Imu*\_*List*[*i*].*θ* = *ImuList*[*i*].*θ*; //fuse the odometer and IMU pose queues, then put the result into *Odom*\_*Imu*\_*List*[*i*]

3: **end for**

4: *p<sub>s</sub>* = *LinerInterp*(*Odom*\_*Imu*\_*List*[*t<sub>s</sub>*]);

*p<sub>m</sub>* = *LinerInterp*(*Odom*\_*Imu*\_*List*[*t<sub>m</sub>*]);

*p<sub>e</sub>* = *LinerInterp*(*Odom*\_*Imu*\_*List*[*t<sub>e</sub>*]); //linearly interpolate the fused poses at the start, middle, and end moments; *LinerInterp*() is the function used for linear interpolation

5: *P*(*t*) = *At*<sup>2</sup> + *Bt* + *C*; //substitute *p<sub>s</sub>*, *p<sub>m</sub>*, *p<sub>e</sub>* into this formula in order to solve the coefficients *A*, *B*, *C* of the quadratic function

6: **for** *i* = 1:*n* **do**

7: *p<sub>i</sub>* = *Ai*<sup>2</sup> + *Bi* + *C*; //solve the pose of each laser point in the global coordinate system

8: *x′<sub>i</sub>* = *V*2*T*(*p<sub>i</sub>*)*x<sub>i</sub>* = (*p<sub>x</sub>*, *p<sub>y</sub>*); //obtain the coordinates of each laser point in the global coordinate system

9: *X<sub>n</sub>* = (*range*, *angle*) = (√(*p<sub>x</sub>* · *p<sub>x</sub>* + *p<sub>y</sub>* · *p<sub>y</sub>*), atan2(*p<sub>y</sub>*, *p<sub>x</sub>*)); //compose the new laser point set *X<sub>n</sub>*

10: **end for**
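Putting Steps 1–5 together, a compact Python sketch of Algorithm 1 might look like the following. It reuses the earlier `fit_quadratic` and `to_global` sketches and assumes a hypothetical helper `interp_pose_at(fused, t)` that finds the bracketing timestamps in the fused queue and applies Formula (5).

```python
import math

def algorithm1(fused, points, ts, te, dt):
    """Sketch of Algorithm 1: interpolate per-beam poses, then re-pack the scan.

    fused: fused pose queue (Formula (4)); points: (x, y) beams in the LiDAR
    frame, ordered by emission time; ts/te: scan start/end times; dt: time
    between adjacent beams.
    """
    tm = 0.5 * (ts + te)
    ps, pm, pe = (interp_pose_at(fused, t) for t in (ts, tm, te))  # Step 2
    P = fit_quadratic(ts, tm, te, ps, pm, pe)                      # Step 3
    scan = []
    for j, x_j in enumerate(points):
        p_j = P(ts + j * dt)              # pose when beam j was emitted
        gx, gy = to_global(p_j, x_j)      # Step 4, Formula (6)
        scan.append((math.hypot(gx, gy), math.atan2(gy, gx)))  # Step 5, Formula (7)
    return scan
```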

#### *4.2. Estimated Velocity and Laser Data Pose Compensation*

To remove the motion distortion of the laser point cloud data, the speed of the robot needs to be estimated. Since the scanning period of the LiDAR is about 0.1 s, it can be assumed that the speed of the robot is constant during one scanning period; *V<sub>i</sub>* denotes the velocity in the LiDAR coordinate system at time *t<sub>i</sub>*. First, the velocity *V<sub>i</sub>* is estimated from the relative motion transformation between two adjacent frames of laser data *X<sub>i</sub>* and *X<sub>i−1</sub>*, where *n* denotes the number of laser points in frame *X<sub>i</sub>* and Δ*t* is the time interval between two adjacent laser frames. The laser points of frame *X<sub>i</sub>* are *x*<sub>0</sub>, *x*<sub>1</sub>, ···, *x<sub>n</sub>*, with $t_{x_j} - t_{x_{j-1}} = \Delta t_s$ (*j* = 1, ···, *n*).

Therefore, the estimated velocity *V<sub>i</sub>* is:

$$V_i = \frac{T2V\left(T_{i-1}^{-1}T_i\right)}{\Delta t} \approx \frac{1}{\Delta t}\log\left(T_{i-1}^{-1}T_i\right) \tag{8}$$

In Formula (8), $T_{i-1}^{-1}T_i$ is taken as a whole, indicating the pose difference of the robot from time *i*−1 to time *i* in the LiDAR coordinate system, and $T2V\left(T_{i-1}^{-1}T_i\right)$ converts this pose difference from matrix form to array form.
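As a sketch, the velocity estimate of Formula (8) can be computed from two adjacent frame poses; for small motions, the array form *T*2*V*(·)/Δ*t* is a first-order approximation of the matrix logarithm used above. The names are illustrative.

```python
import numpy as np

def t2v(T):
    """T2V: convert a 3x3 homogeneous transform to a pose array (x, y, theta)."""
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def estimate_velocity(T_prev, T_curr, dt):
    """Formula (8): V_i ≈ T2V(T_{i-1}^{-1} T_i) / Δt."""
    dT = np.linalg.inv(T_prev) @ T_curr  # relative pose from frame i-1 to frame i
    return t2v(dT) / dt                  # body-frame velocity (vx, vy, omega)
```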

The pose of laser point *j* in frame *i* is:

$$T(t_i + j\Delta t_s) = T_i \cdot V2T(V_i \cdot j\Delta t_s) = T_i\,\mathbf{e}^{j\Delta t_s V_i} \tag{9}$$

In Formula (9), *j*Δ*t<sub>s</sub>* is the elapsed time in frame *i* from the 0th laser point to the *j*th laser point.

*V<sub>i</sub>* is the velocity at the origin of the laser point data in frame *i*.

*V<sub>i</sub>*·*j*Δ*t<sub>s</sub>* is the pose difference of the frame-*i* laser point cloud from the 0th laser point to the *j*th laser point.

*V*2*T*(*V<sub>i</sub>*·*j*Δ*t<sub>s</sub>*) converts this relative pose difference from array form to matrix form.

*T<sub>i</sub>*·*V*2*T*(*V<sub>i</sub>*·*j*Δ*t<sub>s</sub>*) obtains the pose of laser point *j* in frame *i* by right-multiplying the frame pose *T<sub>i</sub>* by the pose difference relative to the 0th laser point.

Substituting Formula (9) into Formula (3), the laser point cloud collection *X<sub>i</sub>* is converted into $\overline{X}^*$, the laser point cloud collection after speed compensation.

$$\overline{X}^* = \left\{ \mathbf{e}^{j\Delta t_s V_i}\, x_j \mid j = 0, 1, \cdots, n \right\} \tag{10}$$

For some types of LiDAR, it takes 100 ms to perform a scan with a scan angle of 360°, which makes the estimated robot motion lag behind the actual movement. To prevent this kind of delay, a backward compensation scheme can be used: taking the time of the last laser point as the reference time, the corresponding time of each laser point can be converted. Under these conditions, Formula (9) can be revised into:

$$T[t\_i - (n - j)\Delta t\_s] = T\_i \mathbf{e}^{(n - j)\Delta t\_s(-V\_i)}\tag{11}$$

Formula (10) can be revised into:

$$\overline{X}^* = \left\{ \mathbf{e}^{(n-j)\Delta t_s(-V_i)}\, x_j \mid j = 0, 1, \cdots, n \right\} \tag{12}$$
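A sketch of the backward compensation in Formulas (11) and (12) follows: each point is re-projected to the time of the last beam, using a first-order (constant-twist, straight-line) approximation of the exponential map. The helper names are illustrative.

```python
import numpy as np

def exp_se2(v, tau):
    """First-order approximation of e^{tau V} for a twist V = (vx, vy, omega)."""
    x, y, th = v[0] * tau, v[1] * tau, v[2] * tau
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

def compensate_backward(points, V_i, dt_s):
    """Formula (12): move point j by e^{(n-j) dt_s (-V_i)} so that every point
    refers to the time of the last laser beam."""
    n = len(points) - 1
    out = []
    for j, x_j in enumerate(points):
        T = exp_se2(-np.asarray(V_i, dtype=float), (n - j) * dt_s)
        xh = np.array([x_j[0], x_j[1], 1.0])   # homogeneous coordinates
        out.append(tuple((T @ xh)[:2]))
    return out
```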

The specific implementation process of the algorithm is shown in Algorithm 2.

#### **Algorithm 2: Estimating velocity and removing motion distortion from laser point cloud data combined with ICP**

Input: the queue of laser poses *X<sub>n</sub>*

Output: motion transformation matrix of adjacent laser frames *T*

1: *V* = *V<sub>i</sub>* //initialize the velocity

2: **do**

3: *T*<sub>Δ*t<sub>s</sub>*</sub> = **e**<sup>Δ*t<sub>s</sub>*(−*V<sub>i</sub>*)</sup> //estimate the motion transformation matrix *T* from the velocity between two adjacent laser frames

4: **for** *j* = 1:*n* **do** //traverse all laser points in the current laser frame

5: *T*<sub>*j*Δ*t<sub>s</sub>*</sub> = *T*<sub>(*j*−1)Δ*t<sub>s</sub>*</sub>*T*<sub>Δ*t<sub>s</sub>*</sub> //calculate the motion transformation matrix of each laser point

6: *x<sub>ij</sub>* = *T*<sub>*j*Δ*t<sub>s</sub>*</sub>*x<sub>ij</sub>* //apply the motion transformation to each laser point

7: **end for**

8: *T* = ICP(*X<sub>i−1</sub>*, *X<sub>i</sub>*, *T*) //iterative matching via ICP

9: *V* = *V<sub>i</sub>* //store the current velocity estimate

10: *V<sub>i</sub>* = (1/Δ*t*)log *T* //perform the next round of velocity estimation

11: **while** ||*V* − *V<sub>i</sub>*|| > *e* //execute the loop while the velocity error exceeds the threshold *e*
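The do-while loop of Algorithm 2 can be sketched as below. Here, `icp(ref, cur, T0)` is a placeholder for any 2D ICP routine returning the refined frame-to-frame transform (it is not defined here), and `compensate_backward`, `exp_se2`, and `t2v` are reused from the earlier sketches.

```python
import numpy as np

def algorithm2(X_prev, X_curr, V_init, dt, dt_s, eps=1e-4):
    """Sketch of Algorithm 2: alternate distortion removal and ICP matching."""
    V_i = np.asarray(V_init, dtype=float)
    while True:
        V = V_i.copy()
        # Undo motion distortion with the current velocity estimate (Formula (12)).
        X_comp = compensate_backward(X_curr, V_i, dt_s)
        # Refine the frame-to-frame transform by scan matching (placeholder ICP).
        T = icp(X_prev, X_comp, exp_se2(V_i, dt))
        # Re-estimate the velocity from the matched transform (Formula (8)).
        V_i = t2v(T) / dt
        if np.linalg.norm(V - V_i) <= eps:   # stop when the velocity converges
            return T, V_i
```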

#### **5. Positioning Accuracy Evaluation of Laser Odometry after Motion Distortion Calibration**

This experiment uses the sequences b0\_2014\_07\_11\_10\_58\_16 (denoted as sequence 1), b0\_2014\_07\_11\_11\_00\_49 (denoted as sequence 2), and b0\_2014\_07\_21\_12\_42\_53 (denoted as sequence 3) from the Cartographer public dataset. The laser odometry accuracy of the Iao\_ICP algorithm and the original Cartographer algorithm is quantitatively evaluated by running both algorithms on these sequences. Figure 7 shows the mapping effect of the Iao\_ICP algorithm on the b0\_2014\_07\_11\_11\_00\_49 sequence. The processor of the test equipment is an Intel(R) Core(TM) i5-5200 CPU at 2.20 GHz with 8 GB of RAM.

**Figure 7.** Mapping based on the b0\_2014\_07\_11\_11\_00\_49 sequence.

The analysis is performed by comparing the results calculated by the Iao\_ICP algorithm with those of the original Cartographer algorithm on the dataset. Table 1 lists the absolute trajectory errors of the two algorithms [15]. In addition, the relative trajectory error of the Iao\_ICP algorithm was calculated and compared with that of the original Cartographer algorithm.


**Table 1.** Comparison results of absolute trajectory error between Iao\_ICP algorithm and Cartographer algorithm.

Using sequence 1 for testing, the comparison of the relative trajectory error results is shown in Figure 8.

**Figure 8.** Comparison of relative trajectory errors of sequence 1. (**a**) Cartographer, improved scheme, and real trajectory comparison; (**b**) local trajectory map; (**c**) absolute trajectory error of Cartographer; (**d**) absolute trajectory error of the improved scheme.

Using sequence 2 for testing, the comparison of the relative trajectory error results is shown in Figure 9.

**Figure 9.** Comparison of relative trajectory errors of sequence 2. (**a**) Cartographer, improved scheme, and real trajectory comparison; (**b**) local trajectory map; (**c**) absolute trajectory error of Cartographer; (**d**) absolute trajectory error of the improved scheme.

Using sequence 3 for testing, the comparison of the relative trajectory error results is shown in Figure 10.

**Figure 10.** Comparison of relative trajectory errors of sequence 3. (**a**) Cartographer, improved scheme, and real trajectory comparison; (**b**) local trajectory map; (**c**) absolute trajectory error of Cartographer; (**d**) absolute trajectory error of the improved scheme.

From the sequence 1 test results, in terms of the RMSE index, the root-mean-square error of the Iao\_ICP algorithm is 0.0179 m, while that of the original Cartographer algorithm is 0.0230 m; compared with the original Cartographer algorithm, the root-mean-square error of the Iao\_ICP algorithm is reduced by 22.06%. The average error of the Iao\_ICP algorithm is 0.0044 m smaller than that of the original Cartographer algorithm. In this sequence, the maximum absolute trajectory error of the original Cartographer algorithm is 0.1428 m, while that of the Iao\_ICP algorithm is 0.1357 m; the minimum absolute trajectory error of the original Cartographer algorithm is 0.0024 m, while that of the Iao\_ICP algorithm is 0.0011 m. From these data, the Iao\_ICP algorithm has a smaller absolute trajectory error than the original Cartographer algorithm on sequence 1.
