Article

Uncontrolled Two-Step Iterative Calibration Algorithm for Lidar–IMU System

College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(6), 3119; https://doi.org/10.3390/s23063119
Submission received: 21 February 2023 / Revised: 5 March 2023 / Accepted: 10 March 2023 / Published: 14 March 2023

Abstract
Calibration of sensors is critical for the precise functioning of lidar–IMU systems. However, the accuracy of the system can be compromised if motion distortion is not considered. This study proposes a novel uncontrolled two-step iterative calibration algorithm that eliminates motion distortion and improves the accuracy of lidar–IMU systems. The algorithm first matches the original inter-frame point clouds to obtain an initial calibration. It then corrects the rotational motion distortion, predicts the attitude from the IMU, and matches the point clouds again. Motion distortion correction and rotation matrix estimation are iterated to obtain a high-precision calibration result. In comparison with existing algorithms, the proposed algorithm offers high accuracy, robustness, and efficiency. The resulting high-precision calibration can benefit a wide range of acquisition platforms, including handheld, unmanned ground vehicle (UGV), and backpack lidar–IMU systems.

1. Introduction

Accurate state estimation of a platform is a critical requirement for autonomous driving. While satellite navigation systems provide accurate positioning in open environments, signal loss is commonplace in metropolitan areas, limiting their usefulness for autonomous driving applications. The Simultaneous Localization and Mapping (SLAM) method, which enables autonomous localization and mapping, has become a research hotspot in the fields of autonomous driving and robot navigation [1,2,3,4]. Depending on the type of data used, SLAM can be classified into two main categories: visual SLAM and lidar SLAM.
Visual SLAM was the dominant focus of early research. Traditional methods were primarily implemented through filtering [5], and classical algorithms accomplished localization and mapping through the Extended Kalman Filter (EKF) [4]. Subsequently, keyframe-based visual SLAM techniques matured, and mapping algorithms such as ORB-SLAM became popular [6]. Visual sensors can accurately extract environmental features at low speeds in texture-rich areas, but their accuracy decreases with increasing motion speed. Thus, multi-sensor fusion SLAM techniques have emerged, particularly visual-inertial SLAM (VI-SLAM), a low-cost combination of camera and IMU that has attracted considerable attention. Experiments have demonstrated that VI-SLAM can operate effectively in complex scenes with aggressive motion [7]. Cameras capture rich scene details, while IMUs can be used to predict the platform’s attitude and position; the two sensors complement each other to yield better positioning accuracy and mapping results. The VI-ORB algorithm has been proposed based on the ORB-SLAM algorithm [8]. The VINS-Mono algorithm employs a tightly coupled, non-linear optimization-based approach for scale recovery through a monocular camera and IMU, demonstrating better robustness and generality [9]. The PL-VIO algorithm proposes a tightly coupled monocular visual-inertial odometry system based on point and line features [10], and the VINS-Fusion algorithm extends VINS-Mono to integrate stereo cameras and GPS devices, enhancing its applicability [11].
In recent years, lidar SLAM has grown rapidly, thanks to the falling hardware cost of lidar. The robustness of lidar to light and viewpoint changes and its high range accuracy make it an important sensor for localization and mapping in large indoor and outdoor environments. The principle of lidar SLAM is similar to that of visual SLAM, and the LOAM algorithm [12] has been a representative algorithm for lidar SLAM. Compared to previous methods, LOAM is innovative in reducing the number of points through feature point extraction, improving matching accuracy, and constructing optimized cost functions based on point-to-line and point-to-plane distances for the different feature point types. Several extended algorithms based on LOAM have been proposed, including A-LOAM [13], LeGO-LOAM [14], LIO-SAM [15], and F-LOAM [16]. These algorithms offer distinct features such as a simplified version of LOAM, separation of ground points, frame-to-local-map matching, and optimization strategies; they have shown improved operating efficiency and are suitable for localization and mapping in different scenes.
Multi-sensor fusion has emerged as a promising approach to improving the performance of lidar SLAM, particularly in challenging environments. By integrating lidar data with complementary sensor information, such as IMU data, the accuracy of localization and mapping can be significantly improved. A key challenge in lidar and IMU data fusion is the calibration of external parameters between the two sensors. Sensor calibration is crucial in the field of autonomous driving, especially in determining the vehicle’s attitude, which refers to its orientation with respect to the ground, such as pitch, roll, and yaw angles. The vehicle’s attitude is essential in determining its position, trajectory, and behavior on the road, such as lane-keeping, turning, and braking. Therefore, it is essential to calibrate the sensors to reduce their errors and improve their accuracy and consistency [17,18].
Temporal calibration is a crucial component of multi-sensor calibration. Since the lidar and IMU have different emission frequencies, the system may produce time deviations, necessitating pre-synchronization correction. GNSS hardware calibration is a popular method for optimizing this process [19]. However, software-based calibration methods are also available [20,21,22].
The widely used calibration methods are offline calibration or online calibration through markers [23]. However, these methods can be complicated and time-consuming. Recently, Zhejiang University (ZJU) proposed LI-Calib, an online calibration method that does not require markers; it employs continuous-time trajectory equations and optimal estimation to decompose space into individual units and identify planar elements for data association [24]. Despite its potential, this method may have higher data requirements in practice. LI-Init is a robust, real-time initialization method for lidar–inertial systems proposed by the University of Hong Kong (HKU) [25], which calibrates the temporal offset and extrinsic parameters between lidars and IMUs. However, this method is better suited to complex lidar–IMU systems: because it involves SLAM mapping, it consumes considerable resources, and for simpler lidar–IMU systems it requires a long processing time.
We present an uncontrolled two-step iterative calibration algorithm for the lidar–IMU system to address the challenge of calibrating the rotation matrix between the lidar and IMU. Our algorithm leverages the high short-term accuracy of the IMU to correct the rotational distortion of each point cloud frame acquired in one scan cycle, and it predicts the rotation between adjacent point cloud frames from the IMU’s angle values. At the start of the calibration process the rotation matrix is unknown; hence, we employ a two-step approach. In the first step, we match the point cloud without distortion correction to obtain an initial calibration matrix. In the second step, we correct the distortion of the point cloud caused by rotational motion and match the point cloud again to obtain a higher-accuracy calibration matrix. The calibration matrix is updated iteratively until convergence. The proposed algorithm is effective in calibrating the rotation matrix for lidar–IMU systems and can significantly improve the accuracy of localization and mapping.

2. Materials and Methods

2.1. Related Work

2.1.1. Calibration Principle for Lidar–IMU System

As illustrated in Figure 1, two distinct methods exist to transform a point $p_{k+1}^{imu}$ in the IMU coordinate system at time $k+1$ to the lidar coordinate system at time $k$. The first transforms the point to the lidar coordinate system at time $k+1$ and then to the lidar coordinate system at time $k$ based on the relative position and attitude of the lidar. The second first transforms the point to the IMU coordinate system at time $k$ and then to the lidar coordinate system at time $k$ based on the relative position and attitude of the IMU.
In Equation (1), the transformation of the point $p_{k+1}^{imu}$ is expressed in both of these ways:

$$R_{k,k+1}^{lidar}\, R_{lidar,imu}\, p_{k+1}^{imu} = R_{lidar,imu}\, R_{k,k+1}^{imu}\, p_{k+1}^{imu} \tag{1}$$
This equation involves the rotation matrices $R_{k,k+1}^{lidar}$, $R_{lidar,imu}$, and $R_{k,k+1}^{imu}$. Simplified versions are shown in Equations (2) and (3), where $q_l^b$ is the quaternion of the rotation from the lidar to the IMU coordinate system, $q_{b_{k+1}}^{b_k}$ is the result of the IMU pre-integration measured from moment $b_k$ to moment $b_{k+1}$, and $q_{l_{k+1}}^{l_k}$ represents the attitude change at moment $l_{k+1}$ with respect to the lidar coordinate system at moment $l_k$:

$$R_{k,k+1}^{lidar}\, R_{lidar,imu} = R_{lidar,imu}\, R_{k,k+1}^{imu} \tag{2}$$

$$q_{b_{k+1}}^{b_k} = q_l^b \otimes q_{l_{k+1}}^{l_k} \otimes q_b^l \tag{3}$$
Quaternion multiplication can be represented as matrix–vector multiplication [19], as shown in Equations (4)–(7), where $Q_1^{+}$ is the left-multiplication matrix of $q_1$ and $Q_2^{-}$ is the right-multiplication matrix of $q_2$. $Q_{b_{k+1}}^{b_k+}$ is the left-multiplication matrix of the attitude pre-integration of the IMU measurements from moment $b_k$ to moment $b_{k+1}$, and $Q_{l_{k+1}}^{l_k-}$ is the right-multiplication matrix of the attitude change at moment $l_{k+1}$ with respect to the lidar coordinate system at moment $l_k$. Merging these relations yields Equation (8). Assuming that there are $n$ sets of measurement data for attitude calibration, the system of homogeneous linear equations shown in Equation (9) can be established and solved by the singular value decomposition (SVD) method to obtain $q_l^b$ [26].

$$q_1 \otimes q_2 = Q_1^{+} q_2, \qquad Q_1^{+} = q_w I + \begin{bmatrix} 0 & -q_v^{T} \\ q_v & [q_v]_\times \end{bmatrix} \tag{4}$$

$$q_1 \otimes q_2 = Q_2^{-} q_1, \qquad Q_2^{-} = q_w I + \begin{bmatrix} 0 & -q_v^{T} \\ q_v & -[q_v]_\times \end{bmatrix} \tag{5}$$

$$[q_v]_\times = \begin{bmatrix} 0 & -q_z & q_y \\ q_z & 0 & -q_x \\ -q_y & q_x & 0 \end{bmatrix} \tag{6}$$

$$q = \begin{bmatrix} q_w \\ q_v \end{bmatrix} \tag{7}$$

$$\left( Q_{b_{k+1}}^{b_k+} - Q_{l_{k+1}}^{l_k-} \right) q_l^b = 0 \tag{8}$$

$$\begin{bmatrix} Q_{b_1}^{b_0+} - Q_{l_1}^{l_0-} \\ Q_{b_2}^{b_1+} - Q_{l_2}^{l_1-} \\ \vdots \\ Q_{b_n}^{b_{n-1}+} - Q_{l_n}^{l_{n-1}-} \end{bmatrix}_{4n \times 4} q_l^b = A_{4n \times 4}\, q_l^b = 0 \tag{9}$$
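For illustration, the following NumPy sketch shows how the constraint matrix of Equation (9) can be assembled from the left- and right-multiplication matrices of Equations (4)–(6) and solved by SVD. It is a minimal sketch under our own assumptions (quaternions stored in (w, x, y, z) order; function names are ours), not the implementation released with this paper.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x of a 3-vector (Equation (6))."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def left_mult(q):
    """Left quaternion-multiplication matrix Q+ of q = (w, x, y, z) (Equation (4))."""
    w, v = q[0], q[1:]
    Q = w * np.eye(4)
    Q[0, 1:] += -v
    Q[1:, 0] += v
    Q[1:, 1:] += skew(v)
    return Q

def right_mult(q):
    """Right quaternion-multiplication matrix Q- of q = (w, x, y, z) (Equation (5))."""
    w, v = q[0], q[1:]
    Q = w * np.eye(4)
    Q[0, 1:] += -v
    Q[1:, 0] += v
    Q[1:, 1:] += -skew(v)
    return Q

def solve_extrinsic_rotation(q_imu_pairs, q_lidar_pairs):
    """Stack (Q+ - Q-) blocks as in Equation (9); the right singular vector with
    the smallest singular value is the extrinsic quaternion q_l^b."""
    blocks = [left_mult(qb) - right_mult(ql)
              for qb, ql in zip(q_imu_pairs, q_lidar_pairs)]
    A = np.vstack(blocks)                 # shape (4n, 4)
    _, _, Vt = np.linalg.svd(A)
    q = Vt[-1]                            # null-space direction of A
    return q / np.linalg.norm(q)
```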

2.1.2. Matching Algorithms for Point Clouds

In this study, various point cloud matching algorithms are investigated and compared for calculating the relative rotation and translation between two adjacent frames. The algorithms considered include the Iterative Closest Point (ICP) [27], Generalized ICP (GICP) [28], Normal Distributions Transform (NDT) [29], and OMP-NDT [30]. The ICP algorithm minimizes the distance between the two point clouds by finding the spatial transformation of the overlapping area; it is simple and does not require segmentation or feature extraction of the point clouds, but its computational cost for finding the nearest corresponding points is high, and it does not exploit the structural information of the point cloud. The GICP algorithm improves on ICP by jointly considering point-to-point, point-to-plane, and plane-to-plane strategies, resulting in higher accuracy and robustness. The NDT algorithm divides the point cloud space into voxel grids and computes the normal distribution of the points in each grid, providing fast computation and less influence from moving targets in the scene. The OMP-NDT algorithm is an OpenMP-based parallel NDT implementation derived from PCL that uses multiple threads for faster alignment, with only a small deviation in matching results. The voxel nearest-neighbor search in this study was performed using the DIRECT7 search method.
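To make the shared structure of these registration methods concrete, the following is a compact point-to-point ICP loop written with NumPy and SciPy. It is a didactic sketch of the nearest-neighbour association and SVD alignment step only; the experiments themselves rely on existing library implementations such as the PCL-derived OMP-NDT.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, iterations=30):
    """Estimate a rotation R and translation t aligning source (N,3) to target (M,3)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        moved = source @ R.T + t
        _, idx = tree.query(moved)             # nearest-neighbour association
        corr = target[idx]
        mu_s, mu_t = moved.mean(axis=0), corr.mean(axis=0)
        H = (moved - mu_s).T @ (corr - mu_t)   # cross-covariance of the correspondences
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step  # compose incremental update
    return R, t
```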

2.2. Uncontrolled Two-Step Iterative Calibration Algorithm

Figure 2 illustrates the proposed two-step calibration algorithm. In Step 1, the initial rotation matrix is calculated based on the matching of the original point cloud and the integration of the IMU data. The pre-integrated attitude measurements of the IMU from time $b_k$ to $b_{k+1}$ are obtained from the angular velocity data. A system of homogeneous linear equations is established based on Equation (9), and the SVD method is applied to solve for $q_l^b$. In Step 2, the initial calibration result is used to perform motion distortion correction on the original point cloud. Attitude prediction is made based on the IMU integral, and the corrected point cloud is matched and calibrated again to obtain a more accurate calibration result. The process is iterated until convergence is achieved.
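The control flow of the two-step procedure can be summarised as follows. The helpers match_frames, preintegrate_imu, and undistort are hypothetical placeholders standing in for the operations described above (solve_extrinsic_rotation is the SVD solver sketched in Section 2.1.1), so this outline illustrates the iteration logic rather than providing runnable calibration code.

```python
import numpy as np

def two_step_calibration(scans, imu_segments, max_iters=10, tol=1e-4):
    # Step 1: match the raw, uncorrected scans and solve Equation (9) once
    # to obtain an initial extrinsic rotation.
    q_lidar = match_frames(scans)                        # frame-to-frame attitude changes
    q_imu = [preintegrate_imu(seg) for seg in imu_segments]
    q_extrinsic = solve_extrinsic_rotation(q_imu, q_lidar)

    # Step 2: correct rotational distortion with the current estimate, use the
    # IMU integral as the initial guess for matching, and re-solve until the
    # extrinsic quaternion stops changing.
    for _ in range(max_iters):
        corrected = [undistort(scan, seg, q_extrinsic)
                     for scan, seg in zip(scans, imu_segments)]
        q_lidar = match_frames(corrected, init_guess=q_imu)
        q_new = solve_extrinsic_rotation(q_imu, q_lidar)
        converged = 1.0 - abs(float(np.dot(q_new, q_extrinsic))) < tol  # q and -q are the same rotation
        q_extrinsic = q_new
        if converged:
            break
    return q_extrinsic
```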
In the motion distortion correction process, each point in a scan period’s point cloud data is transformed into a reference coordinate system $c_0$, whose origin is typically located at the start of the scan period. The rotation and translation of each point’s coordinate system $c_k$ with respect to $c_0$ is determined, allowing the transformation of $p_k$ from $c_k$ to $c_0$. Since the 6-axis IMU only outputs angular velocity and linear acceleration, only the motion distortion caused by attitude change is corrected. The three attitude angles corresponding to each coordinate system $c_k$ can be obtained from the integration of the angular velocities measured by the IMU. Figure 3 illustrates this principle.
To correct motion distortion in the lidar–IMU system, we first evaluate the rotation matrices $R_{a_x}$, $R_{a_y}$, and $R_{a_z}$ around the $x$, $y$, and $z$ axes, respectively, using the angular velocities $\omega_{x_i}$, $\omega_{y_i}$, and $\omega_{z_i}$ measured by the 6-axis IMU at each point $i$. Equation (10) is used to calculate the rotation for each axis, and Equation (11) then gives the transformation matrix $R_{0,k}$ from $c_k$ to $c_0$. The motion distortion is then corrected using Equation (12). The correction process depends on the initially calibrated rotation matrix and can be further refined through iteration. By utilizing the rotation obtained from these equations, we can also predict the transformation of the subsequent frame and improve the speed and accuracy of the matching process.
$$R_{a_x} = \sum_{i=1}^{k} \omega_{x_i}, \qquad R_{a_y} = \sum_{i=1}^{k} \omega_{y_i}, \qquad R_{a_z} = \sum_{i=1}^{k} \omega_{z_i} \tag{10}$$

$$R_{0,k} = R_{a_x} R_{a_y} R_{a_z} \tag{11}$$

$$p_k^{ud} = R_{0,k}\, p_k \tag{12}$$
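As a concrete illustration of Equations (10)–(12), the sketch below integrates the gyroscope rates up to each point's timestamp, forms the axis rotations, and rotates the point back into the reference frame $c_0$. The sampling-interval handling, the intrinsic x-y-z rotation order, and all variable names are our own assumptions for illustration, not the paper's released code.

```python
import numpy as np

def axis_rotations(angles):
    """Elementary rotations about x, y, z for the integrated angles (rad)."""
    ax, ay, az = angles
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return Rx, Ry, Rz

def undistort_scan(points, point_times, gyro, gyro_times):
    """Rotate every point (N,3 float) of one scan back into the frame c0 at the scan start.
    gyro is an (M,3) array of angular velocities sampled at gyro_times."""
    undistorted = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, point_times)):
        mask = gyro_times <= t
        dt = np.diff(gyro_times[mask], prepend=gyro_times[0])
        angles = (gyro[mask] * dt[:, None]).sum(axis=0)   # Equation (10): integrated angles
        Rx, Ry, Rz = axis_rotations(angles)
        R0k = Rx @ Ry @ Rz                                 # Equation (11)
        undistorted[i] = R0k @ p                           # Equation (12): p_k^ud = R_{0,k} p_k
    return undistorted
```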

3. Results

3.1. Experimental Data

In order to evaluate the proposed method, point cloud data were collected with high-precision equipment at five distinct locations, yielding the Rotation, Park, Campus, Walk, and Cnu datasets. The Rotation, Park, Campus, and Walk data are publicly available datasets collected with a Velodyne VLP-16 lidar and a Microstrain 3DM-GX5-25 IMU at the Massachusetts Institute of Technology (MIT) [15]. The Cnu dataset was obtained at Capital Normal University using a Velodyne VLP-16 lidar and an Inertial Labs INS-D IMU. The specific details of each dataset are outlined in Table 1. To obtain the position and attitude of each lidar frame, we used a Simultaneous Localization and Mapping (SLAM) algorithm; the resulting stitched point clouds for each location are shown in Figure 4. To compare the performance of the proposed method, we calculated the error by converting the estimated rotation matrix to Euler angles and comparing them to the ground-truth values. The ground-truth calibration values of the Rotation, Park, Campus, and Walk datasets are (0, 0, 0), as established in previous research [15]. For the Cnu dataset, the ground-truth calibration values were established offline in the laboratory using control points and measured as (−179.90°, −0.35°, −88.54°).

3.2. Comparative Analysis of Calibration Accuracy of Different Processing Methods

In this study, the accuracy of the calibration is evaluated by decomposing the rotation between the calibrated and ground-truth values into three Euler angles (roll, pitch, and heading) and calculating the distance error (in degrees) between the two sets of angles; the smaller the distance error, the higher the calibration accuracy. In Equation (13), $\Delta r$, $\Delta p$, and $\Delta y$ represent the differences in roll, pitch, and heading angles, respectively. To assess the effectiveness of different calibration methods, we compared several approaches. Three of them (Methods I–III) follow the steps outlined in Figure 2 with only slight variations. Method I uses the original point cloud without motion distortion correction, IMU prediction, or iterative processing, thus omitting the frame-to-local-map matching. Methods II and III are the techniques introduced in this study, with Method III differing from Method II only in the inclusion of IMU prediction. Finally, we compared our approach with LI-Calib proposed by ZJU [24] and LI-Init proposed by HKU [25]. By comparing the results of these calibration methods, we aim to determine the most effective approach for lidar–IMU calibration.
$$\mathrm{Rotation\_error} = \sqrt{\Delta r^{2} + \Delta p^{2} + \Delta y^{2}} \tag{13}$$
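A short sketch of this error metric, assuming SciPy's Rotation utilities and an 'xyz' (roll, pitch, yaw) Euler convention, which are our choices for illustration rather than specified by the paper:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_error_deg(R_calibrated, R_ground_truth):
    """Equation (13): Euclidean norm of the Euler-angle differences, in degrees."""
    R_delta = R_ground_truth.T @ R_calibrated            # relative rotation between the two
    d_roll, d_pitch, d_yaw = Rotation.from_matrix(R_delta).as_euler('xyz', degrees=True)
    return float(np.sqrt(d_roll**2 + d_pitch**2 + d_yaw**2))
```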
The first three calibration methods utilize the OMP-NDT algorithm for point cloud frame matching, as illustrated in Figure 5. Method II proposed in this paper showed an overall improvement in accuracy after incorporating motion distortion correction and iterative processing. However, the introduction of IMU prediction did not consistently improve the accuracy and, in some cases, led to a decrease in accuracy. This variability in results may be attributed to the accuracy of the IMU measurements. The current study did not account for the drift phenomenon of IMU hardware over time during IMU integration, which may impact the accuracy of the predictions and overall matching results.
Although LI-Calib provides open-source software with a GUI, the point cloud format it uses differs from that of our experimental data, and only the Park dataset could be processed. To compare the accuracy of LI-Calib with our proposed method, we conducted three tests in the same environment and evaluated the stability of the algorithms. The results in Table 2 show that our proposed Method II has an average error of 11.01°, which is stable and consistent across all three tests, whereas the average error of LI-Calib is 36.63° with larger fluctuations. Intermediate results of the LI-Calib open-source software are shown in Figure 6.
To evaluate the performance of the proposed Method II, we conducted experiments on the Rotation data and compared it with the LI-Init method. The results show that Method II slightly outperforms LI-Init in calibration accuracy while also running faster. These findings are consistent with our expectations and support the potential of Method II to address the calibration problem accurately and efficiently. The detailed experimental results are given in Table 3.

3.3. Calibration Accuracy of Different Point Cloud Matching Algorithms

In this study, the impact of various point cloud matching algorithms on the calibration accuracy is investigated, as point cloud matching quality is a crucial factor in calibration. Four representative matching algorithms, including ICP, NDT, GICP, and OMP-NDT, are selected to evaluate their impact on calibration accuracy. As presented in Table 4, calibration results vary among the different matching algorithms. The effects of the matching algorithms on the calibration accuracy can be clearly seen in Figure 7, which demonstrates that the accuracy of calibration results varies for different data. Overall, the calibration accuracy is substantially improved by using GICP or NDT, while ICP exhibits inferior performance in comparison.

3.4. Analysis of the Influence of the Scenario on the Calibration Accuracy

In order to assess the performance of the calibration methods and matching algorithms, we analyzed the angular change between adjacent frames of the first 100 frames of point clouds for each scene. This was carried out by calculating the angular distance based on the matching results, and the results are visualized in Figure 8a. Furthermore, we evaluated the calibration errors of the five datasets, which were acquired using different calibration methods and matching algorithms. The results are shown in Table 5.
In this study, we evaluated the calibration errors of different scenes using a handheld lidar–IMU combination system for fixed position acquisition, as well as a combined system using a backpack. Our results indicate that the average calibration error for Rotation data was only 3.58°, which is a relatively small overall error. However, the errors for Park and Walk data were higher, while Campus data had smaller errors, suggesting that calibration errors are influenced by the scene type. The average error for the combined lidar–IMU system using the backpack (Cnu data) was 9.09°, which was higher than that of the handheld system. Figure 8b shows a scatter plot of the mean angular variation of the different data versus the mean calibration error. Our findings suggest that maintaining a steady point cloud matching throughout calibration and having a large rotational change between neighboring point cloud frames can result in higher calibration accuracy. To achieve this, we recommend collecting calibration data while the equipment is uniformly rotating in the same location to minimize the impact of equipment movement on matching. Additionally, including wall structures in the collected scene can improve the effect of extracting stable point cloud features, which can lead to a regular point cloud distribution and improve point cloud matching.

4. Discussion

The findings reveal that the calibration accuracy is significantly affected by the data acquisition method and point cloud matching algorithm employed. The calibration results of the GICP and NDT algorithms are superior to those of the conventional ICP algorithm, indicating that advanced matching algorithms can enhance the calibration performance of the lidar–IMU system. Furthermore, the calibration outcomes of different scenarios show that the calibration results of Rotation data are the most stable and have higher accuracy, while the errors of the Park and Walk scenarios are comparatively larger. These results emphasize the importance of carefully selecting the data acquisition method and point cloud matching algorithm for achieving reliable calibration results.
In addition, this study recommends using a handheld lidar, keeping the data acquisition at a single fixed location, and keeping the rotation speed constant and moderate to guarantee stable calibration results. Collecting scenes that contain regular wall structures helps achieve both successful matching and accurate calibration.

5. Conclusions and Future Work

This study introduces an uncontrolled two-step iterative calibration algorithm to address the current calibration issues with the lidar–IMU system. The proposed algorithm directly utilizes the data acquired by the lidar–IMU system to obtain the initial rotation matrix through initial matching and then iteratively increases the matching accuracy by exploiting motion distortion correction and IMU pre-integration. The experimental comparison demonstrates that the method in this study can produce significantly better calibration results for vehicle-mounted, handheld, backpack, and other lidar acquisition modalities when compared to the conventional method. In comparison with existing algorithms, the proposed algorithm offers high accuracy, robustness, and efficiency.
In our future work, we aim to further improve the calibration algorithm by increasing the number of experimental scenes, examining the impact of frame-to-submap matching on calibration accuracy, and accounting for IMU data inaccuracy. We also aim to enhance the open-source code and employ GPU to accelerate point cloud matching, thereby enabling the calibration program to operate more efficiently and conveniently. We also plan to investigate the challenge of achieving non-hardware time synchronization between the lidar and IMU sensors in our system. As each sensor has its own timestamp, this can be a complex task. We aim to leverage the results obtained through pose synchronization to develop an algorithm for time synchronization. By integrating the pose and time synchronization algorithms, we expect to improve the overall accuracy and robustness of our system.

Author Contributions

Conceptualization, S.Y. and D.X.; methodology, S.Y.; software, S.Y.; validation, D.X., Y.F. and Z.W.; formal analysis, S.Y.; investigation, S.Y. and D.X.; resources, Z.W.; data curation, Y.F.; writing—original draft preparation, S.Y.; writing—review and editing, D.X.; visualization, S.Y.; supervision, R.Z.; project administration, R.Z.; funding acquisition, D.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2022YFB3904101.

Institutional Review Board Statement

The study did not require ethical approval.

Informed Consent Statement

The study did not involve humans.

Data Availability Statement

The data presented in this study are available in [15]. To facilitate the research community, the project is openly available at https://gitee.com/yinshilun/two-step-calib.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhou, Z.; Cao, J.; Di, S. Overview of 3D Lidar SLAM algorithms. Chin. J. Sci. Instrum. 2021, 42, 13–27. [Google Scholar]
  2. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. [Google Scholar] [CrossRef]
  3. Bailey, T.; Durrant-Whyte, H. Simultaneous localization and mapping (SLAM): Part II. IEEE Robot. Autom. Mag. 2006, 13, 108–117. [Google Scholar] [CrossRef]
  4. Yousif, K.; Bab-Hadiashar, A.; Hoseinnezhad, R. An Overview to Visual Odometry and Visual SLAM: Applications to Mobile Robotics. Intell. Ind. Syst. 2015, 1, 289–311. [Google Scholar] [CrossRef]
  5. Strasdat, H.; Montiel, J.M.M.; Davison, A.J. Visual SLAM: Why filter? Image Vis. Comput. 2012, 30, 65–77. [Google Scholar]
  6. Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar]
  7. Kelly, J.; Sukhatme, G.S. Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration. Int. J. Robot. Res. 2011, 30, 56–79. [Google Scholar]
  8. Mur-Artal, R.; Tardós, J.D. Visual-inertial monocular SLAM with map reuse. IEEE Robot. Autom. Lett. 2017, 2, 796–803. [Google Scholar]
  9. Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar]
  10. He, Y.; Zhao, J.; Guo, Y.; He, W.; Yuan, K. Pl-vio: Tightly-coupled monocular visual–inertial odometry using point and line features. Sensors 2018, 18, 1159. [Google Scholar] [CrossRef] [PubMed]
  11. Qin, T.; Pan, J.; Cao, S.; Shen, S. A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors. arXiv 2019, arXiv:1901.03638. [Google Scholar]
  12. Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. Robot. Sci. Syst. 2014, 2, 109–111. [Google Scholar]
  13. Available online: https://github.com/HKUST-Aerial-Robotics/A-LOAM (accessed on 6 June 2022).
  14. Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
  15. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5135–5142. [Google Scholar]
  16. Wang, H.; Wang, C.; Chen, C.-L.; Xie, L. F-LOAM: Fast LiDAR Odometry and Mapping. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 4390–4396. [Google Scholar]
  17. Liu, W.; Xia, X.; Xiong, L.; Lu, Y.; Gao, L.; Yu, Z. Automated Vehicle Sideslip Angle Estimation Considering Signal Measurement Characteristic. IEEE Sens. J. 2021, 21, 21675–21687. [Google Scholar] [CrossRef]
  18. Yan, G.; Liu, Z.; Wang, C.; Shi, C.; Wei, P.; Cai, X.; Ma, T.; Liu, Z.; Zhong, Z.; Liu, Y.; et al. Opencalib: A multi-sensor calibration toolbox for autonomous driving. Softw. Impacts 2022, 14, 100393. [Google Scholar] [CrossRef]
  19. Xiong, L.; Xia, X.; Lu, Y.; Liu, W.; Gao, L.; Song, S.; Yu, Z. IMU-Based Automated Vehicle Body Sideslip Angle and Attitude Estimation Aided by GNSS Using Parallel Adaptive Kalman Filters. IEEE Trans. Veh. Technol. 2020, 69, 10668–10680. [Google Scholar] [CrossRef]
  20. Feng, Z.; Li, J. Monocular Visual-Inertial State Estimation with Online Temporal Calibration. In Proceedings of the Ubiquitous Positioning, Indoor Navigation and Location-Based Services, Wuhan, China, 22–23 March 2018; pp. 1–8. [Google Scholar]
  21. Qin, T.; Shen, S. Online Temporal Calibration for Monocular Visual-Inertial Systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 3662–3669. [Google Scholar]
  22. Feng, Z.; Li, J.; Zhang, L.; Chen, C. Online Spatial and Temporal Calibration for Monocular Direct Visual-Inertial Odometry. Sensors 2019, 19, 2273. [Google Scholar] [CrossRef] [PubMed]
  23. Rehder, J.; Nikolic, J.; Schneider, T.; Hinzmann, T.; Siegwart, R. Extending kalibr: Calibrating the extrinsics of multiple IMUs and of individual axes. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4304–4311. [Google Scholar]
  24. Lv, J.; Xu, J.; Hu, K.; Liu, Y.; Zuo, X. Targetless Calibration of LiDAR-IMU System Based on Continuous-time Batch Estimation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 9968–9975. [Google Scholar]
  25. Zhu, F.; Ren, Y.; Zhang, F. Robust Real-time LiDAR-inertial Initialization. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022. [Google Scholar]
  26. Solà, J. Quaternion kinematics for the error-state Kalman filter. arXiv 2017, arXiv:1711.02508. [Google Scholar]
  27. Magnusson, M. The Three-Dimensional Normal-Distributions Transform: An Efficient Representation for Registration, Surface Analysis, and Loop Detection. Ph.D. Thesis, Örebro Universitet, Örebro, Sweden, 2009. [Google Scholar]
  28. Segal, A.; Haehnel, D.; Thrun, S. Generalized-ICP. Robot. Sci. Syst. 2009, 2, 435. [Google Scholar]
  29. Biber, P.; Strasser, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), Las Vegas, NV, USA, 27–31 October 2003; Volume 3, pp. 2743–2748. [Google Scholar]
  30. Available online: https://github.com/koide3/ndt_omp (accessed on 6 June 2022).
Figure 1. Calibration schematic for lidar–IMU system.
Figure 2. Flowchart of the two-step calibration algorithm.
Figure 3. Principle of motion distortion correction.
Figure 4. The results obtained by applying SLAM algorithm to different data: (a) Park data; (b) Walk data; (c) Rotation data; (d) Campus data; (e) Cnu data.
Figure 5. Calibration error of three methods.
Figure 6. (a,b) Operation results using ZJU’s algorithm.
Figure 7. Calibration results of different matching algorithms for different data: (a) Park data; (b) Walk data; (c) Rotation data; (d) Campus data; (e) Cnu data.
Figure 8. (a) Angle change between adjacent frames of different data; (b) the relationship between angular variation of different data and calibration error.
Table 1. Experimental data information.

Name        Duration (Seconds)    Acquisition Platform    Location
Rotation    58.60                 Handheld                MIT
Park        560.00                Vehicle-mounted         MIT
Campus      994.00                Handheld                MIT
Walk        655.00                Handheld                MIT
Cnu         1199.20               Backpack                Capital Normal University
Table 2. Comparison of the accuracy of Method II and LI-Calib.

Name             Method II (°)    LI-Calib (°)
Park             11.01            18.74
                 11.01            53.09
                 11.01            38.05
Average error    11.01            36.63
Table 3. Comparison of the accuracy of Method II and LI-Init.

Method       Rotation (°)    Time (s)
Method II    3.41            40.2
LI-Init      4.08            47.3
Table 4. Calibration results for different data and registration methods.

Matching Algorithm    Name        Method I (°)    Method II (°)    Method III (°)    Average Error (°)
NDT                   Rotation    4.16            3.29             3.06              3.50
                      Park        11.91           9.64             12.77             11.44
                      Campus      2.31            1.80             1.89              2.00
                      Walk        4.97            4.19             5.65              4.94
                      Cnu         16.13           3.00             9.60              9.58
OMP-NDT               Rotation    4.37            3.41             3.68              3.82
                      Park        11.70           11.01            12.03             11.58
                      Campus      1.49            1.36             0.84              1.23
                      Walk        3.09            2.87             3.17              3.04
                      Cnu         15.27           2.76             10.59             9.54
ICP                   Rotation    4.98            3.55             3.40              3.98
                      Park        17.24           14.23            14.29             15.25
                      Campus      2.08            1.48             1.53              1.70
                      Walk        12.27           8.77             6.08              9.04
                      Cnu         17.79           6.87             4.17              9.61
GICP                  Rotation    4.26            2.39             2.45              3.03
                      Park        11.16           8.11             8.16              9.14
                      Campus      1.91            1.31             1.35              1.52
                      Walk        8.11            6.65             4.44              6.40
                      Cnu         12.45           6.17             4.26              7.63
Table 5. Average angular distance of different data.

Name        Average Angle Change (°)    Average Calibration Error (°)
Park        2.23                        11.85
Walk        2.30                        5.86
Rotation    3.79                        3.58
Campus      3.06                        1.61
Cnu         0.99                        9.09
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
