A LiDAR-Camera Joint Calibration Algorithm Based on Deep Learning
Abstract
1. Introduction
- (1) A mathematical model was constructed for joint LiDAR–camera calibration. The working principles and data characteristics of the camera and LiDAR were analyzed in detail, LiDAR point clouds were preprocessed into standardized two-dimensional depth images, and the joint sensor calibration process was formulated (a minimal projection sketch follows this list).
- (2) A deep-learning-based LiDAR–camera parameter-solving network model was constructed. The model consists of a feature extraction layer, a feature matching layer, and a feature aggregation layer, and it solves the rotation–translation parameter matrix to complete the joint calibration of the spatial positions of the two sensors.
- (3) A data migration–fusion mechanism was introduced to improve robustness to relative position offsets between the sensors and to improve the prediction accuracy of the network.
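For concreteness, the minimal sketch below illustrates the preprocessing step named in contribution (1): a LiDAR point cloud is mapped into the camera frame by an assumed extrinsic pair (R, t) and projected through an assumed pinhole intrinsic matrix K into a standardized two-dimensional depth image. All names and shapes are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def lidar_to_depth_image(points, K, R, t, height, width):
    """Project an (N, 3) LiDAR point cloud into a (height, width) depth image.

    K is an assumed 3x3 pinhole intrinsic matrix; R (3x3) and t (3,) are the
    extrinsics mapping LiDAR coordinates into the camera frame.
    """
    cam = points @ R.T + t                # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    pix = cam @ K.T                       # pinhole projection
    pix = pix[:, :2] / pix[:, 2:3]        # perspective divide
    u = pix[:, 0].round().astype(int)
    v = pix[:, 1].round().astype(int)
    z = cam[:, 2]
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[ok], v[ok], z[ok]
    depth = np.zeros((height, width), dtype=np.float32)
    order = np.argsort(-z)                # write far points first, near last
    depth[v[order], u[order]] = z[order]  # nearest return wins per pixel
    return depth
```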
2. Camera and LiDAR Principle and Related Work
2.1. Camera Working Principle
2.2. Camera Imaging Geometry
2.3. Working Principle and Data Characteristics of LiDAR
3. Method
3.1. Joint Calibration Mathematical Model
3.2. Data Migration–Fusion Mechanism
- (1) Richer dataset: randomly selecting multiple images, randomly scaling them, and then splicing them in a random layout substantially enriches the detection dataset. In particular, random scaling adds many small targets, making the network more robust.
- (2) Reduced GPU memory: the data of multiple pictures are computed in a single pass, so the minibatch size need not be large to achieve good results.
Algorithm 1: Data migration–fusion algorithm

```
Input:  pictures       — collection of original images
        input_shape    — target size of the output image
Output: image_new_list — collection of data-enhanced images
1: image_list = get_image_info(pictures)
2: image_datas, box_data = get_random(image_list, input_shape)
3: new_images = Merge_image(image_datas)
4: image_new_list = Merge_boxes(new_images, box_data)
5: return image_new_list
```
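Only the step names of Algorithm 1 survive in the listing above, so the following Python sketch is an interpretation rather than the authors' implementation: it performs the random-scale-and-splice merge described in point (1), assumes at least four source images, and omits the bounding-box bookkeeping of Merge_boxes for brevity.

```python
import random
from PIL import Image

def data_migration_fusion(pictures, input_shape=(640, 640)):
    """Sketch of the splice step: randomly pick four source images,
    randomly scale them, and merge them into one training image."""
    w, h = input_shape
    canvas = Image.new("RGB", (w, h))
    # Jittered split point of the 2x2 mosaic, so patch sizes (and thus
    # object scales) vary between samples -- this is what adds small targets.
    cx = int(w * random.uniform(0.3, 0.7))
    cy = int(h * random.uniform(0.3, 0.7))
    cells = [(0, 0, cx, cy), (cx, 0, w - cx, cy),
             (0, cy, cx, h - cy), (cx, cy, w - cx, h - cy)]
    for img, (x, y, cw, ch) in zip(random.sample(pictures, 4), cells):
        canvas.paste(img.resize((cw, ch)), (x, y))
    return canvas
```

Because several source images contribute to every merged sample, each minibatch effectively sees more scenes per forward pass, which is the GPU-memory argument in point (2).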
4. Deep Learning Parameter Solving Model
4.1. LiDAR-Camera Joint Calibration Network Model
4.1.1. Feature Extraction Module
4.1.2. Feature Matching Module
4.1.3. Feature Regression Module
4.1.4. Regression Loss Function
4.1.5. Iterative Refinement
4.1.6. Pseudocode Implementation of Algorithm
Algorithm 2: Algorithm for solving the joint calibration model of LiDAR and camera.
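The body of Algorithm 2 is not reproduced here. As a hedged illustration of the pipeline described in Sections 4.1.1–4.1.3, the PyTorch sketch below wires one feature extraction branch per modality, a feature matching stage, and a regression head that outputs a unit quaternion and a translation vector; all layer counts and channel sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointCalibSketch(nn.Module):
    """Hedged sketch of the Section 4.1 pipeline (not the authors' model)."""

    def __init__(self):
        super().__init__()
        # 4.1.1 feature extraction: separate branches for the RGB image
        # and the projected LiDAR depth image
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.depth_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # 4.1.2 feature matching: fuse the two feature maps
        self.match = nn.Sequential(
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        # 4.1.3 feature regression: rotation (quaternion) + translation
        self.rot_head = nn.Linear(128, 4)
        self.trans_head = nn.Linear(128, 3)

    def forward(self, rgb, depth):
        feat = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        feat = self.match(feat).flatten(1)
        q = F.normalize(self.rot_head(feat), dim=1)  # unit quaternion
        t = self.trans_head(feat)
        return q, t
```

Iterative refinement (Section 4.1.5) would wrap this forward pass in a loop: the predicted transform re-projects the point cloud into a corrected depth image, which is fed back into the network until the residual error converges.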
5. Evaluations
5.1. Settings
5.1.1. Experimental Dataset
5.1.2. Experimental Environment
5.2. Effectiveness
5.2.1. Network Model Training Convergence
5.2.2. Different Decalibration Ranges
5.2.3. Average Error and Rotation Error Effect Verification
5.2.4. Sample Calibration Case Experiment
5.2.5. Verification of Data Migration–Fusion Mechanism
6. Conclusions
- (1) Although this study achieved calibration in various scenarios, noise and changes in lighting conditions can still cause feature loss. Future work should optimize the deep-learning network model to improve its generalization ability.
- (2) The proposed joint calibration algorithm was tested on an open-source dataset in a laboratory environment, but the program was not embedded in an AGV for on-site factory testing. Future studies should embed it in AGVs in the field and test the algorithm's stability, real-time performance, and detection accuracy.
- (3) Calibration between the camera and the LiDAR sensor is the basis for a higher level of fusion between the two sensors, but feature-level and decision-level fusion require additional algorithms. The next step is planned around multisensor fusion technology.
- (4) Existing calibration methods perform poorly in long-distance scenarios or when the LiDAR point cloud is sparse. Future research will focus on improving calibration accuracy in these situations, for example through more expressive feature extraction or multi-frame data fusion.
- (5) Given the differences in data characteristics between sensor types (e.g., point clouds and images), future research may explore cross-domain calibration methods that enable effective data fusion in highly heterogeneous environments.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140.
2. Song, W.; Zou, S.; Tian, Y.; Sun, S.; Qiu, L. A CPU-GPU hybrid system of environment perception and 3D terrain reconstruction for unmanned ground vehicle. J. Inf. Process. Syst. 2018, 14, 1445–1456.
3. Caltagirone, L.; Bellone, M.; Svensson, L.; Wahde, M. LIDAR-camera fusion for road detection using fully convolutional neural networks. Robot. Auton. Syst. 2019, 111, 125–131.
4. Lee, J.S.; Park, T.H. Fast road detection by CNN-based camera–LiDAR fusion and spherical coordinate transformation. IEEE Trans. Intell. Transp. Syst. 2020, 22, 5802–5810.
5. Nie, J.; Yan, J.; Yin, H.; Ren, L.; Meng, Q. A multimodality fusion deep neural network and safety test strategy for intelligent vehicles. IEEE Trans. Intell. Veh. 2021, 6, 310–322.
6. Tóth, T.; Pusztai, Z.; Hajder, L. Automatic LiDAR-camera calibration of extrinsic parameters using a spherical target. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 8580–8586.
7. Bai, Z.; Jiang, G.; Xu, A. LiDAR-camera calibration using line correspondences. Sensors 2020, 20, 6319.
8. Sengupta, A.; Ye, Y.; Wang, R.; Liu, C.; Roy, K. Going deeper in spiking neural networks: VGG and residual architectures. Front. Neurosci. 2019, 13, 425055.
9. Geiger, A.; Moosmann, F.; Car, O.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the IEEE International Conference on Robotics and Automation, St. Paul, MN, USA, 14–18 May 2012; pp. 3936–3943.
10. Guo, C.X.; Roumeliotis, S.I. An analytical least-squares solution to the line scan LIDAR-camera extrinsic calibration problem. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 2943–2948.
11. Verma, S.; Berrio, J.S.; Worrall, S.; Nebot, E. Automatic extrinsic calibration between a camera and a 3D LiDAR using 3D point and plane correspondences. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 3906–3912.
12. Wang, W.; Sakurada, K.; Kawaguchi, N. Reflectance intensity assisted automatic and accurate extrinsic calibration of 3D LiDAR and panoramic camera using a printed chessboard. Remote Sens. 2017, 9, 851.
13. Xie, S.; Yang, D.; Jiang, K.; Zhong, Y. Pixels and 3-D points alignment method for the fusion of camera and LiDAR data. IEEE Trans. Instrum. Meas. 2019, 68, 3661–3676.
14. Zhou, L.; Li, Z.; Kaess, M. Automatic extrinsic calibration of a camera and a 3D LiDAR using line and plane correspondences. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5562–5569.
15. Zhang, Q.; Pless, R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2301–2306.
16. Deng, Z.; Xiong, L.; Yin, D.; Shan, F. Joint Calibration of Dual Lidars and Camera Using a Circular Chessboard; SAE Technical Paper; SAE: Warrendale, PA, USA, 2020.
17. Liu, H.; Xu, Q.; Huang, Y.; Ding, Y.; Xiao, J. A method for synchronous automated extrinsic calibration of LiDAR and cameras based on a circular calibration board. IEEE Sens. J. 2023, 23, 25026–25035.
18. Debattisti, S.; Mazzei, L.; Panciroli, M. Automated extrinsic laser and camera inter-calibration using triangular targets. In Proceedings of the 2013 IEEE Intelligent Vehicles Symposium (IV), Gold Coast, QLD, Australia, 23–26 June 2013; pp. 696–701.
19. Pereira, M.; Silva, D.; Santos, V.; Dias, P. Self calibration of multiple LIDARs and cameras on autonomous vehicles. Robot. Auton. Syst. 2016, 83, 326–337.
20. Pusztai, Z.; Hajder, L. Accurate calibration of LiDAR-camera systems using ordinary boxes. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 394–402.
21. Xu, X.; Zhang, L.; Yang, J.; Liu, C.; Xiong, Y.; Luo, M.; Tan, Z.; Liu, B. LiDAR–camera calibration method based on ranging statistical characteristics and improved RANSAC algorithm. Robot. Auton. Syst. 2021, 141, 103776.
22. Jiang, P.; Osteen, P.; Saripalli, S. SemCal: Semantic LiDAR-camera calibration using neural mutual information estimator. In Proceedings of the 2021 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Karlsruhe, Germany, 23–25 September 2021; pp. 1–7.
23. Wendt, A. A concept for feature based data registration by simultaneous consideration of laser scanner data and photogrammetric images. ISPRS J. Photogramm. Remote Sens. 2007, 62, 122–134.
24. Schneider, N.; Piewak, F.; Stiller, C.; Franke, U. RegNet: Multimodal sensor registration using deep neural networks. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Redondo Beach, CA, USA, 11–14 June 2017; pp. 1803–1810.
25. Duy, A.N.; Yoo, M. Calibration-Net: LiDAR and camera auto-calibration using cost volume and convolutional neural network. In Proceedings of the 2022 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Jeju Island, Republic of Korea, 21–24 February 2022; pp. 141–144.
26. Lv, X.; Wang, B.; Dou, Z.; Ye, D.; Wang, S. LCCNet: LiDAR and camera self-calibration using cost volume network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 2894–2901.
27. Yuan, K.; Guo, Z.; Wang, Z.J. RGGNet: Tolerance aware LiDAR-camera online calibration with geometric deep learning and generative model. IEEE Robot. Autom. Lett. 2020, 5, 6956–6963.
28. Yu, Y.; Fan, S.; Li, L.; Wang, T.; Li, L. Automatic targetless monocular camera and LiDAR external parameter calibration method for mobile robots. Remote Sens. 2023, 15, 5560.
29. Huang, J.K.; Grizzle, J.W. Improvements to target-based 3D LiDAR to camera calibration. IEEE Access 2020, 8, 134101–134110.
30. Nakano, T.; Sakai, M.; Torikai, K.; Suzuki, Y.; Takeda, S.; Noda, S.E.; Yamaguchi, M.; Nagao, Y.; Kikuchi, M.; Odaka, H.; et al. Imaging of 99mTc-DMSA and 18F-FDG in humans using a Si/CdTe Compton camera. Phys. Med. Biol. 2020, 65, 05LT01.
31. Zhang, K.; Ren, W.; Luo, W.; Lai, W.S.; Stenger, B.; Yang, M.H.; Li, H. Deep image deblurring: A survey. Int. J. Comput. Vis. 2022, 130, 2103–2130.
32. Lin, M.; Chen, Q.; Yan, S. Network in network. arXiv 2013, arXiv:1312.4400.
33. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
| Hardware Platform |  | Software Platform |  |
|---|---|---|---|
| CPU | i7 @ 2.5 GHz | Operating system | Ubuntu |
| GPU | GTX 2070 Ti | Deep learning framework | PyTorch |
| RAM | 32 GB | Programming language | Python |
| Video memory | 32 GB |  |  |
Network Training Parameter Settings

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Batch size | 32 | Epochs | 100 |
| Optimizer | Adam | Weight decay | 0.0001 |
| Learning rate | 0.003 | Iterations | 120 |
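As a hedged illustration, these settings map directly onto a PyTorch optimizer; the model below is a placeholder standing in for the calibration network, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Placeholder model; the table's settings apply to the real calibration net.
model = nn.Linear(128, 7)
optimizer = torch.optim.Adam(model.parameters(), lr=0.003, weight_decay=0.0001)
```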
| Multi-Range | Indicators | Trans. Mean (cm) | X (cm) | Y (cm) | Z (cm) | Rot. Mean (°) | Roll (°) | Pitch (°) | Yaw (°) |
|---|---|---|---|---|---|---|---|---|---|
| After / m network | Mean |  |  |  |  |  |  |  |  |
|  | Median | 0.292 |  |  |  |  |  |  |  |
|  | Std. |  |  |  |  |  |  |  |  |
| After / m network | Mean | 1.724 | 0.427 | 0.431 | 0.371 | 0.213 | 0.120 | 0.054 | 0.076 |
|  | Median | 1.329 | 0.314 | 0.348 | 0.186 | 0.084 | 0.042 | 0.039 |  |
|  | Std. | 1.642 | 0.321 | 0.394 | 0.194 | 0.377 | 0.097 | 0.054 | 0.082 |
| After / m network | Mean | 2.378 | 0.986 | 1.829 | 0.896 | 0.374 | 0.236 | 0.211 | 0.214 |
|  | Median | 2.211 | 0.913 | 1.714 | 0.974 | 0.246 | 0.176 | 0.184 | 0.119 |
|  | Std. | 1.812 | 0.512 | 0.622 | 0.324 | 0.537 | 0.214 | 0.413 | 0.243 |
| After / m network | Mean | 3.987 | 1.378 | 2.231 | 1.238 | 0.469 | 0.293 | 0.314 | 0.324 |
|  | Median | 3.724 | 1.394 | 2.574 | 1.144 | 0.314 | 0.189 | 0.209 | 0.213 |
|  | Std. | 2.471 | 0.714 | 0.987 | 0.589 | 0.674 | 0.513 | 0.577 | 0.398 |
| After / m network | Mean | 5.782 | 2.410 | 3.047 | 3.228 | 0.631 | 0.534 | 0.582 | 0.603 |
|  | Median | 5.210 | 2.340 | 3.141 | 2.874 | 0.811 | 0.319 | 0.412 | 0.372 |
|  | Std. | 3.971 | 0.994 | 1.682 | 1.019 | 1.144 | 0.919 | 0.891 | 0.602 |
| Method | Error Range | Trans. Mean (cm) | X (cm) | Y (cm) | Z (cm) | Rot. Mean (°) | Roll (°) | Pitch (°) | Yaw (°) |
|---|---|---|---|---|---|---|---|---|---|
| RegNet [24] | [−1.5 m, 1.5 m]/[, ] | 6 | 7 | 7 | 4 | 0.28 | 0.24 | 0.25 | 0.36 |
| CalibNet [25] | [−1.5 m, 1.5 m]/[, ] | 4.2 | 4 | 1.5 | 7.2 | 0.4 | 0.17 | 0.9 | 0.14 |
| LCCNet [26] | [−1.5 m, 1.5 m]/[, ] | 0.49 | 0.32 | 0.35 | 0.8 | 0.26 | 0.3 | 0.42 | 0.08 |
| Ours | [−1.5 m, 1.5 m]/[, ] |  |  |  |  |  |  |  |  |
| Method | Error Range | Trans. Mean (cm) | X (cm) | Y (cm) | Z (cm) | Rot. Mean (°) | Roll (°) | Pitch (°) | Yaw (°) |
|---|---|---|---|---|---|---|---|---|---|
| Ours | [−1.5 m, 1.5 m]/[, ] |  |  |  |  |  |  |  |  |
| Not added | [−1.5 m, 1.5 m]/[, ] | 0.30 | 0.26 | 0.28 |  | 0.016 | 0.03 | 0.02 | 0.01 |