ZUST Campus: A Lightweight and Practical LiDAR SLAM Dataset for Autonomous Driving Scenarios
Abstract
1. Introduction
- High resolution vs. low resolution: To support tasks such as object detection and semantic segmentation, most datasets employ high-resolution LiDARs (e.g., the Velodyne HDL-64E) as their primary sensors. However, high-resolution LiDAR sensors are often prohibitively expensive, so low-cost, low-resolution LiDARs are far more common in autonomous robotics applications. From the viewpoint of an efficiency-sensitive SLAM algorithm, a high-resolution LiDAR also significantly increases the computational burden. Moreover, we observed that reducing the resolution of the point cloud has a relatively limited impact on the accuracy of LiDAR SLAM: most state-of-the-art LiDAR SLAM methods downsample the point cloud during preprocessing anyway (see the voxel-grid sketch after this list). Therefore, for scenarios that demand high efficiency, or for experimental platforms with limited onboard computing power, lower-resolution LiDARs are preferable; yet practical datasets recorded with low-resolution LiDAR sensors remain relatively scarce.
- Elevation error of the GPS ground truth pose: The ground truth of the vehicle pose and trajectory plays an essential role in evaluating a LiDAR SLAM system. In outdoor environments, the vast majority of autonomous driving datasets use RTK-based GPS measurements as the source of the ground truth. However, a frequently overlooked fact is that, owing to the measurement geometry of GPS and factors such as signal obstruction and satellite constellation, the altitude (elevation) error of GPS positioning is considerably larger than the horizontal error. Using the GPS altitude directly as the elevation of the ground truth can therefore be insufficient. Investigations into this issue are currently limited, and corresponding compensation strategies are required.
- Errors introduced by the calibration process: Because a LiDAR continuously samples range data while the platform moves, compensating the motion distortion of the point cloud is essential in any dataset. This is usually accomplished with the aid of an Inertial Measurement Unit (IMU), so for most LiDAR SLAM datasets, calibrating the extrinsic transformation between the LiDAR and the IMU is a prerequisite (a generic deskewing sketch follows this list). In some widely used datasets [7,9], traditional methods are employed for the joint calibration. However, these methods tend to introduce non-negligible errors because they do not account for signal synchronization between the different sensor types. Such errors can cause long-term drift and inaccurate map construction in SLAM systems, weakening the applicability of the dataset.
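To make the downsampling point concrete, below is a minimal sketch of the voxel-grid filter that typical LiDAR SLAM front ends apply during preprocessing. It assumes the PCL library (a natural fit, since the dataset ships PCD files) and an illustrative 20 cm leaf size; neither the library choice nor the parameter is taken from the paper.

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <cstdio>

int main(int argc, char** argv) {
  if (argc < 2) { std::fprintf(stderr, "usage: %s <cloud.pcd>\n", argv[0]); return 1; }
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ> filtered;
  if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) == -1) return 1;

  // Replace all points falling inside each 20 cm voxel with their centroid.
  pcl::VoxelGrid<pcl::PointXYZ> vg;
  vg.setInputCloud(cloud);
  vg.setLeafSize(0.2f, 0.2f, 0.2f);  // leaf size is a tuning knob, not a dataset value
  vg.filter(filtered);

  std::printf("%zu -> %zu points\n", cloud->size(), filtered.size());
  return 0;
}
```

Coarsening the leaf size emulates a lower-resolution sensor, which is one way to reproduce the observation above on high-resolution datasets.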
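For the motion-distortion point, the following is a generic sketch of the constant-velocity deskewing that an IMU enables, written with Eigen. It illustrates the standard idea only and is not the compensation pipeline used for this dataset; the struct and function names are ours.

```cpp
#include <Eigen/Geometry>
#include <vector>

// A LiDAR return with its relative capture time within the sweep
// (0 = first point fired, 1 = last). Drivers expose this per-point time field.
struct TimedPoint {
  Eigen::Vector3d p;
  double rel_t;
};

// Deskew one sweep given the sensor's pose change over the sweep (dq, dp),
// e.g., obtained by integrating IMU measurements. Each point is re-expressed
// in the sensor frame at the *end* of the sweep.
std::vector<Eigen::Vector3d> deskew(const std::vector<TimedPoint>& pts,
                                    const Eigen::Quaterniond& dq,
                                    const Eigen::Vector3d& dp) {
  std::vector<Eigen::Vector3d> out;
  out.reserve(pts.size());
  const Eigen::Quaterniond id = Eigen::Quaterniond::Identity();
  for (const auto& tp : pts) {
    // Interpolate the pose at this point's capture time (constant velocity).
    const Eigen::Quaterniond q_t = id.slerp(tp.rel_t, dq);
    const Eigen::Vector3d p_start = q_t * tp.p + tp.rel_t * dp;  // sweep-start frame
    out.push_back(dq.conjugate() * (p_start - dp));              // sweep-end frame
  }
  return out;
}
```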
- A lightweight LiDAR SLAM dataset designed for autonomous driving scenarios is presented, using a low-resolution LiDAR as its primary sensor. The dataset enriches the existing family of LiDAR SLAM datasets by offering additional options, particularly for researchers working with low-computing-power platforms or studying the performance of SLAM methods on low-resolution point clouds.
- A series of methods is proposed to mitigate the elevation errors in the GPS-based ground truth data. Compared with other datasets, ours can therefore provide a more reliable and accurate benchmark, particularly in terms of elevation.
- The module for the joint calibration of the LiDAR and the IMU is enhanced. The improvement yields a more effective correction of LiDAR motion distortion, significantly reducing the errors that the dataset itself introduces into the study of SLAM algorithms.
- The dataset’s utility and usability are fully verified through a series of experiments using three state-of-the-art LiDAR SLAM methods. These experimental results also provide valuable insight into the performance of each method under diverse scenarios when utilizing low-resolution point clouds.
2. Related Works
3. Components and Details of the Dataset
3.1. Sensors
- Velodyne VLP-16 3D LiDAR: 10 Hz, 16 beams, 0.1° angular resolution, 2 cm distance accuracy, ∼57,000 points/s, 100 m effective range, 360° horizontal Field of View (FoV), 30° vertical FoV.
- Xsens MTi-630 IMU: 1 kHz, 9-axis, 0.2° roll and pitch accuracy, 1.5° yaw accuracy, 8°/h gyro bias stability.
- FDI DETA100 RTK-GPS system: 200 Hz, 0.05° roll and pitch accuracy, 1° yaw accuracy, 0.03 m/s speed accuracy.
- GoPro HERO 11 camera: 27.13 megapixels, 5.3 K @ 60 fps / 4 K @ 120 fps / 2.7 K @ 240 fps supported, 16:9 wide-angle screen.
3.2. System Setup
- Vehicle platform: A WULING Mini EV car is used as the mobile vehicle to carry various types of sensors and onboard computing devices. A customized mounting bracket is installed on the roof to connect and fix various sensors. The copilot position is fitted with a mounting bracket for the onboard computer, which is used for the main computational tasks of the SLAM module.
- Ground truth system: The ground truth system collects the precise position and pose of the vehicle platform during experiments; these data are used to quantitatively analyze and evaluate the pose estimates output by the SLAM system. In terms of hardware, an RTK-supported INS-GNSS module with dual antennas is used, and a microcomputer installed on the roof runs the software of the RTK positioning module.
- SLAM module: The SLAM module runs the SLAM method online so that the validity of the dataset can be verified as the data are collected. Its hardware consists of the LiDAR, an IMU, and a high-performance onboard PC. The IMU is fixed at the bottom of the LiDAR bracket, and the PC is placed on the copilot seat. The method proposed in [24] is used to jointly calibrate the LiDAR and the IMU. The software is implemented in C++ under the Robot Operating System (ROS) on Ubuntu 20.04 (a minimal subscriber sketch for the recorded streams follows this list).
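Since the module targets ROS on Ubuntu 20.04 (i.e., ROS Noetic), a minimal roscpp subscriber along the following lines suffices to start consuming the recorded streams. The topic names /velodyne_points and /imu/data are common driver defaults, not names confirmed by the paper; run `rosbag info` on the BAG files to see the actual topics.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Imu.h>
#include <sensor_msgs/PointCloud2.h>

// Print basic statistics for every incoming LiDAR sweep.
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg) {
  ROS_INFO("cloud: %u points at t=%.3f", msg->width * msg->height,
           msg->header.stamp.toSec());
}

// In a real pipeline the IMU messages would be buffered here for deskewing.
void imuCallback(const sensor_msgs::ImuConstPtr& msg) {
  ROS_INFO_THROTTLE(1.0, "imu: wz=%.3f rad/s", msg->angular_velocity.z);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "zust_campus_reader");
  ros::NodeHandle nh;
  // Topic names are assumptions; check them with `rosbag info <file>.bag`.
  ros::Subscriber cloud_sub = nh.subscribe("/velodyne_points", 10, cloudCallback);
  ros::Subscriber imu_sub = nh.subscribe("/imu/data", 400, imuCallback);
  ros::spin();
  return 0;
}
```

Pair it with `rosbag play <sequence>.bag` to replay a recording in real time.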
3.3. Solution to the Errors Contained in the Height Value of the Ground Truth
3.4. Sensor Calibration and Motion Distortion Compensation
3.5. Overview of Experimental Scenarios
3.6. Description of the Data Files
- Velodyne: The point cloud data obtained from the Velodyne VLP-16 are stored in two formats, BAG and PCD. For replaying the point cloud data under ROS, use the BAG files; for visualizing point clouds with the provided MATLAB code, the PCD files are recommended.
- Ground Truth: Following the storage format used in [25], the ground truth data are stored in TXT files in the TUM format (see the parser sketch after this list). A sample trajectory file is provided so that readers can inspect the length of the ground truth trajectory and its location on the satellite map.
- IMU: The IMU measurements are stored in the BAG files; the joint calibration results are stored in a TXT file.
- EVO Result: For the evaluation of the three LiDAR SLAM algorithms, JPG files show the output maps and estimated trajectories, while the ATE/RPE (Absolute Trajectory Error/Relative Pose Error) result files hold the quantitative evaluation of each method.
- Video: The process of collecting data is recorded in an MP4 format video file.
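For convenience when working with the TUM-format ground truth described above, the sketch below parses one pose per line (`timestamp tx ty tz qx qy qz qw`, space-separated). The struct and function names are ours, not part of the dataset's tooling.

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// One line of a TUM trajectory file: timestamp, translation, unit quaternion.
struct TumPose {
  double t, tx, ty, tz, qx, qy, qz, qw;
};

std::vector<TumPose> loadTumTrajectory(const std::string& path) {
  std::vector<TumPose> poses;
  std::ifstream in(path);
  std::string line;
  while (std::getline(in, line)) {
    if (line.empty() || line[0] == '#') continue;  // skip comment headers
    std::istringstream ss(line);
    TumPose p;
    if (ss >> p.t >> p.tx >> p.ty >> p.tz >> p.qx >> p.qy >> p.qz >> p.qw)
      poses.push_back(p);
  }
  return poses;
}
```

The same files feed directly into the evo toolkit used for the reported results, e.g., `evo_ape tum ground_truth.txt estimate.txt -a` computes the aligned APE.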
4. Experiments
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163.
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
- Cvisic, I.; Markovic, I.; Petrovic, I. SOFT2: Stereo Visual Odometry for Road Vehicles Based on a Point-to-Epipolar-Line Metric. IEEE Trans. Robot. 2023, 39, 273–288.
- Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-Time. In Proceedings of the Robotics: Science and Systems Conference (RSS), Berkeley, CA, USA, 12–14 July 2014.
- Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765.
- Ruan, J.; Li, B.; Wang, Y.; Fang, Z. GP-SLAM+: Real-time 3D lidar SLAM based on improved regionalized Gaussian process map reconstruction. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5171–5178.
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
- Maddern, W.; Pascoe, G.; Linegar, C.; Newman, P. 1 Year, 1000 km: The Oxford RobotCar Dataset. Int. J. Robot. Res. 2017, 36, 3–15.
- Geyer, J.; Kassahun, Y.; Mahmudi, M.; Ricou, X.; Durgesh, R.; Chung, A.S.; Hauswald, L.; Pham, V.H.; Mühlegg, M.; Dorn, S.; et al. A2D2: Audi Autonomous Driving Dataset. arXiv 2020, arXiv:2004.06320.
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. arXiv 2019, arXiv:1903.11027.
- Roynard, X.; Deschaud, J.E.; Goulette, F. Paris-Lille-3D: A large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification. Int. J. Robot. Res. 2018, 37, 545–557.
- Alibeigi, M.; Ljungbergh, W.; Tonderski, A.; Hess, G.; Lilja, A.; Lindström, C.; Motorniuk, D.; Fu, J.; Widahl, J.; Petersson, C. Zenseact Open Dataset: A large-scale and diverse multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 20178–20188.
- Pandey, G.; McBride, J.R.; Eustice, R.M. Ford Campus vision and lidar data set. Int. J. Robot. Res. 2011, 30, 1543–1552.
- Jeong, J.; Cho, Y.; Shin, Y.S.; Roh, H.; Kim, A. Complex Urban LiDAR Data Set. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018.
- Blanco-Claraco, J.L.; Moreno-Dueñas, F.Á.; González-Jiménez, J. The Málaga urban dataset: High-rate stereo and LiDAR in a realistic urban scenario. Int. J. Robot. Res. 2014, 33, 207–214.
- Knights, J.; Vidanapathirana, K.; Ramezani, M.; Sridharan, S.; Fookes, C.; Moghadam, P. Wild-Places: A large-scale dataset for lidar place recognition in unstructured natural environments. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 11322–11328.
- Alqobali, R.; Alshmrani, M.; Alnasser, R.; Rashidi, A.; Alhmiedat, T.; Alia, O.M. A Survey on Robot Semantic Navigation Systems for Indoor Environments. Appl. Sci. 2024, 14, 89.
- Romero-González, C.; Villena, Á.; González-Medina, D.; Martínez-Gómez, J.; Rodríguez-Ruiz, L.; García-Varea, I. InLiDa: A 3D lidar dataset for people detection and tracking in indoor environments. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), Porto, Portugal, 27 February–1 March 2017; pp. 484–491.
- Behley, J.; Garbade, M.; Milioto, A.; Quenzel, J.; Behnke, S.; Stachniss, C.; Gall, J. SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9297–9307.
- Deschaud, J.E. KITTI-CARLA: A KITTI-like dataset generated by CARLA Simulator. arXiv 2021, arXiv:2109.00892.
- Kulkarni, A.; Chrosniak, J.; Ducote, E.; Sauerbeck, F.; Saba, A.; Chirimar, U.; Link, J.; Behl, M.; Cellina, M. RACECAR—The Dataset for High-Speed Autonomous Racing. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 11458–11463.
- Sun, X.; Jin, L.; He, Y.; Wang, H.; Huo, Z.; Shi, Y. SimoSet: A 3D Object Detection Dataset Collected from Vehicle Hybrid Solid-State LiDAR. Electronics 2023, 12, 2424.
- Pham, Q.H.; Sevestre, P.; Pahwa, R.S.; Zhan, H.J.; Pang, C.H.; Chen, Y.D.; Mustafa, A.; Chandrasekhar, V.; Lin, J. A*3D Dataset: Towards Autonomous Driving in Challenging Environments. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–15 June 2020; pp. 2267–2273.
- Le Gentil, C.; Vidal-Calleja, T.; Huang, S. 3D LiDAR-IMU calibration based on upsampled preintegrated measurements for motion distortion correction. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 2149–2155.
- Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Algarve, Portugal, 7–12 October 2012; pp. 573–580.
- Ruan, J.; Li, B.; Wang, Y.; Sun, Y. SLAMesh: Real-time LiDAR Simultaneous Localization and Meshing. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 3546–3552.
| Dataset | Year | 3D LiDAR | 2D LiDAR | Resolution |
|---|---|---|---|---|
| Ford Campus [13] | 2011 | 1 × Velodyne HDL-64E | – | high |
| KITTI [7] | 2013 | 1 × Velodyne HDL-64E | – | high |
| Málaga Urban [15] | 2014 | – | 3 × Hokuyo UTM-30LX, 2 × SICK LMS-200 | high |
| Oxford [8] | 2017 | 1 × SICK LD-MRS | 2 × SICK LMS-151 | high |
| InLiDa [18] | 2017 | 1 × Velodyne VLP-16 | – | low |
| Paris-Lille-3D [11] | 2018 | 1 × Velodyne HDL-32E | – | middle |
| Complex Urban [14] | 2018 | 2 × Velodyne VLP-16 | 2 × SICK LMS-511 | middle |
| nuScenes [10] | 2019 | 1 × Velodyne HDL-32E | – | middle |
| SemanticKITTI [19] | 2019 | 1 × Velodyne HDL-64E | – | high |
| A2D2 [9] | 2020 | 5 × Velodyne VLP-16 | – | high |
| A*3D [23] | 2020 | 1 × Velodyne HDL-64E | – | high |
| KITTI-CARLA [20] | 2021 | 1 × Velodyne HDL-64E | – | high |
| ZOD [12] | 2023 | 1 × Velodyne VLS-128, 2 × Velodyne VLP-16 | – | high |
| Wild-Places [16] | 2023 | 1 × Velodyne VLP-16 | – | low |
| RACECAR [21] | 2023 | 3 × Luminar H3 | – | high |
| SimoSet [22] | 2023 | 1 × RS-LiDAR-M1 | – | middle |
| ZUST Campus (ours) | 2024 | 1 × Velodyne VLP-16 | – | low |
| File | Transform | Rotation Matrix | Value (Centimeters and Degrees) |
|---|---|---|---|
| Calib_01_20 |  | [−0.9996, 0.0014, 0.0291; −0.0016, −1.0000, 0.0061; 0.0291, −0.0061, 0.9996] | [−0.30 cm, 1.01 cm, −4.43 cm, , , ] |
|  |  | [−1.0000, −0.0079, 0.0052; −0.0079, 0.9998, −0.0157; −0.0051, −0.0157, −0.9999] | [−55.25 cm, 7.08 cm, −16.12 cm, , , ] |
| Calib_01_21 |  | [−0.9999, 0.0010, 0.0173; −0.0008, −0.9999, 0.0131; 0.0173, 0.0131, 0.9998] | [−0.30 cm, 0.98 cm, −4.42 cm, , , ] |
|  |  | [−1.0000, 0.0070, 0.0026; 0.0070, 0.9999, 0.0086; −0.0026, 0.0086, −1.0000] | [−55.36 cm, 7.15 cm, −16.20 cm, , , ] |
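The degree-valued entries in the table can be recomputed from the rotation matrices themselves. The sketch below does so under the common ZYX (yaw-pitch-roll) convention; since the convention used in the dataset's calibration files is not stated here, treat it as an assumption to verify before comparing numbers.

```cpp
#include <cmath>
#include <cstdio>

// Extract ZYX (yaw-pitch-roll) Euler angles, in degrees, from a rotation
// matrix R (row-major). This is one common convention among several.
void rotationToEulerDeg(const double R[3][3],
                        double* roll, double* pitch, double* yaw) {
  const double rad2deg = 180.0 / M_PI;
  *pitch = std::atan2(-R[2][0],
                      std::sqrt(R[0][0] * R[0][0] + R[1][0] * R[1][0])) * rad2deg;
  *roll  = std::atan2(R[2][1], R[2][2]) * rad2deg;
  *yaw   = std::atan2(R[1][0], R[0][0]) * rad2deg;
}

int main() {
  // First rotation matrix of Calib_01_20, copied from the table above.
  const double R[3][3] = {{-0.9996,  0.0014, 0.0291},
                          {-0.0016, -1.0000, 0.0061},
                          { 0.0291, -0.0061, 0.9996}};
  double roll, pitch, yaw;
  rotationToEulerDeg(R, &roll, &pitch, &yaw);
  std::printf("roll=%.2f pitch=%.2f yaw=%.2f (deg)\n", roll, pitch, yaw);
  return 0;
}
```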
| Method | APE (Full) | APE (Trans. Part) | APE (Rot. Part) | RPE (Full) | RPE (Trans. Part) | RPE (Rot. Part) |
|---|---|---|---|---|---|---|
| GP-SLAM+ | 6.11/7.43 | 5.96/7.34 | 0.98/0.63 | 1.34/1.00 | 1.33/0.99 | 0.09/0.05 |
| A-LOAM | 6.26/28.31 | 6.09/28.27 | 1.01/0.75 | 2.09/1.85 | 2.08/1.84 | 0.14/0.10 |
| LeGO-LOAM | 6.01/7.41 | 5.74/7.32 | 0.92/0.61 | 1.30/1.01 | 1.29/1.00 | 0.08/0.06 |
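For reference, the APE/RPE figures above follow the standard trajectory-error definitions of the TUM benchmark [25], which the evo tool implements; the "Full", "Trans. Part", and "Rot. Part" columns evaluate the full pose error, its translational component, and its rotational component, respectively. With ground-truth poses $Q_i$, estimated poses $P_i$, a least-squares alignment $S$, and a fixed frame offset $\Delta$:

```latex
\mathrm{APE}_{\mathrm{RMSE}}
  = \left( \frac{1}{n} \sum_{i=1}^{n}
    \left\lVert \operatorname{trans}\!\left( Q_i^{-1}\, S\, P_i \right) \right\rVert^{2}
    \right)^{1/2},
\qquad
E_i = \left( Q_i^{-1}\, Q_{i+\Delta} \right)^{-1} \left( P_i^{-1}\, P_{i+\Delta} \right)
```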