A Lightweight and Drift-Free Fusion Strategy for Drone Autonomous and Safe Navigation
Abstract
1. Introduction
- The LDMF system leverages the Lucas–Kanade (KLT) algorithm [13] to track multiple visual corner features [14] across video frames, so corner description and matching between adjacent frames are not required in the tracking procedure. Moreover, our system supports multiple camera types, such as monocular, stereo, and RGB-D. With NVIDIA CUDA acceleration, the robot pose estimator achieves camera-rate (30 Hz) performance on a single-board computer (a minimal tracking sketch is given after this list).
- The LDMF navigation system not only provides the drone pose and velocity to the trajectory planner with low latency, but also synchronously generates an environment map for automatic obstacle avoidance. In addition, the marginalization strategy keeps the computational complexity of the LDMF system bounded (see the Schur-complement sketch after this list).
- By making full use of the raw GNSS measurements, the intrinsic drift of the visual–inertial odometry is eliminated, and the yaw offset between the odometry frame and the world frame is estimated online without any offline calibration (an illustrative frame-alignment sketch follows this list). The drone state estimator runs rapidly in unpredictable scenarios and remains locally smooth and globally drift-free without visual loop-closure detection.
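The minimal Python/OpenCV sketch below illustrates the descriptor-free tracking idea from the first contribution: Shi–Tomasi corners [14] are propagated between frames with pyramidal Lucas–Kanade optical flow [13], and the corner set is replenished only when it thins out. The function name `track_corners` and the parameter choices (window size, corner spacing, corner budget) are illustrative assumptions, not the authors' implementation, which additionally relies on CUDA-accelerated kernels to reach camera rate.

```python
import cv2
import numpy as np

def track_corners(prev_gray, curr_gray, prev_pts, max_corners=150):
    """Track existing corners with KLT and replenish the set when it thins out."""
    if prev_pts is not None and len(prev_pts) > 0:
        # Pyramidal Lucas-Kanade optical flow: no descriptors, no matching step.
        curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, prev_pts, None,
            winSize=(21, 21), maxLevel=3)
        curr_pts = curr_pts[status.reshape(-1) == 1]
    else:
        curr_pts = np.empty((0, 1, 2), dtype=np.float32)

    # Top up with new Shi-Tomasi corners, keeping a minimum spacing from
    # the corners that survived tracking.
    if len(curr_pts) < max_corners:
        mask = np.full(curr_gray.shape, 255, dtype=np.uint8)
        for x, y in curr_pts.reshape(-1, 2):
            cv2.circle(mask, (int(x), int(y)), 20, 0, -1)
        fresh = cv2.goodFeaturesToTrack(
            curr_gray, max_corners - len(curr_pts),
            qualityLevel=0.01, minDistance=20, mask=mask)
        if fresh is not None:
            curr_pts = np.vstack([curr_pts, fresh.astype(np.float32)])
    return curr_pts
```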
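As a complement to the second contribution, the following sketch shows the standard Schur-complement marginalization that sliding-window estimators typically use to keep the optimization bounded. It is a generic illustration under that assumption, not the exact prior construction used in LDMF; `H` and `b` denote the information matrix and vector of the window's normal equations, and `m` the number of leading entries belonging to the states being dropped.

```python
import numpy as np

def marginalize(H, b, m):
    """Marginalize the first m entries of the normal equations H x = b.

    Returns a prior (H_prior, b_prior) acting on the remaining states, so old
    measurements keep constraining the window without growing the problem.
    """
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    bm, br = b[:m], b[m:]

    # Pseudo-inverse keeps the step well defined when Hmm is rank deficient.
    Hmm_inv = np.linalg.pinv(Hmm)
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr
    b_prior = br - Hrm @ Hmm_inv @ bm
    return H_prior, b_prior
```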
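For the third contribution, the toy function below illustrates one way a yaw offset between the local odometry frame and the ENU world frame can be recovered online: by aligning matched horizontal displacement increments from the odometry and the GNSS solution in a least-squares sense. It is only a conceptual stand-in; the paper fuses raw GNSS measurements tightly inside the estimator rather than aligning trajectories after the fact, and the names `estimate_yaw_offset`, `local_xy`, and `global_xy` are assumptions.

```python
import numpy as np

def estimate_yaw_offset(local_xy, global_xy):
    """Least-squares yaw (rad) rotating the odometry frame into the ENU frame.

    local_xy, global_xy : (N, 2) arrays of matched horizontal position
    increments expressed in the odometry frame and the ENU frame.
    """
    # Summed cross and dot products give the 2D rotation that best maps
    # local increments onto global ones (rotation part of a 2D alignment).
    cross = np.sum(local_xy[:, 0] * global_xy[:, 1]
                   - local_xy[:, 1] * global_xy[:, 0])
    dot = np.sum(local_xy[:, 0] * global_xy[:, 0]
                 + local_xy[:, 1] * global_xy[:, 1])
    return np.arctan2(cross, dot)
```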
2. Related Work
2.1. Visual Odometry
2.2. Multisensor Fusion State Estimation
3. System Overview
4. Multisensor Fusion Strategy
4.1. Formulation
4.2. Visual Constraint
4.3. Inertial Measurements Constraint
4.4. GNSS Constraint
4.5. Tightly Coupled Drone Flight State Estimation
5. Experiments
5.1. Benchmark Tests in Public Dataset
5.2. Real-World Position Estimation Experiments
5.2.1. Experimental Preparation
5.2.2. Pure Rotation Test on a Soccer Field
5.2.3. Dynamic Interference Test on an Overpass
5.3. Autonomous and Safe Navigation with an Agile Drone
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Gupta, A.; Fernando, X. Simultaneous Localization and Mapping (SLAM) and Data Fusion in Unmanned Aerial Vehicles: Recent Advances and Challenges. Drones 2022, 6, 85.
- Chen, J.; Li, S.; Liu, D.; Li, X. AiRobSim: Simulating a Multisensor Aerial Robot for Urban Search and Rescue Operation and Training. Sensors 2020, 20, 5223.
- Tabib, W.; Goel, K.; Yao, J.; Boirum, C.; Michael, N. Autonomous Cave Surveying with an Aerial Robot. IEEE Trans. Robot. 2021, 9, 1016–1032.
- Zhou, X.; Wen, X.; Wang, Z.; Gao, Y.; Li, H.; Wang, Q.; Yang, T.; Lu, H.; Cao, Y.; Xu, C.; et al. Swarm of micro flying robots in the wild. Sci. Robot. 2022, 7, eabm5954.
- Paul, M.K.; Roumeliotis, S.I. Alternating-Stereo VINS: Observability Analysis and Performance Evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4729–4737.
- Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
- Tian, Y.; Chang, Y.; Herrera Arias, F.; Nieto-Granda, C.; How, J.; Carlone, L. Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems. IEEE Trans. Robot. 2022, 38, 2022–2038.
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890.
- Li, T.; Zhang, H.; Gao, Z.; Niu, X.; El-Sheimy, N. Tight Fusion of a Monocular Camera, MEMS-IMU, and Single-Frequency Multi-GNSS RTK for Precise Navigation in GNSS-Challenged Environments. Remote Sens. 2019, 11, 610.
- Cao, S.; Lu, X.; Shen, S. GVINS: Tightly Coupled GNSS–Visual–Inertial Fusion for Smooth and Consistent State Estimation. IEEE Trans. Robot. 2022, 38, 2004–2021.
- Zhang, C.; Yang, Z.; Fang, Q.; Xu, C.; Xu, H.; Xu, X.; Zhang, J. FRL-SLAM: A Fast, Robust and Lightweight SLAM System for Quadruped Robot Navigation. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, China, 27–31 December 2021; pp. 1165–1170.
- Zhang, C.; Yang, Z.; Liao, L.; You, Y.; Sui, Y.; Zhu, T. RPEOD: A Real-Time Pose Estimation and Object Detection System for Aerial Robot Target Tracking. Machines 2022, 10, 181.
- Lucas, B.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; pp. 674–679.
- Shi, J.; Tomasi, C. Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 21–23 June 1994.
- Klein, G.; Murray, D. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234.
- Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163.
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
- Endres, F.; Hess, J.; Sturm, J.; Cremers, D.; Burgard, W. 3-D Mapping With an RGB-D Camera. IEEE Trans. Robot. 2014, 30, 177–187.
- Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-Scale Direct Monocular SLAM. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 834–849.
- Geneva, P.; Eckenhoff, K.; Lee, W.; Yang, Y.; Huang, G. OpenVINS: A Research Platform for Visual-Inertial Estimation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4666–4672.
- Mourikis, A.; Roumeliotis, S. A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 3565–3572.
- Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Trans. Robot. 2016, 32, 1309–1332.
- Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast semi-direct monocular visual odometry. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014.
- Weiss, S.; Achtelik, M.; Lynen, S.; Chli, M.; Siegwart, R. Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012; pp. 957–964.
- Wang, Y.; Kuang, J.; Li, Y.; Niu, X. Magnetic Field-Enhanced Learning-Based Inertial Odometry for Indoor Pedestrian. IEEE Trans. Instrum. Meas. 2022, 71, 2512613.
- Zhou, B.; Pan, J.; Gao, F.; Shen, S. RAPTOR: Robust and Perception-Aware Trajectory Replanning for Quadrotor Fast Flight. IEEE Trans. Robot. 2021, 37, 1992–2009.
- Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-based visual–inertial odometry using nonlinear optimization. Int. J. Robot. Res. 2015, 34, 314–334.
- Qin, T.; Cao, S.; Pan, J.; Shen, S. A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors. arXiv 2019, arXiv:1901.03642.
- Rosinol, A.; Abate, M.; Chang, Y.; Carlone, L. Kimera: An Open-Source Library for Real-Time Metric-Semantic Localization and Mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 1689–1696.
- Lynen, S.; Achtelik, M.; Weiss, S.; Chli, M.; Siegwart, R. A robust and modular multi-sensor fusion approach applied to MAV navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3923–3929.
- Shen, S.; Mulgaonkar, Y.; Michael, N.; Kumar, V. Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft MAV. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 4974–4981.
- Qin, T.; Li, P.; Shen, S. Relocalization, Global Optimization and Map Merging for Monocular Visual-Inertial SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1197–1204.
- Qin, T.; Shen, S. Robust initialization of monocular visual-inertial estimation on aerial robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 4225–4232.
- Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163.
- Takasu, T.; Yasuda, A. Development of the low-cost RTK-GPS receiver with an open source program package RTKLIB. In Proceedings of the International Symposium on GPS/GNSS, Jeju, Republic of Korea, 4–6 November 2009; Volume 1, pp. 1–6.
Absolute trajectory error (m) on the EuRoC benchmark sequences (Section 5.1):

Sequence | OKVIS | VINS-Fusion | LDMF
---|---|---|---
MH01 | 0.16 | 0.18 | 0.17
MH02 | 0.22 | 0.12 | 0.14
MH03 | 0.24 | 0.23 | 0.13
MH04 | 0.34 | 0.29 | 0.23
MH05 | 0.47 | 0.25 | 0.35
V101 | 0.09 | 0.12 | 0.07
V102 | 0.20 | 0.13 | 0.11
V103 | 0.24 | 0.07 | 0.13
V201 | 0.13 | 0.09 | 0.07
V202 | 0.16 | 0.14 | 0.09
V203 | 0.29 | 0.23 | 0.18
Average | 0.23 | 0.17 | 0.15
Sensor Item | Specification | Unit
---|---|---
Vision | |
Model Number | Intel D435i |
Image Technology | Global Shutter |
Resolution | 640 × 480 | pixel
Field of View | 87 × 58 | degree
Frame Rate | 30 | FPS
Size | 90 × 25 × 25 | mm
Inertial (IMU) | |
Microchip | MPU-9250 |
Gyroscope Resolution | 0.061 | °/s
Gyroscope Noise | 0.028–0.07 | °/s
Gyroscope Zero Drift | ±0.5 | °/s
Accelerometer Resolution | 0.0005 | g
Accelerometer Noise | 0.75–1 | mg
Accelerometer Zero Drift | ±20–40 | mg
Frequency | 200 | Hz
GNSS | |
Antenna | BenTian-3070 |
Gain | 35 ± 2 | dB
Receiver | u-blox ZED-F9P |
Error Range | 1.5 | m
Frequency | 10 | Hz
Metric | RTKLIB | VINS-Fusion | GVINS | LDMF
---|---|---|---|---
RMSE (m) | 13.11 | 2.90 | 1.77 | 1.39
Median (m) | 11.02 | 2.46 | 1.27 | 1.16
Mean Error (m) | 11.73 | 2.59 | 1.46 | 1.24
Standard Deviation (m) | 5.86 | 1.32 | 1.31 | 0.63
CPU Usage | 55% | 395% | 275% | 190%
Metric | RTKLIB | VINS-Fusion | GVINS | LDMF
---|---|---|---|---
RMSE (m) | 8.42 | 2.59 | 1.87 | 1.19
Median (m) | 6.77 | 1.83 | 1.33 | 0.79
Mean Error (m) | 7.54 | 2.23 | 1.61 | 0.95
Standard Deviation (m) | 3.73 | 1.32 | 1.58 | 0.73
CPU Usage | 55% | 385% | 270% | 190%