Off-Road Detection Analysis for Autonomous Ground Vehicles: A Review
Abstract
1. Introduction
2. Related Works, Novelty, and Classifications
2.1. The Novelty of This Work
2.2. Classification
- 1. Ground or drivable pathway detection
  - 1.1 Single Sensor-Based Detection
  - 1.2 Multiple Sensor-Based Detection
- 2. Obstacle detection
  - 2.1 Positive obstacle detection
    - 2.1.1 Single sensor-based detection
    - 2.1.2 Multiple sensor-based detections
  - 2.2 Negative obstacle detection
    - 2.2.1 Missing data analysis
    - 2.2.2 Vision-based detection
    - 2.2.3 Other methods
3. Ground or Drivable Pathway Detection
3.1. Single-Sensor-Based Detection
3.2. Multi-Sensor-Based Detection
4. Positive Obstacle Detection and Analysis
4.1. Single-Sensor-Based
4.2. Multi-Sensor-Based
5. Negative Obstacle Detection and Analysis
5.1. Missing Data Analysis
5.2. Vision-Based Detection
5.3. Other Methods
6. Discussion
6.1. Key Findings
6.1.1. Sensors
6.1.2. Sensor Fusion
6.1.3. Learning-Based Model
6.2. Challenges
6.2.1. Availability of Dataset
6.2.2. Hanging Obstacle Detection
6.2.3. Sensor Issues
6.2.4. Environmental Challenges
6.2.5. Real-Time Detection
6.3. Future Possibilities
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---|
AI | Artificial Intelligence
AGV | Autonomous Ground Vehicle
ALV | Autonomous Land Vehicles
BVNet | Bird’s Eye View Network
CaT | CAVS Traversability Dataset
CNN | Convolutional Neural Network
CAVS | Center for Advanced Vehicular Systems
DARPA | Defense Advanced Research Projects Agency
GNSS | Global Navigation Satellite Systems
GPS | Global Positioning Systems
HLD | Height, Length, and Density
IMU | Inertial Measurement Unit
InSAR | Interferometric Synthetic Aperture Radar
LADAR | Light and Radio Detection and Ranging
Laser | Light Amplification by Stimulated Emission of Radiation
LIDAR | Light Detection and Ranging
LM-BP | Levenberg–Marquardt back-propagation
MAVS | Mississippi State University Autonomous Vehicular Simulator
MLP | Multilayer Perceptron
NODR | Negative Obstacle DetectoR
PCA | Principal Component Analysis
SONAR | Sound Navigation and Ranging
SSD | Single Shot multi-box Detector
SVM | Support Vector Machine
SVR | Space-Variant Resolution
RADAR | Radio Detection and Ranging
ROOAD | RELLIS Off-road Odometry Analysis Dataset
R-CNN | Region-based Convolutional Neural Network
TTA | Terrain Traversability Analysis
UGV | Unmanned Ground Vehicle
YOLO | You Only Look Once
References
- Gomi, T.; Ide, K.-I.; Matsuo, H. The development of a fully autonomous ground vehicle (FAGV). In Proceedings of the Intelligent Vehicles’94 Symposium, Paris, France, 24–26 October 1994; pp. 62–67. [Google Scholar] [CrossRef]
- Gage, D.W. UGV history 101: A brief history of unmanned ground vehicle (UGV) development efforts. DTIC Document. Tech. Rep. 1995, 13, 1–9. [Google Scholar]
- Thakkar, J.J. Applications of structural equation modelling with AMOS 21, IBM SPSS. In Structural Equation Modelling; Springer: Berlin/Heidelberg, Germany, 2020; pp. 35–89. [Google Scholar]
- Shang, E.; An, X.; Li, J.; He, H. A novel setup method of 3D LIDAR for negative obstacle detection in field environment. In Proceedings of the 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC 2014), Qingdao, China, 8–11 October 2014; pp. 1436–1441. [Google Scholar]
- Luettel, T.; Himmelsbach, M.; Wuensche, H.-J. Autonomous Ground Vehicles—Concepts and a Path to the Future. Proc. IEEE 2012, 100, 1831–1839. [Google Scholar] [CrossRef]
- Folsom, T. Energy and Autonomous Urban Land Vehicles. IEEE Technol. Soc. Mag. 2012, 31, 28–38. [Google Scholar] [CrossRef]
- Islam, F.; Nabi, M.M.; Farhad, M.; Peranich, P.L.; Ball, J.E.; Goodin, C. Evaluating performance of extended Kalman filter based adaptive cruise control using PID controller. Auton. Syst. Sens. Process. Secur. Veh. Infrastruct. 2021, 11748, 46–56. [Google Scholar] [CrossRef]
- Johnson, E.N.; Mooney, J.G. A Comparison of Automatic Nap-of-the-earth Guidance Strategies for Helicopters. J. Field Robot. 2014, 31, 637–653. [Google Scholar] [CrossRef]
- Dabbiru, L.; Sharma, S.; Goodin, C.; Ozier, S.; Hudson, C.R.; Carruth, D.W.; Doude, M.; Mason, G.; Ball, J.E. Traversability mapping in off-road environment using semantic segmentation. Auton. Syst. Sens. Process. Secur. Veh. Infrastruct. 2021, 11748, 78–83. [Google Scholar] [CrossRef]
- Choi, J.; Lee, J.; Kim, D.; Soprani, G.; Cerri, P.; Broggi, A.; Yi, K. Environment-Detection-and-Mapping Algorithm for Autonomous Driving in Rural or Off-Road Environment. IEEE Trans. Intell. Transp. Syst. 2012, 13, 974–982. [Google Scholar] [CrossRef] [Green Version]
- Brock, O.; Park, J.; Toussaint, M. Mobility and Manipulation. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer: New York, NY, USA, 2016; pp. 1007–1036. [Google Scholar]
- Chhaniyara, S.; Brunskill, C.; Yeomans, B.; Matthews, M.; Saaj, C.; Ransom, S.; Richter, L. Terrain trafficability analysis and soil mechanical property identification for planetary rovers: A survey. J. Terramech. 2012, 49, 115–128. [Google Scholar] [CrossRef]
- Papadakis, P. Terrain traversability analysis methods for unmanned ground vehicles: A survey. Eng. Appl. Artif. Intell. 2013, 26, 1373–1385. [Google Scholar] [CrossRef] [Green Version]
- Ilas, C. Electronic sensing technologies for autonomous ground vehicles: A review. In Proceedings of the 2013 8th International Symposium on Advanced Topics in Electrical Engineering (ATEE), Bucharest, Romania, 23–25 May 2013; pp. 1–6. [Google Scholar] [CrossRef]
- Babak, S.-J.; Hussain, S.A.; Karakas, B.; Cetin, S.; Jahromi, B.S. Control of autonomous ground vehicles: A brief technical review. In Proceedings of the International Research and Innovation Summit (IRIS2017), Xi’an, China, 20–24 June 2017; p. 012029. [Google Scholar] [CrossRef]
- Lynch, L.; Newe, T.; Clifford, J.; Coleman, J.; Walsh, J.; Toal, D. Automated Ground Vehicle (AGV) and Sensor Technologies- A Review. In Proceedings of the 2019 13th International Conference on Sensing Technology (ICST 2019), Sydney, Australia, 2–4 December 2019; pp. 347–352. [Google Scholar] [CrossRef]
- Hu, J.-W.; Zheng, B.-Y.; Wang, C.; Zhao, C.-H.; Hou, X.-L.; Pan, Q.; Xu, Z. A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments. Front. Inf. Technol. Electron. Eng. 2020, 21, 675–692. [Google Scholar] [CrossRef]
- Guastella, D.C.; Muscato, G. Learning-Based Methods of Perception and Navigation for Ground Vehicles in Unstructured Environments: A Review. Sensors 2020, 21, 73. [Google Scholar] [CrossRef]
- Liu, T.; Liu, D.; Yang, Y.; Chen, Z. Lidar-based Traversable Region Detection in Off-road Environment. In Proceedings of the 38th Chinese Control Conference (CCC2019), Guangzhou, China, 27–30 July 2019; pp. 4548–4553. [Google Scholar] [CrossRef]
- Gao, B.; Xu, A.; Pan, Y.; Zhao, X.; Yao, W.; Zhao, H. Off-Road Drivable Area Extraction Using 3D LiDAR Data. In Proceedings of the Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1505–1511. [Google Scholar] [CrossRef] [Green Version]
- Chen, L.; Yang, J.; Kong, H. Lidar-histogram for fast road and obstacle detection. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 1343–1348. [Google Scholar] [CrossRef]
- Katramados, I.; Crumpler, S.; Breckon, T.P. Real-Time Traversable Surface Detection by Colour Space Fusion and Temporal Analysis. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5815, pp. 265–274. [Google Scholar] [CrossRef]
- Shaban, A.; Meng, X.; Lee, J.; Boots, B.; Fox, D. Semantic terrain classification for off-road autonomous driving. In Proceedings of the Machine Learning Research (PMLR), Almería, Spain, 5–7 October 2022; pp. 619–629. [Google Scholar]
- Gao, B.; Hu, S.; Zhao, X.; Zhao, H. Fine-Grained Off-Road Semantic Segmentation and Mapping via Contrastive Learning. In Proceedings of the IRC 2021: IEEE International Conference on Robotic Computing, Taichung, Taiwan, 15–17 November 2021; pp. 5950–5957. [Google Scholar] [CrossRef]
- Zhu, B.; Xiong, G.; Di, H.; Ji, K.; Zhang, X.; Gong, J. A Novel Method of Traversable Area Extraction Fused with LiDAR Odometry in Off-road Environment. In Proceedings of the 2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES 2019), Cairo, Egypt, 4–6 September 2019; pp. 1–6. [Google Scholar] [CrossRef]
- Dahlkamp, H.; Kaehler, A.; Stavens, D.; Thrun, S.; Bradski, G. Self-supervised Monocular Road Detection in Desert Terrain. Robot. Sci. Syst. 2006, 38. [Google Scholar] [CrossRef]
- Mei, J.; Yu, Y.; Zhao, H.; Zha, H. Scene-Adaptive Off-Road Detection Using a Monocular Camera. In Proceedings of the 5th IEEE International Conference on Models and Technologies for Intelligent Transportation Systems, Naples, Italy, 26–28 June 2017; pp. 242–253. [Google Scholar] [CrossRef]
- Tang, L.; Ding, X.; Yin, H.; Wang, Y.; Xiong, R. From one to many: Unsupervised traversable area segmentation in off-road environment. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics, Parisian Macao, China, 5–8 December 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Zhang, H.; Dana, K.; Shi, J.; Zhang, Z.; Wang, X.; Tyagi, A.; Agrawal, A. Context Encoding for Semantic Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7151–7160. [Google Scholar] [CrossRef] [Green Version]
- Reina, G.; Milella, A.; Worst, R. LIDAR and stereo combination for traversability assessment of off-road robotic vehicles. Robotica 2015, 34, 2823–2841. [Google Scholar] [CrossRef]
- Sock, J.; Kim, J.; Min, J.; Kwak, K. Probabilistic traversability map generation using 3D-LIDAR and camera. In Proceedings of the 2016 IEEE International Symposium on Robotics and Manufacturing Automation (IEEE-ROMA2016), Ipoh, Malaysia, 25–27 September 2016; pp. 5631–5637. [Google Scholar] [CrossRef]
- McDaniel, M.W.; Nishihata, T.; Brooks, C.A.; Salesses, P.; Iagnemma, K. Terrain classification and identification of tree stems using ground-based LiDAR. J. Field Robot. 2012, 29, 891–910. [Google Scholar] [CrossRef]
- Dima, C.; Vandapel, N.; Hebert, M. Classifier fusion for outdoor obstacle detection. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; pp. 665–671. [Google Scholar] [CrossRef] [Green Version]
- Huertas, A.; Matthies, L.; Rankin, A. Stereo-Based Tree Traversability Analysis for Autonomous Off-Road Navigation. In Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), Breckenridge, CO, USA, 5–7 January 2005; pp. 210–217. [Google Scholar] [CrossRef] [Green Version]
- Maturana, D.; Chou, P.-W.; Uenoyama, M.; Scherer, S. Real-Time Semantic Mapping for Autonomous Off-Road Navigation. Field Serv. Robot. 2017, 5, 335–350. [Google Scholar] [CrossRef]
- Manderson, T.; Wapnick, S.; Meger, D.; Dudek, G. Learning to Drive Off Road on Smooth Terrain in Unstructured Environments Using an On-Board Camera and Sparse Aerial Images. In Proceedings of the 2020 International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–4 June 2020; pp. 1263–1269. [Google Scholar] [CrossRef]
- Nadav, I.; Katz, E. Off-road path and obstacle detection using monocular camera. In Proceedings of the 20th International Computer Science and Engineering Conference 2016, Chiang Mai, Thailand, 14–17 December 2016; pp. 22–26. [Google Scholar] [CrossRef]
- Broggi, A.; Caraffi, C.; Fedriga, R.; Grisleri, P. Obstacle Detection with Stereo Vision for Off-Road Vehicle Navigation. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 21–23 September 2005. [Google Scholar] [CrossRef]
- Labayrade, R.; Aubert, D. A single framework for vehicle roll, pitch, yaw estimation and obstacles detection by stereovision. In Proceedings of the 2003 IEEE Symposium on Intelligent Vehicle, Columbus, OH, USA, 9–11 June 2003; pp. 31–36. [Google Scholar] [CrossRef]
- Foroutan, M.; Tian, W.; Goodin, C.T. Assessing Impact of Understory Vegetation Density on Solid Obstacle Detection for Off-Road Autonomous Ground Vehicles. ASME Lett. Dyn. Syst. Control 2020, 1, 021008. [Google Scholar] [CrossRef]
- Chen, W.; Liu, Q.; Hu, H.; Liu, J.; Wang, S.; Zhu, Q. Novel Laser-Based Obstacle Detection for Autonomous Robots on Unstructured Terrain. Sensors 2020, 20, 5048. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Xu, X.; Lu, H.; Dai, Y. Two-Stage Obstacle Detection Based on Stereo Vision in Unstructured Environment. In Proceedings of the 2014 6th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC 2014), Hangzhou, China, 26–27 August 2014; pp. 168–172. [Google Scholar] [CrossRef]
- Kuthirummal, S.; Das, A.; Samarasekera, S. A graph traversal based algorithm for obstacle detection using lidar or stereo. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 3874–3880. [Google Scholar] [CrossRef]
- Manduchi, R.; Castano, A.; Talukder, A.; Matthies, L. Obstacle Detection and Terrain Classification for Autonomous Off-Road Navigation. Auton. Robot. 2005, 18, 81–102. [Google Scholar] [CrossRef] [Green Version]
- Reina, G.; Milella, A.; Rouveure, R. Traversability analysis for off-road vehicles using stereo and radar data. In Proceedings of the 2015 IEEE International Conference on Industrial Technology (ICIT 2015), Seville, Spain, 17–19 March 2015; pp. 540–546. [Google Scholar]
- Giannì, C.; Balsi, M.; Esposito, S.; Fallavollita, P. Obstacle detection system involving fusion of multiple sensor technologies. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W6, 127–134. [Google Scholar] [CrossRef]
- Meichen, L.; Jun, C.; Xiang, Z.; Lu, W.; Yongpeng, T. Dynamic obstacle detection based on multi-sensor information fusion. IFAC-PapersOnLine 2018, 51, 861–865. [Google Scholar] [CrossRef]
- Kragh, M.; Underwood, J. Multimodal obstacle detection in unstructured environments with conditional random fields. J. Field Robot. 2019, 37, 53–72. [Google Scholar] [CrossRef] [Green Version]
- Ollis, M.; Jochem, T.M. Structural method for obstacle detection and terrain classification. Unmanned Ground Veh. Technol. V 2003, 5083, 1–12. [Google Scholar] [CrossRef]
- Bradley, D.; Thayer, S.; Stentz, A.; Rander, P. Vegetation Detection for Mobile Robot Navigation; Technical Report CMU-RI-TR-04-12; Robotics Institute, Carnegie Mellon University: Pittsburgh, PA, USA, 2004. [Google Scholar]
- Nguyen, D.-V.; Kuhnert, L.; Thamke, S.; Schlemper, J.; Kuhnert, K.-D. A novel approach for a double-check of passable vegetation detection in autonomous ground vehicles. In Proceedings of the 2012 15th International IEEE Conference on Intelligent Transportation System, Anchorage, AK, USA, 16–19 September 2012; pp. 230–236. [Google Scholar] [CrossRef]
- Larson, J.; Trivedi, M. Lidar based off-road negative obstacle detection and analysis. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 192–197. [Google Scholar] [CrossRef] [Green Version]
- Sinha, A.; Papadakis, P. Mind the gap: Detection and traversability analysis of terrain gaps using LIDAR for safe robot navigation. Robotica 2013, 31, 1085–1101. [Google Scholar] [CrossRef] [Green Version]
- Heckman, N.; Lalonde, J.-F.; Vandapel, N.; Hebert, M. Potential negative obstacle detection by occlusion labeling. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 2168–2173. [Google Scholar] [CrossRef]
- Karunasekera, H.; Zhang, H.; Xi, T.; Wang, H. Stereo vision based negative obstacle detection. In Proceedings of the 2017 13th IEEE International Conference on Control & Automation (ICCA), Ohrid, Macedonia, 3–6 July 2017; pp. 834–838. [Google Scholar] [CrossRef]
- Hu, Z.; Uchimura, K. U-V-disparity: An efficient algorithm for stereovision based scene analysis. In Proceedings of the 2005 IEEE Intelligent Vehicles Symposium Proceedings, Las Vegas, NV, USA, 6–8 June 2005; pp. 48–54. [Google Scholar] [CrossRef]
- Bajracharya, M.; Ma, J.; Malchano, M.; Perkins, A.; Rizzi, A.A.; Matthies, L. High fidelity day/night stereo mapping with vegetation and negative obstacle detection for vision-in-the-loop walking. In Proceedings of the IROS 2013—IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3663–3670. [Google Scholar] [CrossRef]
- Hu, T.; Nie, Y.; Wu, T.; He, H. Negative obstacle detection from image sequences. In Proceedings of the ICDIP 2011: 2011 3rd International Conference on Digital Image Processing, Chengdu, China, 15–17 April 2011; p. 80090Y. [Google Scholar] [CrossRef]
- Rankin, A.; Huertas, A.; Matthies, L. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation. In Proceedings of the AUVSI's Unmanned Systems North America, Baltimore, MD, USA, 28–30 June 2005; pp. 1197–1211. [Google Scholar]
- Rankin, A.L.; Huertas, A.; Matthies, L.H. Night-time negative obstacle detection for off-road autonomous navigation. Unmanned Syst. Technol. IX 2007, 6561, 656103. [Google Scholar] [CrossRef]
- Goodin, C.; Carrillo, J.; Monroe, J.; Carruth, D.; Hudson, C. An Analytic Model for Negative Obstacle Detection with Lidar and Numerical Validation Using Physics-Based Simulation. Sensors 2021, 21, 3211. [Google Scholar] [CrossRef]
- Goodin, C.; Carruth, D.; Doude, M.; Hudson, C. Predicting the Influence of Rain on LIDAR in ADAS. Electronics 2019, 8, 89. [Google Scholar] [CrossRef] [Green Version]
- Islam, F.; Ball, J.E.; Goodin, C. Dynamic path planning for traversing autonomous vehicle in off-road environment using MAVS. Proc. SPIE 2022, 12115, 210–221. [Google Scholar] [CrossRef]
- Morton, R.D.; Olson, E.; Jaleel, H.; Egerstedt, M. Positive and negative obstacle detection using the HLD classifier. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and System, San Francisco, CA, USA, 25–30 September 2011; pp. 1579–1584. [Google Scholar] [CrossRef]
- Wang, J.; Song, Q.; Jiang, Z.; Zhou, Z. A novel InSAR based off-road positive and negative obstacle detection technique for unmanned ground vehicle. Int. Geosci. Remote Sens. Symp. 2016, 2016, 1174–1177. [Google Scholar] [CrossRef]
- Peasley, B.; Birchfield, S. Real-time obstacle detection and avoidance in the presence of specular surfaces using an active 3D sensor. In Proceedings of the 2013 IEEE Workshop on Robot Vision, Clearwater Beach, FL, USA, 15–17 January 2013; pp. 197–202. [Google Scholar] [CrossRef] [Green Version]
- Matthies, L.H.; Bellutta, P.; McHenry, M. Detecting water hazards for autonomous off-road navigation. Unmanned Gr. Veh. Technol. V 2003, 5083, 231. [Google Scholar] [CrossRef]
- Kocić, J.; Jovičić, N.; Drndarević, V. Sensors and Sensor Fusion in Autonomous Vehicles. In Proceedings of the 2018 26th Telecommunications Forum (TELFOR), Belgrade, Serbia, 20–21 November 2018; pp. 420–425. [Google Scholar] [CrossRef]
- Hollinger, J.; Kutscher, B.; Close, B. Fusion of lidar and radar for detection of partially obscured objects. In Proceedings of the SPIE: Unmanned Systems Technology XVII, Baltimore, MD, USA, 21–23 April 2015; p. 946806. [Google Scholar] [CrossRef]
- Yeong, D.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef]
- Nabi, M.M.; Senyurek, V.; Gurbuz, A.C.; Kurum, M. Deep Learning-Based Soil Moisture Retrieval in CONUS Using CYGNSS Delay–Doppler Maps. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 6867–6881. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar] [CrossRef]
- Everingham, M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The Pascal Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 2009, 88, 303–338. [Google Scholar] [CrossRef] [Green Version]
- Alexander, P. Cityscapes. Methodist DeBakey Cardiovasc. J. 2022, 18, 114. [Google Scholar] [CrossRef] [PubMed]
- Fiedler, N.; Bestmann, M.; Hendrich, N. ImageTagger: An Open Source Online Platform for Collaborative Image Labeling. In Robot World Cup; Springer: Cham, Switzerland, 2019; pp. 162–169. [Google Scholar] [CrossRef]
- Zhang, Y.; Wang, Y.; Zhang, H.; Zhu, B.; Chen, S.; Zhang, D. OneLabeler: A Flexible System for Building Data Labeling Tools. In Proceedings of the CHI Conference on Human Factors in Computing Systems 2022, New Orleans, LA, USA, 30 April–6 May 2022; pp. 1–22. [Google Scholar] [CrossRef]
- Meyer, M.; Kuschk, G. Automotive radar dataset for deep learning based 3d object detection. In Proceedings of the 2019 16th European Radar Conference (EuRAD), Paris, France, 29 September–4 October 2019; pp. 129–132. [Google Scholar]
- Xu, H.; Gao, Y.; Yu, F.; Darrell, T. End-to-End Learning of Driving Models from Large-Scale Video Datasets. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3530–3538. [Google Scholar] [CrossRef] [Green Version]
- Weyand, T.; Araujo, A.; Cao, B.; Sim, J. Google Landmarks Dataset v2 – A Large-Scale Benchmark for Instance-Level Recognition and Retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Silver Spring, MD, USA, 13–19 June 2020; pp. 2572–2581. [Google Scholar] [CrossRef]
- Houston, J.; Zuidhof, G.; Bergamini, L.; Ye, Y.; Chen, L.; Jain, A.; Omari, S.; Iglovikov, V.; Ondruska, P. One thousand and one hours: Self-driving motion prediction dataset. arXiv 2020, arXiv:2006.14480. [Google Scholar]
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. NuScenes: A Multimodal Dataset for Autonomous Driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11618–11628. [Google Scholar] [CrossRef]
- Krylov, I.; Nosov, S.; Sovrasov, V. Open Images V5 Text Annotation and Yet Another Mask Text Spotter. arXiv 2021, arXiv:2106.12326. [Google Scholar]
- Barnes, D.; Gadd, M.; Murcutt, P.; Newman, P.; Posner, I. The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6433–6438. [Google Scholar] [CrossRef]
- Xiao, P.; Shao, Z.; Hao, S.; Zhang, Z.; Chai, X.; Jiao, J.; Li, Z.; Wu, J.; Sun, K.; Jiang, K.; et al. Pandaset: Advanced sensor suite dataset for autonomous driving. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 3095–3101. [Google Scholar]
- Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; et al. Scalability in Perception for Autonomous Driving: Waymo Open Dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 14–19 June 2020; pp. 2446–2454. [Google Scholar]
- Jiang, P.; Osteen, P.; Wigness, M.; Saripalli, S. RELLIS-3D Dataset: Data, Benchmarks and Analysis. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 21–23 April 2021; pp. 1110–1116. [Google Scholar] [CrossRef]
- Sharma, S.; Dabbiru, L.; Hannis, T.; Mason, G.; Carruth, D.W.; Doude, M.; Goodin, C.; Hudson, C.; Ozier, S.; Ball, J.E.; et al. CaT: CAVS Traversability Dataset for Off-Road Autonomous Driving. IEEE Access 2022, 10, 24759–24768. [Google Scholar] [CrossRef]
- Gresenz, G.; White, J.; Schmidt, D.C. An Off-Road Terrain Dataset Including Images Labeled with Measures Of Terrain Roughness. In Proceedings of the 2021 IEEE International Conference on Autonomous Systems, Virtual Conference, 11–13 August 2021; pp. 1–5. [Google Scholar] [CrossRef]
- Valada, A.; Mohan, R.; Burgard, W. Self-Supervised Model Adaptation for Multimodal Semantic Segmentation. Int. J. Comput. Vis. 2019, 128, 1239–1285. [Google Scholar] [CrossRef] [Green Version]
- Chustz, G.; Saripalli, S. ROOAD: RELLIS Off-road Odometry Analysis Dataset. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 5–9 June 2022; pp. 1504–1510. [Google Scholar] [CrossRef]
- Debnath, N.; Thangiah, J.B.; Pararasaingam, S.; Abdul, S.; Aljunid, S.A.K. A mobility aid for the blind with discrete distance indicator and hanging object detection. In Proceedings of the 2004 IEEE Region 10 Conference (TENCON), Chiang Mai, Thailand, 21–24 November 2004; pp. 664–667. [Google Scholar] [CrossRef]
- Massimo, B.; Luca, B.; Alberto, B.; Alessandro, C. A Smart vision system for advanced LGV navigation and obstacle detection. In Proceedings of the 15th International IEEE Conference on Intelligent Transportation Systems, Anchorage, AK, USA, 16–19 September 2012; pp. 508–513. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Lempitsky, V.; Kohli, P.; Rother, C.; Sharp, T. Image segmentation with a bounding box prior. IEEE Int. Conf. Comput. Vis. 2009, 277–284. [Google Scholar] [CrossRef]
- Feng, D.; Rosenbaum, L.; Timm, F.; Dietmayer, K. Leveraging Heteroscedastic Aleatoric Uncertainties for Robust Real-Time LiDAR 3D Object Detection. IEEE Intell. Veh. Symp. Proc. 2019, 2019, 1280–1287. [Google Scholar] [CrossRef] [Green Version]
- Hirose, N.; Sadeghian, A.; Vazquez, M.; Goebel, P.; Savarese, S. GONet: A Semi-Supervised Deep Learning Approach for Traversability Estimation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 3044–3051. [Google Scholar] [CrossRef] [Green Version]
- Shaw, M. Active Learning in Learning to Teach in the Secondary School, 8th ed.; Routledge: London, UK, 2019; pp. 308–329. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Amsterdam, The Netherlands, 2016; pp. 21–37. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Wu, B.; Iandola, F.N.; Jin, P.H.; Keutzer, K. SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving. In Proceedings of the CVPR Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 446–454. [Google Scholar]
- Alaba, S.Y.; Ball, J.E. WCNN3D: Wavelet Convolutional Neural Network-Based 3D Object Detection for Autonomous Driving. Sensors 2022, 22, 7010. [Google Scholar] [CrossRef]
- Alaba, S.; Gurbuz, A.; Ball, J. A Comprehensive Survey of Deep Learning Multisensor Fusion-based 3D Object Detection for Autonomous Driving: Methods, Challenges, Open Issues, and Future Directions. TechRxiv, 2022; ahead of print. [Google Scholar] [CrossRef]
- Alaba, S.; Ball, J. Deep Learning-based Image 3D Object Detection for Autonomous Driving: Review. TechRxiv, 2022; ahead of print. [Google Scholar] [CrossRef]
- Naz, N.; Ehsan, M.K.; Amirzada, M.R.; Ali, Y.; Qureshi, M.A. Intelligence of Autonomous Vehicles: A Concise Revisit. J. Sens. 2022, 2022, 10. [Google Scholar] [CrossRef]
- Liu, J.; Ahmed, M.; Mirza, M.A.; Khan, W.U.; Xu, D.; Li, J.; Aziz, A.; Han, Z. RL/DRL Meets Vehicular Task Offloading Using Edge and Vehicular Cloudlet: A Survey. IEEE Internet Things J. 2022, 9, 8315–8338. [Google Scholar] [CrossRef]
- Ahmed, M.; Raza, S.; Mirza, M.A.; Aziz, A.; Khan, M.A.; Khan, W.U.; Li, J.; Han, Z. A survey on vehicular task offloading: Classification, issues, and challenges. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 4135–4162. [Google Scholar] [CrossRef]
- Nabi, M.M.; Senyurek, V.; Cafer Gurbuz, A.; Kurum, M. A Deep Learning-Based Soil Moisture Estimation in Conus Region Using Cygnss Delay Doppler Maps. In Proceedings of the 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 6177–6180. [Google Scholar]
- Khan, W.U.; Javed, M.A.; Zeadally, S.; Lagunas, E.; Chatzinotas, S. Intelligent and Secure Radio Environments for 6G Vehicular Aided HetNets: Key Opportunities and Challenges. arXiv 2022, arXiv:2210.02172. [Google Scholar]
- Khan, M.A.; Kumar, N.; Mohsan, S.A.H.; Khan, W.U.; Nasralla, M.M.; Alsharif, M.H.; Zywiolek, J.; Ullah, I. Swarm of UAVs for Network Management in 6G: A Technical Review. IEEE Trans. Netw. Serv. Manag. 2022, 1. [Google Scholar] [CrossRef]
- Khan, W.U.; Mahmood, A.; Bozorgchenani, A.; Jamshed, M.A.; Ranjha, A.; Lagunas, E.; Pervaiz, H.; Chatzinotas, S.; Ottersten, B.; Popovski, P. Opportunities for Intelligent Reflecting Surfaces in 6G-Empowered V2X Communications. arXiv 2022, arXiv:2210.00494. [Google Scholar]
- Khan, W.U.; Jamshed, M.A.; Lagunas, E.; Chatzinotas, S.; Li, X.; Ottersten, B. Energy Efficiency Optimization for Backscatter Enhanced NOMA Cooperative V2X Communications Under Imperfect CSI. IEEE Trans. Intell. Transp. Syst. 2022, 1–12. [Google Scholar] [CrossRef]
- Morshed, M.; Nabi, M.M.; Monzur, N.B. Frame By Frame Digital Video Denoising Using Multiplicative Noise Model. Int. J. Technol. Enhanc. Emerg. Eng. Res. 2014, 2, 1–6. [Google Scholar]
- Alaba, S.Y.; Nabi, M.M.; Shah, C.; Prior, J.; Campbell, M.D.; Wallace, F.; Ball, J.E.; Moorhead, R. Class-Aware Fish Species Recognition Using Deep Learning for an Imbalanced Dataset. Sensors 2022, 22, 8268. [Google Scholar] [CrossRef]
- Lee, J.; Hwang, K.-I. YOLO with adaptive frame control for real-time object detection applications. Multimed. Tools Appl. 2021, 81, 36375–36396. [Google Scholar] [CrossRef]
- Gregory, J.M.; Sahu, D.; Lancaster, E.; Sanchez, F.; Rocks, T.; Kaukeinen, B.; Fink, J.; Gupta, S.K. Active Learning for Testing and Evaluation in Field Robotics: A Case Study in Autonomous, Off-Road Navigation. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 8217–8223. [Google Scholar]
- Carruth, D.W.; Walden, C.T.; Goodin, C.; Fuller, S.C. Challenges in Low Infrastructure and Off-Road Automated Driving. In Proceedings of the 2022 Fifth International Conference on Connected and Autonomous Driving (MetroCAD), Detroit, MI, USA, 28–29 April 2022; pp. 13–20. [Google Scholar]
- Guan, H.; Wu, S.; Xu, S.; Gong, J.; Zhou, W. A planning framework of environment detection for unmanned ground vehicle in unknown off-road environment. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2021, 09544070211065200. [Google Scholar] [CrossRef]
Literature | Published Year | Area Covered | Technology Focus
---|---|---|---|
Chhaniyara et al. [12] | 2012 | Terrain trafficability analysis | Remote sensing technology |
Papadakis [13] | 2013 | Terrain traversability analysis | Sensor technology |
Ilas [14] | 2013 | Electronic sensing technologies | Sensor technology |
Babak et al. [15] | 2017 | The advancements in AGV technology | Sensor technology |
Lynch et al. [16] | 2019 | Sensor technologies | Sensor technology |
Hu et al. [17] | 2020 | Sensor fusion-based obstacle detection | Sensor technology |
Guastella et al. [18] | 2021 | Environmental perception | Learning-based methods |
This article | 2022 | Ground, positive, and negative obstacle detection | Both sensor technologies and learning-based methods
Literature | Sensors | Method | Detection
---|---|---|---|
Gao et al. [20], 2019 | Lidar | Deep Learning | Drivable ground
Chen et al. [21], 2017 | Lidar | Lidar-histogram | Obstacles and drivable ground
Liu et al. [19], 2019; Zhu et al. [25], 2019 | Lidar | Radial and Transverse feature; Bayesian Network | Drivable ground
Katramados et al. [22], 2009 | Camera | Vision/Traversability map | Drivable ground
Shaban et al. [23], 2021; Gao et al. [24], 2021; Dabbiru et al. [9], 2021 | Lidar + Camera | Semantic Segmentation | Drivable ground
Tang et al. [28], 2018 | Laser + Camera | Unsupervised Learning | Drivable ground
Reina et al. [30], 2016; Dahlkamp et al. [26], 2006; Sock et al. [31], 2016; McDaniel et al. [32], 2012 | Lidar/Laser + Camera | Supervised Learning/SVM | Drivable ground
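Several of the lidar-based entries in the table above (e.g., Gao et al. [20]; Liu et al. [19]) reduce the problem to deciding which cells of a ground grid are flat enough to drive on. The following is a minimal, illustrative NumPy sketch of that idea, not a reimplementation of any cited method; the cell size, the height-spread threshold, and the synthetic point cloud in the demo are all assumed values.

```python
import numpy as np

def drivable_grid(points, cell=0.5, max_height_range=0.15):
    """Label grid cells of a lidar point cloud as drivable (True) or not.

    points: (N, 3) array of x, y, z coordinates in the vehicle frame.
    cell: grid resolution in metres (assumed value, tune per sensor).
    max_height_range: max z-spread within a cell still considered flat (assumed).
    """
    xy_idx = np.floor(points[:, :2] / cell).astype(int)
    xy_idx -= xy_idx.min(axis=0)              # shift so indices start at zero
    grid_shape = xy_idx.max(axis=0) + 1

    z_min = np.full(grid_shape, np.inf)
    z_max = np.full(grid_shape, -np.inf)
    np.minimum.at(z_min, (xy_idx[:, 0], xy_idx[:, 1]), points[:, 2])
    np.maximum.at(z_max, (xy_idx[:, 0], xy_idx[:, 1]), points[:, 2])

    observed = np.isfinite(z_min)
    # A cell is drivable if it was observed and its height spread is small.
    drivable = observed & ((z_max - z_min) <= max_height_range)
    return drivable, observed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flat = np.column_stack([rng.uniform(0, 10, 5000),
                            rng.uniform(-5, 5, 5000),
                            rng.normal(0.0, 0.02, 5000)])
    rock = flat[:200].copy()
    rock[:, 2] += 0.5                          # synthetic positive obstacle points
    drivable, observed = drivable_grid(np.vstack([flat, rock]))
    print(drivable.sum(), "of", observed.sum(), "observed cells look drivable")
```

In practice the cited methods add far more structure (plane fitting, learned classifiers, temporal filtering); the sketch only conveys the grid-and-height-statistics pattern they share.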
Literature | Sensors | Method | Detection |
---|---|---|---|
Huertas et al. [34], 2005, Broggi et al. [38], 2005 | Camera | Stereo Algorithm V-disparity | Positive obstacles/vegetation |
Maturana et al. [35], 2018, Foroutan et al. [40], 2021 | Lidar/Camera | CNN/Machine learning | Positive obstacles |
Manderson et al. [36], 2020, Nadav et al. [37], 2017 | Camera | Supervised Learning | Positive obstacles |
Kuthirummal et al. [43], 2011 | Lidar/Camera | Graph Traversal Algorithm | Positive + Negative obstacles |
Chen et al. [41], 2020 | Laser | (LM-BP) neural network | Positive obstacles |
Zhang et al. [42], 2014 | Camera | Stereo vision + SVR | Positive obstacles |
Bradley et al. [50], 2004 | Infrared + Camera | Vision | Positive obstacles/vegetation |
Reina et al. [45], 2017, Manduchi et al. [44], 2005 | Camera + Ladar/Radar | Supervised Learning | Positive obstacles |
Giannì et al. [46], 2017 | Radar + Lidar + Sonar | Kalman filter | Positive obstacles
Ollis and Jochem [49], 2003 | Radar + Ladar | Density Map | Positive obstacles + drivable ground
Kragh and Underwood [48], 2020 | Camera + Lidar | Deep learning | Positive obstacles
Nguyen et al. [51], 2012 | Blowing object | Motion compensation and detection | Positive obstacles/vegetation |
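The stereo entries in the table above (Huertas et al. [34]; Broggi et al. [38]; see also the U-V-disparity formulation of Hu and Uchimura [56]) share a common core: accumulate a per-row histogram of disparity values, fit the slanted line that the ground plane produces, and flag pixels whose disparity rises well above that line as positive obstacles. The snippet below is a bare-bones sketch of that pipeline and assumes a dense disparity map is already available; the histogram size and the `ground_margin` threshold are illustrative choices, not values from the cited papers.

```python
import numpy as np

def v_disparity_obstacles(disparity, ground_margin=3.0, hist_bins=64):
    """Toy V-disparity obstacle mask from a dense disparity map.

    disparity: (H, W) float array; invalid pixels are <= 0.
    ground_margin: how far (in disparity units) a pixel may rise above the
        fitted ground line before it is flagged (assumed value).
    """
    h, _ = disparity.shape
    valid = disparity > 0
    d_max = float(disparity[valid].max())

    # V-disparity image: one disparity histogram per image row.
    v_disp = np.zeros((h, hist_bins))
    bin_of = np.clip((disparity / d_max * (hist_bins - 1)).astype(int), 0, hist_bins - 1)
    for v in range(h):
        np.add.at(v_disp[v], bin_of[v][valid[v]], 1)

    # Crude ground model: take the dominant disparity bin of each row and fit
    # a line d = a*v + b through those peaks with least squares.
    rows = np.arange(h)
    peaks = v_disp.argmax(axis=1) / (hist_bins - 1) * d_max
    keep = v_disp.max(axis=1) > 0
    a, b = np.polyfit(rows[keep], peaks[keep], 1)

    # Pixels whose disparity sits well above the ground line at their row are
    # above the ground plane and treated as positive obstacles.
    ground_d = a * rows[:, None] + b
    return valid & (disparity > ground_d + ground_margin)
```

The cited works use robust line extraction (e.g., Hough voting) and handle vehicle pitch and roll; the least-squares fit here is only the simplest stand-in for that step.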
Literature | Sensors | Method | Detection |
---|---|---|---|
Larson and Trivedi [52], 2011; Sinha et al. [53], 2013; Heckman et al. [54], 2007 | Lidar/Laser | Missing data analysis | Negative obstacles
Shang et al. [4], 2014 | Lidar | SVM | Negative obstacles |
Rankin et al. [60], 2007 | Infrared | Thermal property analysis | Negative obstacles |
Rankin et al. [59], 2005; Hu et al. [58], 2011; Bajracharya et al. [57], 2013; Karunasekera et al. [55], 2017 | Camera | Vision | Negative obstacles
Goodin et al. [61], 2021 | Lidar | Curvature analysis | Negative obstacles |
Dima et al. [33], 2004 | Camera + Lidar | HLD | Positive + Negative obstacles
Morton and Olson [64], 2011 | Camera + Infrared camera + Laser | Color and texture analysis | Positive + Negative obstacles
Wang et al. [65], 2016 | InSAR | Data Fusion | Positive + Negative obstacles
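The missing-data entries in the table above (Larson and Trivedi [52]; Sinha and Papadakis [53]; Heckman et al. [54]) rely on the observation that a ditch or hole returns no lidar hits, so the gap between consecutive ground returns along a scan line becomes much wider than the beam geometry predicts for flat ground. The sketch below flags such gaps in a single scan line; the assumed vertical beam resolution, sensor height, and gap-ratio threshold are illustrative values only, not parameters from the cited papers.

```python
import numpy as np

def negative_obstacle_gaps(ground_xy, sensor_height=1.5, gap_ratio=2.0):
    """Flag suspicious gaps between consecutive ground returns of one scan line.

    ground_xy: (N, 2) array of (forward, lateral) coordinates of ground hits.
    sensor_height: lidar mounting height in metres (assumed value).
    gap_ratio: a gap is suspicious when it is this many times wider than the
        spacing expected from the beam geometry on flat ground (assumed value).
    """
    rng = np.hypot(ground_xy[:, 0], ground_xy[:, 1])
    order = np.argsort(rng)
    r = rng[order]

    # For a beam grazing flat ground from height h, the spacing between
    # consecutive returns grows roughly as dr ≈ r^2 * d_theta / h.
    d_theta = np.deg2rad(0.4)                 # assumed vertical angular resolution
    expected = r[:-1] ** 2 * d_theta / sensor_height
    actual = np.diff(r)

    suspicious = actual > gap_ratio * np.maximum(expected, 1e-3)
    # Return the near edge of each suspicious gap (candidate hole location).
    return ground_xy[order][:-1][suspicious]
```

A full detector would confirm candidates over several frames and check the far edge of the gap, as the cited methods do; this sketch only shows the core gap test.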
Sensors | Data Type | Resolution |
---|---|---|
Camera [66] | Images | High |
Lidar [46] | Point clouds | High |
Radar [46] | Radio frequency | Low |
Laser [66] | Signal reflectivity | High |
Ladar [67] | Signal reflectivity | High |
Infrared [60] | Electromagnetic radiation | Low |
Stereo [42] | Disparity images | High
Sonar [46,66] | Sound reflection | Low |
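When two of the modalities above observe the same quantity, for example lidar and radar range to the same obstacle, a recursive estimator such as the Kalman filter used by Giannì et al. [46] is the usual way to fuse them. The snippet below is a deliberately simple one-dimensional sketch of that fusion step; the sensor noise variances and the constant-range process model are assumptions for illustration, not a model of any particular system.

```python
import numpy as np

def kalman_fuse_range(z_lidar, z_radar, var_lidar=0.02**2, var_radar=0.25**2,
                      x_prev=None, var_prev=None, process_var=0.05**2):
    """One Kalman predict/update cycle fusing a lidar and a radar range reading.

    All variances are illustrative assumptions, not calibrated sensor models.
    Returns the fused range estimate and its variance.
    """
    if x_prev is None:
        x, p = z_lidar, var_lidar            # initialize from the sharper sensor
    else:
        x, p = x_prev, var_prev + process_var  # predict: constant range + noise

    for z, r in ((z_lidar, var_lidar), (z_radar, var_radar)):
        k = p / (p + r)                      # Kalman gain
        x = x + k * (z - x)                  # measurement update
        p = (1.0 - k) * p
    return x, p

if __name__ == "__main__":
    est, var = None, None
    rng = np.random.default_rng(1)
    true_range = 12.0
    for _ in range(5):
        est, var = kalman_fuse_range(true_range + rng.normal(0, 0.02),
                                     true_range + rng.normal(0, 0.25),
                                     x_prev=est, var_prev=var)
    print(f"fused range ≈ {est:.3f} m (variance {var:.5f})")
```

The same structure extends to full state vectors (position, velocity) once the sensors are time-synchronized and expressed in a common frame, which is where most of the practical fusion effort lies.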
Dataset Name | Purpose | Data Type | Application |
---|---|---|---|
Astyx Dataset HiRes2019 [77] | 3D Object detection | Radar-centric information | On-road |
Berkeley DeepDrive [78] | Obstacles, drivable areas, and lane detection | Video sequence | On-road |
Landmarks [79] | Landmark detection | Camera images | On-road and Off-road |
Level 5 [80] | Traffic agent and path detection | Camera and lidar images | On-road |
nuScenes Dataset [81] | Object detection | Camera and lidar images | On-road |
Open Images V5 [82] | Object detection | Camera images | On-road |
Oxford Radar RobotCar [83] | Path planning | Radar route | On-road |
Pandaset [84] | Understanding scenarios | Camera and lidar images | On-road |
Waymo Open Dataset [85] | Understanding scenarios | Video sequence | On-road and Off-road |
RELLIS-3D Dataset [86] | Object detection | Camera and lidar images | Off-road |
CaT: CAVS Traversability Dataset [87] | Traversability | Camera images | Off-road |
Off-road Terrain Dataset [88] | Understanding scenarios | Camera images | Off-road |
Freiburg Forest [89] | Understanding scenarios | Camera images | On-road and Off-road |
ROOAD [90] | Localization | Camera images | Off-road |
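Most of the off-road datasets above (e.g., RELLIS-3D [86], CaT [87], Freiburg Forest [89]) ship camera frames with per-pixel labels, so they plug into standard semantic-segmentation training loops. The skeleton below shows one hedged way to wrap such an image/mask folder as a PyTorch dataset; the `images/` and `masks/` directory layout, the `.png` suffix, and the matching file names are hypothetical and must be adapted to the actual release format.

```python
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class OffRoadSegDataset(Dataset):
    """Generic image/mask pair loader for off-road segmentation datasets.

    Hypothetical layout (adjust folder names and suffixes per dataset):
        root/images/xxx.png   RGB frames
        root/masks/xxx.png    integer class-ID masks with matching names
    """

    def __init__(self, root, transform=None):
        self.image_paths = sorted(Path(root, "images").glob("*.png"))
        self.mask_dir = Path(root, "masks")
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img_path = self.image_paths[idx]
        image = np.array(Image.open(img_path).convert("RGB"), dtype=np.float32) / 255.0
        mask = np.array(Image.open(self.mask_dir / img_path.name), dtype=np.int64)
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        # HWC float image -> CHW tensor; mask -> long tensor of class IDs.
        return torch.from_numpy(image).permute(2, 0, 1), torch.from_numpy(mask)
```

Wrapped this way, the dataset can be fed to a `torch.utils.data.DataLoader` and any off-the-shelf segmentation model; class remapping and augmentation would normally be added in `transform`.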