Survey of Autonomous Vehicles’ Collision Avoidance Algorithms
Abstract
1. Introduction
- This paper comprehensively surveys various collision avoidance algorithms for autonomous vehicles (AVs), encompassing methodologies like sensor-based approaches, path planning, decision-making strategies, and machine-learning techniques.
- A detailed comparative analysis is presented, highlighting each methodology’s strengths, limitations, and real-world applicability, enabling readers to grasp the advantages and disadvantages of different approaches quickly.
- This survey identifies current projects and challenges while suggesting potential areas for future research to improve collision avoidance algorithms.
2. Sensor Technologies in Collision Avoidance
2.1. Camera
2.2. Radar
2.3. LiDAR (Light Detection and Ranging)
2.4. Fusion Sensor
2.5. Summary of Sensor-Based Approaches
| Sensor | Approach | Strengths | Limitations |
|---|---|---|---|
| Camera | Ref. [16] VLC-based (Visible Light Communication) positioning using a monocular camera | Fast, low-latency communication; cost-effective | Requires line of sight; sensitive to lighting conditions |
| | Ref. [17] Synthetic dataset for training vision algorithms for autonomous driving | Customizable for various conditions (weather, lighting) | May not generalize to real-world environments |
| | Refs. [18,21] Synthetic data for training vision models for AVs | Accurate position and orientation estimation | May not generalize to real-world scenarios |
| | Ref. [19] Deep learning with inverse perspective mapping for vehicle orientation | Effectively predicts collision risks, even in complex environments | High computational demand; requires large datasets |
| | Ref. [20] VLC for vehicle-to-vehicle tracking | Real-time tracking; low cost | Sensitive to lighting |
| Radar | Ref. [22] Radar-based vehicle detection for ADAS | Reliable for real-time ADAS applications | High computational requirements |
| | Ref. [23] Machine learning on radar data to enhance vehicle position estimation | Robust in adverse weather such as rain or fog | Requires large datasets and computational resources |
| | Ref. [24] Data-driven method to improve radar accuracy for vehicle position estimation | Better accuracy than conventional radar estimation methods | Real-time applicability and dataset dependency |
| | Ref. [25] Deep-learning-based radar for collision avoidance | Accurate detection using deep learning | High computational requirements |
| LiDAR | Ref. [26] Point cloud map generation and localization from 3D LiDAR scans for autonomous vehicles | Highly accurate 3D mapping; useful for real-time localization in AVs | High computational cost; large datasets in dynamic environments |
| | Ref. [27] Object detection for autonomous vehicles using the YOLO algorithm with sensor-based technology | High detection accuracy; fast processing times | Depends on data quality; affected by adverse weather |
| | Ref. [28] 3D LiDAR for real-time obstacle detection and tracking | Accurate dynamic obstacle detection | High computational demand; large storage and processing capacity |
| Fusion Sensor | Refs. [30,31] Fuses LiDAR depth data with camera visuals for real-time vehicle detection and 3D object tracking | Combines depth and visual data for better accuracy | Requires complex fusion algorithms; high computational cost |
| | Ref. [32] Real-time obstacle detection using a YOLO model | Fast detection; effective for small objects | Performance drops in low-light conditions |
| | Ref. [33] Cross-modal supervision for radar and vision object detection | Combines radar and vision data for object tracking | High computational demand, limiting real-time efficiency |
| | Ref. [34] CNNs to improve vehicle detection performance | Relevant for precise navigation and collision avoidance | High computational cost; large datasets required |
| | Ref. [35] 3D LiDAR point cloud fusion with image segmentation | High accuracy in object localization | Computational complexity; high data-processing demands |
| | Ref. [36] Cross-modal supervision combining radar and vision for object detection | Leverages supervision to improve accuracy and robustness | Requires large training datasets; high computational cost |
| | Ref. [37] Combines LiDAR and millimeter-wave radar for robust navigation and collision avoidance | Effective in low-visibility environments; combines depth and radar data | High processing demand from radar–LiDAR fusion |
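Fusion pipelines such as those in Refs. [30,31,35] rest on projecting LiDAR points into the camera image so that depth can be attached to visual detections. A minimal sketch of that projection step (the intrinsics `K` and the identity extrinsic transform below are illustrative assumptions, not values from any cited work):

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, T_cam_lidar):
    """Project 3D LiDAR points (N x 3) into pixel coordinates.

    points_xyz: points in the LiDAR frame.
    K: 3x3 camera intrinsic matrix.
    T_cam_lidar: 4x4 extrinsic transform from LiDAR to camera frame.
    Returns pixel coordinates (M x 2) and depths (M,) for points in front of the lens.
    """
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])   # N x 4 homogeneous points
    cam = (T_cam_lidar @ homog.T).T[:, :3]             # points in the camera frame
    in_front = cam[:, 2] > 0.1                         # discard points behind the lens
    cam = cam[in_front]
    pix = (K @ cam.T).T                                # perspective projection
    pix = pix[:, :2] / pix[:, 2:3]                     # normalize by depth
    return pix, cam[:, 2]

# Toy example: one point 10 m straight ahead on the optical axis
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
T = np.eye(4)  # assume LiDAR and camera frames coincide (illustrative only)
pix, depth = project_lidar_to_image(np.array([[0.0, 0.0, 10.0]]), K, T)
# The point projects to the principal point (320, 240) at depth 10 m
```

In a real pipeline the projected pixels are then matched against 2D detections (e.g., YOLO boxes) to assign each detection a range.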
3. Collision Avoidance Techniques
3.1. Overview of Collision Avoidance Algorithms
- Path-Planning Algorithms
- Decision-Making Methods
- Machine Learning Approaches
3.2. Path Planning Algorithms
3.2.1. Path Planning: Classical Approaches
A* Algorithm
Dijkstra’s Algorithm
Rapidly Exploring Random Tree (RRT)
Probabilistic Roadmap (PRM)
Dynamic Window Approach (DWA)
Artificial Potential Fields (APF)
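To make the classical planners above concrete, here is a compact A* search over a 4-connected occupancy grid (a didactic sketch; production AV planners typically search state lattices or continuous spaces):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle), 4-connected moves.

    Uses Manhattan distance as the admissible heuristic.
    Returns the path as a list of cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f = g + h, g, cell, parent)
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                          # already expanded at lower cost
        came_from[cur] = parent
        if cur == goal:                       # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_score.get(nxt, float("inf")):
                    g_score[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # detours around the obstacle row
```

Dijkstra's algorithm is the special case with a zero heuristic; RRT and PRM trade this grid discretization for random sampling of the continuous space.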
3.2.2. Path Planning: Machine and Deep Learning Techniques
Deep Supervised Learning Techniques
Reinforcement Learning Techniques (RL)
Deep Reinforcement Learning Techniques (DRL)
Value-Based/Policy-Based Methods
Actor–Critic Methods
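The value-based methods above all build on the temporal-difference update at the core of Q-learning. A toy tabular version on a 1D corridor (the states, rewards, and hyperparameters are illustrative assumptions, not a production DRL stack):

```python
import random

def q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a 1D corridor: states 0..n-1, goal at the right end.

    Actions: 0 = move left, 1 = move right. Reward +1 on reaching the goal, else 0.
    Returns the learned Q-table.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            s2 = min(n_states - 1, s + 1) if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(5)]
# The greedy policy learns to move right in every non-terminal state
```

Deep RL replaces the table `Q` with a neural network and the exhaustive state visit with sampled experience, but the update rule is the same.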
3.2.3. Path Planning: Meta-Heuristic Optimization Techniques
3.2.4. Summary of Path Planning
3.3. Decision-Making Strategies
3.3.1. Rule-Based Models
3.3.2. Probability-Based Models
3.3.3. Learning-Based Models
| Model | Approach | Application | Key Findings |
|---|---|---|---|
| Rule-Based | Ref. [124] Behavioral decision-making based on driving risk | Real-time risk assessment of intelligent vehicles | Developed a model based on Lagrange's equations to assess driving risk and propose optimal lateral and longitudinal accelerations for collision avoidance |
| | Ref. [125] Fuzzy risk assessment | Risk assessment with interval numbers and assessment distributions | Developed a fuzzy inference system to handle uncertainty and improve vehicle safety in complex environments |
| | Ref. [126] Decision tree on trajectory data | Prediction of pedestrian behavior at intersections | Applied gradient-boosted decision trees to predict pedestrian decisions at signalized intersections, enhancing vehicle safety |
| | Ref. [127] Risk-aware decision-making | Planning at uncontrolled intersections | Used a strategy tree to guide vehicles through uncontrolled intersections with risk-aware decision-making |
| | Ref. [128] Game-theoretic decision-making | Non-cooperative decision-making in autonomous driving | Incorporated driving characteristics into a non-cooperative game-theoretic decision model |
| Probability-Based | Ref. [129] Interactive decision-making | Left turns by autonomous vehicles | Presented an interactive model for left turns at uncontrolled intersections to reduce collision risk |
| | Ref. [130] Random-forest-based (RF) lane-change strategy | Lane-change recognition in intelligent driving systems | Used an RF algorithm for lane-change strategy analysis, improving decision accuracy in lane-change scenarios |
| Learning-Based | Ref. [132] Self-learning optimal cruise control | Cruise-control decisions adapted to individual driving | Applied self-learning techniques to optimize cruise-control decisions based on car-following style, improving driving performance |
| | Ref. [133] DRL and planning | Tactical decision-making for autonomous driving | Combined DRL with planning algorithms to improve tactical decision-making |
| | Ref. [134] Mixed-strategy Nash equilibrium | Autonomous driving at uncontrolled intersections | Proposed a decision framework based on intention prediction and mixed-strategy Nash equilibrium for safer, more efficient navigation |
| | Ref. [135] Reinforcement learning (RL) | Autonomous decision-making on highways | Developed an RL approach for highway decision-making, improving performance in lane changes and overtaking |
| | Ref. [136] Multi-objective multi-agent cooperative decision-making | Multi-agent decision-making for autonomous vehicles | Introduced MO-MIX, a DRL-based framework for multi-objective, multi-agent cooperative decision-making in complex environments |
| | Ref. [137] DRL | Decision-making at unsignalized intersections | Proposed DRL-based decision models for AVs at unsignalized intersections to improve traffic efficiency |
| | Ref. [138] RL-based autonomous driving | Intersection driving in the CARLA simulator | Applied RL to intersection scenarios in CARLA, demonstrating improved decision-making |
| | Ref. [139] DRL for task transfer | Decision-making at unsignalized intersections | Developed a DRL approach for transferring driving tasks across unsignalized intersections, improving decision-making |
| | Ref. [140] Risk-aware decision-making with RL | Automated driving at occluded intersections | Proposed a risk-aware high-level decision framework using RL for navigating intersections |
| | Ref. [141] DRL with camera sensors | Automated driving in complex intersections | Introduced a DRL approach based on camera sensor data for driving automation in intersection scenarios |
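At their simplest, rule-based models such as those in Refs. [124,127] reduce to thresholded risk metrics. A minimal time-to-collision (TTC) braking rule illustrates the pattern (the 2 s and 4 s thresholds are hypothetical, not drawn from the cited works):

```python
def ttc_decision(gap_m, ego_speed_mps, lead_speed_mps,
                 brake_ttc=2.0, warn_ttc=4.0):
    """Classify a car-following situation by time-to-collision.

    gap_m: bumper-to-bumper distance to the lead vehicle in meters.
    Returns 'brake', 'warn', or 'keep'.
    """
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0:
        return "keep"             # not closing in: no collision course
    ttc = gap_m / closing         # seconds until contact at current speeds
    if ttc < brake_ttc:
        return "brake"
    if ttc < warn_ttc:
        return "warn"
    return "keep"

# 30 m gap, closing at 10 m/s -> TTC = 3 s, inside the warning zone
decision = ttc_decision(30.0, 20.0, 10.0)
```

Probability- and learning-based models replace these fixed thresholds with estimated risk distributions or learned policies, which is what the rows above contrast.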
3.3.4. Summary of Decision-Making Strategies
3.4. Machine Learning Approaches
3.4.1. Deep Learning, Reinforcement Learning, and Supervised Learning
3.4.2. Hybrid Learning Approaches
3.4.3. Summary of Machine Learning
3.5. Summary of Key Techniques
4. Real-World Applications and Testing
4.1. Real-World AV Testing Projects
4.2. Real-World Performance of Collision Avoidance Algorithms
- Sensor-Based Approaches: Sensor fusion (e.g., LIDAR, radar, cameras) is essential for detecting real-time obstacles. In Waymo’s testing, for instance, LIDAR provides accurate 3D mapping of the environment, enabling the vehicle to detect obstacles even in low-light conditions. However, real-world issues such as sensor blindness due to heavy rain or snow have been reported, affecting detection accuracy. While effective in clear weather, Tesla’s camera-based system faces challenges in low-visibility conditions.
- Path Planning and Decision-Making: Decision-making algorithms that combine rule-based and optimization-based techniques have been applied successfully in controlled environments such as highways (e.g., Tesla Autopilot's Navigate on Autopilot). However, these algorithms often struggle with unpredictable human behavior in urban environments with mixed traffic, such as jaywalking pedestrians or erratic drivers. Reinforcement learning models, like those used in Baidu Apollo, have shown potential in these settings by learning from interaction with the environment, but they still require extensive training and simulation before deployment on real roads.
- Machine Learning Approaches: Machine learning and deep learning models have been integral to real-world testing. Waymo's use of convolutional neural networks (CNNs) for image-based obstacle detection has proven effective across varied driving conditions. However, these models require large amounts of labeled data for training, and their performance can degrade on scenarios not represented in the training data. This limitation is particularly problematic in urban settings, where unusual events (e.g., a pedestrian running into traffic) can occur.
4.2.1. Real-World Accident Cases
4.2.2. Challenges in Real-World Environments
4.3. Cross-Domain Technologies in Collision Avoidance Methods
4.3.1. Edge Computing
4.3.2. V2X Communication
4.3.3. Virtual Reality (VR)
4.3.4. Other Information and Communication Technologies
4.4. Recommendations for Improving Real-World Testing
- Integration of Real-World and Simulated Data: Combining real-world driving data with simulated environments can make the training of machine learning models more effective.
- Edge Computing for Real-Time Processing: Leveraging edge computing for faster data processing can help reduce latency, allowing AVs to react quickly to obstacles and collisions.
- Multi-Sensor Fusion: Integrating data from multiple sensors (LIDAR, radar, cameras) improves performance in challenging conditions. Refining sensor fusion algorithms is essential for real-world performance.
- Ethical and Safety Considerations: AV systems must prioritize human safety in decision-making, particularly in ambiguous situations where a collision is unavoidable. Ethical frameworks must be integrated into the decision-making process to ensure AVs prioritize human life.
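The multi-sensor fusion recommendation can be made concrete with the standard inverse-variance (Kalman-style) combination of two independent range estimates; the sensor noise figures below are hypothetical:

```python
def fuse_estimates(z1, var1, z2, var2):
    """Fuse two independent measurements of the same quantity.

    Each measurement is weighted by the inverse of its variance; the fused
    variance is smaller than either input, which is the statistical payoff
    of multi-sensor fusion.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Radar reports 20.0 m (variance 0.25); camera reports 20.8 m (variance 1.0)
d, v = fuse_estimates(20.0, 0.25, 20.8, 1.0)
# Fused estimate is pulled toward the lower-noise radar: 20.16 m, variance 0.2
```

Full fusion stacks (e.g., LiDAR–camera–radar) generalize this idea to vectors and time via Kalman or particle filters, but the weighting principle is the same.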
4.5. Summary
5. Conclusions and Future Direction
5.1. Future Research Directions
- Edge Computing for Real-Time Decision Making: Deep learning and Reinforcement Learning are computationally intensive. Edge computing reduces latency by processing data closer to the source, making it suitable for real-time AV applications. Future research should focus on optimizing edge computing for handling high-volume sensor data.
- Federated Learning for Data Privacy: Federated learning enables AVs to train models locally, sharing updates instead of raw data, thereby preserving privacy. Research should explore its application in improving collision avoidance algorithms, particularly in adapting to different driving conditions.
- Improving Sim-to-Real Transfer in RL: The gap between simulated and real-world environments challenges RL in AVs. Future work should improve simulation accuracy and apply domain adaptation techniques to bridge this gap.
5.2. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Ma, Y.; Wang, Z.; Yang, H.; Yang, L. Artificial intelligence applications in the development of autonomous vehicles: A survey. IEEE/CAA J. Autom. Sin. 2020, 7, 315–329.
- Li, J.; Cheng, H.; Guo, H.; Qiu, S. Survey on Artificial Intelligence for Vehicles. Automot. Innov. 2018, 1, 2–14.
- Tong, W.; Hussain, A.; Bo, W.X.; Maharjan, S. Artificial Intelligence for Vehicle-to-Everything: A Survey. IEEE Access 2019, 7, 10823–10843.
- Chen, L.; Ivan, L.; Huang, C.; Xing, Y.; Tian, D.; Li, L.; Hu, Z.; Teng, S.; Lv, C.; Wang, J.; et al. Milestones in Autonomous Driving and Intelligent Vehicles—Part I: Control, Computing System Design, Communication, HD Map, Testing, and Human Behaviors. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 5831–5847.
- Fu, Y.; Li, C.; Yu, F.R.; Luan, T.H.; Zhang, Y. A Survey of Driving Safety With Sensing, Vehicular Communications, and Artificial Intelligence-Based Collision Avoidance. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6142–6163.
- Tijani, A. Obstacle Avoidance Path Design for Autonomous Vehicles – A Review. Tech. Rom. J. Appl. Sci. Technol. 2021, 3, 64–81.
- Dahl, J.; de Campos, G.R.; Olsson, C.; Fredriksson, J. Collision Avoidance: A Literature Review on Threat-Assessment Techniques. IEEE Trans. Intell. Veh. 2019, 4, 101–113.
- Muzahid, A.J.M.; Kamarulzaman, S.F.; Rahman, M.A.; Murad, S.A.; Kamal, M.A.S.; Alenezi, A.H. Multiple vehicle cooperation and collision avoidance in automated vehicles: Survey and an AI-enabled conceptual framework. Sci. Rep. 2023, 13, 603.
- Liu, Y.; Bucknall, R. A survey of formation control and motion planning of multiple unmanned vehicles. Robotica 2018, 36, 1–29.
- Verstraete, T.; Muhammad, N. Pedestrian Collision Avoidance in Autonomous Vehicles: A Review. Computers 2024, 13, 78.
- Rasouli, A.; Tsotsos, J.K. Autonomous Vehicles That Interact With Pedestrians: A Survey of Theory and Practice. IEEE Trans. Intell. Transp. Syst. 2020, 21, 900–918.
- Ahmed, S.; Huda, M.N.; Rajbhandari, S.; Saha, C.; Elshaw, M.; Kanarachos, S. Pedestrian and Cyclist Detection and Intent Estimation for Autonomous Vehicles: A Survey. Appl. Sci. 2019, 9, 2335.
- Kabil, A.; Rabieh, K.; Kaleem, F.; Azer, M.A. Vehicle to Pedestrian Systems: Survey, Challenges and Recent Trends. IEEE Access 2022, 10, 123981–123994.
- Lu, L.; Fasano, G.; Carrio, A.; Lei, M.; Bavle, H.; Campoy, P. A comprehensive survey on non-cooperative collision avoidance for micro aerial vehicles: Sensing and obstacle detection. J. Field Robot. 2023, 40, 1697–1720.
- Huang, S.; Teo, R.S.H.; Tan, K.K. Collision avoidance of multi unmanned aerial vehicles: A review. Annu. Rev. Control 2019, 48, 147–164.
- He, J.; Tang, K.; He, J.; Shi, J. Effective vehicle-to-vehicle positioning method using monocular camera based on VLC. Opt. Express 2020, 28, 4433–4443.
- Cabon, Y.; Murray, N.; Humenberger, M. Virtual KITTI 2. arXiv 2020, arXiv:2001.10773.
- Mallik, A.; Gaopande, M.L.; Singh, G.; Ravindran, A.; Iqbal, Z.; Chao, S.; Revalla, H.; Nagasamy, V. Real-time Detection and Avoidance of Obstacles in the Path of Autonomous Vehicles Using Monocular RGB Camera. SAE Int. J. Adv. Curr. Pract. Mobil. 2022, 5, 622–632.
- Rill, R.A.; Faragó, K.B. Collision avoidance using deep learning-based monocular vision. SN Comput. Sci. 2021, 2, 375.
- Joa, E.; Sun, Y.; Borrelli, F. Monocular Camera Localization for Automated Vehicles Using Image Retrieval. arXiv 2021, arXiv:2109.06296.
- Zhe, T.; Huang, L.; Wu, Q.; Zhang, J.; Pei, C.; Li, L. Inter-Vehicle Distance Estimation Method Based on Monocular Vision Using 3D Detection. IEEE Trans. Veh. Technol. 2020, 69, 4907–4919.
- Muckenhuber, S.; Museljic, E.; Stettinger, G. Performance evaluation of a state-of-the-art automotive radar and corresponding modeling approaches based on a large labeled dataset. J. Intell. Transp. Syst. 2022, 26, 655–674.
- Sohail, M.; Khan, A.U.; Sandhu, M.; Shoukat, I.A.; Jafri, M.; Shin, H. Radar sensor based Machine Learning approach for precise vehicle position estimation. Sci. Rep. 2023, 13, 13837.
- Choi, W.Y.; Yang, J.H.; Chung, C.C. Data-Driven Object Vehicle Estimation by Radar Accuracy Modeling with Weighted Interpolation. Sensors 2021, 21, 2317.
- Srivastav, A.; Mandal, S. Radars for autonomous driving: A review of deep learning methods and challenges. IEEE Access 2023, 11, 97147–97168.
- Poulose, A.; Baek, M.; Han, D.S. Point cloud map generation and localization for autonomous vehicles using 3D lidar scans. In Proceedings of the 2022 27th Asia Pacific Conference on Communications (APCC), Jeju, Republic of Korea, 19–21 October 2022; pp. 336–341.
- Dazlee, N.M.A.A.; Khalil, S.A.; Rahman, S.A.; Mutalib, S. Object detection for autonomous vehicles with sensor-based technology using YOLO. Int. J. Intell. Syst. Appl. Eng. 2022, 10, 129–134.
- Saha, A.; Dhara, B.C. 3D LiDAR-based obstacle detection and tracking for autonomous navigation in dynamic environments. Int. J. Intell. Robot. Appl. 2024, 8, 39–60.
- Li, Y.; Ibanez-Guzman, J. Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Process. Mag. 2020, 37, 50–61.
- Guan, L.; Chen, Y.; Wang, G.; Lei, X. Real-time vehicle detection framework based on the fusion of LiDAR and camera. Electronics 2020, 9, 451.
- Kotur, M.; Lukić, N.; Krunić, M.; Lukač, Ž. Camera and LiDAR sensor fusion for 3D object tracking in a collision avoidance system. In Proceedings of the 2021 Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad, Serbia, 26–27 May 2021; pp. 198–202.
- Kim, J.; Kim, Y.; Kum, D. Low-level sensor fusion network for 3D vehicle detection using radar range-azimuth heatmap and monocular image. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020.
- Lim, S.; Jung, J.; Lee, B.H.; Choi, J.; Kim, S.C. Radar sensor-based estimation of vehicle orientation for autonomous driving. IEEE Sens. J. 2022, 22, 21924–21932.
- Choi, W.Y.; Kang, C.M.; Lee, S.H.; Chung, C.C. Radar accuracy modeling and its application to object vehicle tracking. Int. J. Control. Autom. Syst. 2020, 18, 3146–3158.
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11621–11631.
- Wang, Y.; Jiang, Z.; Gao, X.; Hwang, J.N.; Xing, G.; Liu, H. RODNet: Radar object detection using cross-modal supervision. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 504–513.
- Robsrud, D.N.; Øvsthus, Ø.; Muggerud, L.; Amendola, J.; Cenkeramaddi, L.R.; Tyapin, I.; Jha, A. Lidar-mmW Radar Fusion for Safer UGV Autonomous Navigation with Collision Avoidance. In Proceedings of the 2023 11th International Conference on Control, Mechatronics and Automation (ICCMA), Agder, Norway, 1–3 November 2023; pp. 189–194.
- Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140.
- Lin, Z.; Tian, Z.; Zhang, Q.; Zhuang, H.; Lan, J. Enhanced Visual SLAM for Collision-Free Driving with Lightweight Autonomous Cars. Sensors 2024, 24, 6258.
- Reda, M.; Onsy, A.; Haikal, A.Y.; Ghanbari, A. Path planning algorithms in the autonomous driving system: A comprehensive review. Robot. Auton. Syst. 2024, 174, 104630.
- Maw, A.A.; Tyan, M.; Lee, J.W. iADA*: Improved anytime path planning and replanning algorithm for autonomous vehicle. J. Intell. Robot. Syst. 2020, 100, 1005–1013.
- Thoresen, M.; Nielsen, N.H.; Mathiassen, K.; Pettersen, K.Y. Path planning for UGVs based on traversability hybrid A*. IEEE Robot. Autom. Lett. 2021, 6, 1216–1223.
- Wang, H.; Qi, X.; Lou, S.; Jing, J.; He, H.; Liu, W. An efficient and robust improved A* algorithm for path planning. Symmetry 2021, 13, 2213.
- Kim, D.; Kim, G.; Kim, H.; Huh, K. A hierarchical motion planning framework for autonomous driving in structured highway environments. IEEE Access 2022, 10, 20102–20117.
- Liu, T.; Zhang, J. An improved path planning algorithm based on fuel consumption. J. Supercomput. 2022, 78, 12973–13003.
- Chen, R.; Hu, J.; Xu, W. An RRT-Dijkstra-based path planning strategy for autonomous vehicles. Appl. Sci. 2022, 12, 11982.
- Zhu, D.D.; Sun, J.Q. A new algorithm based on Dijkstra for vehicle path planning considering intersection attribute. IEEE Access 2021, 9, 19761–19775.
- Wang, J.; Chi, W.; Li, C.; Wang, C.; Meng, M.Q.H. Neural RRT*: Learning-based optimal path planning. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1748–1758.
- Li, X.; Li, G.; Bian, Z. Research on Autonomous Vehicle Path Planning Algorithm Based on Improved RRT* Algorithm and Artificial Potential Field Method. Sensors 2024, 24, 3899.
- Feraco, S.; Luciani, S.; Bonfitto, A.; Amati, N.; Tonoli, A. A local trajectory planning and control method for autonomous vehicles based on the RRT algorithm. In Proceedings of the 2020 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT Automotive), Torino, Italy, 18–20 November 2020; pp. 1–6.
- Wang, J.; Li, B.; Meng, M.Q.H. Kinematic Constrained Bi-directional RRT with Efficient Branch Pruning for robot path planning. Expert Syst. Appl. 2021, 170, 114541.
- Huang, G.; Ma, Q. Research on path planning algorithm of autonomous vehicles based on improved RRT algorithm. Int. J. Intell. Transp. Syst. Res. 2022, 20, 170–180.
- Zhang, X.; Zhu, T.; Xu, Y.; Liu, H.; Liu, F. Local Path Planning of the Autonomous Vehicle Based on Adaptive Improved RRT Algorithm in Certain Lane Environments. Actuators 2022, 11, 109.
- Rong, J.; Arrigoni, S.; Luan, N.; Braghin, F. Attention-based sampling distribution for motion planning in autonomous driving. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 5671–5676.
- Jin, X.; Yan, Z.; Yang, H.; Wang, Q. A practical sampling-based motion planning method for autonomous driving in unstructured environments. IFAC-PapersOnLine 2021, 54, 449–453.
- Kuutti, S.; Fallah, S.; Katsaros, K.; Dianati, M.; Mccullough, F.; Mouzakitis, A. A survey of the state-of-the-art localization techniques and their potentials for autonomous vehicle applications. IEEE Internet Things J. 2018, 5, 829–846.
- Tyagi, A.K.; Aswathy, S. Autonomous Intelligent Vehicles (AIV): Research statements, open issues, challenges and road for future. Int. J. Intell. Netw. 2021, 2, 83–102.
- Zhang, Y.; Wang, S. LSPP: A novel path planning algorithm based on perceiving line segment feature. IEEE Sens. J. 2021, 22, 720–731.
- Gopika, M.; Bindu, G.; Ponmalar, M.; Usha, K.; Haridas, T. Smooth PRM implementation for autonomous ground vehicle. In Proceedings of the 2022 IEEE 1st International Conference on Data, Decision and Systems (ICDDS), Bangalore, India, 2–3 December 2022; pp. 1–5.
- Daniel, K.L.; Poonia, R.C. An In-Depth Analysis of Collision Avoidance Path Planning Algorithms in Autonomous Vehicles. Recent Adv. Comput. Sci. Commun. 2024, 17, 62–72.
- Rekabi Bana, F.; Krajník, T.; Arvin, F. Evolutionary optimization for risk-aware heterogeneous multi-agent path planning in uncertain environments. Front. Robot. AI 2024, 11, 1375393.
- Rakita, D.; Mutlu, B.; Gleicher, M. Single-query path planning using sample-efficient probability informed trees. IEEE Robot. Autom. Lett. 2021, 6, 4624–4631.
- Yeong-Ho, L.; Yeong-Jun, K.; Da-Un, J.; Ihn-Sik, W. Development of an integrated path planning algorithm for autonomous driving of unmanned surface vessel. In Proceedings of the 2020 20th International Conference on Control, Automation and Systems (ICCAS), Online, 13–16 October 2020; pp. 27–32.
- Li, Y.; Jin, R.; Xu, X.; Qian, Y.; Wang, H.; Xu, S.; Wang, Z. A mobile robot path planning algorithm based on improved A* algorithm and dynamic window approach. IEEE Access 2022, 10, 57736–57747.
- Zhong, X.; Tian, J.; Hu, H.; Peng, X. Hybrid path planning based on safe A* algorithm and adaptive window approach for mobile robot in large-scale dynamic environment. J. Intell. Robot. Syst. 2020, 99, 65–77.
- Liu, L.S.; Lin, J.F.; Yao, J.X.; He, D.W.; Zheng, J.S.; Huang, J.; Shi, P. Path planning for smart car based on Dijkstra algorithm and dynamic window approach. Wirel. Commun. Mob. Comput. 2021, 2021, 8881684.
- Lu, B.; Li, G.; Yu, H.; Wang, H.; Guo, J.; Cao, D.; He, H. Adaptive Potential Field-Based Path Planning for Complex Autonomous Driving Scenarios. IEEE Access 2020, 8, 225294–225305.
- Lin, P.; Choi, W.Y.; Lee, S.H.; Chung, C.C. Model Predictive Path Planning Based on Artificial Potential Field and Its Application to Autonomous Lane Change. In Proceedings of the 2020 20th International Conference on Control, Automation and Systems (ICCAS), Online, 13–16 October 2020; pp. 731–736.
- Li, H.; Wu, C.; Chu, D.; Lu, L.; Cheng, K. Combined Trajectory Planning and Tracking for Autonomous Vehicle Considering Driving Styles. IEEE Access 2021, 9, 9453–9463.
- Huang, C.; Lv, C.; Hang, P.; Xing, Y. Toward Safe and Personalized Autonomous Driving: Decision-Making and Motion Control with DPF and CDT Techniques. IEEE/ASME Trans. Mechatron. 2021, 26, 611–620.
- Zhang, Z.; Zheng, L.; Li, Y.; Zeng, P.; Liang, Y. Structured Road-Oriented Motion Planning and Tracking Framework for Active Collision Avoidance of Autonomous Vehicles. Sci. China Technol. Sci. 2021, 64, 2427–2440.
- Li, H.; Liu, W.; Yang, C.; Wang, W.; Qie, T.; Xiang, C. An Optimization-Based Path Planning Approach for Autonomous Vehicles Using dynEFWA-Artificial Potential Field. IEEE Trans. Intell. Veh. 2021, 7, 263–272.
- Wang, C.; Wang, Z.; Zhang, L.; Yu, H.; Cao, D. Post-Impact Motion Planning and Tracking Control for Autonomous Vehicles. Chin. J. Mech. Eng. 2022, 35, 54.
- Kicki, P.; Gawron, T.; Skrzypczyński, P. A Self-Supervised Learning Approach to Rapid Path Planning for Car-Like Vehicles Maneuvering in Urban Environment. arXiv 2020, arXiv:2003.00946.
- Guo, N.; Li, C.; Wang, D.; Song, Y.; Liu, G.; Gao, T. Local Path Planning of Mobile Robot Based on Long Short-Term Memory Neural Network. Autom. Remote Control 2021, 55, 53–65.
- Lee, D.H.; Liu, J.L. End-to-End Deep Learning of Lane Detection and Path Prediction for Real-Time Autonomous Driving. Signal Image Video Process. 2022, 17, 199–205.
- Drake, D.S. Using Ensemble Learning Techniques to Solve the Blind Drift Calibration Problem. Ph.D. Thesis, Old Dominion University, Norfolk, VA, USA, 2022.
- Moraes, G.; Mozart, A.; Azevedo, P.; Piumbini, M.; Cardoso, V.B.; Oliveira-Santos, T.; Souza, A.F.D.; Badue, C. Image-Based Real-Time Path Generation Using Deep Neural Networks. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8.
- Yang, G.; Yao, Y. Vehicle local path planning and time consistency of unmanned driving system based on convolutional neural network. Neural Comput. Appl. 2022, 34, 12385–12398.
- Markolf, L.; Eilbrecht, J.; Stursberg, O. Trajectory Planning for Autonomous Vehicles Combining Nonlinear Optimal Control and Supervised Learning. IFAC-PapersOnLine 2020, 53, 15608–15614.
- Sakurai, M.; Ueno, Y.; Kondo, M. Path Planning and Moving Obstacle Avoidance with Neuromorphic Computing. In Proceedings of the 2021 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Nagoya, Japan, 4–6 March 2021; pp. 209–215.
- Kalathil, D.; Mandal, V.K.; Gune, A.; Talele, K.; Chimurkar, P.; Bansode, M. Self-Driving Car Using Neural Networks. In Proceedings of the 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC), Salem, India, 9–11 May 2022; pp. 213–217.
- Wang, D.; Wang, C.; Wang, Y.; Wang, H.; Pei, F. An Autonomous Driving Approach Based on Trajectory Learning Using Deep Neural Networks. Int. J. Automot. Technol. 2021, 22, 1517–1528.
- Kiran, B.R.; Sobh, I.; Talpaert, V.; Mannion, P.; Sallab, A.A.A.; Yogamani, S.; Pérez, P. Deep Reinforcement Learning for Autonomous Driving: A Survey. IEEE Trans. Intell. Transp. Syst. 2021, 23, 4909–4926.
- Wang, P.; Liu, D.; Chen, J.; Li, H.; Chan, C.Y. Decision Making for Autonomous Driving via Augmented Adversarial Inverse Reinforcement Learning. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 1036–1042.
- Chen, C.; Jiang, J.; Lv, N.; Li, S. An Intelligent Path Planning Scheme of Autonomous Vehicles Platoon Using Deep Reinforcement Learning on Network Edge. IEEE Access 2020, 8, 99059–99069.
- Kim, H.; Lee, W. Real-Time Path Planning Through Q-Learning’s Exploration Strategy Adjustment. In Proceedings of the 2021 International Conference on Electronics, Information, and Communication (ICEIC), Jeju, Republic of Korea, 31 January–3 February 2021; pp. 1–3.
- Chang, L.; Shan, L.; Jiang, C.; Dai, Y. Reinforcement-Based Mobile Robot Path Planning with Improved Dynamic Window Approach in Unknown Environment. Auton. Robot. 2021, 45, 51–76.
- Low, E.S.; Ong, P.; Low, C.Y.; Omar, R. Modified Q-Learning with Distance Metric and Virtual Target on Path Planning of Mobile Robot. Expert Syst. Appl. 2022, 199, 117191.
- Liu, X.; Zhang, D.; Zhang, T.; Cui, Y.; Chen, L.; Liu, S. Novel Best Path Selection Approach Based on Hybrid Improved A* Algorithm and Reinforcement Learning. Appl. Intell. 2021, 51, 9015–9029.
- Liu, X.; Zhang, D.; Zhang, T.; Zhang, J.; Wang, J. A New Path Plan Method Based on Hybrid Algorithm of Reinforcement Learning and Particle Swarm Optimization. Eng. Comput. 2021, 39, 993–1019.
- Rousseas, P.; Bechlioulis, C.P.; Kyriakopoulos, K.J. Optimal Motion Planning in Unknown Workspaces Using Integral Reinforcement Learning. IEEE Robot. Autom. Lett. 2022, 7, 6926–6933.
- Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018.
- Zhao, J.; Qu, T.; Xu, F. A Deep Reinforcement Learning Approach for Autonomous Highway Driving. IFAC-PapersOnLine 2020, 53, 542–546.
- Liao, X.; Wang, Y.; Xuan, Y.; Wu, D. AGV Path Planning Model Based on Reinforcement Learning. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 6722–6726.
- Chen, L.; Hu, X.; Tang, B.; Cheng, Y. Conditional DQN-Based Motion Planning with Fuzzy Logic for Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2020, 23, 2966–2977. [Google Scholar] [CrossRef]
- Wen, S.; Zhao, Y.; Yuan, X.; Wang, Z.; Zhang, D.; Manfredi, L. Path Planning for Active SLAM Based on Deep Reinforcement Learning Under Unknown Environments. Intell. Serv. Robot. 2020, 13, 263–272. [Google Scholar] [CrossRef]
- Li, J.; Chen, Y.; Zhao, X.; Huang, J. An Improved DQN Path Planning Algorithm. J. Supercomput. 2022, 78, 616–639. [Google Scholar] [CrossRef]
- Peng, Y.; Tan, G.; Si, H.; Li, J. DRL-GAT-SA: Deep Reinforcement Learning for Autonomous Driving Planning Based on Graph Attention Networks and Simplex Architecture. J. Syst. Archit. 2022, 126, 102505. [Google Scholar] [CrossRef]
- Pérez-Gil, Ó.; Barea, R.; López-Guillén, E.; Bergasa, L.M.; Gómez-Huélamo, C.; Gutiérrez, R.; Díaz-Díaz, A. Deep Reinforcement Learning Based Control for Autonomous Vehicles in CARLA. Multimed. Tools Appl. 2022, 81, 3553–3576. [Google Scholar] [CrossRef]
- Naveed, K.B.; Qiao, Z.; Dolan, J.M. Trajectory Planning for Autonomous Vehicles Using Hierarchical Reinforcement Learning. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 601–606. [Google Scholar]
- Karuppasamy Pandiyan, M.; Sainath, V.; Reddy, S. Deep Learning for Autonomous Driving System. In Proceedings of the 2021 Second International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 4–6 August 2021; pp. 1744–1749. [Google Scholar]
- Wang, Z.; Tu, J.; Chen, C. Reinforcement Learning Based Trajectory Planning for Autonomous Vehicles. In Proceedings of the 2021 China Automation Congress (CAC), Beijing, China, 22–24 October 2021; pp. 7995–8000. [Google Scholar]
- Zhang, L.; Zhang, R.; Wu, T.; Weng, R.; Han, M.; Zhao, Y. Safe Reinforcement Learning with Stability Guarantee for Motion Planning of Autonomous Vehicles. IEEE Trans. Neural Networks Learn. Syst. 2021, 32, 5435–5444. [Google Scholar] [CrossRef] [PubMed]
- Xu, C.; Zhao, W.; Chen, Q.; Wang, C. An Actor–Critic-Based Learning Method for Decision-Making and Planning of Autonomous Vehicles. Sci. China Technol. Sci. 2021, 64, 984–994. [Google Scholar] [CrossRef]
- Choi, J.; Lee, G.; Lee, C. Reinforcement-Learning-based Dynamic Obstacle Avoidance and Integration of Path Planning. Intell. Serv. Robot. 2021, 14, 663–677. [Google Scholar] [CrossRef] [PubMed]
- Tang, X.; Huang, B.; Liu, T.; Lin, X. Highway Decision-Making and Motion Planning for Autonomous Driving via Soft Actor–Critic. IEEE Trans. Veh. Technol. 2022, 71, 4706–4717. [Google Scholar] [CrossRef]
- Receveur, J.B.; Victor, S.; Melchior, P. Autonomous Car Decision Making and Trajectory Tracking Based on Genetic Algorithms and Fractional Potential Fields. Intell. Serv. Robot. 2020, 13, 315–330. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, Y.; Li, Y. Mobile Robot Path Planning Based on Improved Localized Particle Swarm Optimization. IEEE Sens. J. 2020, 21, 6962–6972. [Google Scholar] [CrossRef]
- Qiuyun, T.; Hongyan, S.; Hengwei, G.; Ping, W. Improved Particle Swarm Optimization Algorithm for AGV Path Planning. IEEE Access 2021, 9, 33522–33531. [Google Scholar] [CrossRef]
- Ali, H.; Gong, D.; Wang, M.; Dai, X. Path Planning of Mobile Robot with Improved Ant Colony Algorithm and MDP to Produce Smooth Trajectory in Grid-Based Environment. Front. Neurorobotics 2020, 14, 44. [Google Scholar] [CrossRef]
- Wu, C.; Zhou, S.; Xiao, L. Dynamic Path Planning Based on Improved Ant Colony Algorithm in Traffic Congestion. IEEE Access 2020, 8, 180773–180783. [Google Scholar] [CrossRef]
- Zhou, Y.; Huang, N. Airport AGV Path Optimization Model Based on Ant Colony Algorithm to Optimize Dijkstra Algorithm in Urban Systems. Sustain. Comput. Inform. Syst. 2022, 35, 100716. [Google Scholar] [CrossRef]
- Pohan, M.A.R.; Trilaksono, B.R.; Santosa, S.P.; Rohman, A.S. Path Planning Algorithm Using Hybridization of RRT and Ant Colony Systems. IEEE Access 2021, 9, 153599–153615. [Google Scholar] [CrossRef]
- Shang, Z.; Gu, J.; Wang, J. An Improved Simulated Annealing Algorithm for the Capacitated Vehicle Routing Problem. Comput. Integr. Manuf. Syst. 2020, 13. [Google Scholar]
- Nayyar, A.; Nguyen, N.G.; Kumari, R.; Kumar, S. Robot Path Planning Using Modified Artificial Bee Colony Algorithm. In Frontiers in Intelligent Computing: Theory and Applications: Proceedings of the 7th International Conference on FICTA (2018); Springer: Singapore, 2020; pp. 25–36. [Google Scholar]
- Xu, F.; Li, H.; Pun, C.M.; Hu, H.; Li, Y.; Song, Y.; Gao, H. A New Global Best Guided Artificial Bee Colony Algorithm with Application in Robot Path Planning. Appl. Soft Comput. 2020, 88, 106037. [Google Scholar] [CrossRef]
- Bisen, A.S.; Kaundal, V. Mobile Robot for Path Planning Using Firefly Algorithm. In Proceedings of the 2020 Research, Innovation, Knowledge Management and Technology Application for Business Sustainability (INBUSH), Greater Noida, India, 19–21 February 2020; pp. 232–235. [Google Scholar]
- Li, F.; Fan, X.; Hou, Z. A Firefly Algorithm with Self-Adaptive Population Size for Global Path Planning of Mobile Robot. IEEE Access 2020, 8, 168951–168964. [Google Scholar] [CrossRef]
- Fusic, S.J.; Kanagaraj, G.; Hariharan, K.; Karthikeyan, S. Optimal Path Planning of Autonomous Navigation in Outdoor Environment via Heuristic Technique. Transp. Res. Interdiscip. Perspect. 2021, 12, 100473. [Google Scholar]
- Zhang, S.; Pu, J.; Si, Y. An Adaptive Improved Ant Colony System Based on Population Information Entropy for Path Planning of Mobile Robot. IEEE Access 2021, 9, 24933–24945. [Google Scholar]
- Wahab, M.N.A.; Lee, C.M.; Akbar, M.F.; Hassan, F.H. Path Planning for Mobile Robot Navigation in Unknown Indoor Environments Using Hybrid PSO-FS Algorithm. IEEE Access 2020, 8, 161805–161815. [Google Scholar] [CrossRef]
- Wang, Y.; Jiang, J.; Li, S.; Li, R.; Xu, S.; Wang, J.; Li, K. Decision-Making Driven by Driver Intelligence and Environment Reasoning for High-Level Autonomous Vehicles: A Survey. IEEE Trans. Intell. Transp. Syst. 2023, 24, 10362–10381. [Google Scholar] [CrossRef]
- Zheng, X.; Huang, H.; Wang, J.; Zhao, X.; Xu, Q. Behavioral decision-making model of the intelligent vehicle based on driving risk assessment. Comput.-Aided Civ. Infrastruct. Eng. 2021, 36, 820–837. [Google Scholar] [CrossRef]
- Tian, D.; Wang, Y.; Yu, T. Fuzzy Risk Assessment Based on Interval Numbers and Assessment Distributions. Int. J. Fuzzy Syst. 2020, 22, 1142–1157. [Google Scholar] [CrossRef]
- Xin, X.; Jia, N.; Ling, S.; He, Z. Prediction of pedestrians’ wait-or-go decision using trajectory data based on gradient boosting decision tree. Transp. B Transp. Dyn. 2022, 10, 693–717. [Google Scholar] [CrossRef]
- Zhang, T.; Fu, M.; Song, W. Risk-Aware Decision-Making and Planning Using Prediction-Guided Strategy Tree for the Uncontrolled Intersections. IEEE Trans. Intell. Transp. Syst. 2023, 24, 10791–10803. [Google Scholar] [CrossRef]
- Hang, P.; Lv, C.; Xing, Y.; Huang, C.; Hu, Z. Human-like decision making for autonomous driving: A noncooperative game theoretic approach. IEEE Trans. Intell. Transp. Syst. 2021, 22, 2076–2087. [Google Scholar] [CrossRef]
- Sun, C.; Leng, J.; Lu, B. Interactive Left-Turning of Autonomous Vehicles at Uncontrolled Intersections. IEEE Trans. Autom. Sci. Eng. 2022, 21, 204–221. [Google Scholar] [CrossRef]
- Sun, Q.; Wang, C.; Fu, R.; Guo, Y.; Yuan, W.; Li, Z. Lane change strategy analysis and recognition for intelligent driving systems based on random forest. Expert Syst. Appl. 2021, 186, 115781. [Google Scholar] [CrossRef]
- Huang, Y.; Chen, Y. Autonomous driving with deep learning: A survey of state-of-art technologies. arXiv 2020, arXiv:2006.06091. [Google Scholar]
- Chu, H.Q.; Guo, L.L.; Yan, Y.J.; Gao, B.Z.; Chen, H. Self-Learning Optimal Cruise Control Based on Individual Car-Following Style. IEEE Trans. Intell. Transp. Syst. 2021, 22, 6622–6633. [Google Scholar] [CrossRef]
- Hoel, C.; Driggs-Campbell, K.; Wolff, K.; Laine, L.; Kochenderfer, M.J. Combining planning and deep Reinforcement Learning in tactical decision making for autonomous driving. IEEE Trans. Intell. Veh. 2020, 5, 294–305. [Google Scholar]
- Nan, J.; Deng, W.; Zheng, B. Intention Prediction and Mixed Strategy Nash Equilibrium-Based Decision-Making Framework for Autonomous Driving in Uncontrolled Intersection. IEEE Trans. Veh. Technol. 2022, 71, 10316–10326. [Google Scholar] [CrossRef]
- Xu, X.; Zuo, L.; Li, X.; Qian, L.L.; Ren, J.K.; Sun, Z.P. A Reinforcement Learning Approach to Autonomous Decision Making of Intelligent Vehicles on Highways. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 3884–3897. [Google Scholar] [CrossRef]
- Hu, T.; Luo, B.; Yang, C.; Huang, T. MO-MIX: Multi-Objective Multi-Agent Cooperative Decision-Making with Deep Reinforcement Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 12098–12112. [Google Scholar] [CrossRef]
- Xu, S.Y.; Chen, X.M.; Wang, Z.J.; Hu, Y.H.; Han, X.T. Decision-Making Models for Autonomous Vehicles at Unsignalized Intersections Based on Deep Reinforcement Learning. In Proceedings of the 7th IEEE International Conference on Advanced Robotics and Mechatronics, Guilin, China, 9–11 July 2022; pp. 672–677. [Google Scholar]
- Gutierrez-Moreno, R.; Barea, R.; Lopez-Guillen, E.; Araluce, J.; Bergasa, L.M. Reinforcement-Learning-based Autonomous Driving at Intersections in CARLA Simulator. Sensors 2022, 22, 8373. [Google Scholar] [CrossRef] [PubMed]
- Shu, H.; Liu, T.; Mu, X.; Cao, D. Driving Tasks Transfer Using Deep Reinforcement Learning for Decision-Making of Autonomous Vehicles in Unsignalized Intersection. IEEE Trans. Veh. Technol. 2022, 71, 41–52. [Google Scholar] [CrossRef]
- Kamran, D.; Lopez, C.F.; Lauer, M.; Stiller, C. Risk-Aware High-level Decisions for Automated Driving at Occluded Intersections with Reinforcement Learning. In Proceedings of the 31st IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–12 November 2020; pp. 1205–1212. [Google Scholar]
- Li, G.; Lin, S.; Li, S.; Qu, X. Learning Automated Driving in Complex Intersection Scenarios Based on Camera Sensors: A Deep Reinforcement Learning Approach. IEEE Sens. J. 2022, 22, 4687–4696. [Google Scholar] [CrossRef]
- He, W. Research on the application of recognition and detection technology in automatic driving. Highlights Sci. Eng. Technol. 2024, 94, 504–509. [Google Scholar] [CrossRef]
- Zhang, H. Joint resource allocation and security redundancy for autonomous driving based on deep Reinforcement Learning algorithm. IET Intell. Transp. Syst. 2024, 18, 1109–1120. [Google Scholar] [CrossRef]
- Kuutti, S.; Bowden, R.; Fallah, S. Weakly supervised Reinforcement Learning for autonomous highway driving via virtual safety cages. Sensors 2021, 21, 2032. [Google Scholar] [CrossRef] [PubMed]
- Yuan, Y.; Tasik, R.; Adhatarao, S.; Yuan, Y.; Liu, Z.; Fu, X. RACE: Reinforced Cooperative Autonomous Vehicle Collision Avoidance. IEEE Trans. Veh. Technol. 2020, 69, 9279–9291. [Google Scholar] [CrossRef]
- Yurtsever, E.; Capito, L.; Redmill, K.; Ozguner, U. Integrating Deep Reinforcement Learning with Model-based Path Planners for Automated Driving. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–12 November 2020; pp. 1311–1316. [Google Scholar] [CrossRef]
- Peng, B.; Sun, Q.; Li, S.; Kum, D.; Yin, Y.; Wei, J.; Gu, T. End-to-End Autonomous Driving Through Dueling Double Deep Q-Network. Automot. Innov. 2021, 4, 328–337. [Google Scholar] [CrossRef]
- Merola, F.; Falchi, F.; Gennaro, C.; Di Benedetto, M. Reinforced Damage Minimization in Critical Events for Self-driving Vehicles. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Online, 6–8 February 2022. [Google Scholar] [CrossRef]
- Cao, Z.; Biyik, E.; Wang, W.; Raventos, A.; Gaidon, A.; Rosman, G.; Sadigh, D. Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving. arXiv 2020, arXiv:2007.00178. [Google Scholar] [CrossRef]
- Makantasis, K.; Kontorinaki, M.; Nikolos, I. Deep Reinforcement-Learning-Based Driving Policy for Autonomous Road Vehicles. IET Intell. Transp. Syst. 2020, 14, 1212–1220. [Google Scholar] [CrossRef]
- Knox, W.; Allievi, A.; Banzhaf, H.; Schmitt, F.; Stone, P. Reward (Mis)design for Autonomous Driving. Artif. Intell. 2023, 316, 103829. [Google Scholar] [CrossRef]
- Wang, Y.; Wei, H.; Yang, L.; Hu, B.; Lv, C. A Review of Dynamic State Estimation for the Neighborhood System of Connected Vehicles. SAE Int. J. Veh. Dyn. Stability NVH 2023, 7, 367–385. [Google Scholar] [CrossRef]
- Lu, S.; Xu, R.; Li, Z.; Wang, B.; Zhao, Z. Lunar Rover Collaborated Path Planning with Artificial Potential Field-Based Heuristic on Deep Reinforcement Learning. Aerospace 2024, 11, 253. [Google Scholar] [CrossRef]
- Xi, Z.; Han, H.; Cheng, J.; Lv, M. Reducing Oscillations for Obstacle Avoidance in a Dense Environment Using Deep Reinforcement Learning and Time-Derivative of an Artificial Potential Field. Drones 2024, 8, 85. [Google Scholar] [CrossRef]
- Elallid, B.B.; Abouaomar, A.; Benamar, N.; Kobbane, A. Vehicles control: Collision avoidance using federated deep Reinforcement Learning. In Proceedings of the GLOBECOM 2023-2023 IEEE Global Communications Conference, Kuala Lumpur, Malaysia, 4–8 December 2023; pp. 4369–4374. [Google Scholar]
- Issa, R.; Das, M.; Rahman, M.; Barua, M.; Rhaman, M.; Ripon, K.; Alam, M. Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment. Sensors 2021, 21, 1468. [Google Scholar] [CrossRef] [PubMed]
- Dubey, V.; Barde, S.; Patel, B. Optimum path planning using Dragonfly–Fuzzy hybrid controller for autonomous vehicle. Preprints 2023. [CrossRef]
- Huang, J.; Liu, Z.; Chi, X.; Hong, F.; Su, H. Search-based path planning algorithm for autonomous parking: Multi-heuristic hybrid a*. arXiv 2022, arXiv:2210.08828. [Google Scholar] [CrossRef]
- Yang, W. Integrated spatial kinematics–dynamics Model Predictive Control for collision-free autonomous vehicle tracking. Actuators 2024, 13, 153. [Google Scholar] [CrossRef]
- Liu, Q. Design of real-time pedestrian trajectory prediction system based on Jetson Xavier. Front. Comput. Intell. Syst. 2023, 4, 109–113. [Google Scholar] [CrossRef]
- Jiang, C.; Fu, J.; Liu, W. Research on Vehicle Routing Planning Based on Adaptive Ant Colony and Particle Swarm Optimization Algorithm. Int. J. Intell. Transp. Syst. Res. 2021, 19, 83–91. [Google Scholar]
- Gao, P. Hybrid path planning for unmanned surface vehicles in inland rivers based on collision avoidance regulations. Sensors 2023, 23, 8326. [Google Scholar] [CrossRef] [PubMed]
- Kim, M.; Lee, D.; Ahn, J.; Park, J. Model Predictive Control method for autonomous vehicles using time-varying and non-uniformly spaced horizon. IEEE Access 2021, 9, 86475–86487. [Google Scholar] [CrossRef]
- Lee, D.; Woo, J. Reactive collision avoidance of an unmanned surface vehicle through gaussian mixture model-based online mapping. J. Mar. Sci. Eng. 2022, 10, 472. [Google Scholar] [CrossRef]
- Bezerra, C.; Vieira, F.; Soares, A. Deep-q-network hybridization with extended Kalman Filter for accelerate learning in autonomous navigation with the auxiliary security module. Trans. Emerg. Telecommun. Technol. 2022, 35, e4946. [Google Scholar] [CrossRef]
- Zhai, S. The dynamic path planning of autonomous vehicles on icy and snowy roads based on an improved artificial Potential Field. Sustainability 2023, 15, 15377. [Google Scholar] [CrossRef]
- Kumar, S.; Sikander, A. Optimum Mobile Robot Path Planning Using Improved Artificial Bee Colony Algorithm and Evolutionary Programming. Arab. J. Sci. Eng. 2022, 47, 3519–3539. [Google Scholar]
- Chen, H.; Cao, X.; Guvenc, L.; Aksun-Guvenc, B. Deep-Reinforcement-Learning-Based Collision Avoidance of Autonomous Driving System for Vulnerable Road User Safety. Electronics 2024, 13, 1952. [Google Scholar] [CrossRef]
- Jihong, X.; Xiang, Z.; Cheng, L. Edge Computing for Real-Time Decision Making in Autonomous Driving: Review of Challenges, Solutions, and Future Trends. Int. J. Adv. Comput. Sci. Appl. 2024, 15, 598. [Google Scholar]
- Yusuf, S.A.; Khan, A.; Souissi, R. Vehicle-to-everything (V2X) in the autonomous vehicles domain – A technical review of communication, sensor, and AI technologies for road user safety. Transp. Res. Interdiscip. Perspect. 2024, 23, 100980. [Google Scholar]
- Meftah, L.H.; Braham, R. A virtual simulation environment using deep learning for autonomous vehicles obstacle avoidance. In Proceedings of the 2020 IEEE International Conference on Intelligence and Security Informatics (ISI), Virtual, 9–10 November 2020; pp. 1–7. [Google Scholar]
- Gohar, A.; Nencioni, G. The role of 5G technologies in a smart city: The case for intelligent transportation system. Sustainability 2021, 13, 5188. [Google Scholar] [CrossRef]
| Method | Approach | Strengths | Limitations |
|---|---|---|---|
| A* | Ref. [41] iADA*: anytime path planning for real-time applications | Dynamic replanning, fast response times | Complex implementation compared to standard A* |
|  | Ref. [42] Combines A* with traversability analysis for UGVs | Incorporates terrain difficulty into planning | Higher complexity, requires detailed terrain data |
|  | Ref. [43] Proposes a more efficient and robust A* algorithm | Handles dynamic environments better than standard A* | More complex implementation due to additional heuristics |
|  | Ref. [44] Combines motion planning with a hierarchical framework | Effective for structured highway scenarios | Limited by assumptions of structured environments |
|  | Ref. [45] Fuel-efficient A*: minimizes fuel consumption | Reduces energy usage in vehicles | May increase travel time |
| Dijkstra | Ref. [46] Combines RRT for initial planning with Dijkstra for optimization | Efficient path planning with improved real-time performance | Complexity increases with larger environments |
|  | Ref. [47] Dijkstra’s algorithm to optimize vehicle path planning | Enhanced safety and optimal routes at intersections | Increased computational complexity at complex intersections |
| RRT | Ref. [48] Learning-based RRT* for optimal path planning | Efficient path selection; adapts well to dynamic changes | May struggle with real-time processing in complex environments |
|  | Ref. [49] RRT* for exploration; APF for obstacle avoidance | Handles dynamic environments; resolves APF local minima | APF local minima; path smoothness of RRT* |
|  | Ref. [50] Standard RRT for local trajectory control | Stability in trajectory control during local planning | Limited in handling unexpected obstacles dynamically |
|  | Ref. [51] Bi-directional RRT with pruning for efficient path planning | Enhanced planning efficiency and accuracy | Complex implementation |
|  | Ref. [52] Applies improved RRT for collision-free paths in AVs | Optimized for real-time planning | Computationally heavy in dense environments |
|  | Ref. [53] Adaptive version of RRT for lane-based AV navigation | Efficient in structured lane environments | Limited to well-structured lanes |
|  | Ref. [54] Attention-driven sampling distribution for motion planning | Efficient in complex settings; adaptive sampling focus | High computational needs due to attention mechanism |
|  | Ref. [55] Sampling-based approach for unstructured environments | Robust in highly unstructured settings | Computationally intensive |
| PRM | Ref. [58] Proposes a new path planning algorithm using line-segment features for perception | Efficient path planning with improved accuracy in dynamic environments | Limited application in highly unstructured environments |
|  | Ref. [59] Generates smooth paths using PRM while avoiding sharp turns | Provides smooth and continuous trajectories | Limited scalability in very large or complex environments |
|  | Ref. [61] Combines PRM with evolutionary optimization for MAS in uncertain domains | Optimizes collision avoidance in dynamic environments | High computational complexity |
|  | Ref. [62] Single-query planning with probability-informed sampling | Low computation needs; efficient for quick planning | May not be as effective in dense, highly constrained spaces |
| DWA | Ref. [63] Combines PDWA and A* for path planning of autonomous surface vessels | Adaptable to dynamic marine environments | Computationally demanding in complex scenarios |
|  | Refs. [64,65] Combine improved A* for global planning with DWA | Improved path efficiency with real-time adjustments | Increased complexity due to combined algorithms |
|  | Ref. [66] Combines Dijkstra for global path planning with DWA | Efficient collision avoidance in dynamic settings | Increased complexity when scaling to large environments |
| APF | Ref. [67] Applies an adaptive Potential Field for path planning in complex driving environments | Adapts to a range of scenarios, effective in complex scenes | Performance drops with high-density obstacles |
|  | Ref. [69] Integrates planning and tracking for autonomous vehicles considering driving styles | Personalizes driving behavior, improves safety | Requires significant computational resources |
|  | Ref. [70] Combines DPF and CDT for safe, personalized motion control | Safe and personalized driving behavior | Computationally demanding, requires large datasets |
|  | Ref. [71] Road-oriented motion planning framework for active collision avoidance | Suitable for structured road environments, enhances safety | Limited application to unstructured or dynamic environments |
|  | Ref. [72] Combines optimization techniques with a Potential Field for path planning | Ensures optimal paths in complex scenarios | Performance may degrade in dynamic environments |
|  | Ref. [73] Focuses on post-collision motion planning and control for autonomous vehicles | Ensures stability and safety after collisions | Limited to post-impact scenarios, not applicable to general path planning |
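As a concrete anchor for the graph-search entries above, the following is a minimal sketch of the A* loop that Refs. [41,42,43,44,45] extend. It is illustrative only — a hypothetical 4-connected occupancy grid with unit step costs and a Manhattan heuristic, not any cited implementation:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, None)]  # entries are (f = g + h, g, node, parent)
    came_from = {}                           # node -> parent; doubles as the closed set
    best_g = {start: 0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue                         # already expanded with a cheaper g
        came_from[node] = parent
        if node == goal:                     # walk parents back to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, node))
    return None  # goal unreachable: no collision-free path

# Toy map: the middle row is blocked except for the rightmost cell.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The surveyed variants differ mainly in the heuristic (e.g., fuel cost in Ref. [45]) and in the edge costs (e.g., traversability weights in Ref. [42]); the expansion loop itself is unchanged.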
| Method | Approach | Strengths | Limitations |
|---|---|---|---|
| Self-Supervised Learning | Ref. [74] Rapid path planning for car-like vehicles in urban environments | Suitable for urban navigation, fast response | Struggles with complex real-time constraints |
| LSTM Neural Network | Ref. [75] Local path planning for mobile robots using LSTM | Handles dynamic and changing environments effectively | Requires significant computational power |
| Deep Learning | Ref. [76] End-to-end lane detection and path prediction for autonomous driving | High accuracy in real-time lane detection | Limited to structured environments |
| Ensemble Learning | Ref. [77] Solves the blind drift calibration problem using ensemble learning | Increases accuracy and robustness | Specific to calibration tasks |
| Deep Neural Networks | Ref. [78] Real-time path generation for autonomous driving using DNNs | Suitable for real-time applications | Requires large amounts of training data |
| CNN-Based | Ref. [79] Uses convolutional neural networks for local path planning | Strong adaptability to dynamic environments | High computational requirements for real-time applications |
| Supervised Learning | Ref. [80] Trajectory planning using a hybrid of supervised learning and nonlinear control | High efficiency in trajectory optimization | Limited adaptability to unstructured environments |
| Neuromorphic Computing | Ref. [81] Path planning and obstacle avoidance using neuromorphic computing | Low power consumption and high efficiency | Complexity in hardware and implementation |
| Neural Networks | Ref. [82] Self-driving car using neural networks for path planning | Suitable for dynamic environments | High computational demand |
| Deep Neural Networks | Ref. [83] Trajectory learning using deep neural networks | Accurate trajectory prediction | Requires extensive computational resources |
| Inverse RL | Ref. [85] Augmented adversarial inverse Reinforcement Learning | Strong performance in decision making and path planning | Requires extensive training |
| DRL | Ref. [86] Path planning for vehicle platoons on edge networks | Efficient in real-time applications | Requires specialized hardware |
| Q-learning | Ref. [87] Real-time path planning through Q-learning’s exploration strategy adjustment | Improved real-time performance | Learning process can be slow |
| RL | Ref. [88] Path planning with an improved dynamic window approach | High adaptability in unknown environments | High computational overhead |
| Q-learning | Ref. [89] Modified Q-learning with a distance metric and virtual target for path planning | Efficient in short-distance navigation | Performance decreases in larger environments |
| A* + RL | Ref. [90] Path selection using A* and Reinforcement Learning | Enhanced accuracy | Computationally intensive |
| RL + PSO | Ref. [91] Path planning using a hybrid of Reinforcement Learning and Particle Swarm Optimization | High adaptability in dynamic environments | Complex implementation |
| RL | Ref. [92] Optimal motion planning in unknown workspaces using integral RL | Suitable for unknown environments | May require high computational power |
| DRL | Ref. [94] DRL-based approach for highway driving | Accurate decision making | Limited generalization to other environments |
| Q-learning | Ref. [95] Combines deep Q-learning and a Potential Field | Suitable for dynamic environments | Requires extensive training |
| DQN | Ref. [96] Conditional DQN for motion planning with fuzzy logic | High adaptability in dynamic environments | Limited to structured environments |
| DRL | Ref. [97] Path planning for active SLAM in unknown environments | Suitable for unknown environments | High computational requirements |
| DQN | Ref. [98] Improved DQN for path planning | Accurate path planning | Limited in complex environments |
| DRL | Ref. [99] Planning with DRL and Graph Attention Networks (GATs) | High accuracy in path planning | High computational complexity |
| DRL | Ref. [100] DRL-based control for autonomous vehicles in CARLA | High control accuracy | Limited to simulation environments |
| Hierarchical RL | Ref. [101] Trajectory planning using hierarchical RL | Effective for complex trajectories | High computational cost |
| DL | Ref. [102] Deep learning for autonomous driving systems | Effective in complex scenarios | Requires large training datasets |
| RL | Ref. [103] Trajectory planning for autonomous vehicles using RL | Efficient trajectory planning | High computational cost in real-time applications |
| RL | Ref. [104] Motion planning with stability guarantees | Ensures safe motion planning in dynamic environments | Limited flexibility in highly uncertain environments |
| Actor–critic RL | Ref. [105] Actor–critic-based learning for decision-making and planning | High efficiency in decision-making | Complex training process |
| RL | Ref. [106] Dynamic obstacle avoidance with path planning | Adaptive to dynamic obstacles | Performance depends heavily on environment structure |
| Soft actor–critic | Ref. [107] Decision-making and motion planning on highways using soft actor–critic | Effective in highway scenarios | Limited to structured highway environments |
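Several entries above (e.g., Refs. [87,89,95]) build on tabular Q-learning. The sketch below shows only the core temporal-difference update and an ε-greedy exploration rule; the string states and the hyperparameters `alpha`, `gamma`, and `epsilon` are toy values for illustration, not those of any cited work:

```python
import random

def q_learning_step(Q, state, action, reward, next_state, n_actions,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in range(n_actions))
    td_error = reward + gamma * best_next - Q.get((state, action), 0.0)
    Q[(state, action)] = Q.get((state, action), 0.0) + alpha * td_error
    return Q

def epsilon_greedy(Q, state, n_actions, epsilon=0.1):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q.get((state, a), 0.0))

# Toy transition: taking action 0 in state "s0" earns reward 1.0.
Q = q_learning_step({}, "s0", 0, 1.0, "s1", n_actions=2)
```

The DQN entries in the table replace the dictionary `Q` with a neural network and the single-sample update with minibatch gradient steps over a replay buffer; the TD target has the same form.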
| Method | Approach | Strengths | Limitations |
|---|---|---|---|
| GA and FPF | Ref. [108] Combines GA for global path planning with a Fractional Potential Field (FPF) for local paths | Effective for trajectory tracking | May not adapt well to highly dynamic environments |
| Improved PSO | Ref. [109] Mobile robot path planning using localized PSO | Optimized for local path planning | May struggle with global optimization |
| PSO | Ref. [110] AGV path planning using improved PSO | Enhanced path optimization for AGVs | Performance may degrade in unpredictable scenarios |
| Ant Colony, MDP | Ref. [111] Smooth trajectory generation in grid-based environments | Effective trajectory generation | Requires further testing in dynamic environments |
| Ant Colony | Ref. [112] Dynamic path planning for traffic congestion | Efficient in congested environments | Performance may degrade in highly dynamic traffic |
| Ant Colony, Dijkstra | Ref. [113] AGV path optimization model | Effective in optimizing AGV paths | Limited scalability for large urban systems |
| RRT and Ant Colony | Ref. [114] Combines Rapidly Exploring Random Trees (RRT) and ant colony algorithms | Enhanced path planning for autonomous navigation | Requires further testing in real-world applications |
| Simulated Annealing | Ref. [115] Improved algorithm for the vehicle routing problem | Efficient in routing large fleets | High computational complexity in large-scale problems |
| Artificial Bee Colony (ABC) | Ref. [116] Modified ABC algorithm for robot path planning | Effective for obstacle avoidance | Limited to static environments |
| Global-Best-Guided ABC | Ref. [117] New guided ABC algorithm for robot path planning | Effective in global optimization | Limited validation in real-world applications |
| Firefly Algorithm | Ref. [118] Path planning for mobile robots using the Firefly algorithm | Optimized for smaller spaces | Limited real-world testing |
| Firefly Algorithm | Ref. [119] Self-adaptive population size for global path planning | Adaptable to environmental changes | Requires fine-tuning for larger environments |
| PSO Variants | Ref. [120] Satellite PSO (SPSO) and five PSO variants with satellite-image input | Explores path planning variations in PSO | SPSO underperformed all variants; lacks consideration of other meta-heuristics |
| Improved ACO | Ref. [121] Adaptive Improved Ant Colony System using entropy (AIACSE) | Integrates information entropy for enhanced population diversity; surpassed RAS, PS-ACO, and ACS | Needs optimization for dynamic environments |
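To ground the swarm-based entries (e.g., Refs. [109,110,120]), here is a minimal global-best PSO loop minimizing a generic cost function. The inertia and acceleration coefficients are conventional textbook values, not those of any cited variant; in a path-planning setting the decision vector would encode waypoint coordinates, and a simple sphere function stands in for path cost here:

```python
import random

def pso(cost, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO minimizing `cost` over a box-bounded search space."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clamp to box
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

random.seed(42)  # deterministic demo run
best, best_cost = pso(lambda x: sum(v * v for v in x), dim=2)
```

The improved variants in the table mostly adjust this skeleton — adaptive inertia, neighborhood topologies, or hybridization with RL or graph search — while keeping the velocity/position update intact.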
Ref./Method | Approach | Strengths | Limitations |
---|---|---|---|
Ref. [143] DRL | Proposes a DRL-based solution to optimize resource allocation with enhanced security features for autonomous vehicles | Improve both resource efficiency and cybersecurity in AVs, addressing two crucial aspects simultaneously | May require extensive computational resources and real-time data for effective decision-making in complex environments |
Ref. [144] Weakly supervised RL | Introduces a safety mechanism with virtual safety cages in a weakly supervised Reinforcement Learning framework for autonomous highway driving | Enables safer highway navigation by enforcing safety zones, reducing collision risk | Limited by its reliance on simulated environments, which may not fully translate to real-world performance |
Ref. [145] RL | Proposes a cooperative, Reinforcement-Learning-based approach for collision avoidance between multiple autonomous vehicles | Enhances collision avoidance through cooperative interaction, making it suitable for multi-vehicle scenarios | Complex implementation, with challenges in real-time coordination and potential communication limitations |
Ref. [146] DRL | Integrates DRL with model-based planners for automated driving, ensuring safety and efficiency in path planning. | Balances model accuracy with safety and efficiency. | Integration complexity and requires careful tuning for different environments. |
Ref. [147] Deep Q-Network | Proposes an end-to-end autonomous driving system leveraging dueling double DQN for optimal decision-making in driving tasks. | Enhanced performance with more stable learning than conventional DQN. | Requires large amounts of data and struggles with rare edge cases.
Ref. [148] RL | Introduces a reinforced damage minimization strategy for autonomous vehicles in critical events, focusing on reducing damage. | Effective damage minimization in collision scenarios. | Limited to extreme scenarios, performance in normal driving is untested. |
Ref. [149] RL | Develops an RL-based control system for imitative driving policies in near-accident scenarios, improving decision-making during critical moments. | Mimics human-like behavior in near-accident scenarios. | High complexity in imitative learning and sensitivity to parameter tuning. |
Ref. [150] DRL | Focuses on DRL-based driving policy designed for autonomous road vehicles, improving safety and control in real-world scenarios. | Strong policy learning for real-world environments. | Computationally expensive and data-hungry. |
Ref. [151] RL | Discusses the reward misdesign problem in autonomous driving and how improper reward functions can negatively affect RL-based driving policies. | Highlights the importance of reward shaping in RL. | Hard to design universally effective reward functions. |
Ref. [152] RL | Reviews dynamic state estimation for connected vehicles, improving decision-making under uncertainty. | Effective in handling uncertain environments. | High reliance on accurate state estimation models. |
Ref. [153] DRL | Combines heuristic algorithms with DRL for path planning and obstacle avoidance in lunar exploration missions. | Strong adaptability to unstructured environments. | Specialized for space missions; may not generalize to road driving. |
Ref. [154] DRL | Proposes a method for reducing oscillations in obstacle avoidance using DRL and the time derivative of an artificial potential field. | Improved stability in obstacle avoidance. | May struggle with highly dynamic obstacles.
Ref. [155] Federated DRL | Uses federated DRL for collision avoidance in autonomous vehicles, focusing on collaborative learning from decentralized sources. | Enables learning from distributed data while maintaining privacy. | Computationally expensive and coordination complexity in federated learning. |
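The RL entries above all share the same underlying loop: an agent observes a state, picks an action, receives a collision-penalizing reward, and updates a value estimate. As a rough illustration, the following is a tabular Q-learning sketch for lane-change collision avoidance on a toy two-lane corridor; the environment, reward values, and hyperparameters are invented for illustration and do not come from any cited paper (the surveyed works use deep function approximators rather than a Q-table).

```python
import random

# Toy environment: a two-lane road of LENGTH cells with one static
# obstacle; the agent advances one cell per step and may switch lanes.
LENGTH, OBSTACLE = 6, (3, 0)   # obstacle at cell 3, lane 0
ACTIONS = (0, 1)               # 0 = keep lane, 1 = switch lane

def step(state, action):
    cell, lane = state
    lane = 1 - lane if action == 1 else lane
    cell += 1
    if (cell, lane) == OBSTACLE:
        return (cell, lane), -100.0, True    # collision penalty
    if cell == LENGTH - 1:
        return (cell, lane), 10.0, True      # reached end safely
    return (cell, lane), 1.0, False          # progress reward

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning over (cell, lane, action)."""
    rng = random.Random(seed)
    q = {(c, l, a): 0.0
         for c in range(LENGTH) for l in (0, 1) for a in ACTIONS}
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            c, l = state
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(c, l, x)])
            nxt, r, done = step(state, a)
            nc, nl = nxt
            target = r if done else r + gamma * max(
                q[(nc, nl, x)] for x in ACTIONS)
            q[(c, l, a)] += alpha * (target - q[(c, l, a)])
            state = nxt
    return q

def greedy_rollout(q):
    """Follow the learned greedy policy from the start state."""
    state, done, total = (0, 0), False, 0.0
    while not done:
        c, l = state
        a = max(ACTIONS, key=lambda x: q[(c, l, x)])
        state, r, done = step(state, a)
        total += r
    return total, state
```

After training, the greedy policy switches out of lane 0 before cell 3 and collects the full collision-free return; the large negative reward on collision plays the role that safety constraints and "virtual safety cages" (Ref. [144]) play in the surveyed systems.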
Ref./Method | Approach | Strengths | Limitations |
---|---|---|---|
Ref. [157] Dragonfly–Fuzzy Hybrid Controller | Combines Dragonfly Algorithm with Fuzzy Logic for path planning. | Provides a hybrid method for autonomous vehicle path planning in dynamic environments, ensuring smooth and adaptive control. | High computational complexity due to hybridization; may struggle with real-time applications in highly dynamic environments. |
Ref. [158] Multi-Heuristic Hybrid A* | Search-based algorithm tailored for autonomous parking in complex scenarios. | Enhances path planning efficiency in autonomous parking by reducing computation time. | The use of multiple heuristics can increase the complexity and computational time; not well suited for real-time applications in highly dynamic scenarios. |
Ref. [159] Spatial Kinematics Dynamics MPC | Integrates spatial kinematics and dynamics for collision-free vehicle tracking. | Provides robust tracking control, improving vehicle safety in dynamic environments. | Computationally expensive and difficult to implement in real time; requires precise kinematic and dynamic modeling of the environment. |
Ref. [160] Pedestrian Trajectory Prediction System | Real-time pedestrian prediction system based on Jetson Xavier. | Enhances real-time collision avoidance with accurate pedestrian behavior prediction. | High dependence on accurate pedestrian detection and tracking; may face issues in crowded environments with occlusions. |
Ref. [161] Hybrid (ACO-PSO) | Hybrid ACO-PSO with environmental feedback | Outperformed traditional ACO, showing better adaptability | Not assessed for low-dimensional problems |
Ref. [162] Hybrid Path Planning with Collision Avoidance Regulations | Focuses on unmanned surface vehicle (USV) navigation in inland rivers, incorporating collision avoidance regulations. | Efficient collision avoidance for USVs in dynamic water environments. | Limited adaptability to sudden environmental changes; heavy reliance on pre-set regulations may not cover all possible scenarios. |
Ref. [163] Model Predictive Control (MPC) | Utilizes a time-varying, non-uniform horizon for predictive control in autonomous vehicles. | Improves trajectory prediction and control in highly dynamic environments. | Requires precise modeling of the environment, which may be difficult in complex, real-world scenarios; computationally expensive for long horizons. |
Ref. [164] Gaussian Mixture Model-Based Online Mapping | Collision avoidance for unmanned surface vehicles (USV) using real-time mapping and navigation. | Reactively avoids obstacles in marine environments with low computational cost. | Struggles with high-speed navigation in cluttered environments due to the time needed for real-time mapping and collision avoidance decisions. |
Ref. [165] DQN with Extended Kalman Filter | Hybrid approach combining DQN and EKF for autonomous navigation. | Accelerates learning and improves decision-making in dynamic environments. | Requires significant training data and time; performance can degrade in highly dynamic environments with unpredictable obstacles. |
Ref. [166] Improved Artificial Potential Field (APF) | Path planning in icy and snowy road conditions using an enhanced APF model. | Focuses on vehicle safety and control in extreme weather conditions. | Susceptible to local minima, particularly in dense or complex environments; may not perform well in highly dynamic settings with moving obstacles. |
Ref. [167] Hybrid (ABC-EP) | Combination of Artificial Bee Colony (ABC) with Evolutionary Programming (EP) | Achieved 5.75% shorter paths than ABC alone | Further testing needed in complex environments
Ref. [168] Deep Reinforcement Learning (DRL) | Focuses on collision avoidance in autonomous driving with a focus on vulnerable road users (pedestrians, cyclists). | Provides a learning-based system for real-time obstacle detection and collision avoidance, particularly enhancing safety for vulnerable road users. | Requires extensive training time and data; may not generalize well to unseen environments; struggles with real-time response in highly dynamic environments. |
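Several hybrid methods above build on the artificial potential field idea (e.g., the improved APF of Ref. [166] and the APF time-derivative term of Ref. [154]). The following is a minimal sketch of the classic APF formulation only: an attractive force pulls toward the goal and a Khatib-style repulsive term pushes away from obstacles inside an influence radius. All gains, radii, and coordinates are illustrative assumptions, and the sketch inherits the local-minima weakness noted in the table.

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=10.0,
             rho0=1.0, step=0.05):
    """One normalized gradient-descent step of a basic artificial
    potential field: attraction grows with distance to the goal;
    repulsion is active only within influence radius rho0."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < rho0:
            # Classic repulsive-gradient magnitude (Khatib form).
            mag = k_rep * (1.0 / d - 1.0 / rho0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

def plan(start, goal, obstacles, tol=0.2, max_iters=2000):
    """Iterate APF steps until within tol of the goal (or give up)."""
    path, pos = [start], start
    for _ in range(max_iters):
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < tol:
            break
        pos = apf_step(pos, goal, obstacles)
        path.append(pos)
    return path
```

With an obstacle offset from the start-goal line, the planner skirts around it; when goal, robot, and obstacle are collinear, the attractive and repulsive terms can cancel, which is exactly the local-minimum failure mode the improved variants in the table are designed to mitigate.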
Technique | Specific Limitations |
---|---|
Deep Learning (CNNs, RNNs, LSTMs) | Requires vast labeled datasets, computationally expensive, lacks interpretability (“black box” nature), and struggles with generalization in rare or extreme scenarios. |
Reinforcement Learning (RL) | Long training times, challenges in sim-to-real transfer, and potential risks in real-world exploration due to the exploration-exploitation trade-off. |
Hybrid Approaches | Increased complexity in integration, potential latency in decision-making, and requires extensive tuning and optimization for smooth performance. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Hamidaoui, M.; Talhaoui, M.Z.; Li, M.; Midoun, M.A.; Haouassi, S.; Mekkaoui, D.E.; Smaili, A.; Cherraf, A.; Benyoub, F.Z. Survey of Autonomous Vehicles’ Collision Avoidance Algorithms. Sensors 2025, 25, 395. https://doi.org/10.3390/s25020395