Review

A Critical AI View on Autonomous Vehicle Navigation: The Growing Danger

by Tymoteusz Miller 1,2,3, Irmina Durlik 3,4,*, Ewelina Kostecka 3,5, Piotr Borkowski 6,* and Adrianna Łobodzińska 3,7

1 Institute of Marine and Environmental Sciences, University of Szczecin, Wąska 13, 71-415 Szczecin, Poland
2 Information Technology and Data Science, INTI International University, Persiaran Perdana BBN, Putra Nilai, Nilai 71800, Negeri Sembilan, Malaysia
3 Polish Society of Bioinformatics and Data Science BIODATA, Popiełuszki 4c, 71-214 Szczecin, Poland
4 Faculty of Navigation, Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
5 Faculty of Mechatronics and Electrical Engineering, Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
6 Faculty of Computer Science and Telecommunications, Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
7 Institute of Biology, University of Szczecin, Wąska 13, 71-415 Szczecin, Poland
* Authors to whom correspondence should be addressed.
Electronics 2024, 13(18), 3660; https://doi.org/10.3390/electronics13183660
Submission received: 13 June 2024 / Revised: 12 September 2024 / Accepted: 13 September 2024 / Published: 14 September 2024
(This article belongs to the Special Issue Autonomous and Connected Vehicles)

Abstract

Autonomous vehicles (AVs) represent a transformative advancement in transportation technology, promising to enhance travel efficiency, reduce traffic accidents, and revolutionize our road systems. Central to the operation of AVs is the integration of artificial intelligence (AI), which enables these vehicles to navigate complex environments with minimal human intervention. This review critically examines the potential dangers associated with the increasing reliance on AI in AV navigation. It explores the current state of AI technologies, highlighting key techniques such as machine learning and neural networks, and identifies significant challenges including technical limitations, safety risks, and ethical and legal concerns. Real-world incidents, such as Uber’s fatal accident and Tesla’s crash, underscore the potential risks and the need for robust safety measures. Future threats, such as sophisticated cyber-attacks, are also considered. The review emphasizes the importance of improving AI systems, implementing comprehensive regulatory frameworks, and enhancing public awareness to mitigate these risks. By addressing these challenges, we can pave the way for the safe and reliable deployment of autonomous vehicles, ensuring their benefits can be fully realized.

1. Introduction

Autonomous vehicles (AVs) represent a significant technological advancement in the field of transportation, promising to revolutionize how we travel, reduce traffic accidents, and improve efficiency on our roads. These vehicles leverage a variety of sensors, algorithms, and computational power to navigate complex environments without human intervention. The integration of artificial intelligence (AI) in AVs is pivotal, allowing these systems to perceive their surroundings, make real-time decisions, and execute navigational tasks with a high degree of autonomy [1,2,3].
The sensors utilized in AVs, including LiDAR, radar, cameras, and ultrasonic sensors, collectively provide a comprehensive understanding of the vehicle’s immediate environment. AI algorithms process the vast amounts of data generated by these sensors to identify objects, predict the actions of other road users, and determine the optimal path for the vehicle to follow. Machine learning techniques, particularly deep learning, have been instrumental in improving the accuracy and reliability of these perception and decision-making systems [4,5,6].
Moreover, the potential benefits of AVs extend beyond safety and efficiency. They promise significant economic advantages by reducing the costs associated with human drivers and increasing productivity through the potential for vehicles to operate continuously without fatigue. AVs also offer mobility solutions for populations unable to drive, such as the elderly and disabled, thus enhancing social inclusion and accessibility [2,7,8].
As the development and deployment of AVs continue to accelerate, they are becoming an increasingly common sight on our roads, with major automotive manufacturers and technology companies investing heavily in this transformative technology. Companies such as Tesla, Waymo, and Uber are at the forefront of AV innovation, conducting extensive testing and pilot programs in various urban and suburban environments. Governments and regulatory bodies are also adapting to this rapid technological progress by developing frameworks and standards to govern the safe deployment of AVs on public roads [9,10,11].
However, alongside these advancements, the integration of AI in AV navigation systems presents substantial challenges and risks. The complexity of real-world driving environments, combined with the inherent unpredictability of human behavior, poses significant hurdles for AI systems to overcome. Additionally, the reliance on AI introduces vulnerabilities related to cybersecurity, data privacy, and the potential for algorithmic bias, which can have serious implications for the safety and trustworthiness of AVs [12,13,14].
In this context, a critical examination of the dangers associated with AI in AV navigation is essential to ensure the responsible development and deployment of autonomous vehicles. This review will delve into the technical limitations, safety risks, ethical concerns, and future threats posed by AI-driven AV navigation systems, providing a comprehensive overview of the challenges that lie ahead [2,15,16].
This review aims to critically examine the growing dangers associated with the application of AI in autonomous vehicle navigation. While AI has undoubtedly enabled remarkable capabilities in AVs, it also introduces significant risks and challenges that must be addressed to ensure the safe and reliable operation of these vehicles. This review will explore the technical limitations, safety risks, ethical concerns, and potential future threats posed by AI-driven AV navigation systems. By systematically analyzing these aspects, the review seeks to provide a comprehensive understanding of the dangers inherent in current AI applications within AVs.
The critical examination of AI in AV navigation is of paramount importance for several reasons. For researchers, understanding the limitations and risks associated with AI in AVs is essential for guiding future research directions and improving current technologies. This understanding must extend beyond the known challenges to encompass emerging issues such as the AI’s handling of complex urban environments, the unpredictability of human behavior, and the evolving nature of cybersecurity threats. For developers and engineers, awareness of these dangers is crucial for designing more robust and reliable systems. This requires not only refining existing models but also innovating new approaches that address the limitations of current AI technologies, particularly in edge cases that represent rare but high-risk scenarios. Policymakers need to be informed about these risks to create appropriate regulations and standards that ensure the safety and reliability of AVs. As AI technology continues to evolve, regulatory frameworks must be adaptable, incorporating the latest advancements and lessons learned from both historical and recent incidents. Furthermore, public awareness and informed discourse on this topic are necessary to foster societal acceptance and trust in autonomous vehicle technologies. Transparent communication about the capabilities and limitations of AI in AVs is essential to build this trust, especially as these vehicles become more prevalent on public roads. By addressing these issues comprehensively, we can work towards mitigating the dangers associated with AI in AV navigation and realize the full potential of autonomous vehicles in a safe and responsible manner [17,18,19,20].
In this paper, “navigation” refers to the process by which autonomous vehicles perceive their environment, make decisions, and execute maneuvers to move from one location to another safely and efficiently. This includes tasks such as route planning, obstacle detection, path optimization, and the interaction with dynamic and static elements in the environment.
This review specifically focuses on AI integration in road-based autonomous vehicles. While AI applications in other domains such as aerial, maritime, and rail transport share some similarities, the challenges and risks discussed here are particularly relevant to road vehicles. This focus allows for a detailed exploration of the unique technical, safety, and ethical considerations in this domain. Moreover, this paper seeks to bridge the gap between the theoretical advancements in AI and their practical implications, providing insights that are crucial for the continued development and deployment of AV technologies in real-world scenarios.

2. Overview of AI in AV Navigation

2.1. Current State

Autonomous vehicle (AV) navigation is at the forefront of technological innovation, integrating a variety of advanced technologies to enable vehicles to operate independently of human drivers. The current state of AV navigation involves a complex interplay of hardware and software components that work together to perceive the environment, make decisions, and execute driving tasks [21,22,23].
Perception systems in AVs rely on an array of sensors, including LiDAR, radar, cameras, and ultrasonic sensors, which provide real-time data about the vehicle’s surroundings. LiDAR (Light Detection and Ranging) creates high-resolution 3D maps by measuring the time it takes for laser pulses to bounce back from objects. Radar is particularly useful for detecting objects and their speed, especially in poor weather conditions, while cameras capture visual information to identify road signs, traffic lights, and lane markings. Ultrasonic sensors are used for close-range detection and parking assistance. The data from these diverse sensors are fused to create a comprehensive and accurate representation of the vehicle’s environment. Sensor fusion algorithms combine the strengths of each sensor type, compensating for their individual weaknesses and improving overall perception reliability [24,25,26].
Localization and mapping are critical components of AV navigation. AVs use Simultaneous Localization and Mapping (SLAM) techniques to build and update maps of their environment while simultaneously keeping track of their location within that environment. This involves integrating sensor data with pre-existing maps to maintain an accurate position of the vehicle. High-definition (HD) maps provide detailed and precise information about road geometry, lane configurations, and traffic rules, which are essential for safe navigation [27,28,29].
Path planning in AVs involves determining the optimal path for the vehicle to follow, considering the vehicle’s current state, destination, and dynamic obstacles. This includes both short-term planning, such as immediate maneuvers, and long-term planning, such as the route to the destination. The control system executes the planned path by adjusting the vehicle’s steering, acceleration, and braking, requiring precise and real-time adjustments to ensure smooth and safe driving [27,30,31].
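Short-term path planning of this kind is commonly illustrated with graph search. The following is a minimal, illustrative sketch (not any production planner) of A* search over a 2-D occupancy grid, a standard textbook formulation of the problem; the grid and costs are invented for demonstration:

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 2-D occupancy grid (0 = free cell, 1 = blocked cell).

    Returns the list of cells from start to goal, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def heuristic(cell):
        # Manhattan distance: admissible for 4-connected movement.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Each entry: (estimated total cost f, cost so far g, cell, path so far).
    open_set = [(heuristic(start), 0, start, [start])]
    best_cost = {start: 0}
    while open_set:
        _, cost, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                new_cost = cost + 1
                if new_cost < best_cost.get((r, c), float("inf")):
                    best_cost[(r, c)] = new_cost
                    heapq.heappush(open_set,
                                   (new_cost + heuristic((r, c)), new_cost,
                                    (r, c), path + [(r, c)]))
    return None

# A wall blocks the direct route, forcing a detour through the open column.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

Real planners operate over continuous, kinematically feasible trajectories rather than grid cells, but the same search-with-heuristic structure underlies many of them.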

2.2. Key AI Techniques

AI plays a critical role in enabling AV navigation, with several key techniques underpinning the functionality of these systems. Machine learning, particularly supervised learning, involves training models on large datasets labeled with correct outputs. For instance, supervised learning is used to train models to recognize pedestrians, cyclists, and other vehicles by exposing the model to thousands of labeled images. Unsupervised learning, used for clustering and anomaly detection, helps AVs identify unusual or unexpected events in their environment that were not explicitly labeled during training [25,26,32].
Deep learning, and specifically convolutional neural networks (CNNs), are widely used for image processing tasks such as object detection, classification, and semantic segmentation. In AVs, CNNs process camera images to identify road signs, lane markings, and obstacles. Recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) networks, are used for sequence prediction tasks, such as predicting the trajectory of moving objects like pedestrians and other vehicles [33,34,35].
Reinforcement learning, including Q-learning and policy gradient methods, is employed to teach AVs how to make decisions through trial and error, optimizing for long-term rewards. For instance, reinforcement learning can be used to improve the efficiency of path planning and collision avoidance strategies. Decision-making algorithms, such as Bayesian networks and Markov decision processes (MDPs), help AVs make decisions under uncertainty by estimating the likelihood of various outcomes based on sensor data. Bayesian networks are probabilistic models that assist in decision-making under uncertainty, while MDPs provide a mathematical framework for modeling decision-making in environments where outcomes are partly random and partly under the control of the decision maker [24,36,37].
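The MDP framework can be made concrete with a toy example. The sketch below runs value iteration on a hypothetical two-state driving MDP; the states ("lane", "hazard"), rewards, and transition probabilities are invented purely for illustration:

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """Value iteration for a finite MDP.

    transition(s, a) -> list of (probability, next_state) pairs
    reward(s, a)     -> immediate reward for taking action a in state s
    Returns a dict mapping each state to its (near-)optimal value.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy MDP: "cruise" keeps the lane safely; "overtake" pays more but
# sometimes lands the vehicle in an absorbing "hazard" state.
states = ["lane", "hazard"]
actions = ["cruise", "overtake"]

def transition(s, a):
    if s == "hazard":
        return [(1.0, "hazard")]                 # absorbing bad state
    if a == "cruise":
        return [(1.0, "lane")]
    return [(0.8, "lane"), (0.2, "hazard")]      # overtaking sometimes fails

def reward(s, a):
    if s == "hazard":
        return -10.0
    return 1.0 if a == "cruise" else 2.0

V = value_iteration(states, actions, transition, reward)
```

With these invented numbers, the hazard state's value converges to -10/(1 - 0.9) = -100, and cruising is optimal in the lane state (value 10), showing how the MDP machinery trades a small extra reward against a low-probability catastrophic outcome.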
Expert systems in AVs encode human knowledge and decision-making rules into the software, enabling the vehicle to make informed decisions based on predefined rules and conditions. These AI techniques are integral to the development and operation of AVs, enabling them to perceive their environment accurately, make informed decisions, and navigate safely and efficiently [2,38,39].
Recent advancements in AV perception systems have increasingly incorporated bird’s eye view (BEV) representations and occupancy networks, which significantly enhance the spatial understanding and decision-making capabilities of AVs.
BEV frameworks transform multi-modal sensor data into a top-down, 2D map of the environment surrounding the vehicle. This view simplifies the spatial relationships between objects, aiding in tasks like lane detection, object tracking, and trajectory planning. BEV representations are particularly valuable in urban settings, where complex scenarios like intersections and roundabouts require precise navigation. These frameworks integrate semantic segmentation, allowing the AV to differentiate between various elements (e.g., roads, vehicles, and pedestrians), thereby enhancing context-aware decision-making.
Occupancy networks provide a continuous, probabilistic 3D model of the environment, predicting whether specific regions are occupied by objects. Unlike traditional voxel grids, occupancy networks offer higher resolution and better capture complex geometries, which is crucial for obstacle detection and collision avoidance. These networks are trained on diverse datasets to generalize well in new environments, and they integrate data from multiple sensors (LiDAR, radar, and cameras) to create a robust and accurate environmental model.
Combining BEV representations with occupancy networks enables AVs to utilize the strengths of both approaches: BEV’s broad 2D overview supports strategic planning, while occupancy networks ensure detailed 3D awareness for precise navigation. This integration enhances the AV’s ability to safely and efficiently navigate through complex and dynamic environments.
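Occupancy networks learn a continuous occupancy function from data, but the underlying probabilistic bookkeeping can be illustrated with the classic log-odds occupancy-grid update, sketched below. The evidence weights (0.7 for an "occupied" return, 0.3 for "free") are illustrative values, not ones from any particular system:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def prob(log_odds):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

L_OCC = logit(0.7)    # evidence added when a sensor reports "occupied"
L_FREE = logit(0.3)   # evidence added when a sensor reports "free"

def update_cell(log_odds, hit):
    """Fuse one binary measurement into a cell's accumulated log-odds."""
    return log_odds + (L_OCC if hit else L_FREE)

# Start from an uninformed prior (p = 0.5, log-odds = 0) and fuse three
# consecutive LiDAR returns that all report the cell as occupied.
l = 0.0
for hit in (True, True, True):
    l = update_cell(l, hit)

p_occupied = prob(l)   # confidence grows with each consistent measurement
```

A learned occupancy network replaces this per-cell tally with a neural function over continuous 3D coordinates, but the idea of accumulating probabilistic evidence about occupancy is the same.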

2.2.1. Descriptions of AI Techniques in Autonomous Vehicles

Autonomous vehicles (AVs) rely heavily on advanced AI techniques to perceive their environment, make decisions, and navigate complex driving scenarios. This section provides a detailed overview of how machine learning algorithms, neural networks, and sensor fusion techniques are applied in AVs, along with specific examples to illustrate their practical implementation (Figure 1).
Figure 1 illustrates the data flow in a system that integrates sensor data processing, decision-making, and actuation within a continuous feedback loop.
  • Sensors—Initially, the system collects raw data from various sensors that monitor the environment or the system itself.
  • Sensor Fusion—The raw data from different sensors are then merged and processed in the sensor fusion step. This allows the system to enhance the accuracy and reliability of the input data by combining information from multiple sources.
  • Data Processing—The fused data undergo further processing using algorithms or machine learning models, which transform them into a more usable format, extracting relevant features that will be needed for decision-making.
  • Decision Making—Based on the processed data, the system makes decisions regarding the appropriate actions to take. This step often involves rule-based logic, machine learning algorithms, or predefined models to choose the optimal course of action.
  • Actuators—The decisions made are translated into control commands that are sent to the actuators. These actuators perform physical actions in response to the commands, such as controlling motors, adjusting settings, or interacting with other hardware components.
  • Feedback Loop—The arrow from the actuators to the sensors represents a critical feedback loop in the system. After the actuators perform their actions, updated sensor data are collected to monitor the results of the actions taken. This feedback allows the system to continuously evaluate and adjust its operations. By comparing the new sensor data with the expected outcomes, the system can adapt its behavior in real-time, ensuring that the actions taken are optimized and any discrepancies are corrected in future cycles.
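The feedback loop described above can be sketched as a minimal control cycle. Every component below is a deliberately simplified, hypothetical stand-in (a plain average for fusion, one threshold rule for decision-making, and made-up numbers throughout); a real AV stack replaces each function with a far more sophisticated module:

```python
def read_sensors(world):
    """Collect raw readings; here each 'sensor' reports a noisy distance (m)."""
    return {"lidar": world["obstacle_m"] + 0.2,
            "radar": world["obstacle_m"] - 0.1}

def fuse(readings):
    """Sensor fusion stand-in: a simple average of redundant estimates."""
    return sum(readings.values()) / len(readings)

def decide(distance_m, safe_gap_m=10.0):
    """Rule-based decision stand-in: brake when the fused gap is too small."""
    return "brake" if distance_m < safe_gap_m else "cruise"

def actuate(world, command):
    """Apply the command; the changed world state closes the feedback loop."""
    world["obstacle_m"] += 2.0 if command == "brake" else -1.0
    return world

world = {"obstacle_m": 8.0}
log = []
for _ in range(4):                    # sense -> fuse -> decide -> act, repeated
    fused = fuse(read_sensors(world))
    command = decide(fused)
    log.append(command)
    world = actuate(world, command)
```

Running the loop shows the feedback behavior: braking opens the gap, the next sensing cycle observes the larger gap, and the decision flips back to cruising.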

2.2.2. Machine Learning Algorithms and Neural Networks

Machine learning, particularly deep learning, plays a crucial role in the functioning of autonomous vehicles. Neural networks, especially convolutional neural networks (CNNs), are widely used for image processing tasks. CNNs are designed to automatically and adaptively learn spatial hierarchies of features from input images. In the context of AVs, CNNs process data from vehicle-mounted cameras to detect and classify objects such as pedestrians, vehicles, traffic signs, and lane markings.
For example, a CNN might be used to analyze video feeds from the vehicle’s cameras to identify a pedestrian crossing the street. The CNN layers progressively extract features from the raw pixel data, such as edges, textures, and shapes, which are then combined to form a high-level representation of the pedestrian. This representation is used to classify the object as a pedestrian, allowing the AV to react appropriately.
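The feature-extraction step can be made concrete with a hand-rolled convolution. In a real CNN the kernel weights are learned from data; here a fixed Sobel-style kernel (and a tiny made-up image patch) shows how a single filter responds strongly at a vertical intensity edge:

```python
def convolve2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1, no kernel flip --
    i.e. cross-correlation, as in the deep-learning convention)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + u][j + v] * kernel[u][v]
                            for u in range(kh) for v in range(kw))
    return out

# A 4x4 grayscale patch: dark (0) on the left, bright (1) on the right.
patch = [[0, 0, 1, 1]] * 4

# Sobel-style vertical-edge kernel; in a trained CNN such weights emerge
# automatically in the early layers.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

feature_map = convolve2d(patch, kernel)   # large values mark the edge
```

Stacked layers of such filters, interleaved with nonlinearities and pooling, are what allow a CNN to progress from edges to textures to whole-object representations like "pedestrian".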
Reinforcement learning (RL) is another key machine learning technique employed in AVs, particularly in decision-making processes. In RL, the AV learns to make driving decisions by interacting with its environment and receiving feedback in the form of rewards or penalties. Over time, the AV optimizes its behavior to maximize cumulative rewards, such as safe driving, efficient fuel consumption, or smooth navigation through traffic.
For instance, RL might be used to train an AV to navigate through a crowded urban environment. The vehicle is rewarded for maintaining safe distances from other vehicles and pedestrians, following traffic rules, and avoiding sudden stops or accelerations. Through trial and error, the AV learns the best strategies for handling complex driving situations.
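A minimal tabular Q-learning sketch captures this trial-and-error idea. It assumes a toy two-lane road where the agent must change lanes to pass a blocked cell; all states, rewards, and hyperparameters are invented for illustration and bear no resemblance to the high-dimensional formulations used in real AV research:

```python
import random

LENGTH = 5
OBSTACLE = (0, 2)                     # lane 0 is blocked at position 2
ACTIONS = ("forward", "switch")

def step(state, action):
    lane, pos = state
    nxt = (1 - lane, pos) if action == "switch" else (lane, pos + 1)
    if nxt == OBSTACLE:
        return nxt, -10.0, True       # collision: penalty, episode ends
    if nxt[1] == LENGTH - 1:
        return nxt, 10.0, True        # reached the end of the road
    return nxt, -1.0, False           # small per-step cost rewards progress

random.seed(0)
Q = {((l, p), a): 0.0 for l in (0, 1) for p in range(LENGTH) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):                  # training episodes
    state, done = (0, 0), False
    while not done:
        if random.random() < epsilon:                      # explore
            action = random.choice(ACTIONS)
        else:                                              # exploit
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        target = reward if done else reward + gamma * max(
            Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt

greedy = {s: max(ACTIONS, key=lambda a: Q[(s, a)])
          for s in ((l, p) for l in (0, 1) for p in range(LENGTH))}
```

After training, the greedy policy switches lanes just before the blocked cell and otherwise drives forward, mirroring in miniature how an RL agent learns avoidance behavior from penalties rather than explicit rules.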

2.2.3. Sensor Fusion Techniques

Sensor fusion is the process of combining data from multiple sensors to create a comprehensive understanding of the vehicle’s surroundings. Autonomous vehicles typically use a variety of sensors, including LiDAR, radar, and cameras, each providing different types of information. LiDAR generates precise 3D maps of the environment by measuring the time it takes for laser pulses to bounce back from objects. Radar is particularly effective in detecting the speed and distance of objects, especially in adverse weather conditions. Cameras provide rich visual information, essential for recognizing traffic signs, lane markings, and other road users.
The data from these sensors are processed and integrated through sensor fusion techniques, which involve sophisticated algorithms that combine the strengths of each sensor while compensating for their individual weaknesses. For example, while a camera might struggle to identify objects in low light, radar can still provide accurate distance measurements, and LiDAR can detect the object’s shape and position.
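One standard building block for combining such redundant measurements is inverse-variance weighting, which is also the core of the Kalman-filter measurement update. The sketch below fuses hypothetical radar and LiDAR range estimates (all numbers are made up):

```python
def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Combine two independent Gaussian estimates of the same quantity.

    The more certain (lower-variance) estimate gets the larger weight, and
    the fused variance is always smaller than either input variance.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_mean, fused_var

# Radar: 20.5 m with variance 0.25 m^2 (noisier);
# LiDAR: 20.1 m with variance 0.05 m^2 (more precise).
mean, var = fuse_estimates(20.5, 0.25, 20.1, 0.05)
```

The fused estimate lands much closer to the LiDAR reading because of its lower variance, illustrating how fusion lets the stronger sensor dominate while still using every measurement.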

2.2.4. Real-Time Decision-Making Example: Pedestrian Detection and Reaction

To illustrate the workflow of these AI techniques, consider the scenario of an AV detecting and reacting to a pedestrian crossing the road (Figure 2).
  • Perception: The AV’s sensors (cameras, LiDAR, and radar) continuously scan the environment. The camera feeds are processed by a CNN to identify potential pedestrians. Simultaneously, LiDAR generates a 3D point cloud, and radar provides distance and velocity data for the detected objects.
  • Sensor Fusion: The data from the camera, LiDAR, and radar are combined in real-time. The sensor fusion algorithm integrates these data streams to form a comprehensive model of the environment, ensuring that the pedestrian is accurately identified and localized in the vehicle’s path.
  • Decision-Making: Once the pedestrian is detected and localized, the AV’s decision-making system, possibly using reinforcement learning algorithms, assesses the situation. The system calculates the optimal response, such as slowing down, stopping, or steering to avoid the pedestrian while maintaining safety.
  • Action: Finally, the control system executes the decision by adjusting the vehicle’s speed and steering. If the pedestrian is too close, the AV might apply the brakes to avoid a collision, demonstrating the seamless integration of perception, decision-making, and action in real-time.
This example highlights how various AI techniques work together to enable autonomous vehicles to navigate safely and efficiently, even in complex and dynamic environments (Figure 3).
Despite the significant progress made, challenges and risks remain, necessitating ongoing research and development to address these issues and enhance the reliability and safety of AI-driven AV navigation systems (Table 1).

3. Potential Dangers and Challenges

3.1. Technical Limitations

The integration of AI in autonomous vehicle (AV) navigation presents several technical limitations, particularly in handling complex and unpredictable environments. While AI systems have shown remarkable capabilities, their performance is still constrained by various factors, which can pose significant safety risks and operational challenges [45,46].
One of the primary technical limitations is the AI’s ability to generalize from training data to real-world scenarios. AVs are trained using extensive datasets that capture a wide range of driving conditions and scenarios. However, no dataset can encompass the full diversity and unpredictability of real-world environments. As a result, AVs may encounter novel situations that were not present in the training data, leading to potential failures in perception, decision-making, and control. For example, unusual weather conditions, unexpected road hazards, and atypical driving behaviors from other road users can significantly challenge an AV’s AI systems [3,22,45].
Another significant limitation is the reliance on sensor data, which can be prone to inaccuracies and failures. Sensors such as LiDAR, radar, and cameras are crucial for the vehicle’s perception of its environment. However, these sensors can be affected by environmental conditions like fog, rain, snow, and glare from the sun. LiDAR, for instance, may struggle to provide accurate distance measurements in heavy fog or rain, while cameras can be blinded by direct sunlight or obstructed by dirt and debris. These sensor limitations can lead to incomplete or inaccurate environmental perception, compromising the AV’s ability to navigate safely [3,42,47].
The complexity of urban environments also poses a challenge for current AI systems. Urban areas are characterized by high levels of traffic, numerous pedestrians, complex road layouts, and dynamic interactions between various road users. Navigating such environments requires the AI to make quick, accurate, and contextually appropriate decisions. However, the computational models used in AVs may not always be able to process and respond to this complexity in real-time. This can result in suboptimal decision-making, such as inappropriate responses to pedestrian crossings, misinterpretation of traffic signals, or failure to predict the actions of other vehicles accurately [2,14,48].
Moreover, the interpretability of AI models remains a critical issue. Many AI systems, particularly those based on deep learning, operate as “black boxes”, making it difficult to understand how they arrive at specific decisions. This lack of transparency can hinder the identification and correction of errors within the system. In safety-critical applications like AVs, the inability to fully comprehend the AI’s decision-making process can lead to distrust and uncertainty among users and regulators (Table 2) [38,49,50].
The problem of edge cases—rare and unusual scenarios that are not well-represented in training data—also underscores the limitations of current AI systems. Edge cases can include unexpected pedestrian behavior, unconventional vehicle types, or rare traffic configurations. These scenarios, while infrequent, can pose significant risks if the AV is not adequately prepared to handle them. Ensuring that AI systems can robustly address these edge cases is a major challenge in the development of safe AV navigation [53,54].
Furthermore, the computational demands of real-time AI processing present another technical limitation. AVs require substantial computational resources to process sensor data, run complex algorithms, and make rapid decisions. Balancing the need for high computational power with the constraints of onboard hardware can be challenging, especially in terms of power consumption, heat dissipation, and the physical size of computing units [55,56,57].
In urban environments, the complexity of sensor fusion is particularly pronounced. For instance, integrating data from LiDAR and radar to accurately detect and classify objects in real-time poses significant challenges, especially in scenarios involving occlusions, varying weather conditions, and the need for precise localization within densely populated areas [57,58].
While AI has significantly advanced the capabilities of AVs, several technical limitations still need to be addressed to ensure the safe and reliable operation of these vehicles in complex and unpredictable environments. These limitations include the AI’s generalization ability, sensor reliability, urban navigation complexity, model interpretability, handling of edge cases, and computational demands. Addressing these challenges requires ongoing research, technological innovation, and rigorous testing to enhance the robustness and safety of AI-driven AV navigation systems [58,59].

Sensor Fusion and Integration Challenges

In autonomous vehicle navigation, sensor fusion is a critical process that combines data from multiple sensors—such as LiDAR, radar, and cameras—to create a comprehensive understanding of the vehicle’s environment. Each sensor type has its strengths and weaknesses. For instance, LiDAR provides high-resolution 3D mapping of surroundings, but its effectiveness can be compromised in adverse weather conditions like heavy rain or fog. Radar, while robust in detecting objects and their velocities, often lacks the resolution needed to identify smaller or more complex objects accurately. Cameras are essential for interpreting traffic signals and lane markings, but they are vulnerable to issues like glare, low light, and obstructions.
The integration of these diverse data streams presents significant challenges. For example, aligning data from sensors with different sampling rates and fields of view requires sophisticated algorithms that can accurately synchronize and process information in real-time. Additionally, sensor fusion systems must account for and mitigate the potential inaccuracies from individual sensors to avoid compounding errors. A well-known issue in this domain is sensor redundancy, which can either enhance reliability through cross-verification or introduce conflicting data that the system must resolve.
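One concrete piece of the synchronization problem is pairing frames from sensors that run at different rates. A simple nearest-timestamp association, with made-up timestamps, might look like the following (production systems typically also interpolate or motion-compensate between sweeps rather than just picking the nearest one):

```python
import bisect

def nearest_timestamp(sorted_times, t):
    """Return the element of sorted_times closest in time to t."""
    i = bisect.bisect_left(sorted_times, t)
    # The closest value is either just before or just at/after position i.
    candidates = sorted_times[max(0, i - 1):i + 1]
    return min(candidates, key=lambda x: abs(x - t))

camera_ts = [0.000, 0.033, 0.066, 0.100]   # ~30 Hz camera frames (seconds)
lidar_ts = [0.000, 0.100, 0.200]           # ~10 Hz LiDAR sweeps (seconds)

# Pair each camera frame with the nearest-in-time LiDAR sweep.
pairs = [(t, nearest_timestamp(lidar_ts, t)) for t in camera_ts]
```

Even this toy example shows the core difficulty: two consecutive camera frames can map to the same LiDAR sweep, so the fusion layer must reason about the temporal offset rather than treating the paired data as simultaneous.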

3.2. Safety Risks

The deployment of autonomous vehicles (AVs) introduces several potential safety risks, encompassing failure scenarios and real-world incidents that highlight the vulnerabilities of current AI-driven navigation systems. These risks can significantly impact the safety and reliability of AVs, necessitating thorough examination and mitigation strategies [2,3,60].
One of the foremost safety risks in AV navigation arises from sensor failures or inaccuracies. As AVs rely heavily on sensors such as LiDAR, radar, and cameras to perceive their surroundings, any malfunction or misinterpretation of sensor data can lead to hazardous situations. For instance, a radar sensor failing to detect an approaching vehicle due to interference or a camera misinterpreting a traffic signal due to adverse weather conditions can result in accidents. The dependency on multiple sensors for comprehensive environmental perception also means that the failure of even one sensor can compromise the vehicle’s ability to make safe driving decisions [61,62,63].
Another significant safety risk is the potential for software and algorithmic errors. AI systems in AVs are complex and require meticulous programming and extensive testing. However, software bugs, coding errors, or flaws in the underlying algorithms can lead to incorrect decision-making. For example, a bug in the path planning algorithm might cause the vehicle to miscalculate its trajectory, leading to collisions. The complexity of these systems makes it challenging to anticipate all possible failure modes, increasing the risk of unforeseen errors during operation [64,65,66].
Real-world incidents involving AVs provide concrete examples of these safety risks. One notable incident occurred in 2018 when an Uber self-driving car struck and killed a pedestrian in Arizona. The incident investigation revealed that the AV’s software failed to classify the pedestrian correctly as a human until it was too late to take evasive action. This tragedy highlighted the limitations of the AI system’s object detection and classification capabilities, underscoring the potential consequences of AI failures in AV navigation [67,68,69].
Similarly, Tesla’s Autopilot system has been involved in several high-profile accidents. In 2016, a Tesla Model S operating in Autopilot mode failed to recognize a white truck crossing the highway, resulting in a fatal crash. The system’s reliance on cameras for object detection was deemed insufficient to distinguish the truck against a bright sky. Such incidents emphasize the critical need for robust and redundant perception systems to ensure the safe operation of AVs under various conditions [70,71,72].
The risk of cybersecurity threats also poses a significant safety concern for AVs. As these vehicles become increasingly connected, they are susceptible to cyber-attacks that could compromise their operation. Hackers could potentially gain control of an AV, disrupt its navigation systems, or manipulate its data, leading to catastrophic outcomes. Ensuring robust cybersecurity measures is essential to protect AVs from such threats and maintain their operational integrity [67,73,74].
Furthermore, the unpredictable behavior of other road users adds another layer of complexity to AV safety. Human drivers, pedestrians, and cyclists can exhibit erratic and unexpected behaviors that challenge the AI’s predictive models. AVs must be capable of accurately anticipating and responding to these behaviors to avoid accidents. However, the inherent unpredictability of human actions makes it difficult to develop AI systems that can handle all possible scenarios reliably [60,75,76].
The issue of system redundancy and fail-safe mechanisms is also critical in ensuring AV safety. In the event of a system failure, AVs must have robust fallback mechanisms to safely manage the situation. For example, if the primary navigation system fails, an AV should be able to switch to a secondary system or safely bring the vehicle to a stop. The lack of adequate redundancy can lead to uncontrolled vehicle behavior in the event of a failure, posing significant safety risks [5,45,77].
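The fallback chain described above can be sketched as a small latched state machine. The Python below is purely illustrative (the states, health signals, and class names are our own invention, not any production AV interface): once both navigation sources report unhealthy, the controller commits to a minimal-risk stop and stays there until a manual reset.

```python
from enum import Enum

class NavState(Enum):
    PRIMARY = "primary"
    SECONDARY = "secondary"
    MINIMAL_RISK_STOP = "minimal_risk_stop"

class FallbackController:
    """Hypothetical fallback chain: primary -> secondary -> minimal-risk stop."""

    def __init__(self):
        self.state = NavState.PRIMARY

    def on_health_report(self, primary_ok: bool, secondary_ok: bool) -> NavState:
        if self.state is NavState.MINIMAL_RISK_STOP:
            return self.state  # latched: stay stopped until a manual reset
        if primary_ok:
            self.state = NavState.PRIMARY
        elif secondary_ok:
            self.state = NavState.SECONDARY
        else:
            # No healthy navigation source: bring the vehicle to a safe stop.
            self.state = NavState.MINIMAL_RISK_STOP
        return self.state
```

Latching the stop state reflects a common safety-engineering pattern: a system that has just failed should not silently re-promote itself on a transiently healthy signal.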
The safety risks associated with AV navigation are multifaceted and stem from sensor reliability, software and algorithmic errors, cybersecurity threats, and the unpredictability of other road users. Real-world incidents have demonstrated the potential consequences of these risks, highlighting the critical need for ongoing research, rigorous testing, and robust safety measures. Addressing these safety risks is essential to ensure the reliable and safe operation of AVs, thereby building public trust and advancing the adoption of autonomous transportation technologies.

3.3. Ethical and Legal Concerns

The integration of AI in autonomous vehicle (AV) navigation brings forth a host of ethical and legal concerns that must be addressed to ensure the responsible deployment and acceptance of these technologies. These concerns revolve around the decision-making processes of AI systems, accountability in the event of accidents, data privacy, and the broader societal impacts of AV adoption [78,79,80].
  • Ethical Implications
One of the primary ethical dilemmas in AV navigation is the decision-making process during unavoidable accidents. Known as the “trolley problem” in ethics, this scenario questions how an AV should prioritize different outcomes when harm is unavoidable. For instance, if an AV must choose between colliding with a group of pedestrians or swerving and potentially harming its passengers, how should it decide? These decisions involve moral judgments that are challenging to codify into algorithms. The ethical frameworks that guide these decisions must be transparent and reflect societal values, yet consensus on these values can be elusive [81,82,83].
The potential for algorithmic bias in AI systems also raises ethical concerns. AI algorithms are trained on large datasets, and if these datasets contain biases, the AI may inadvertently learn and propagate these biases. This could lead to discriminatory behaviors, such as misidentifying pedestrians from minority groups or favoring certain traffic scenarios over others. Ensuring fairness and inclusivity in AI decision-making processes is crucial to prevent discrimination and build public trust in AV technologies [84,85,86].
Another ethical issue is the impact of AVs on employment. The widespread adoption of AVs is likely to disrupt industries reliant on human drivers, such as trucking, taxi services, and delivery. This could lead to significant job losses, raising questions about the ethical responsibility of companies and governments to manage these societal impacts. Policies and initiatives to retrain displaced workers and mitigate the economic consequences of automation are essential to address these concerns [87,88,89].
  • Legal Challenges
The legal landscape for AVs is still evolving, and several challenges must be addressed to create a robust regulatory framework. One of the foremost legal issues is determining liability in the event of an accident. Traditional traffic laws assume a human driver is in control, but in AVs, control is shared between the human and the AI system. Determining whether the manufacturer, the software developer, or the human operator is liable can be complex and may require new legal definitions and standards [1,48,90].
Data privacy is another significant legal concern. AVs collect vast amounts of data from their sensors, including images, video, and location information. These data are essential for navigation and for improving AI systems, but they raise privacy concerns about how they are stored, shared, and used. Legal frameworks must ensure that AV data collection practices comply with privacy laws and protect individuals’ rights [91,92,93].
The regulatory environment for AVs varies widely across different jurisdictions, creating challenges for manufacturers and operators who must navigate a patchwork of regulations. Standardizing AV regulations can facilitate the broader adoption and testing of AVs, but achieving international consensus on these standards is challenging. Harmonizing regulations across borders would enable more efficient deployment and ensure consistent safety and operational standards [46,94,95].
Another legal challenge involves ensuring cybersecurity for AVs. Given the potential for cyber-attacks to compromise vehicle safety, robust legal requirements for cybersecurity protections are necessary. This includes mandating security standards for software, regular updates, and protocols for responding to cyber-threats [90,96,97].
  • Societal Impact
The broader societal impacts of AVs also encompass ethical and legal dimensions. AVs have the potential to reshape urban planning, reduce traffic congestion, and lower emissions by optimizing driving patterns and encouraging shared mobility. However, these benefits must be weighed against the potential for increased surveillance, loss of privacy, and the monopolization of AV technologies by large corporations [3,98,99].
Ensuring equitable access to AV technology is another important consideration. If AVs primarily benefit affluent individuals and communities, they could exacerbate existing social inequalities. Legal and policy measures should aim to make AV technology accessible to diverse populations, including those in rural and underserved urban areas [19,30,48].
  • Deeper Socio-Economic Impact Analysis
The widespread adoption of AVs is expected to have significant socio-economic implications, particularly concerning employment. Industries such as trucking, taxi services, and delivery are likely to experience substantial job displacement as AVs replace human drivers. For instance, the trucking industry, which employs millions of drivers globally, may see a drastic reduction in demand for human labor as AV technology becomes capable of handling long-haul transport autonomously.
Conversely, the rise of AVs will also create new employment opportunities in fields such as AI development, system maintenance, and AV-related services. As the demand for skills in AI programming, sensor technology, and data analysis grows, there will be a corresponding increase in jobs related to these technologies. Moreover, new roles may emerge in fleet management, cybersecurity, and infrastructure development to support the autonomous transportation ecosystem.
Economic benefits of AV adoption include potential cost savings for businesses and individuals. By reducing traffic congestion and optimizing driving patterns, AVs can contribute to significant time savings and productivity gains. The enhancement in fuel efficiency and the reduction in vehicle wear and tear could lower operational costs, especially for businesses heavily dependent on transportation. Additionally, the decrease in traffic accidents, largely attributed to human error, could result in substantial savings in healthcare costs, insurance premiums, and vehicle repairs.
However, the potential for AV technology to disproportionately benefit affluent communities poses a risk of exacerbating existing social inequalities. The initial high cost of AV technology might limit access to wealthier individuals or businesses, leaving lower-income populations at a disadvantage. Furthermore, urban areas with better infrastructure might reap the benefits of AV deployment more quickly than rural or underserved regions.
To mitigate these inequalities, policymakers should consider implementing subsidies or tax incentives that make AV technology more accessible across different socio-economic groups. Investments in public AV services, particularly in underserved areas, can help ensure that the advantages of autonomous transportation are distributed more equitably. Additionally, reskilling programs for workers displaced by AVs, such as training in AI and related fields, are essential to counteract the negative employment impacts and provide new opportunities for those affected.
The following table summarizes the key challenges identified in AI-driven autonomous vehicle navigation and provides targeted recommendations to address each issue (Table 3).
As the deployment of autonomous vehicles (AVs) continues to expand, it is critical to recognize and address the emerging risks that accompany this rapid technological advancement. Beyond the well-documented technical and operational challenges, new concerns have surfaced, particularly in the areas of cybersecurity, AI interpretability, and ethical decision-making. These issues present significant hurdles that could undermine the safety and reliability of AV systems if not adequately addressed. The following table provides a summary of these emerging challenges and outlines potential strategies for mitigating these risks (Table 4).

3.3.1. AI Challenges in Perception Modules

The perception module is one of the most critical components of autonomous vehicle (AV) navigation. It involves gathering and processing data from sensors such as LiDAR, radar, and cameras to create an accurate representation of the vehicle’s surroundings. The AI techniques employed in perception must handle vast amounts of data in real-time, making them susceptible to various challenges. One significant challenge is the complexity of sensor fusion. AI systems must integrate data from multiple sensors to create a coherent environmental model; however, inconsistencies in sensor data, such as varying resolutions or data acquisition rates, can lead to inaccuracies.
Additionally, adverse environmental conditions, such as fog, rain, or snow, pose substantial difficulties. Sensors like cameras and LiDAR struggle in these conditions, and the current AI models often falter, leading to decreased perception accuracy. The need for real-time processing further compounds these challenges, as computational models must quickly process and interpret data. Any lag or error in this process can have severe consequences.
Recent advancements in deep learning, particularly convolutional neural networks (CNNs) for image processing and sensor fusion algorithms, are being developed to address these challenges. While these solutions show promise, they are still evolving, and further research is needed to enhance their robustness and reliability under all conditions.
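A classical building block behind such sensor fusion is inverse-variance weighting: each sensor’s estimate contributes in proportion to how much it is trusted. The sketch below uses invented numbers and is far simpler than the Kalman filters or learned fusion networks used in practice, but it shows the key property that a degraded sensor (here, a camera in fog, modeled with high variance) is automatically down-weighted:

```python
def fuse_estimates(measurements):
    """Inverse-variance weighted fusion of independent estimates.

    measurements: iterable of (value, variance) pairs, one per sensor.
    Returns (fused_value, fused_variance); lower-variance (more trusted)
    sensors dominate, and the fused variance is below any single sensor's.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    fused = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return fused, 1.0 / total

# Range to an obstacle in fog: radar stays sharp (variance 0.25 m^2),
# the camera degrades badly (variance 4.0 m^2). Values are invented.
fused_range, fused_var = fuse_estimates([(52.0, 0.25), (48.0, 4.0)])
```

The fused range lands close to the radar reading, and the fused variance is smaller than the radar’s alone, which is exactly why redundancy across sensing modalities improves perception.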

3.3.2. AI Challenges in Planning and Decision-Making Modules

The planning and decision-making modules of AVs are responsible for determining the vehicle’s path and actions based on its perception of the environment. These modules rely heavily on AI to predict possible scenarios and select the optimal course of action. However, AI systems in these modules face significant challenges, particularly when dealing with unpredictable scenarios. For instance, the AI must account for the erratic behavior of other road users, such as sudden pedestrian movements or erratic driving by other vehicles. Traditional rule-based systems struggle in these scenarios, necessitating more adaptive AI models.
Another challenge is ethical decision-making. In situations where potential accidents are unavoidable, AI must make complex decisions that carry ethical implications, such as those posed by the trolley problem. Designing AI that can handle such moral dilemmas is a significant challenge. Moreover, the AI must continuously update the vehicle’s path in response to changing conditions, a task that requires not only real-time data processing but also the ability to anticipate future events—a particularly difficult challenge in congested urban environments.
Reinforcement learning and decision-theoretic approaches, such as Markov decision processes (MDPs), are being used to improve decision-making under uncertainty. These techniques enable AI to learn from experience, adapting its decisions based on previous outcomes. However, the ethical dimensions of these decisions remain an ongoing area of research.
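A minimal worked example of the decision-theoretic approach mentioned above is value iteration on a toy MDP. The states, actions, transition probabilities, and rewards below are invented purely for illustration; they merely show how a large collision penalty makes braking the optimal policy once a pedestrian is detected:

```python
# Toy MDP: states 0 = clear lane, 1 = pedestrian ahead, 2 = stopped (absorbing).
# All probabilities and rewards are invented for illustration.
GAMMA = 0.9
STATES = (0, 1, 2)
ACTIONS = ("go", "brake")

# T[(state, action)] = list of (probability, next_state, reward)
T = {
    (0, "go"):    [(0.9, 0, 1.0), (0.1, 1, 0.0)],  # progress, small risk
    (0, "brake"): [(1.0, 2, -0.5)],                 # needless stop
    (1, "go"):    [(1.0, 2, -10.0)],                # collision penalty
    (1, "brake"): [(1.0, 2, -0.5)],                 # safe stop
    (2, "go"):    [(1.0, 2, 0.0)],
    (2, "brake"): [(1.0, 2, 0.0)],
}

def q_value(s, a, V):
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in T[(s, a)])

def value_iteration(eps=1e-6):
    """Iterate the Bellman optimality update until values stop changing."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = max(q_value(s, a, V) for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

def greedy_action(s, V):
    return max(ACTIONS, key=lambda a: q_value(s, a, V))
```

With these numbers the optimal policy keeps driving in a clear lane but brakes for the pedestrian; real AV planners face the same trade-off over vastly larger, continuous state spaces.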

3.3.3. AI Challenges in Localization and Mapping Modules

Localization and mapping are crucial for enabling AVs to determine their precise position within an environment and understand the spatial relationships between objects. This module relies on AI to fuse data from various sensors and update maps in real-time. One of the primary challenges in this module is achieving precision in complex environments. Urban settings, characterized by dense infrastructure and frequent occlusions, present significant challenges for AI-driven localization. Even small errors in localization can lead to substantial navigation issues, especially in scenarios requiring precise movements, such as lane changes or parking.
Simultaneous Localization and Mapping (SLAM) techniques are foundational in this module, but they have limitations in dynamic environments where objects are constantly moving or where GPS signals are weak or unavailable. Additionally, over time, the accumulation of small errors in sensor data can lead to “drift”, where the perceived position deviates from the actual position. AI must correct for this drift, but current techniques are not foolproof, particularly over long distances or in complex environments.
Recent advancements in improving SLAM algorithms through AI, particularly deep learning-based visual odometry and enhanced sensor fusion techniques, are promising. These innovations aim to improve the accuracy and reliability of localization and mapping in challenging environments, but further refinement is necessary for widespread deployment.
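The drift phenomenon can be illustrated with a one-dimensional dead-reckoning sketch (the bias and step values are hypothetical): a small systematic odometry error grows without bound, while a periodic landmark observation, the essence of loop closure in SLAM, cancels it:

```python
def dead_reckon(odometry_steps, bias=0.02):
    """Integrate raw odometry; a small systematic bias accumulates as drift."""
    est = 0.0
    for step in odometry_steps:
        est += step + bias  # every increment is slightly off
    return est

def with_landmark_fix(odometry_steps, bias=0.02, fix_every=10):
    """Same integration, but a known landmark periodically cancels the error,
    mimicking the corrective effect of loop closure in SLAM."""
    est, truth = 0.0, 0.0
    for i, step in enumerate(odometry_steps, 1):
        est += step + bias
        truth += step
        if i % fix_every == 0:
            est = truth  # landmark observation snaps the estimate back
    return est, truth
```

Over 100 unit steps the uncorrected estimate is off by 2 units and keeps growing linearly; the corrected one stays bounded, which is why GPS-denied or landmark-poor environments are so problematic for localization.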

3.3.4. AI Challenges in Control Modules

The control module is responsible for executing the decisions made by the planning module, including steering, acceleration, and braking. The AI in this module must ensure that these actions are carried out safely and smoothly. However, this task is fraught with challenges. One major challenge is minimizing latency in executing control commands, especially in high-speed scenarios where milliseconds can make a difference in avoiding accidents. The need for rapid and precise responses in dynamic environments places significant demands on AI systems.
Another challenge lies in the safety of redundant systems. Control modules often rely on redundant systems to enhance safety, but AI must manage these redundancies effectively, ensuring that control actions remain consistent and safe even if one system fails. Furthermore, integrating AI-driven control with legacy automotive systems can be challenging. Many AVs are built on platforms that include legacy systems, and the AI must often work within the constraints of these older technologies, which may not have been designed for autonomous operations.
Advanced control algorithms, including model predictive control (MPC) and adaptive control systems, are being developed to address these challenges. These systems are designed to anticipate potential issues and adjust control commands accordingly. However, ensuring that these solutions can handle all potential edge cases remains a key area of research.
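To make the idea behind model predictive control concrete, the sketch below implements a deliberately crude single-move variant for speed tracking: it evaluates a handful of candidate accelerations over a short horizon and picks the one minimizing a tracking-plus-comfort cost. All parameters are illustrative; real MPC re-optimizes a full control sequence under actuator and safety constraints at every time step:

```python
def mpc_speed_control(v0, v_target, horizon=5, dt=0.1,
                      candidates=(-3.0, -1.0, 0.0, 1.0, 3.0)):
    """Choose the acceleration (m/s^2) minimizing predicted cost over a short
    horizon, assuming the same command is held throughout (a crude
    single-move MPC, for illustration only)."""
    def cost(a):
        v, c = v0, 0.0
        for _ in range(horizon):
            v += a * dt
            c += (v - v_target) ** 2 + 0.01 * a ** 2  # tracking + comfort
        return c
    return min(candidates, key=cost)
```

The comfort term penalizes harsh commands, mirroring how production controllers trade responsiveness against smoothness; tightening the loop period `dt` is precisely where the latency concerns above bite.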
We summarize the key AI challenges across the AV modules discussed above in the following table (Table 5).

4. Case Studies and Examples

4.1. Real-World Incidents

Autonomous vehicles (AVs) have been involved in several notable incidents, underscoring the limitations and risks associated with AI navigation systems. Three prominent cases are often cited for their illustrative value:
  • Uber’s fatal accident in Tempe, Arizona (2018): In March 2018, an Uber self-driving car struck and killed a pedestrian in Tempe, Arizona. The vehicle’s sensors detected the pedestrian, but the AI system failed to classify her accurately as a human, leading to the collision. The safety driver, responsible for monitoring the vehicle, was distracted by a mobile device and did not intervene in time to prevent the accident. This incident highlighted critical weaknesses in object recognition and the importance of human oversight in AV operations [99,100,101].
  • Tesla Model S Crash in Florida (2016): In May 2016, a Tesla Model S operating in Autopilot mode collided with a tractor-trailer in Florida, resulting in the death of the Tesla driver. The vehicle’s AI system failed to distinguish the white side of the truck from the bright sky, leading to the fatal collision. The system’s reliance on visual data and the absence of a more comprehensive sensor fusion approach were significant factors in this incident. The driver, who had over-relied on the Autopilot system, was also not attentive at the moment of the crash, emphasizing the need for improved driver monitoring and alertness mechanisms [102,103,104].
  • Waymo near-miss incident (2021): A Waymo autonomous vehicle narrowly avoided a collision when it encountered an unexpected situation on a busy urban street. The AI, unable to handle the complexity of the scenario, handed control back to the human operator. This incident underscores that even with advanced AI systems, human intervention is still crucial in ensuring safety in unpredictable environments [105].
These incidents serve as stark reminders of the current limitations in AI navigation systems, particularly in terms of object recognition and environmental understanding. They underscore the necessity for continuous advancements in sensor technology, AI algorithms, and the integration of robust safety measures to prevent such occurrences in the future [3,19,26]. By thoroughly examining these real-world incidents, the industry can identify critical areas for improvement and develop more reliable and safer autonomous vehicles (Table 6).

4.2. Lessons Learned

Analyzing these incidents provides valuable insights into what went wrong and what could have been done differently. In the Uber case, the failure to classify the pedestrian accurately as a human highlighted a critical weakness in the AI’s object recognition capabilities. Improving the accuracy of these systems, particularly in low-light conditions and complex environments, is essential. Additionally, ensuring that safety drivers remain vigilant and adequately trained to intervene when necessary is crucial [100,106,107,108].
In the Tesla incident, the Autopilot system’s inability to detect the truck due to visual limitations underscores the importance of enhancing sensor fusion techniques. Combining data from multiple sensors, such as radar, LiDAR, and cameras, can improve the vehicle’s understanding of its surroundings and prevent similar accidents. Furthermore, implementing more stringent monitoring and fail-safe mechanisms that prompt human intervention when the system encounters uncertainty could significantly enhance safety [102,103,109,110].
The Waymo near-miss highlights the ongoing need for AI systems that can adapt to complex and rapidly changing scenarios in real-time. This case illustrates the importance of robust real-world testing and the development of AI models capable of handling a wider range of unpredictable situations.
Overall, these cases illustrate the importance of continuous improvement and rigorous testing of AI navigation systems in AVs. By learning from past failures and addressing emerging challenges, developers can work towards creating safer and more reliable autonomous vehicles [104,111,112]. Future efforts must focus on improving AI adaptability, enhancing sensor fusion technologies, and ensuring that human operators remain an integral part of the safety framework.

4.3. Incorporating Metrics for Evaluation

As the deployment of AI-driven autonomous vehicles (AVs) becomes more widespread, it is essential to establish clear and robust metrics that can be used to evaluate the effectiveness and safety of these systems. These metrics not only provide a standardized way to assess performance but also help identify areas where further improvements are needed.

4.4. Proposing Key Safety Metrics

To ensure that AI systems in AVs are operating effectively and safely, we propose several key metrics that should be considered in both the development and evaluation phases:
  • Rate of AI-Initiated Disengagements—This metric tracks the frequency with which the AI system disengages and hands control back to the human driver. A high rate of disengagements may indicate scenarios where the AI is unable to handle certain conditions, suggesting a need for further refinement of the system.
  • Accuracy of Object Detection Under Varying Conditions—Given the importance of reliable object detection in preventing collisions, this metric measures how accurately the AI system can identify and classify objects under different environmental conditions, such as low light, inclement weather, or high-traffic scenarios.
  • Speed of Human Intervention When AI Fails—This metric evaluates how quickly a human driver can respond and take control of the vehicle when the AI system encounters a situation it cannot manage. A shorter response time is critical in preventing accidents, especially in high-risk scenarios.
  • Robustness Against Edge Cases—This metric assesses the AI system’s ability to handle rare but potentially dangerous situations, known as edge cases. These scenarios, which are not commonly represented in training data, can test the limits of the AI’s decision-making capabilities.
  • System Redundancy and Fail-Safe Mechanisms—Evaluating the presence and effectiveness of redundant systems and fail-safe mechanisms is crucial to ensure that the AV can safely manage unexpected failures or malfunctions without posing a risk to passengers or other road users.
By incorporating these metrics into the evaluation process, developers and regulators can gain a more comprehensive understanding of the AI system’s capabilities and limitations. These metrics serve as critical benchmarks for ensuring that AVs meet the highest standards of safety and reliability before they are deployed on public roads.
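Several of these metrics reduce to simple ratios that can be computed directly from operational logs. The helpers below are a sketch (the per-1,000-mile normalization mirrors the convention used in California DMV disengagement reporting; the function names and the example figures are our own):

```python
def disengagement_rate(miles_driven, ai_disengagements):
    """AI-initiated disengagements per 1,000 miles driven."""
    if miles_driven <= 0:
        raise ValueError("miles_driven must be positive")
    return 1000.0 * ai_disengagements / miles_driven

def detection_accuracy(true_positives, false_positives, false_negatives):
    """Precision and recall of object detection under one test condition."""
    tp, fp, fn = true_positives, false_positives, false_negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical fleet log: 50,000 miles with 10 AI-initiated disengagements,
# plus a night-time detection test with 90 TP, 10 FP, and 30 FN.
rate = disengagement_rate(50_000, 10)
night_precision, night_recall = detection_accuracy(90, 10, 30)
```

Reporting detection accuracy per condition (night, rain, high traffic) rather than as a single aggregate is what makes the second metric informative: an AV can score well overall while failing badly in exactly the scenarios that matter.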

5. The Growing Danger

5.1. Escalating Risks

As autonomous vehicle (AV) technology advances, the increasing reliance on artificial intelligence (AI) introduces significant new dangers. The complexity of AI systems means they can exhibit unpredictable behaviors, particularly in rare or unforeseen scenarios. For instance, the inability of AI to fully understand and predict human behavior or respond appropriately to highly dynamic environments can lead to catastrophic failures. Moreover, as AVs become more integrated into our transportation infrastructure, the potential for widespread disruptions caused by system-wide failures or coordinated cyber-attacks grows [60,113,114,115].
AI systems in AVs often rely on vast amounts of data to learn and make decisions. However, the quality and diversity of these data can vary, leading to biases and gaps in the AI’s understanding. These limitations can exacerbate risks, particularly in environments or situations that deviate from the norm. The more we depend on AI for critical decision-making in AVs, the higher the stakes become, as failures can lead to severe consequences including loss of life and property [60,113,116,117].
Furthermore, AI’s inability to perfectly interpret sensory inputs in real-time can result in misjudgments during critical moments. For example, adverse weather conditions, such as heavy rain or fog, can significantly impair the sensors and cameras AVs rely on, leading to incorrect or delayed responses. These sensor limitations can cause the AI to fail in recognizing obstacles or navigating safely, thereby increasing the risk of accidents [78,118,119,120].
Additionally, the inherent opacity of AI decision-making processes, often described as the “black box” problem, poses significant challenges. Without clear understanding or transparency in how decisions are made, it becomes difficult to predict and rectify potential errors or biases. This lack of transparency not only hampers troubleshooting and improvement efforts but also erodes public trust in AV technology [58,121,122,123].
Another escalating risk is the over-reliance on AI by human drivers when AVs operate in semi-autonomous modes. Drivers might become complacent, assuming the AI will handle all situations perfectly, which can lead to slower reaction times in emergencies where human intervention is crucial. This over-reliance can be especially dangerous if the AI system unexpectedly encounters a scenario beyond its programmed capabilities [124,125,126].
Finally, the rapid pace of AI integration in AVs may outstrip the development of adequate regulatory and safety standards. As AV technology advances, the regulatory landscape must also evolve to address new risks and ensure robust safety measures are in place. Without timely updates to regulations and industry standards, the deployment of AVs might proceed without sufficient safeguards, exposing users and the public to greater risks [2,3,79].
The escalating reliance on AI in AVs amplifies existing dangers and introduces new ones. Continuous improvement in AI technology, comprehensive testing in diverse environments, and the development of transparent decision-making processes are critical to mitigating these risks. Moreover, enhanced regulatory frameworks and public awareness are essential to ensuring the safe integration of AVs into our transportation systems.

5.2. Future Threats

Looking ahead, the evolution of AV technology could introduce new risks as it becomes more prevalent. One significant threat is the potential for sophisticated cyber-attacks targeting the AI systems that control AVs. As these vehicles become more connected, they may become attractive targets for hackers seeking to exploit vulnerabilities for malicious purposes. Such attacks could lead to large-scale disruptions, accidents, and even loss of life [1,67,127].
Another future risk involves the ethical and legal ramifications of AI decision-making in AVs. As these systems become more autonomous, questions about liability in the event of accidents become more complex. Determining who is responsible—whether it is the manufacturer, the software developer, or another party—can become increasingly difficult, complicating legal processes and potentially slowing the adoption of AV technology [128,129,130].
Furthermore, the rapid pace of AI development in AVs could outstrip the ability of regulatory frameworks to keep up. This lag can result in inadequate safety standards and insufficient oversight, increasing the risk of deploying unsafe technologies on public roads. It is crucial for policymakers to proactively address these issues, ensuring that regulations evolve in tandem with technological advancements to mitigate future risks [46,96,101].
While AI in AVs holds great promise for transforming transportation, it also brings significant dangers that must be carefully managed. As reliance on AI grows, so do the potential risks and threats. Continuous improvement in AI systems, robust safety measures, and comprehensive regulatory frameworks are essential to ensuring the safe and reliable deployment of autonomous vehicles [80,131,132] (Table 7).

6. Recommendations for Mitigation

6.1. Improved AI Systems

Enhancing the reliability and safety of AI in AV navigation requires several key actions. First, it is essential to develop more robust and diverse datasets for training AI models. This approach helps in minimizing biases and improves the AI’s ability to manage a broad range of scenarios. Advanced sensor fusion techniques, which integrate data from cameras, LiDAR, radar, and other sensors, can significantly boost situational awareness and accuracy. Additionally, implementing continuous learning frameworks allows AI systems to adapt and improve over time by updating themselves with new data, leading to enhanced performance [2,3,138].
Moreover, increasing transparency in AI decision-making processes is crucial. Developing explainable AI (XAI) techniques can assist engineers and safety personnel in understanding how decisions are made, thus facilitating the identification and correction of potential issues. Regular and rigorous testing in diverse and challenging environments is also critical to ensure that AI systems can effectively handle real-world complexities [1,39,95,139].

6.2. Regulatory Measures

Establishing robust regulatory frameworks is essential for ensuring the safety of AVs. Governments and regulatory bodies should create comprehensive standards for AV testing and deployment. These standards should include rigorous safety assessments, mandatory incident reporting, and clear guidelines for liability in case of accidents. Developing international standards for AV safety will help promote consistency and interoperability across different regions [2,77,140,141].
Additionally, regulators should mandate the inclusion of fail-safe mechanisms and manual override options in AVs, allowing for human intervention when necessary. Continuous monitoring and updating of regulations to keep pace with technological advancements are crucial to addressing emerging risks and ensuring public safety [129,141,142,143].

6.3. Public Awareness

Raising public awareness and fostering informed discourse are critical for the successful integration of AVs into society. Educating the public about the capabilities, limitations, and potential risks of AVs can help manage expectations and build trust. Public awareness campaigns should emphasize the importance of remaining vigilant, even when using semi-autonomous features, and the necessity of manual intervention in emergencies [2,3,77,144].
Engaging with communities and stakeholders through public forums, workshops, and educational programs can promote informed dialogue about AV technology. Transparent communication from manufacturers and policymakers regarding safety measures, regulatory actions, and incident reports can further build public confidence and support for the adoption of AVs [1,62,145,146].
Mitigating the risks associated with AI in AV navigation demands a multifaceted approach. By improving AI systems, implementing robust regulatory measures, and raising public awareness, we can enhance the safety and reliability of autonomous vehicles, paving the way for their successful integration into our transportation systems [36,147,148,149] (Table 8).

7. Conclusions

This review has critically examined the potential dangers associated with the increasing reliance on AI in autonomous vehicle (AV) navigation. We discussed the current state of AI technologies and highlighted key techniques such as machine learning and neural networks. Several significant challenges have been identified, including technical limitations, safety risks, and ethical and legal concerns. Real-world incidents, like Uber’s fatal accident in Tempe, Arizona, and Tesla’s crash in Florida, underscore the tangible risks associated with AI navigation systems. These cases illustrate how failures in AI perception, decision-making, and system reliability can lead to severe consequences. We also explored future threats posed by the rapid evolution of AV technology, such as sophisticated cyber-attacks and the potential for regulatory lag.
Moreover, the review emphasized the necessity of improving AI systems to better handle complex and unpredictable environments. Ensuring the quality and diversity of training data, enhancing sensor fusion techniques, and developing transparent AI decision-making processes are crucial steps towards mitigating these dangers. Regulatory measures need to evolve to address the unique challenges posed by AVs, including establishing rigorous safety standards, mandatory incident reporting, and clear guidelines for liability. Public awareness and education are equally important to manage expectations and build trust in AV technology.
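To make the sensor-fusion point above concrete, the simplest form of the idea can be sketched as an inverse-variance (Kalman-style) combination of two independent range estimates. The sensor labels and noise figures below are illustrative assumptions, not values taken from any system discussed in this review:

```python
# Minimal sketch: inverse-variance (Kalman-style) fusion of two
# independent 1-D measurements. Sensor labels and noise figures
# are illustrative assumptions, not from any cited AV stack.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Fuse two measurements, weighting each by its inverse variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    # The fused variance is smaller than either input variance:
    # combining independent evidence always reduces uncertainty.
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Example: a low-noise LiDAR range dominates a noisier radar range.
lidar_z, lidar_var = 10.2, 0.04   # metres, metres^2
radar_z, radar_var = 10.8, 0.36
z, v = fuse(lidar_z, lidar_var, radar_z, radar_var)
print(round(z, 2), round(v, 3))
```

Production AV stacks fuse many heterogeneous sensors with full covariance matrices and temporal filtering; this two-sensor scalar case only illustrates why a degraded sensor (e.g., a camera in fog) should contribute less weight to the fused estimate.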
Addressing the dangers associated with AI in AV navigation is crucial to ensuring the safe and reliable deployment of autonomous vehicles. As technology continues to evolve, it is imperative that developers, regulators, and the public work together to mitigate risks and promote safety. Continuous improvement in AI systems, comprehensive regulatory frameworks, and informed public discourse are essential to building trust and achieving the full potential of autonomous vehicles. By proactively addressing these challenges, we can pave the way for a safer and more efficient transportation future.
The potential of AVs to transform transportation is immense, promising significant benefits in terms of efficiency, safety, and convenience. However, realizing this potential requires a concerted effort to understand and mitigate the associated risks. Developers must prioritize safety and reliability in their AI systems, ensuring that these technologies can handle the vast array of scenarios they will encounter on the roads. Regulators must keep pace with technological advancements, implementing and updating standards that ensure public safety without stifling innovation. Finally, fostering a well-informed public dialogue about the capabilities and limitations of AVs will be essential in building the societal trust necessary for their widespread adoption.
By focusing on these areas, we can address the current and future dangers of AI in AV navigation, ensuring that the deployment of autonomous vehicles is both safe and beneficial for all.
While the principles of AI in autonomous navigation can be applied across various domains, this review has highlighted the specific challenges and risks associated with road-based autonomous vehicles. Future research should continue to explore these domain-specific issues, ensuring that AI systems are robust, reliable, and capable of handling the unique demands of road environments.

Author Contributions

Conceptualization, T.M. and I.D.; methodology, T.M. and I.D.; investigation, T.M., I.D. and A.Ł.; resources, A.Ł.; writing—original draft preparation, T.M., I.D., E.K., P.B. and A.Ł.; writing—review and editing, T.M., I.D., E.K., P.B. and A.Ł.; visualization, T.M., I.D. and P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Parekh, D.; Poddar, N.; Rajpurkar, A.; Chahal, M.; Kumar, N.; Joshi, G.P.; Cho, W. A Review on Autonomous Vehicles: Progress, Methods and Challenges. Electronics 2022, 11, 2162. [Google Scholar] [CrossRef]
  2. Khayyam, H.; Javadi, B.; Jalili, M.; Jazar, R.N. Artificial Intelligence and Internet of Things for Autonomous Vehicles. In Nonlinear Approaches in Engineering Applications; Springer International Publishing: Cham, Switzerland, 2020; pp. 39–68. [Google Scholar]
  3. Ma, Y.; Wang, Z.; Yang, H.; Yang, L. Artificial Intelligence Applications in the Development of Autonomous Vehicles: A Survey. IEEE/CAA J. Autom. Sin. 2020, 7, 315–329. [Google Scholar] [CrossRef]
  4. Rapp, J.; Tachella, J.; Altmann, Y.; McLaughlin, S.; Goyal, V.K. Advances in Single-Photon Lidar for Autonomous Vehicles: Working Principles, Challenges, and Recent Advances. IEEE Signal Process. Mag. 2020, 37, 62–71. [Google Scholar] [CrossRef]
  5. Li, W.; Su, Z.; Li, R.; Zhang, K.; Wang, Y. Blockchain-Based Data Security for Artificial Intelligence Applications in 6G Networks. IEEE Netw. 2020, 34, 31–37. [Google Scholar] [CrossRef]
  6. Zou, Q.; Sun, Q.; Chen, L.; Nie, B.; Li, Q. A Comparative Analysis of LiDAR SLAM-Based Indoor Navigation for Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6907–6921. [Google Scholar] [CrossRef]
  7. Dixit, A.; Kumar Chidambaram, R.; Allam, Z. Safety and Risk Analysis of Autonomous Vehicles Using Computer Vision and Neural Networks. Vehicles 2021, 3, 595–617. [Google Scholar] [CrossRef]
  8. Feng, S.; Sun, H.; Yan, X.; Zhu, H.; Zou, Z.; Shen, S.; Liu, H.X. Dense Reinforcement Learning for Safety Validation of Autonomous Vehicles. Nature 2023, 615, 620–627. [Google Scholar] [CrossRef]
  9. Kopelias, P.; Demiridi, E.; Vogiatzis, K.; Skabardonis, A.; Zafiropoulou, V. Connected & Autonomous Vehicles—Environmental Impacts—A Review. Sci. Total Environ. 2020, 712, 135237. [Google Scholar] [CrossRef]
  10. Stoma, M.; Dudziak, A.; Caban, J.; Droździel, P. The Future of Autonomous Vehicles in the Opinion of Automotive Market Users. Energies 2021, 14, 4777. [Google Scholar] [CrossRef]
  11. Clayton, W.; Paddeu, D.; Parkhurst, G.; Parkin, J. Autonomous Vehicles: Who Will Use Them, and Will They Share? Transp. Plan. Technol. 2020, 43, 343–364. [Google Scholar] [CrossRef]
  12. Gonsalves, T.; Upadhyay, J. Integrated Deep Learning for Self-Driving Robotic Cars. In Artificial Intelligence for Future Generation Robotics; Elsevier: Amsterdam, The Netherlands, 2021; pp. 93–118. [Google Scholar]
  13. Blasch, E.; Pham, T.; Chong, C.-Y.; Koch, W.; Leung, H.; Braines, D.; Abdelzaher, T. Machine Learning/Artificial Intelligence for Sensor Data Fusion–Opportunities and Challenges. IEEE Aerosp. Electron. Syst. Mag. 2021, 36, 80–93. [Google Scholar] [CrossRef]
  14. Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  15. Feng, S.; Yan, X.; Sun, H.; Feng, Y.; Liu, H.X. Intelligent Driving Intelligence Test for Autonomous Vehicles with Naturalistic and Adversarial Environment. Nat. Commun. 2021, 12, 748. [Google Scholar] [CrossRef] [PubMed]
  16. Alghodhaifi, H.; Lakshmanan, S. Autonomous Vehicle Evaluation: A Comprehensive Survey on Modeling and Simulation Approaches. IEEE Access 2021, 9, 151531–151566. [Google Scholar] [CrossRef]
  17. Pedersen, T.A.; Glomsrud, J.A.; Ruud, E.-L.; Simonsen, A.; Sandrib, J.; Eriksen, B.-O.H. Towards Simulation-Based Verification of Autonomous Navigation Systems. Saf. Sci. 2020, 129, 104799. [Google Scholar] [CrossRef]
  18. Fremont, D.J.; Kim, E.; Pant, Y.V.; Seshia, S.A.; Acharya, A.; Bruso, X.; Wells, P.; Lemke, S.; Lu, Q.; Mehta, S. Formal Scenario-Based Testing of Autonomous Vehicles: From Simulation to the Real World. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; IEEE: Piscataway Township, NJ, USA, 2020; pp. 1–8. [Google Scholar]
  19. Di, X.; Shi, R. A Survey on Autonomous Vehicle Control in the Era of Mixed-Autonomy: From Physics-Based to AI-Guided Driving Policy Learning. Transp. Res. Part C Emerg. Technol. 2021, 125, 103008. [Google Scholar] [CrossRef]
  20. Cascetta, E.; Cartenì, A.; Di Francesco, L. Do Autonomous Vehicles Drive like Humans? A Turing Approach and an Application to SAE Automation Level 2 Cars. Transp. Res. Part C Emerg. Technol. 2022, 134, 103499. [Google Scholar] [CrossRef]
  21. Bai, Y.; Zhang, B.; Xu, N.; Zhou, J.; Shi, J.; Diao, Z. Vision-Based Navigation and Guidance for Agricultural Autonomous Vehicles and Robots: A Review. Comput. Electron. Agric. 2023, 205, 107584. [Google Scholar] [CrossRef]
  22. Abdallaoui, S.; Aglzim, E.-H.; Chaibet, A.; Kribèche, A. Thorough Review Analysis of Safe Control of Autonomous Vehicles: Path Planning and Navigation Techniques. Energies 2022, 15, 1358. [Google Scholar] [CrossRef]
  23. Jalal, F.; Nasir, F. Underwater Navigation, Localization and Path Planning for Autonomous Vehicles: A Review. In Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), Islamabad, Pakistan, 12–16 January 2021; IEEE: Piscataway Township, NJ, USA, 2021; pp. 817–828. [Google Scholar]
  24. Bautista-Montesano, R.; Galluzzi, R.; Ruan, K.; Fu, Y.; Di, X. Autonomous Navigation at Unsignalized Intersections: A Coupled Reinforcement Learning and Model Predictive Control Approach. Transp. Res. Part C Emerg. Technol. 2022, 139, 103662. [Google Scholar] [CrossRef]
  25. Benterki, A.; Boukhnifer, M.; Judalet, V.; Maaoui, C. Artificial Intelligence for Vehicle Behavior Anticipation: Hybrid Approach Based on Maneuver Classification and Trajectory Prediction. IEEE Access 2020, 8, 56992–57002. [Google Scholar] [CrossRef]
  26. Rezwan, S.; Choi, W. Artificial Intelligence Approaches for UAV Navigation: Recent Advances and Future Challenges. IEEE Access 2022, 10, 26320–26339. [Google Scholar] [CrossRef]
  27. Erke, S.; Bin, D.; Yiming, N.; Qi, Z.; Liang, X.; Dawei, Z. An Improved A-Star Based Path Planning Algorithm for Autonomous Land Vehicles. Int. J. Adv. Robot. Syst. 2020, 17, 172988142096226. [Google Scholar] [CrossRef]
  28. Reda, M.; Onsy, A.; Haikal, A.Y.; Ghanbari, A. Path Planning Algorithms in the Autonomous Driving System: A Comprehensive Review. Rob. Auton. Syst. 2024, 174, 104630. [Google Scholar] [CrossRef]
  29. Sánchez-Ibáñez, J.R.; Pérez-del-Pulgar, C.J.; García-Cerezo, A. Path Planning for Autonomous Mobile Robots: A Review. Sensors 2021, 21, 7898. [Google Scholar] [CrossRef]
  30. Kassens-Noor, E.; Dake, D.; Decaminada, T.; Kotval-K, Z.; Qu, T.; Wilson, M.; Pentland, B. Sociomobility of the 21st Century: Autonomous Vehicles, Planning, and the Future City. Transp. Policy 2020, 99, 329–335. [Google Scholar] [CrossRef]
  31. Li, Q.; Queralta, J.P.; Gia, T.N.; Zou, Z.; Westerlund, T. Multi-Sensor Fusion for Navigation and Mapping in Autonomous Vehicles: Accurate Localization in Urban Environments. Unmanned Syst. 2020, 8, 229–237. [Google Scholar] [CrossRef]
  32. Grigorescu, S.; Trasnea, B.; Cocias, T.; Macesanu, G. A Survey of Deep Learning Techniques for Autonomous Driving. J. Field Robot. 2020, 37, 362–386. [Google Scholar] [CrossRef]
  33. Bin Issa, R.; Das, M.; Rahman, M.d.S.; Barua, M.; Rhaman, M.d.K.; Ripon, K.S.N.; Alam, M.d.G.R. Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment. Sensors 2021, 21, 1468. [Google Scholar] [CrossRef]
  34. Ibrahim, H.A.; Azar, A.T.; Ibrahim, Z.F.; Ammar, H.H. A Hybrid Deep Learning Based Autonomous Vehicle Navigation and Obstacles Avoidance. In Proceedings of the International Conference on Artificial Intelligence and Computer Vision (AICV2020) 2020, Cairo, Egypt, 8–10 April 2020; pp. 296–307. [Google Scholar]
  35. Kouris, A.; Bouganis, C.-S. Learning to Fly by MySelf: A Self-Supervised CNN-Based Approach for Autonomous Navigation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway Township, NJ, USA, 2018; pp. 1–9. [Google Scholar]
  36. Liu, M.; Zhao, F.; Niu, J.; Liu, Y. ReinforcementDriving: Exploring Trajectories and Navigation for Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 22, 808–820. [Google Scholar] [CrossRef]
  37. AlMahamid, F.; Grolinger, K. Autonomous Unmanned Aerial Vehicle Navigation Using Reinforcement Learning: A Systematic Review. Eng. Appl. Artif. Intell. 2022, 115, 105321. [Google Scholar] [CrossRef]
  38. Naz, N.; Ehsan, M.K.; Amirzada, M.R.; Ali, M.Y.; Qureshi, M.A. Intelligence of Autonomous Vehicles: A Concise Revisit. J. Sens. 2022, 2022, 2690164. [Google Scholar] [CrossRef]
  39. Tang, X.; Yang, K.; Wang, H.; Wu, J.; Qin, Y.; Yu, W.; Cao, D. Prediction-Uncertainty-Aware Decision-Making for Autonomous Vehicles. IEEE Trans. Intell. Veh. 2022, 7, 849–862. [Google Scholar] [CrossRef]
  40. Anjum, M.; Shahab, S. Improving Autonomous Vehicle Controls and Quality Using Natural Language Processing-Based Input Recognition Model. Sustainability 2023, 15, 5749. [Google Scholar] [CrossRef]
  41. Pérez-Gil, Ó.; Barea, R.; López-Guillén, E.; Bergasa, L.M.; Gómez-Huélamo, C.; Gutiérrez, R.; Díaz-Díaz, A. Deep Reinforcement Learning Based Control for Autonomous Vehicles in CARLA. Multimed. Tools Appl. 2022, 81, 3553–3576. [Google Scholar] [CrossRef]
  42. Back, S.; Cho, G.; Oh, J.; Tran, X.-T.; Oh, H. Autonomous UAV Trail Navigation with Obstacle Avoidance Using Deep Neural Networks. J. Intell. Robot. Syst. 2020, 100, 1195–1211. [Google Scholar] [CrossRef]
  43. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef]
  44. Arafat, M.Y.; Alam, M.M.; Moh, S. Vision-Based Navigation Techniques for Unmanned Aerial Vehicles: Review and Challenges. Drones 2023, 7, 89. [Google Scholar] [CrossRef]
  45. Ahangar, M.N.; Ahmed, Q.Z.; Khan, F.A.; Hafeez, M. A Survey of Autonomous Vehicles: Enabling Communication Technologies and Challenges. Sensors 2021, 21, 706. [Google Scholar] [CrossRef]
  46. Bezai, N.E.; Medjdoub, B.; Al-Habaibeh, A.; Chalal, M.L.; Fadli, F. Future Cities and Autonomous Vehicles: Analysis of the Barriers to Full Adoption. Energy Built Environ. 2021, 2, 65–81. [Google Scholar] [CrossRef]
  47. Kolar, P.; Benavidez, P.; Jamshidi, M. Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation. Sensors 2020, 20, 2180. [Google Scholar] [CrossRef] [PubMed]
  48. Severino, A.; Curto, S.; Barberi, S.; Arena, F.; Pau, G. Autonomous Vehicles: An Analysis Both on Their Distinctiveness and the Potential Impact on Urban Transport Systems. Appl. Sci. 2021, 11, 3604. [Google Scholar] [CrossRef]
  49. Utesch, F.; Brandies, A.; Pekezou Fouopi, P.; Schießl, C. Towards Behaviour Based Testing to Understand the Black Box of Autonomous Cars. Eur. Transp. Res. Rev. 2020, 12, 48. [Google Scholar] [CrossRef]
  50. Perumal, P.S.; Sujasree, M.; Chavhan, S.; Gupta, D.; Mukthineni, V.; Shimgekar, S.R.; Khanna, A.; Fortino, G. An Insight into Crash Avoidance and Overtaking Advice Systems for Autonomous Vehicles: A Review, Challenges and Solutions. Eng. Appl. Artif. Intell. 2021, 104, 104406. [Google Scholar] [CrossRef]
  51. Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. Sensors 2020, 20, 4220. [Google Scholar] [CrossRef]
  52. Gadd, M.; de Martini, D.; Marchegiani, L.; Newman, P.; Kunze, L. Sense–Assess–EXplain (SAX): Building Trust in Autonomous Vehicles in Challenging Real-World Driving Scenarios. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 13 November 2020; IEEE: Piscataway Township, NJ, USA, 2020; pp. 150–155. [Google Scholar]
  53. Janai, J.; Güney, F.; Behl, A.; Geiger, A. Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art. Found. Trends Comput. Graph. Vis. 2020, 12, 1–308. [Google Scholar] [CrossRef]
  54. Thombre, S.; Zhao, Z.; Ramm-Schmidt, H.; Vallet Garcia, J.M.; Malkamaki, T.; Nikolskiy, S.; Hammarberg, T.; Nuortie, H.; Bhuiyan, M.Z.H.; Sarkka, S.; et al. Sensors and AI Techniques for Situational Awareness in Autonomous Ships: A Review. IEEE Trans. Intell. Transp. Syst. 2022, 23, 64–83. [Google Scholar] [CrossRef]
  55. Koh, S.; Zhou, B.; Fang, H.; Yang, P.; Yang, Z.; Yang, Q.; Guan, L.; Ji, Z. Real-Time Deep Reinforcement Learning Based Vehicle Navigation. Appl. Soft Comput. 2020, 96, 106694. [Google Scholar] [CrossRef]
  56. Tullu, A.; Endale, B.; Wondosen, A.; Hwang, H.-Y. Machine Learning Approach to Real-Time 3D Path Planning for Autonomous Navigation of Unmanned Aerial Vehicle. Appl. Sci. 2021, 11, 4706. [Google Scholar] [CrossRef]
  57. Lu, Y.; Ma, H.; Smart, E.; Yu, H. Real-Time Performance-Focused Localization Techniques for Autonomous Vehicle: A Review. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6082–6100. [Google Scholar] [CrossRef]
  58. Madhav, A.V.S.; Tyagi, A.K. Explainable Artificial Intelligence (XAI): Connecting Artificial Decision-Making and Human Trust in Autonomous Vehicles. In Proceedings of the Third International Conference on Computing, Communications, and Cyber-Security, Ghaziabad, India, 30–31 October 2021; pp. 123–136. [Google Scholar]
  59. Rezgui, J.; Gagne, E.; Blain, G.; St-Pierre, O.; Harvey, M. Platooning of Autonomous Vehicles with Artificial Intelligence V2I Communications and Navigation Algorithm. In Proceedings of the 2020 Global Information Infrastructure and Networking Symposium (GIIS), Tunis, Tunisia, 28–30 October 2020; IEEE: Piscataway Township, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  60. Macrae, C. Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety, and Sociotechnical Sources of Risk. Risk Anal. 2022, 42, 1999–2025. [Google Scholar] [CrossRef] [PubMed]
  61. Safavi, S.; Safavi, M.A.; Hamid, H.; Fallah, S. Multi-Sensor Fault Detection, Identification, Isolation and Health Forecasting for Autonomous Vehicles. Sensors 2021, 21, 2547. [Google Scholar] [CrossRef] [PubMed]
  62. Yuen, K.F.; Chua, G.; Wang, X.; Ma, F.; Li, K.X. Understanding Public Acceptance of Autonomous Vehicles Using the Theory of Planned Behaviour. Int. J. Environ. Res. Public Health 2020, 17, 4419. [Google Scholar] [CrossRef] [PubMed]
  63. Al Bitar, N.; Gavrilov, A.; Khalaf, W. Artificial Intelligence Based Methods for Accuracy Improvement of Integrated Navigation Systems During GNSS Signal Outages: An Analytical Overview. Gyroscopy Navig. 2020, 11, 41–58. [Google Scholar] [CrossRef]
  64. Garcia, J.; Feng, Y.; Shen, J.; Almanee, S.; Xia, Y.; Chen, Q.A. A Comprehensive Study of Autonomous Vehicle Bugs. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, Seoul, Republic of Korea, 27 June–19 July 2020; ACM: New York, NY, USA, 2020; pp. 385–396. [Google Scholar]
  65. Geisslinger, M.; Poszler, F.; Lienkamp, M. An Ethical Trajectory Planning Algorithm for Autonomous Vehicles. Nat. Mach. Intell. 2023, 5, 137–144. [Google Scholar] [CrossRef]
  66. Alcon, M.; Tabani, H.; Kosmidis, L.; Mezzetti, E.; Abella, J.; Cazorla, F.J. Timing of Autonomous Driving Software: Problem Analysis and Prospects for Future Solutions. In Proceedings of the 2020 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), Sydney, Australia, 21–24 April 2020; IEEE: Piscataway Township, NJ, USA, 2020; pp. 267–280. [Google Scholar]
  67. He, H.; Gray, J.; Cangelosi, A.; Meng, Q.; McGinnity, T.M.; Mehnen, J. The Challenges and Opportunities of Artificial Intelligence for Trustworthy Robots and Autonomous Systems. In Proceedings of the 2020 3rd International Conference on Intelligent Robotic and Control Engineering (IRCE), Oxford, UK, 10–12 August 2020; IEEE: Piscataway Township, NJ, USA, 2020; pp. 68–74. [Google Scholar]
  68. Kauppinen, A. Who Should Bear the Risk When Self-Driving Vehicles Crash? J. Appl. Philos. 2021, 38, 630–645. [Google Scholar] [CrossRef]
  69. Chougule, A.; Chamola, V.; Sam, A.; Yu, F.R.; Sikdar, B. A Comprehensive Review on Limitations of Autonomous Driving and Its Impact on Accidents and Collisions. IEEE Open J. Veh. Technol. 2024, 5, 142–161. [Google Scholar] [CrossRef]
  70. Muzammil, N. AI in Autonomous Vehicles: State-of-the-Art and Future Directions. Int. J. Adv. Eng. Technol. Innov. 2024, 1, 62–79. [Google Scholar]
  71. Raj, M.; Narendra, V.G. Deep Neural Network Approach for Navigation of Autonomous Vehicles. In Proceedings of the 2021 6th International Conference for Convergence in Technology (I2CT), Maharashtra, India, 2–4 April 2021; IEEE: Piscataway Township, NJ, USA, 2021; pp. 1–4. [Google Scholar]
  72. Kumari, D.; Bhat, S. Application of Artificial Intelligence in Tesla- A Case Study. Int. J. Appl. Eng. Manag. Lett. 2021, 5, 205–218. [Google Scholar] [CrossRef]
  73. Lemieszewski, Ł.; Radomska-Zalas, A.; Perec, A.; Dobryakova, L.; Ochin, E. GNSS and LNSS Positioning of Unmanned Transport Systems: The Brief Classification of Terrorist Attacks on USVs and UUVs. Electronics 2021, 10, 401. [Google Scholar] [CrossRef]
  74. Dobryakova, L.; Lemieszewski, L.; Ochin, E. The Vulnerability of Unmanned Vehicles to Terrorist Attacks Such as Global Navigation Satellite System Spoofing. Sci. J. Marit. Univ. Szczec.-Zesz. Nauk. Akad. Morskiej W Szczecinie 2016, 46, 181–188. [Google Scholar]
  75. Kolekar, S.; Gite, S.; Pradhan, B.; Kotecha, K. Behavior Prediction of Traffic Actors for Intelligent Vehicle Using Artificial Intelligence Techniques: A Review. IEEE Access 2021, 9, 135034–135058. [Google Scholar] [CrossRef]
  76. Davoli, L.; Martalò, M.; Cilfone, A.; Belli, L.; Ferrari, G.; Presta, R.; Montanari, R.; Mengoni, M.; Giraldi, L.; Amparore, E.G.; et al. On Driver Behavior Recognition for Increased Safety: A Roadmap. Safety 2020, 6, 55. [Google Scholar] [CrossRef]
  77. Othman, K. Public Acceptance and Perception of Autonomous Vehicles: A Comprehensive Review. AI Ethics 2021, 1, 355–387. [Google Scholar] [CrossRef] [PubMed]
  78. Cunneen, M.; Mullins, M.; Murphy, F.; Shannon, D.; Furxhi, I.; Ryan, C. Autonomous Vehicles and Avoiding the Trolley (Dilemma): Vehicle Perception, Classification, and the Challenges of Framing Decision Ethics. Cybern. Syst. 2020, 51, 59–80. [Google Scholar] [CrossRef]
  79. Cunneen, M. Autonomous Vehicles, Artificial Intelligence, Risk and Colliding Narratives. In Connected and Automated Vehicles: Integrating Engineering and Ethics; Springer: Berlin/Heidelberg, Germany, 2023; pp. 175–195. [Google Scholar]
  80. Evans, K.; de Moura, N.; Chauvier, S.; Chatila, R.; Dogan, E. Ethical Decision Making in Autonomous Vehicles: The AV Ethics Project. Sci. Eng. Ethics 2020, 26, 3285–3312. [Google Scholar] [CrossRef]
  81. Ryan, M. The Future of Transportation: Ethical, Legal, Social and Economic Impacts of Self-Driving Vehicles in the Year 2025. Sci. Eng. Ethics 2020, 26, 1185–1208. [Google Scholar] [CrossRef]
  82. Dubljević, V. Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles. Sci. Eng. Ethics 2020, 26, 2461–2472. [Google Scholar] [CrossRef]
  83. Etienne, H. The Dark Side of the ‘Moral Machine’ and the Fallacy of Computational Ethical Decision-Making for Autonomous Vehicles. Law Innov. Technol. 2021, 13, 85–107. [Google Scholar] [CrossRef]
  84. Santoni de Sio, F. The European Commission Report on Ethics of Connected and Automated Vehicles and the Future of Ethics of Transportation. Ethics Inf. Technol. 2021, 23, 713–726. [Google Scholar] [CrossRef]
  85. De Freitas, J.; Censi, A.; Walker Smith, B.; Di Lillo, L.; Anthony, S.E.; Frazzoli, E. From Driverless Dilemmas to More Practical Commonsense Tests for Automated Vehicles. Proc. Natl. Acad. Sci. USA 2021, 118, e2010202118. [Google Scholar] [CrossRef] [PubMed]
  86. Hansson, S.O.; Belin, M.-Å.; Lundgren, B. Self-Driving Vehicles—An Ethical Overview. Philos. Technol. 2021, 34, 1383–1408. [Google Scholar] [CrossRef]
  87. Siegel, J.; Pappas, G. Morals, Ethics, and the Technology Capabilities and Limitations of Automated and Self-Driving Vehicles. AI Soc. 2023, 38, 213–226. [Google Scholar] [CrossRef]
  88. Tóth, Z.; Caruana, R.; Gruber, T.; Loebbecke, C. The Dawn of the AI Robots: Towards a New Framework of AI Robot Accountability. J. Bus. Ethics 2022, 178, 895–916. [Google Scholar] [CrossRef]
  89. Brandão, M.; Jirotka, M.; Webb, H.; Luff, P. Fair Navigation Planning: A Resource for Characterizing and Designing Fairness in Mobile Robots. Artif. Intell. 2020, 282, 103259. [Google Scholar] [CrossRef]
  90. Bendiab, G.; Hameurlaine, A.; Germanos, G.; Kolokotronis, N.; Shiaeles, S. Autonomous Vehicles Security: Challenges and Solutions Using Blockchain and Artificial Intelligence. IEEE Trans. Intell. Transp. Syst. 2023, 24, 3614–3637. [Google Scholar] [CrossRef]
  91. Krontiris, I.; Grammenou, K.; Terzidou, K.; Zacharopoulou, M.; Tsikintikou, M.; Baladima, F.; Sakellari, C.; Kaouras, K. Autonomous Vehicles: Data Protection and Ethical Considerations. In Proceedings of the Computer Science in Cars Symposium, Feldkirchen, Germany, 2 December 2020; ACM: New York, NY, USA, 2020; pp. 1–10. [Google Scholar]
  92. Dhirani, L.L.; Mukhtiar, N.; Chowdhry, B.S.; Newe, T. Ethical Dilemmas and Privacy Issues in Emerging Technologies: A Review. Sensors 2023, 23, 1151. [Google Scholar] [CrossRef] [PubMed]
  93. He, S. Who Is Liable for the UBER Self-Driving Crash? Analysis of the Liability Allocation and the Regulatory Model for Autonomous Vehicles. In Perspectives in Law, Business and Innovation; Springer: Berlin/Heidelberg, Germany, 2021; pp. 93–111. [Google Scholar]
  94. Skrickij, V.; Šabanovič, E.; Žuraulis, V. Autonomous Road Vehicles: Recent Issues and Expectations. IET Intell. Transp. Syst. 2020, 14, 471–479. [Google Scholar] [CrossRef]
  95. Chen, S.-Y.; Kuo, H.-Y.; Lee, C. Preparing Society for Automated Vehicles: Perceptions of the Importance and Urgency of Emerging Issues of Governance, Regulations, and Wider Impacts. Sustainability 2020, 12, 7844. [Google Scholar] [CrossRef]
  96. Kim, K.; Kim, J.S.; Jeong, S.; Park, J.-H.; Kim, H.K. Cybersecurity for Autonomous Vehicles: Review of Attacks and Defense. Comput. Secur. 2021, 103, 102150. [Google Scholar] [CrossRef]
  97. Landini, S. Ethical Issues, Cybersecurity and Automated Vehicles. In InsurTech: A Legal and Regulatory View; AIDA Europe Research Series on Insurance Law and Regulation; Springer: Cham, Switzerland, 2020; Volume 1, pp. 291–312. [Google Scholar]
  98. Bissell, D.; Birtchnell, T.; Elliott, A.; Hsu, E.L. Autonomous Automobilities: The Social Impacts of Driverless Vehicles. Curr. Sociol. 2020, 68, 116–134. [Google Scholar] [CrossRef]
  99. Ryan Conmy, P.; Mcdermid, J.; Habli, I.; Porter, Z. Safety Engineering, Role Responsibility and Lessons from the Uber ATG Tempe Accident. In Proceedings of the First International Symposium on Trustworthy Autonomous Systems, Edinburgh, UK, 11–12 July 2023; ACM: New York, NY, USA, 2023; pp. 1–10. [Google Scholar]
  100. Robinson-Tay, K. The Role of Autonomous Vehicles in Transportation Equity in Tempe, Arizona. Mobilities 2023, 19, 504–520. [Google Scholar] [CrossRef]
  101. Bécue, A.; Praça, I.; Gama, J. Artificial Intelligence, Cyber-Threats and Industry 4.0: Challenges and Opportunities. Artif. Intell. Rev. 2021, 54, 3849–3886. [Google Scholar] [CrossRef]
  102. Chu, Y.; Liu, P. Human Factor Risks in Driving Automation Crashes. In HCI in Mobility, Transport, and Automotive Systems; Springer: Berlin/Heidelberg, Germany, 2023; pp. 3–12. [Google Scholar]
  103. Ergin, U. One of the First Fatalities of a Self-Driving Car: Root Cause Analysis of the 2016 Tesla Model S 70D Crash. Trafik Ve Ulaşım Araştırmaları Derg. 2022, 5, 83–97. [Google Scholar] [CrossRef]
  104. Boggs, A.M.; Arvin, R.; Khattak, A.J. Exploring the Who, What, When, Where, and Why of Automated Vehicle Disengagements. Accid. Anal. Prev. 2020, 136, 105406. [Google Scholar] [CrossRef]
  105. Scanlon, J.M.; Kusano, K.D.; Daniel, T.; Alderson, C.; Ogle, A.; Victor, T. Waymo Simulated Driving Behavior in Reconstructed Fatal Crashes within an Autonomous Vehicle Operating Domain. Accid. Anal. Prev. 2021, 163, 106454. [Google Scholar] [CrossRef]
  106. Das, S. Autonomous Vehicle Safety: Understanding Perceptions of Pedestrians and Bicyclists. Transp. Res. Part F Traffic Psychol. Behav. 2021, 81, 41–54. [Google Scholar] [CrossRef]
  107. Sahawneh, S.; Alnaser, A.J.; Akbas, M.I.; Sargolzaei, A.; Razdan, R. Requirements for the Next-Generation Autonomous Vehicle Ecosystem. In Proceedings of the 2019 SoutheastCon, Huntsville, AL, USA, 11–14 April 2019; IEEE: Piscataway Township, NJ, USA, 2019; pp. 1–6. [Google Scholar]
  108. Claybrook, J.; Kildare, S. Autonomous Vehicles: No Driver…No Regulation? Science 2018, 361, 36–37. [Google Scholar] [CrossRef]
  109. Banks, V.A.; Plant, K.L.; Stanton, N.A. Driver Error or Designer Error: Using the Perceptual Cycle Model to Explore the Circumstances Surrounding the Fatal Tesla Crash on 7th May 2016. Saf. Sci. 2018, 108, 278–285. [Google Scholar] [CrossRef]
  110. Ghorai, P.; Eskandarian, A.; Abbas, M.; Nayak, A. A Causation Analysis of Autonomous Vehicle Crashes. IEEE Intell. Transp. Syst. Mag. 2024, 16, 2–15. [Google Scholar] [CrossRef]
  111. Dixit, V.V.; Chand, S.; Nair, D.J. Autonomous Vehicles: Disengagements, Accidents and Reaction Times. PLoS ONE 2016, 11, e0168054. [Google Scholar] [CrossRef] [PubMed]
  112. Liu, J.; Xu, N.; Shi, Y.; Rahman, M.M.; Barnett, T.; Jones, S. Do First Responders Trust Connected and Automated Vehicles (CAVs)? A National Survey. Transp. Policy 2023, 140, 85–99. [Google Scholar] [CrossRef]
  113. Johnson, J. Artificial Intelligence, Drone Swarming and Escalation Risks in Future Warfare. RUSI J. 2020, 165, 26–36. [Google Scholar] [CrossRef]
  114. Johnson, J. ‘Catalytic Nuclear War’ in the Age of Artificial Intelligence & Autonomy: Emerging Military Technology and Escalation Risk between Nuclear-Armed States. J. Strateg. Stud. 2021, 1–41. [Google Scholar] [CrossRef]
  115. Turchin, A.; Denkenberger, D. Classification of Global Catastrophic Risks Connected with Artificial Intelligence. AI Soc. 2020, 35, 147–163. [Google Scholar] [CrossRef]
  116. Johnson, J. Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux? J. Strateg. Stud. 2022, 45, 439–477. [Google Scholar] [CrossRef]
  117. Sheehan, B.; Murphy, F.; Mullins, M.; Ryan, C. Connected and Autonomous Vehicles: A Cyber-Risk Classification Framework. Transp. Res. Part A Policy Pract. 2019, 124, 523–536. [Google Scholar] [CrossRef]
  118. Muzahid, A.J.M.; Kamarulzaman, S.F.; Rahman, M.A.; Murad, S.A.; Kamal, M.A.S.; Alenezi, A.H. Multiple Vehicle Cooperation and Collision Avoidance in Automated Vehicles: Survey and an AI-Enabled Conceptual Framework. Sci. Rep. 2023, 13, 603. [Google Scholar] [CrossRef]
  119. Chandra, S.; Shirish, A.; Srivastava, S.C. To Be or Not to Be …Human? Theorizing the Role of Human-Like Competencies in Conversational Artificial Intelligence Agents. J. Manag. Inf. Syst. 2022, 39, 969–1005. [Google Scholar] [CrossRef]
  120. Kiss, G.; Berecz, C. Priority Levels and Danger in Usage of Artificial Intelligence in the World of Autonomous Vehicle. In Soft Computing Applications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 307–316. [Google Scholar]
  121. Dong, J.; Chen, S.; Miralinaghi, M.; Chen, T.; Li, P.; Labi, S. Why Did the AI Make That Decision? Towards an Explainable Artificial Intelligence (XAI) for Autonomous Driving Systems. Transp. Res. Part C Emerg. Technol. 2023, 156, 104358. [Google Scholar] [CrossRef]
  122. von Eschenbach, W.J. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philos. Technol. 2021, 34, 1607–1622. [Google Scholar] [CrossRef]
  123. Guidotti, R.; Monreale, A.; Giannotti, F.; Pedreschi, D.; Ruggieri, S.; Turini, F. Factual and Counterfactual Explanations for Black Box Decision Making. IEEE Intell. Syst. 2019, 34, 14–23. [Google Scholar] [CrossRef]
  124. Chia, W.M.D.; Keoh, S.L.; Goh, C.; Johnson, C. Risk Assessment Methodologies for Autonomous Driving: A Survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16923–16939. [Google Scholar] [CrossRef]
  125. Firlej, M.; Taeihagh, A. Regulating Human Control over Autonomous Systems. Regul. Gov. 2021, 15, 1071–1091. [Google Scholar] [CrossRef]
  126. Wang, H.; Shao, W.; Sun, C.; Yang, K.; Cao, D.; Li, J. A Survey on an Emerging Safety Challenge for Autonomous Vehicles: Safety of the Intended Functionality. Engineering 2024, 33, 17–34. [Google Scholar] [CrossRef]
  127. Caldwell, M.; Andrews, J.T.A.; Tanay, T.; Griffin, L.D. AI-Enabled Future Crime. Crime Sci. 2020, 9, 14. [Google Scholar] [CrossRef]
  128. Seymour, J.; Ho, D.-T.-C.; Luu, Q.-H. An Empirical Testing of Autonomous Vehicle Simulator System for Urban Driving. In Proceedings of the 2021 IEEE International Conference on Artificial Intelligence Testing (AITest), Virtual, 23–26 August 2021; IEEE: Piscataway Township, NJ, USA, 2021; pp. 111–117. [Google Scholar]
  129. Taeihagh, A. Governance of Artificial Intelligence. Policy Soc. 2021, 40, 137–157. [Google Scholar] [CrossRef]
  130. Tan, L.; Yu, K.; Lin, L.; Cheng, X.; Srivastava, G.; Lin, J.C.-W.; Wei, W. Speech Emotion Recognition Enhanced Traffic Efficiency Solution for Autonomous Vehicles in a 5G-Enabled Space–Air–Ground Integrated Intelligent Transportation System. IEEE Trans. Intell. Transp. Syst. 2022, 23, 2830–2842. [Google Scholar] [CrossRef]
  131. Gill, T. Ethical Dilemmas Are Really Important to Potential Adopters of Autonomous Vehicles. Ethics Inf. Technol. 2021, 23, 657–673. [Google Scholar] [CrossRef]
  132. Lundgren, B. Safety Requirements vs. Crashing Ethically: What Matters Most for Policies on Autonomous Vehicles. AI Soc. 2021, 36, 405–415. [Google Scholar] [CrossRef]
  133. Copp, C.J.; Cabell, J.J.; Kemmelmeier, M. Plenty of Blame to Go around: Attributions of Responsibility in a Fatal Autonomous Vehicle Accident. Curr. Psychol. 2023, 42, 6752–6767. [Google Scholar] [CrossRef]
  134. Meyer-Waarden, L.; Cloarec, J. “Baby, You Can Drive My Car”: Psychological Antecedents That Drive Consumers’ Adoption of AI-Powered Autonomous Vehicles. Technovation 2022, 109, 102348. [Google Scholar] [CrossRef]
  135. Ferrara, E. Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies. Sci 2023, 6, 3. [Google Scholar] [CrossRef]
  136. Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamò-Larrieux, A. Towards Transparency by Design for Artificial Intelligence. Sci. Eng. Ethics 2020, 26, 3333–3361. [Google Scholar] [CrossRef]
  137. He, Q.; Meng, X.; Qu, R.; Xi, R. Machine Learning-Based Detection for Cyber Security Attacks on Connected and Autonomous Vehicles. Mathematics 2020, 8, 1311. [Google Scholar] [CrossRef]
  138. He, H.; Gray, J.; Cangelosi, A.; Meng, Q.; McGinnity, T.M.; Mehnen, J. The Challenges and Opportunities of Human-Centered AI for Trustworthy Robots and Autonomous Systems. IEEE Trans. Cogn. Dev. Syst. 2022, 14, 1398–1412. [Google Scholar] [CrossRef]
  139. Olayode, O.I.; Tartibu, L.K.; Okwu, M.O. Application of Artificial Intelligence in Traffic Control System of Non-Autonomous Vehicles at Signalized Road Intersection. Procedia CIRP 2020, 91, 194–200. [Google Scholar] [CrossRef]
  140. Costantini, F.; Thomopoulos, N.; Steibel, F.; Curl, A.; Lugano, G.; Kováčiková, T. Autonomous Vehicles in a GDPR Era: An International Comparison. In Advances in Transport Policy and Planning; Elsevier: Amsterdam, The Netherlands, 2020; pp. 191–213. [Google Scholar]
  141. Mordue, G.; Yeung, A.; Wu, F. The Looming Challenges of Regulating High Level Autonomous Vehicles. Transp. Res. Part A Policy Pract. 2020, 132, 174–187. [Google Scholar] [CrossRef]
  142. Beck, J.; Arvin, R.; Lee, S.; Khattak, A.; Chakraborty, S. Automated Vehicle Data Pipeline for Accident Reconstruction: New Insights from LiDAR, Camera, and Radar Data. Accid. Anal. Prev. 2023, 180, 106923. [Google Scholar] [CrossRef]
  143. Tennant, C.; Stilgoe, J. The Attachments of ‘Autonomous’ Vehicles. Soc. Stud. Sci. 2021, 51, 846–870. [Google Scholar] [CrossRef] [PubMed]
  144. Neri, H.; Cozman, F. The Role of Experts in the Public Perception of Risk of Artificial Intelligence. AI Soc. 2020, 35, 663–673. [Google Scholar] [CrossRef]
  145. Lawless, W.F.; Mittu, R.; Sofge, D.A. Introduction: Artificial Intelligence (AI), Autonomous Machines, and Constructing Context: User Interventions, Social Awareness, and Interdependence. In Human-Machine Shared Contexts; Elsevier: Amsterdam, The Netherlands, 2020; pp. 1–22. [Google Scholar]
  146. Nikitas, A.; Vitel, A.-E.; Cotet, C. Autonomous Vehicles and Employment: An Urban Futures Revolution or Catastrophe? Cities 2021, 114, 103203. [Google Scholar] [CrossRef]
  147. Tyagi, A.K.; Aswathy, S.U. Autonomous Intelligent Vehicles (AIV): Research Statements, Open Issues, Challenges and Road for Future. Int. J. Intell. Netw. 2021, 2, 83–102. [Google Scholar] [CrossRef]
  148. Mankodiya, H.; Jadav, D.; Gupta, R.; Tanwar, S.; Hong, W.-C.; Sharma, R. OD-XAI: Explainable AI-Based Semantic Object Detection for Autonomous Vehicles. Appl. Sci. 2022, 12, 5310. [Google Scholar] [CrossRef]
  149. Cugurullo, F. Urban Artificial Intelligence: From Automation to Autonomy in the Smart City. Front. Sustain. Cities 2020, 2, 38. [Google Scholar] [CrossRef]
  150. Qayyum, A.; Usama, M.; Qadir, J.; Al-Fuqaha, A. Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and the Way Forward. IEEE Commun. Surv. Tutor. 2020, 22, 998–1026. [Google Scholar] [CrossRef]
  151. Hafeez, F.; Sheikh, U.U.; Alkhaldi, N.; Al Garni, H.Z.; Arfeen, Z.A.; Khalid, S.A. Insights and Strategies for an Autonomous Vehicle With a Sensor Fusion Innovation: A Fictional Outlook. IEEE Access 2020, 8, 135162–135175. [Google Scholar] [CrossRef]
  152. Shaheen, K.; Hanif, M.A.; Hasan, O.; Shafique, M. Continual Learning for Real-World Autonomous Systems: Algorithms, Challenges and Frameworks. J. Intell. Robot. Syst. 2022, 105, 9. [Google Scholar] [CrossRef]
Figure 1. AI components’ interaction in an autonomous vehicle system.
Figure 2. Real-time data processing and decision-making in critical scenarios.
Figure 3. AV architecture—system interactions.
Table 1. Key AI techniques in AV navigation.

| Technique | Description |
|---|---|
| Machine Learning | Algorithms that allow AVs to learn from data and improve performance over time [40]. |
| Neural Networks | AI models that mimic the human brain to recognize patterns and make decisions [41]. |
| Sensor Fusion | Combining data from multiple sensors to improve accuracy and reliability [42,43]. |
| Natural Language Processing (NLP) | Enabling AVs to understand and respond to verbal instructions [43,44]. |
| Bird’s Eye View (BEV) Representations | Transforming multi-modal sensor data into a top-down, 2D map of the environment, enhancing spatial understanding and decision-making [45]. |
| Occupancy Networks | Providing a continuous, probabilistic 3D model of the environment, crucial for obstacle detection and collision avoidance [46]. |
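The sensor fusion entry in Table 1 can be made concrete with a toy example. The sketch below is illustrative only and is not drawn from any of the reviewed systems: it combines two noisy range readings, say camera and radar, by inverse-variance weighting, which is the simplest building block behind Kalman-style fusion. The sensor values and variances are invented for the example.

```python
# Illustrative sketch: inverse-variance fusion of two range estimates.
# The more reliable (lower-variance) sensor dominates the fused result,
# and the fused variance is lower than either sensor's alone.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Return the fused estimate and its variance for two noisy readings."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical scenario: in fog, the camera (variance 4.0) reads 22.0 m
# to an obstacle while the radar (variance 0.25) reads 20.4 m.
distance, variance = fuse(22.0, 4.0, 20.4, 0.25)
# The fused estimate stays close to the trusted radar reading, and its
# variance drops below the radar's own 0.25.
```

This is why fusion improves "accuracy and reliability": each added independent sensor can only shrink the combined uncertainty, never grow it.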
Table 2. Technical limitations of current AI systems.

| Limitation | Impact |
|---|---|
| Limited Data Quality | Incomplete or biased data can lead to poor decision-making [3]. |
| Inadequate Sensor Performance | Sensors can fail or provide inaccurate data in adverse weather conditions [51]. |
| Computational Constraints | Real-time processing requirements can exceed current computational capabilities [19]. |
| Complexity of Real-World Scenarios | Difficulty in accurately predicting and responding to highly dynamic and unpredictable environments [52]. |
Table 3. Summary of challenges and proposed recommendations for AI in autonomous vehicle navigation.

| Category | Challenge | Recommendation |
|---|---|---|
| AI Systems | Biases in AI Models | Develop more robust and diverse datasets to reduce biases and improve the AI’s ability to handle various scenarios. |
| | Limited Situational Awareness | Incorporate advanced sensor fusion techniques, combining data from multiple sensors like cameras, LIDAR, and radar. |
| | Inability to Adapt to New Data | Implement continuous learning frameworks to allow AI systems to update and adapt based on new information. |
| | Lack of Transparency in Decision-Making | Develop explainable AI (XAI) techniques to enhance the understanding of AI decisions and facilitate debugging. |
| | Insufficient Real-World Testing | Conduct regular and rigorous testing in diverse and challenging environments to ensure AI can manage real-world complexities. |
| Regulatory Measures | Inconsistent Safety Standards | Establish comprehensive and consistent standards for AV testing and deployment across different regions. |
| | Lack of Fail-Safe Mechanisms | Mandate the inclusion of fail-safe mechanisms and manual override options in AVs for human intervention. |
| | Outdated Regulations | Continuously monitor and update regulations to keep pace with technological advancements. |
| Public Awareness | Public Mistrust and Misunderstanding of AV Capabilities | Conduct public awareness campaigns to educate the public about AV capabilities, limitations, and risks. |
| | Lack of Vigilance When Using Semi-Autonomous Features | Emphasize the importance of remaining vigilant and prepared to manually intervene, even with semi-autonomous features. |
| | Lack of Informed Dialogue About AV Technology | Engage with communities through public forums, workshops, and educational programs to foster informed discussions. |
| | Insufficient Transparency from Manufacturers and Policymakers | Ensure transparent communication from manufacturers and policymakers regarding safety measures and incident reports. |
Table 4. Summary of emerging challenges.

| Emerging Challenge | Description | Proposed Solutions |
|---|---|---|
| Cybersecurity Vulnerabilities | AVs are susceptible to cyber-attacks due to their reliance on AI systems and networked communication. A cyber-attack could disrupt operations or lead to unauthorized control, posing significant safety risks. | Develop robust cybersecurity frameworks, implement real-time threat detection, use resilient communication protocols, secure software updates, and consider blockchain for additional security. |
| AI Interpretability and the “Black Box” Problem | AI systems in AVs often operate as “black boxes”, making it difficult to understand decision-making processes, which is crucial for identifying and correcting errors or biases. | Develop explainable AI (XAI) techniques to provide transparency in AI decision-making, allowing for easier identification of errors and building public trust. |
| Ethical Decision-Making | AVs face ethical dilemmas in situations where harm is unavoidable (e.g., the “trolley problem”). The potential for algorithmic bias adds to the complexity of ensuring fair and safe AI decisions. | Ensure fairness in AI by using diverse and representative datasets, and engage in ethical discussions to guide the development of AI decision-making frameworks. |
| Regulatory and Legal Challenges | The rapid evolution of AV technology outpaces the development of regulatory frameworks, leading to potential gaps in safety standards and oversight. | Proactively develop and update regulations in collaboration with technologists and ethicists. International standardization may be necessary for consistent global deployment. |
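One of the simplest model-agnostic XAI ideas behind the “black box” row above, perturbation-based attribution, can be sketched in a few lines. The braking model here is a made-up stand-in for illustration, not a real AV planner, and the perturbation step of 1.0 is an arbitrary choice.

```python
# Hedged sketch of perturbation-based explanation: nudge each input
# and record how the black-box output moves. The sign and size of the
# change act as a crude local importance score.

def brake_score(distance_m: float, closing_speed_ms: float) -> float:
    """Toy black-box model: higher score means stronger braking."""
    return max(0.0, (30.0 - distance_m) * 0.02 + closing_speed_ms * 0.03)

def attribution(features: dict, delta: float = 1.0) -> dict:
    """Perturb each feature by `delta` and measure the output change."""
    base = brake_score(**features)
    scores = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] += delta
        scores[name] = brake_score(**perturbed) - base
    return scores

scores = attribution({"distance_m": 10.0, "closing_speed_ms": 8.0})
# More distance lowers the braking score; more closing speed raises it,
# which is the sanity check a safety reviewer would look for.
```

Real XAI methods for perception networks (saliency maps, SHAP-style attributions, counterfactual explanations [123]) are far richer, but they follow the same principle: relate output changes back to input features so that errors and biases become inspectable.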
Table 5. Summary of AI challenges in autonomous vehicle modules.

| Module | Key Challenges | Current Solutions |
|---|---|---|
| Perception | Sensor fusion complexity; adverse environmental conditions; real-time processing demands | Deep learning (CNNs) for image processing; advanced sensor fusion algorithms |
| Planning and Decision-Making | Handling unpredictable scenarios; ethical decision-making; dynamic path planning | Reinforcement learning; Markov decision processes (MDPs) |
| Localization and Mapping | Precision in complex environments; SLAM limitations in dynamic settings; data drift issues | Improved SLAM algorithms; deep learning-based visual odometry; enhanced sensor fusion techniques |
| Control | Minimizing latency in response; safety in redundant systems; integration with legacy systems | Model predictive control (MPC); adaptive control systems |
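The MDP entry in the planning row can be illustrated with value iteration on a deliberately tiny, invented lane-change problem: three lanes as states, “stay” or “change” (rightward) as actions, and a reward that favors reaching the goal lane. None of the numbers come from a deployed system.

```python
# Minimal value-iteration sketch for an MDP-based planner.
# States: lane positions; the right lane is the (invented) goal.

GAMMA = 0.9  # discount factor
STATES = ["left", "center", "right"]
ACTIONS = {"left": ["stay", "change"],
           "center": ["stay", "change"],
           "right": ["stay"]}

def step(state: str, action: str) -> str:
    """Deterministic toy transitions: 'change' moves one lane rightward."""
    if action == "stay":
        return state
    return {"left": "center", "center": "right"}[state]

def reward(state: str) -> float:
    """Small per-step cost until the goal lane is reached."""
    return 10.0 if state == "right" else -1.0

def value_iteration(iters: int = 50) -> dict:
    values = {s: 0.0 for s in STATES}
    for _ in range(iters):
        # Bellman backup: best action value from each state.
        values = {
            s: max(reward(step(s, a)) + GAMMA * values[step(s, a)]
                   for a in ACTIONS[s])
            for s in STATES
        }
    return values

values = value_iteration()
# Values rise toward the goal lane, so a greedy policy over them keeps
# changing lanes rightward until it arrives.
```

Production planners solve vastly larger, stochastic, partially observed versions of this, often approximating the value function with neural networks, which is where the unpredictability and ethics challenges in the table re-enter.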
Table 6. Safety risks and real-world incidents.

| Incident | Description | Lessons Learned |
|---|---|---|
| Uber Fatal Accident | A pedestrian was struck by an AV in Tempe, Arizona due to sensor failure [100]. | Improved sensor performance and better human oversight needed. |
| Tesla Florida Crash | A Tesla in Autopilot mode failed to recognize a truck, leading to a fatal crash [103]. | Enhanced object recognition and decision-making algorithms. |
| Waymo Near-Miss Incident | An AV narrowly avoided a collision due to sudden human intervention [105]. | Importance of fail-safe mechanisms and human readiness to intervene. |
Table 7. Ethical and legal concerns.

| Concern | Description |
|---|---|
| Liability in Accidents | Determining who is responsible when an AV is involved in an accident [133]. |
| Privacy Issues | Handling and protecting the vast amounts of data collected by AVs [134]. |
| Bias and Fairness | Ensuring AI systems do not perpetuate or exacerbate societal biases [135]. |
| Transparency in Decision-Making | Making AI decision-making processes understandable to users and regulators [136]. |
| Ethical Use of AI | Ensuring AI applications in AVs align with broader ethical principles and societal values [137]. |
Table 8. Recommendations for improved AI systems.

| Recommendation | Description |
|---|---|
| Diverse and Robust Datasets | Developing comprehensive datasets to train AI models and reduce biases [150]. |
| Advanced Sensor Fusion | Integrating data from various sensors to enhance environmental perception and accuracy [151]. |
| Explainable AI (XAI) | Creating transparent AI models to facilitate understanding and debugging of decision-making processes [148]. |
| Continuous Learning Frameworks | Implementing systems that allow AI to learn and adapt from new data continuously [152]. |
| Comprehensive Testing | Conducting extensive tests in diverse and challenging environments to ensure system robustness [128]. |
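The "Continuous Learning Frameworks" recommendation can be illustrated with the smallest possible online learner: a one-weight linear model updated by stochastic gradient descent as observations stream in, rather than being trained once and frozen. The data stream and learning rate below are invented for the example.

```python
# Illustrative online-learning sketch: the model keeps adapting as new
# (input, target) pairs arrive, the core idea behind continual learning.

def online_update(weight: float, x: float, y: float, lr: float = 0.1) -> float:
    """One stochastic-gradient step on the squared error (weight*x - y)**2."""
    prediction = weight * x
    gradient = 2.0 * (prediction - y) * x
    return weight - lr * gradient

# Stream of observations generated by the true relation y = 2x.
weight = 0.0
for x, y in [(1.0, 2.0), (2.0, 4.0), (1.5, 3.0)] * 20:
    weight = online_update(weight, x, y)
# After enough samples the weight converges near the true slope of 2.
```

Real continual-learning frameworks [152] add the hard parts this sketch omits, chiefly avoiding catastrophic forgetting of old scenarios while adapting to new ones, and validating each update before it reaches a safety-critical vehicle.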
Cite as: Miller, T.; Durlik, I.; Kostecka, E.; Borkowski, P.; Łobodzińska, A. A Critical AI View on Autonomous Vehicle Navigation: The Growing Danger. Electronics 2024, 13, 3660. https://doi.org/10.3390/electronics13183660