Review

Impact of Critical Situations on Autonomous Vehicles and Strategies for Improvement

by Shahriar Austin Beigi and Byungkyu Brian Park *
Department of Civil & Environmental Engineering, School of Engineering and Applied Science, University of Virginia, Charlottesville, VA 22903, USA
* Author to whom correspondence should be addressed.
Future Transp. 2025, 5(2), 39; https://doi.org/10.3390/futuretransp5020039
Submission received: 31 October 2024 / Revised: 28 January 2025 / Accepted: 13 March 2025 / Published: 1 April 2025

Abstract:
Recently, the development of autonomous vehicles (AVs) and intelligent driver assistance systems has drawn significant attention from the public. Despite these advancements, AVs may encounter critical situations in real-world scenarios that can lead to severe traffic accidents. This review paper investigates these critical scenarios, categorizing them under weather conditions, environmental factors, and infrastructure challenges. Rain, snow, fog, and sandstorms introduce attenuation and scattering that severely degrade the performance of sensors and, consequently, of AVs. GPS and sensor signals can be disturbed in urban canyons and forested regions, which poses vehicle localization and navigation problems. Roadway infrastructure issues, such as inadequate signage and poor road conditions, are major challenges for AV sensors and navigation systems. This paper surveys existing technologies and methods that can be used to overcome these challenges, evaluates their effectiveness, and reviews current research aimed at improving AVs’ robustness and dependability under such critical situations. This systematic review compares the current state of sensor technologies, fusion techniques, and adaptive algorithms to highlight advances and identify continuing challenges for the field. The method involved categorizing progress in sensor robustness, infrastructure adaptation, and algorithmic improvement. The results show promise for advancements in dynamic infrastructure and V2I systems but reveal persistent challenges in overcoming sensor failures in extreme weather and on poorly maintained roads. These results highlight the need for interdisciplinary collaboration and real-world validation. Moreover, the review presents future research directions to improve how AVs overcome environmental and infrastructural adversities, and it concludes with actionable recommendations for upgrading physical and digital infrastructure, adaptive sensors, and algorithms. Such research is important for keeping AV technology on a path of continued advancement and stability.

1. Introduction

The evolution of autonomous driving technology is becoming increasingly prevalent in our daily lives, driven by significant advancements in perception technologies and computational capabilities [1]. Just as the agricultural-to-industrial transition of the late 18th century was met with initial hesitation and uncertainty, there is current skepticism concerning AV safety, reliability, and societal impact. Reflecting on the Industrial Revolution, AV technologies will eventually integrate into our infrastructure, perhaps more gradually than proponents anticipate, reshaping our transportation systems. AVs are at the forefront of technological innovation, utilizing various sensors such as radar, lidar, and cameras to accurately detect and interpret their environment. This integration of cutting-edge technology empowers AVs to navigate complex scenarios safely and efficiently, marking a significant leap forward in the journey toward fully autonomous transportation. Driving automation is classified into six levels, from Level 0 (entirely manual driving) to Level 5 (complete automation in all scenarios).
Figure 1 illustrates the description and driver-support role of each automation level. At Level 0, the human driver is fully in the loop, supported only by safety features such as emergency braking, blind-spot warning, and lane-departure warnings. At Level 1, assistive features such as lane keeping or adaptive cruise control are available, but the driver remains fully engaged. At Level 2, the driver can activate multiple assistive features simultaneously for convenience and support but is still required to maintain full attention and control. Level 3, known as conditional automation, can drive on its own in certain situations but hands control back to the driver in uncertain circumstances. Level 4 is highly autonomous and capable of self-navigation, with systems that bring the vehicle to a safe stop in designated scenarios without driver intervention. Finally, Level 5 is fully autonomous and requires no human input in any driving situation, potentially eliminating the need for traditional manual controls. In this review study, the focus will be explicitly on Level 5 automation.
With the growing interest in full autonomy, researchers can use these gaps to explore what is possible and begin integrating autonomous driving into traffic operations with the highest safety potential. Yet real-world AVs may still face critical situations that can lead to severe traffic accidents. These include heavy rain, snow, haze, fog, or dust that can obscure sensors and cameras; complex road environments and infrastructure; unexpected debris on the road; and erratic or unpredictable human drivers. Furthermore, AVs need to be able to navigate construction zones, manage system malfunctions, and respond to accidents or medical emergencies involving passengers. It is important to address these potential challenges to improve the safety and reliability of autonomous vehicles. Sensor technology has advanced in recent years, significantly improving its performance and resilience in various challenging conditions. Data from multiple sources, including radar, lidar, cameras, and GNSS, are combined using cutting-edge sensor fusion techniques to build a comprehensive view of a vehicle’s surroundings [3,4,5]. Other researchers are developing advanced algorithms to enable AVs to interpret sensor data accurately, even when it is noisy or occluded [6,7,8,9]. Simulation, testing, and hardware enhancements are also critical strategies. These critical situations underscore the impact on sensor performance, and research that improves perception is required to further develop AV technologies.
To date, some literature reviews have focused on evaluating the performance of standard sensors used in AVs under various weather conditions [2,10,11,12]. Rana et al. [11] critically reviewed the current state of advanced driver-assist systems (ADAS) in AVs, including their functions and deployment scenarios, and discussed the technological and infrastructural challenges. Vargas et al. [2] focused on the limitations of RADAR, LiDAR, ultrasonic sensors, cameras, and GNSS under heavy precipitation, suggesting potential hardware improvements. Yoneda et al. [12] analyzed research on automated driving technologies, focusing on recognizing and navigating under adverse weather conditions such as sun glare, rain, fog, and snow, and highlighted the difficulties these conditions pose for safe driving and market introduction.
Despite these efforts, to our knowledge, no paper has comprehensively covered all adverse weather phenomena and all standard AV sensors. Additionally, no paper has inclusively addressed all types of critical situations and the various solutions to improve AV performance in these scenarios. Therefore, this review aims to fill that gap by thoroughly examining the challenges and the solutions investigated to improve AV sensor performance in critical situations, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), sensor fusion and multi-modal networks, domain adaptation, unsupervised and semi-supervised learning, specialized segmentation and detection networks, simulation and testing, data processing, and hardware enhancements. This comprehensive approach provides a holistic view of current research and future directions to enhance AV sensor performance under diverse and challenging conditions.
In the following sections of this study, Section 2 provides a detailed overview of the sensor technologies used in AVs, including their applications and limitations. Section 3 investigates the critical situations AVs encounter, such as adverse weather, complex environments, and infrastructure constraints. Section 4 reviews the datasets available for training and evaluating AV perception algorithms, and the paper concludes with future research directions to fill the gaps identified in AV technology performance and sensor reliability.

2. Sensor Technologies in AVs

AVs rely on various sensor technologies to sense and navigate their environment. These include radar, LiDAR, cameras, and global navigation satellite systems (GNSS), each of which contributes unique capabilities to ensure safe and efficient operation. In this section, we explore these technologies in detail. Section 2.1 focuses on radar technology, which uses radio waves to detect objects and determine their distance. Section 2.2 discusses LiDAR, a laser system that provides accurate distance measurement. Section 2.3 covers the camera technologies necessary for visual perception and interpretation, and Section 2.4 examines GNSS, which enables real-time positioning and navigation. Figure 2 illustrates the basic principles of these sensors, Table 1 compares their key characteristics, and Table 2 summarizes key research projects aimed at improving sensor performance.

2.1. Radar Technology Application

RADAR (Radio Detection and Ranging) technology plays a critical role in AVs by using radio waves to detect objects, whether stationary or moving, and to determine their range and velocity within specific distances around the vehicle. This system consists of both a transmitter and a receiver: the transmitter sends out radio waves that reflect off objects, and the receiver captures these reflected signals (see Figure 2). By analyzing the time delay and frequency shift in the returned waves, the RADAR system accurately determines the position, speed, and direction of potential obstacles relative to the AV [13]. RADAR systems are tailored for distinct operational ranges: Short-Range RADAR (SRR, 0.2–30 m), Medium-Range RADAR (MRR, 30–80 m), and Long-Range RADAR (LRR, 80–200 m) [2]. The LRR is particularly adept at detecting targets or objects ahead of the vehicle, while MRR and SRR are used for proximity applications like parking assistance and side view detection [3]. They operate at 24, 74, 77, and 79 GHz frequencies within the millimeter wave (MMW) spectrum. In alignment with American and European standards, the designated frequency band for automotive radar systems is around 77 GHz (λ ≈ 4 mm); these systems are commonly referred to as mm-wave radars [10], denoting their utilization of millimeter-wave frequencies.
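As a rough illustration of the range and Doppler relationships described above (a sketch, not taken from the cited works), the following Python snippet converts a measured beat frequency and Doppler shift into range and radial velocity for a hypothetical 77 GHz FMCW radar; the chirp parameters are illustrative assumptions.

```python
# Illustrative FMCW radar range/velocity calculation (hypothetical chirp parameters).
C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_freq_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Range from the beat frequency of a single FMCW chirp: R = c * f_b * T / (2 * B)."""
    return C * beat_freq_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

def doppler_velocity(doppler_shift_hz, carrier_freq_hz=77e9):
    """Radial velocity from the Doppler shift: v = f_d * c / (2 * f_c)."""
    return doppler_shift_hz * C / (2.0 * carrier_freq_hz)

if __name__ == "__main__":
    # Assumed chirp: 300 MHz bandwidth swept over 50 microseconds.
    r = fmcw_range(beat_freq_hz=2.0e6, chirp_bandwidth_hz=300e6, chirp_duration_s=50e-6)
    v = doppler_velocity(doppler_shift_hz=5.1e3)  # roughly 10 m/s toward the radar
    print(f"range ≈ {r:.1f} m, radial velocity ≈ {v:.1f} m/s")
```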

2.2. LiDAR Technology Application

LiDAR, similar to RADAR, is a distance-measuring remote sensing technology. Unlike RADAR, which uses radio waves, LiDAR measures distance by emitting laser light, invisible to human eyes, onto objects. The environment reflects these laser beams back to the device, where a photodetector captures the returned signals. Using the Time of Flight (ToF) method, LiDAR measures the duration required for the laser to travel to the target and back, allowing for precise distance calculation [14,15]. This feature allows LiDAR to generate detailed maps by capturing millions of data points. LiDAR also scans 360° continuously to create comprehensive 3D point clouds of its surroundings, with each point containing depth and motion speed information, providing a detailed view of the environment [16]. This operating principle is shown in Figure 2.
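A minimal sketch of the ToF principle just described: the distance to a target is half the round-trip time multiplied by the speed of light (the example timing value is illustrative).

```python
# Time-of-Flight distance estimate for a single LiDAR return (illustrative only).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance = c * t / 2, since the pulse travels to the target and back."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A roughly 667 ns round trip corresponds to a target about 100 m away.
print(f"{tof_distance(667e-9):.1f} m")
```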

2.3. Camera Technology Application

Cameras are crucial in AVs and vital for observing and interpreting the vehicle’s surroundings. They are indispensable for recognizing pavement markings, identifying road signs, and spotting potential hazards such as obstacles. Cameras are generally cost-effective, and when combined with appropriate software, they can distinguish between moving and stationary objects within their field of vision and produce high-resolution images of the environment [17]. This integration of cameras and advanced processing algorithms allows AVs to navigate complex environments with precision and reliability. Camera systems in AVs are primarily classified into two main types: visible (VIS) and infrared (IR). VIS cameras, whether monocular (single-camera) or binocular (dual-camera), operate in the 400 to 780 nm wavelength range, aligning with the spectrum visible to humans [2]. This allows them to capture a comprehensive view of the world in conditions similar to human sight, making them invaluable for detailed environmental perception and navigation tasks. Stereo (binocular) cameras replicate the depth perception mechanism found in animals: they utilize the “disparity” between the two slightly different images captured by each camera to gauge depth, enhancing the vehicle’s ability to perceive and interact with its surroundings in 3D. This operating principle is shown in Figure 2.
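The disparity-to-depth relationship mentioned above can be illustrated with the standard pinhole stereo formula, depth = focal length × baseline / disparity; the rig parameters below are illustrative assumptions, not values from the cited sources.

```python
# Depth from stereo disparity (rectified pinhole model) - illustrative sketch.
def depth_from_disparity(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Z = f * B / d; a smaller disparity means the point is farther away."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return focal_length_px * baseline_m / disparity_px

# Assumed stereo rig: 1000 px focal length, 0.3 m baseline.
for d in (60.0, 15.0, 5.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d, 1000.0, 0.3):6.1f} m")
```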
IR cameras, operating in the infrared wavelength spectrum of 780 nm to 1 mm beyond what the human eye can see, can address several limitations associated with VIS cameras. They are particularly advantageous in low-light conditions, as they do not rely on visible light to capture images. IR cameras can be specialized into near-infrared (NIR) cameras, which cover the 780 nm to 3 µm range, and mid-infrared (MIR) cameras, ranging from 3 µm to 50 µm, also known as thermal cameras [18]. These adaptations allow IR cameras to serve a wide array of applications, providing enhanced capabilities such as improved night vision, warm body detection for pedestrians and animals [19,20,21], and the ability to see through obstructions that would typically impair the performance of VIS cameras.
Another popular camera type in AVs is the fisheye camera, widely used for its wide field of view in near-field sensing applications such as parking aid and traffic jam assistance. A system of only four fisheye cameras can cover 360 degrees around the vehicle [22,23,24], making them a valuable way to achieve full environmental awareness. However, these cameras must be calibrated to correct optical distortions, such as pincushion, barrel, and moustache effects, to achieve precise object positioning and image accuracy [6,25].
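As a hedged illustration of the distortion-correction step mentioned above, the sketch below remaps a fisheye frame to a rectilinear view using OpenCV’s fisheye module; the camera matrix K and distortion coefficients D are placeholder values and would have to come from a real calibration.

```python
# Hypothetical fisheye distortion correction with OpenCV (sketch; K and D are assumed values).
import cv2
import numpy as np

K = np.array([[285.0, 0.0, 640.0],
              [0.0, 285.0, 400.0],
              [0.0, 0.0, 1.0]])                      # assumed camera intrinsics
D = np.array([[-0.02], [0.01], [-0.005], [0.001]])   # assumed equidistant distortion coefficients

def undistort_fisheye(image):
    """Remap a fisheye frame to a rectilinear view so downstream detectors see straight lines."""
    h, w = image.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)

# frame = cv2.imread("fisheye_frame.png")   # hypothetical input
# rectified = undistort_fisheye(frame)
```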
Table 1 compares various sensors: Camera, LiDAR, RADAR, and GNSS, providing a comprehensive understanding of their strengths and limitations in advanced sensing applications. The table outlines key performance metrics: range, resolution, distance accuracy, velocity detection, and environmental adaptability. Cameras excel in color perception and lane detection, making them highly effective for tasks such as recognizing traffic lights and road markings. However, they are constrained by limited range and reduced performance in adverse weather conditions. LiDAR provides exceptional spatial resolution and distance accuracy, enabling precise object detection and classification, though it is associated with higher costs and energy demands. RADAR is well-suited for velocity detection and performs reliably in challenging weather conditions but lacks the resolution needed for fine-grained object classification. Additionally, factors such as energy consumption, cost, processing time, maintenance requirements, durability, spatial coverage, and compatibility further emphasize the trade-offs involved in sensor selection.
Building upon this comparison, infrared (IR) sensors emerge as an essential complement to these technologies, particularly in adverse environmental conditions. IR sensors operate effectively in fog, snow, and rain because they track thermal radiation emitted by objects rather than relying on visible light like traditional sensors such as cameras or LiDAR. In addition to helping in low-visibility conditions, IR sensors offer valuable support for nighttime driving. In sensor fusion systems, they enhance detection performance by delivering thermal imaging data that complements other sensing modalities. Because they detect heat signatures, IR sensors can reliably identify pedestrians, animals, and other important objects, making them an essential component of today’s autonomous vehicle technology [26].

2.4. Global Navigation Satellite System Technology Application

GNSS plays a crucial role in the operation of AVs, providing real-time location, navigation, and timing for a vehicle, which is the core functionality of any navigation system combined with a digital map [27]. GNSS satellites transmit their current clock and position data via radio signals. Each GNSS signal consists of a carrier wave and a message modulated onto this carrier using a specific modulation process. GNSS currently comprises four main satellite navigation systems: the Global Positioning System (GPS), operated by the United States; Galileo, operated by the European Union; GLONASS, operated by Russia; and BeiDou, operated by China. GPS is the best-known among these systems, providing users with positioning, navigation, and timing (PNT) services. Currently, AVs use advanced techniques such as differential GPS (DGPS) and Real-Time Kinematic (RTK) positioning to improve the accuracy of positioning data [27]. The introduction of GNSS-RTN systems has enabled precise positioning over distances of 50–70 km from a base station, with the recommended spacing between reference stations not exceeding 70 km. By utilizing GNSS-RTN, systems can achieve positioning accuracy within the 1 to 5 cm range, in contrast to the 1 to 10 m accuracy typically seen without these advanced techniques [28]. With DGPS, positioning errors can be reduced to as little as 1 cm [27].
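The differential principle behind DGPS can be sketched very simply: a base station at a known location measures the error in each satellite’s pseudorange and broadcasts a correction, which nearby rovers apply to their own measurements. The snippet below is a toy illustration of that idea under simplified assumptions, not an implementation of any particular receiver.

```python
# Toy illustration of differential GNSS corrections (not a real receiver implementation).
def pseudorange_corrections(base_measured: dict, base_true_ranges: dict) -> dict:
    """Per-satellite correction = geometric range from the known base position - measured pseudorange."""
    return {sat: base_true_ranges[sat] - base_measured[sat] for sat in base_measured}

def apply_corrections(rover_measured: dict, corrections: dict) -> dict:
    """The rover adds the broadcast corrections to its own pseudoranges before solving for position."""
    return {sat: rho + corrections.get(sat, 0.0) for sat, rho in rover_measured.items()}

# Example: a shared 3.2 m atmospheric/clock error on satellite G07 is removed at the rover.
corr = pseudorange_corrections({"G07": 20_184_703.2}, {"G07": 20_184_700.0})
print(apply_corrections({"G07": 20_190_523.2}, corr))  # -> {'G07': 20190520.0}
```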
While each sensor type has its strengths, combining LiDAR, RADAR, and camera data can vastly improve the perceptual accuracy and reliability of an AV. Such systems use multiple sensors so that the limitations of individual sensors are compensated for, forming a robust framework for detecting and responding to environmental variables. For example, cameras may fail in low-visibility conditions, whereas LiDAR and RADAR can still measure distances and recognize obstacles. Together, these technologies allow autonomous vehicles to maintain accurate, continuous awareness of their surroundings, making them safer and more efficient to operate. The research projects in Table 2 illustrate the importance of multi-sensor integration in advancing autonomous vehicle technologies. Through sensor fusion and integrated machine learning models, these projects demonstrate capabilities that can significantly improve accuracy and reliability in AVs.
Table 2. Research projects focused on sensor improvements in AVs.

| Sensors | Sensor Applications | Improvement Method | Performance Evaluation | Key Findings | Reference |
|---|---|---|---|---|---|
| Thermal infrared camera and LiDAR | Object detection and classification in low-light and adverse weather conditions | Sensor fusion calibrated with a 3D target | Experiments conducted in day and night environments | Improved object detection accuracy | [5] |
| Millimeter wave radar | Real-time wide-area vehicle trajectory tracking | Unlimited roadway tracking using millimeter wave radar | Validation with Real-Time Kinematic (RTK) and UAV video data | 92% vehicle capture accuracy; position accuracy within 0.99 m of ground truth | [29] |
| Traffic surveillance camera system | Detection, localization, and AI networking in autonomous vehicles | Sensor fusion with AI networking capabilities | Tested with various sensor combinations and machine learning models | Improved detection accuracy and networking efficiency for autonomous operations | [30] |
| High-resolution satellite images | Road detection in very high-resolution images | Semantic segmentation with attention blocks and hybrid loss functions for better edge detection | Extensive testing on urban satellite images (Saudi Arabia and Massachusetts) using segmentation masks and edge detection metrics | Significantly improved road detection and edge delineation with high accuracy in complex backgrounds | [31] |
| mm-wave radar | Recognition of vulnerable road users (pedestrians, cyclists) in intelligent transportation systems | Shallow neural networks (CNN, RNN) for micro-Doppler signature analysis | Tested recognition using CNN, RNN, and hybrid CNN-RNN on simulated datasets | Achieved high recognition accuracy, enhancing road safety for vulnerable users | [32] |
| LiDAR and camera | Road detection | LiDAR-camera fusion using fully convolutional neural networks (FCNs) | Evaluated on the KITTI road benchmark | Achieved state-of-the-art MaxF score of 96.03%, outperforming single-sensor systems and ensuring robust detection in varying lighting conditions | [4] |
| Deep visible and thermal image fusion | Enhanced pedestrian visibility in low-light and foggy conditions | Learning-based fusion method producing RGB-like images with added informative details | Qualitative and quantitative evaluations using no-reference quality metrics and human detection performance metrics, compared with existing fusion methods | Outperformed existing methods, significantly improving pedestrian visibility and information quality while maintaining natural image appearance | [33] |
| 3D LiDAR + monocular camera | Urban road detection | Inverse-depth induced fusion framework with IDA-FCNN and line scanning strategy using LiDAR’s 3D point cloud | Evaluated on KITTI-Road benchmark with Conditional Random Field (CRF) for result fusion | Achieved state-of-the-art road detection accuracy, significantly outperforming existing methods on the benchmark | [34] |
| LiDAR and monocular camera | Pedestrian classification | Multimodal CNN leveraging LiDAR (depth and reflectance) and camera data fusion | Evaluated on the KITTI Vision Benchmark Suite using binary classification for pedestrians, comparing early and late fusion strategies | Achieved significant improvements in pedestrian classification accuracy through LiDAR-camera data fusion | [35] |
| LiDAR and vision (camera) | Vehicle detection | PC-CNN framework fusing LiDAR point cloud and camera images via shared convolutional layers | Evaluated on the KITTI dataset with 77.6% average recall for proposal generation and 89.4% average precision for car detection | Achieved significant improvements in proposal accuracy and detection precision, highlighting its potential for real-time applications | [36] |
| Multispectral (visible and thermal cameras) | Pedestrian detection | Early and late deep fusion CNN architectures for visible and thermal data fusion | Evaluated on the KAIST multispectral pedestrian detection benchmark, outperforming the ACF + T + THOG baseline with pre-trained late-fusion models | Achieved superior detection performance, demonstrating robustness in varying lighting conditions | [37] |
| LiDAR and RGB cameras | Pedestrian detection | Fusion of LiDAR (up-sampled to a dense depth map) and RGB data using HHA | Validated various fusion methods within CNN architectures using the KITTI pedestrian detection dataset | Late fusion of RGB and HHA data at different CNN levels yielded the best results, especially when fine-tuned | [38] |
| Dynamic Vision Sensor (DVS) | Computer vision in challenging scenarios | Adaptive slicing of spatiotemporal event streams to reduce motion blur and information loss | Evaluated on public and proprietary datasets with object information entropy deviation under 1% | Achieved accurate, blur-free virtual frames, enhancing object recognition and tracking in dynamic scenes | [39] |
| GNSS, INS, radar, vision, LiDAR, odometer | Vehicle navigation state estimation (position, velocity, attitude) | Multi-sensor integration with motion constraints (NHC, ZUPT, ZIHR) and radar-based feature matching | Tightly coupled FMCW radar and IMU integration, tested for GNSS outage scenarios | Improved navigation reliability by mitigating GNSS outages and correcting IMU drift, ensuring robust performance in diverse conditions | [40] |
| Thermal cameras (LWIR range) | Pedestrian and cyclist detection in low-visibility conditions | Deep neural network tailored for thermal imaging in variable lighting | Evaluated on KAIST Pedestrian Benchmark dataset with paired RGB and thermal data | Achieved an F1-score of 81.34%, significantly enhancing detection under challenging conditions where RGB systems struggle | [41] |
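As a minimal illustration of the fusion principle behind many of the projects in Table 2 (an assumed sketch, not the method of any cited work), the snippet below fuses independent range estimates from LiDAR and radar by inverse-variance weighting, so whichever sensor is currently more reliable dominates the result.

```python
# Inverse-variance fusion of independent range estimates (illustrative sketch).
def fuse_estimates(estimates):
    """estimates: list of (value, variance) pairs. Returns (fused_value, fused_variance)."""
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Clear weather: LiDAR is precise. Heavy rain: its variance is inflated and radar dominates.
clear = fuse_estimates([(50.2, 0.05), (51.0, 1.0)])
rain = fuse_estimates([(50.2, 4.0), (51.0, 1.0)])
print(f"clear: {clear[0]:.2f} m, rain: {rain[0]:.2f} m")
```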

3. Critical Situations for AVs

Critical situations for AVs arise from environmental and infrastructure problems, and Section 3 explores these situations in detail. Section 3.1 addresses adverse weather conditions such as rain, fog, and snow that can degrade sensor performance and navigation. Section 3.2 focuses on complex environments, including urban canyons and tunnels, where GNSS signals can be degraded or lost. Section 3.3 discusses road infrastructure conditions, where a lack of signage, markings, and maintenance can cause navigation difficulties. These critical situations significantly challenge AV reliability and safety. Finally, Table 3, Table 4 and Table 5 summarize key research projects aimed at improving AV performance in these critical situations.

3.1. Adverse Weather Conditions

Adverse weather conditions continue to pose a significant challenge for autonomous vehicles (AVs) despite continuous advancements in sensor technology. These conditions impact sensor performance, algorithm accuracy, and overall driving behavior, highlighting the importance of reliable operation under various environmental scenarios for automated driving on any road. This section summarizes the performance of different sensing technologies under various adverse weather conditions, including rain, fog, snow, hail, sun glare, lightning, dust, sandstorms, and contamination. The technical issues associated with each adverse condition are detailed in the following subsections.

3.1.1. Rain Effects

AV sensor performance is known to be severely degraded by rain, and rain has been reported to be a major cause of radar signal attenuation. Rain backscatter was also identified by Wallace et al. [42] as a significant factor in radar performance. The attenuation of electromagnetic (EM) energy by water droplets, and the problem of backscattering or ‘rain clutter’, which can generate false signals and mask real targets from the radar’s detection capabilities [43], arise because raindrops are comparable in size to the radar’s wavelength. Significant high-intensity rain backscattering is observed on 77 GHz radars. Gourova et al. [44], however, note that at close distances of up to 30 m, the attenuation of the radar signal, which ranges from 0.0016 dB/m for light rain (1 mm/h) to 0.0032 dB/m for heavy rain (100 mm/h), has a negligible effect. In line with Zang et al. [10], they also pointed out that severe rainfall (150 mm/h) can reduce the detection range by up to 45% because of increased backscatter interference. Rain similarly affects LiDAR systems through Mie scattering at the 905 nm and 1550 nm wavelengths [2]. At the critical 250 m range used for AVs, however, LiDAR accuracy is largely maintained, although it degrades under extreme rainfall conditions [2,45]. For example, moderate rain at 10 mm/h does not appreciably affect the accuracy of LiDAR, but heavy rain exceeding 30 mm/h will cut its effective measurable distance in half [45]. Yoneda et al. [12] highlighted another challenge of rain on LiDAR: vehicle-induced splashes from puddles can mimic actual obstacles in LiDAR data, complicating interpretation, so the system must be able to distinguish water splashes from real obstacles. Heavy rainfall can also significantly degrade camera system performance by affecting image quality and obscuring object details. However, deep learning algorithms can effectively eliminate rain streaks from images, enhancing image quality in variable weather conditions [46]. Additionally, Yoneda et al. [12] suggested that positioning the camera within the wiper’s sweep can mitigate this problem by removing raindrops from the camera’s field of view, enhancing visibility. Zang et al. [10] highlighted additional challenges, such as ice formation on rotating cameras and lens frosting under freezing conditions, suggesting self-heating cameras to maintain clear vision and prevent mechanical obstruction.
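To put the attenuation figures quoted above in perspective, the short sketch below computes the round-trip signal loss over a given range from the per-metre values reported by Gourova et al. [44], assuming those figures are one-way attenuation; the helper itself is only an illustration.

```python
# Two-way rain attenuation over the radar path, using the per-metre values quoted above.
def two_way_loss_db(attenuation_db_per_m: float, range_m: float) -> float:
    """The signal travels to the target and back, so the path length is doubled."""
    return attenuation_db_per_m * 2.0 * range_m

for label, alpha in (("light rain (1 mm/h)", 0.0016), ("heavy rain (100 mm/h)", 0.0032)):
    for rng in (30.0, 200.0):
        print(f"{label}: {two_way_loss_db(alpha, rng):.2f} dB over {rng:.0f} m")
```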

3.1.2. Snow and Hail Effects

Although rain is the principal factor in radar signal attenuation, it is crucial to recognize that snow and hail also present significant challenges to the operation of autonomous vehicles [47]. Vargas et al. [2] emphasized that all forms of precipitation, whether liquid or frozen, can impact the electromagnetic signals crucial for AV mechanisms. Battan et al. [48] and Lhermitte et al. [49] discussed radar attenuation due to hail, specifically addressing microwave and millimeter-wave radar (MWR), respectively. In snowy conditions, MWRs are impacted by attenuation and increased backscatter. Wallace et al. [42] highlighted the complexities in accurately assessing and predicting the impact of snow on radar signals. The diverse physical properties of snow, influenced by environmental and temporal factors, complicate the establishment of a consistent link between snow characteristics and radar performance. Wallace et al. [42] also noted that snow’s composition of crystallized water, which varies in shape, size, and free water content, adds to this complexity. This variability obstructs efforts to directly correlate radar signal attenuation with snow mass, free water content, and visibility, often resulting in inconsistent and unpredictable patterns.
Yoneda et al. [12] and Meydani et al. [50] reported that snowfall poses significant challenges for AVs regarding self-localization, object recognition, and path planning. Snow can obscure road markings and change the appearance of surrounding objects, making it difficult for AVs to match real-time observations with map data, affecting lane positioning and safe driving [12]. To enhance self-localization, methods like high-precision RTK-GNSS, reconstructing observation data, and using MWR are proposed. Although these offer varying degrees of accuracy, they help navigate snowy conditions.
Moreover, snowflakes can be misinterpreted as obstacles by LiDAR sensors, which complicates object recognition and requires noise removal techniques and machine learning, much as rainfall measurements are reconstructed [51]. Path planning is another hurdle, since snow can change the drivable space on roads; high-definition maps, which assume clear conditions, are insufficient on their own, and an adaptive approach that considers current weather and road conditions is required. In addition, snow or hail can block portions of the camera’s field of view, causing blurred images that prevent effective image processing tasks, such as pattern recognition. Snowfall and hail also affect the quality of images captured by cameras and directly threaten the camera hardware, as highlighted by Zang et al. [10]. Hailstorms present a risk of physical damage to camera lenses. In cold and snowy weather, cameras face additional challenges from low temperatures and optical and mechanical disruptions, and cameras without protective shielding are particularly vulnerable to damage from ice accumulation.

3.1.3. Fog Effects

Foggy conditions significantly impact the performance of LiDAR due to Mie scattering, as the operating wavelengths of LiDAR are generally smaller than the size of fog particles. When coupled with water absorption, this scattering has a pronounced effect on the Near-Infrared (NIR) spectral band [2], leading to a reduction in reflectance and a decrease in the measured distances from the LiDAR sensor. Research by Wojtanowski et al. [52] indicates that under foggy conditions, the effective visibility range of LiDAR at wavelengths of 905 nm and 1550 nm is considerably reduced, more so than under rainfall. The study further reveals that the 1.5 μm wavelength experiences water absorption roughly two orders of magnitude greater than the 0.9 μm wavelength, making it more vulnerable in wet conditions. Reflectivity also varies across surfaces; for instance, some surfaces that reflect more light at 1.5 μm when dry appear darker when wet. They also found that as fog density increases, causing visibility to drop from 500 m to 200 m, the ability of rangefinders to measure distances accurately diminishes correspondingly.
Hadj-bachir and Souza [45] have corroborated these findings, observing that the measurement distance of LiDAR is inversely related to fog density, with heavier fog leading to significantly reduced visibility and, consequently, measurement range. Kutila et al. [53] examined the performance of two LiDAR systems, the Ibeo Lux and Velodyne PUCK, which operate at the 905 nm wavelength, and found that under foggy conditions the detection efficiency of both systems was halved, motivating a potential shift to the 1550 nm wavelength. Kutila et al. [54] also reported that testing under foggy conditions revealed adverse effects that led to a 25% reduction in sensor performance. Given these results, the researchers explored the feasibility of employing a 1550 nm wavelength by testing a “pre-prototype LiDAR” designed for this higher wavelength. The rationale for this shift is that the 1550 nm wavelength meets eye safety standards while allowing laser sources with power up to 20 times greater than those permissible at 905 nm. This increase in power could potentially enhance LiDAR functionality in low-visibility scenarios, such as heavy fog. However, further research is necessary to confirm the advantages of using the 1550 nm wavelength in adverse weather conditions.
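One common way to reason about the visibility figures above is the textbook Koschmieder relation, which ties meteorological visibility V to an extinction coefficient β ≈ 3.912/V, combined with the Beer–Lambert law, which gives the one-way optical transmission exp(−βR) at range R. The sketch below applies these standard relations as a rough illustration; it is not taken from the cited experiments.

```python
import math

# Rough fog transmission estimate from meteorological visibility (Koschmieder + Beer-Lambert).
def extinction_coefficient(visibility_m: float) -> float:
    """beta = 3.912 / V, using the 2% contrast threshold convention."""
    return 3.912 / visibility_m

def one_way_transmission(visibility_m: float, range_m: float) -> float:
    """Fraction of optical power surviving a single pass through fog of the given visibility."""
    return math.exp(-extinction_coefficient(visibility_m) * range_m)

for vis in (500.0, 200.0):
    print(f"visibility {vis:.0f} m: transmission at 100 m = {one_way_transmission(vis, 100.0):.2f}")
```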
Similarly, the range of camera measurements is compromised due to reduced visibility. In heavy fog, image recognition using visible light becomes challenging, even for human vision. This is illustrated in Figure 3, which shows how image quality and the visibility of illuminated objects, such as traffic lights, are notably degraded in fog. Fog also reduces the contrast of images, which is essential for pattern and edge recognition in image processing [10]. In an image, sharp edges are typically represented by a mix of low and high-frequency components, whereas smooth edges are depicted by low frequencies only. Fog affects these frequencies in an image: in a foggy scene, frequency components tend to be concentrated at zero frequency, leading to a loss of edge definition. Addressing these impediments, infrared cameras are proposed as a viable solution due to their operation on wavelengths that penetrate fog more effectively than visible light. Moreover, integrating deep neural networks has shown promise in processing LiDAR and camera data to mitigate performance deterioration under low visibility conditions by extracting salient information obscured by fog.
According to Yadav et al. [55], there are two methods to counteract fog in images: fog correction, which adjusts contrast levels, and fog removal, which estimates and algorithmically clears the fog from the image to improve visibility. These technological advancements help autonomous vehicles operate safely and efficiently despite bad weather. MWR, on the other hand, shows some resilience in foggy conditions, especially at shorter ranges (about 20 m) [53], which warrants considering the strengths of different sensors under different weather conditions [12]. Ultrasonic sensors are known to be less affected by scattering effects than other sensors. They employ sound waves and are sensitive to air composition and temperature, but their performance is largely insensitive to precipitation, as they are typically used in short-range detection applications.
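A minimal sketch of the “fog correction” idea mentioned by Yadav et al. [55], i.e., adjusting contrast rather than explicitly removing fog, using OpenCV’s CLAHE operator on the luminance channel; the parameter values are illustrative assumptions, not settings from the cited work.

```python
import cv2

def fog_correct_contrast(bgr_image, clip_limit=2.0, tile_grid=(8, 8)):
    """Boost local contrast on the luminance channel; a simple stand-in for fog correction."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# foggy = cv2.imread("foggy_scene.png")   # hypothetical input frame
# corrected = fog_correct_contrast(foggy)
```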

3.1.4. Lightning Effects

Despite the potential impact of lightning on AVs, limited literature addresses this issue. Considering that AVs rely heavily on technologies driven by electricity and electromagnetism, it is important to consider various unpredictable conditions, including electrical interference such as lightning. When lightning strikes, it generates a substantial electromagnetic field, which has the potential to significantly affect electronic devices [56], including AVs [2]. This can disrupt the functioning of sensors, communication systems, and onboard computers, leading to temporary malfunction or, in extreme cases, permanent damage. Lightning can affect GPS systems and other navigation aids, leading to inaccuracies in positioning data. Moreover, communication systems used for Vehicle-to-Everything (V2X) communications might also be disrupted, affecting the AV’s ability to communicate with other vehicles and infrastructure. Vargas et al. [2] stated that lightning could have high direct effects on all AV sensors, high indirect effects on LiDAR and cameras, and medium indirect effects on RADAR, GNSS, and ultrasonic sensors. The bright flash of lightning can also temporarily blind these sensors, causing a lapse in data collection or inaccuracies in object detection and ranging, as discussed further in the following section on severe light effects.

3.1.5. Severe Light Effects

The overpowering brightness of sun glare is a major challenge to AV performance. MMW radar, ultrasonic sensors, and GNSS are less affected by sun glare, but cameras and LiDAR can suffer performance degradation caused by intense light interference. Cameras in particular are susceptible: glare obscures the visual information needed by image recognition algorithms to accurately identify objects, resulting in either missed detections of objects such as pedestrians or false detections. Likewise, LiDAR sensors may lose accuracy because of light scattering and thus may be unable to detect objects at a distance. Figure 4 illustrates the adverse effects of sun glare on camera and LiDAR performance, with a Xenon light source generating 200 kilolux (klx) at a 40 m distance. To mitigate these issues, robust sensor fusion algorithms, advanced filtering techniques, and hardware improvements, such as high dynamic range cameras, are crucial. As demonstrated in Figure 5, thermal cameras (top left inset), which detect heat instead of light, provide more reliable imaging under these conditions; thermal cameras, along with thermal multiscale retinex and thermal bio-inspired retina techniques, significantly improve object recognition [13]. The recall rates for the thermal bio-inspired retina, in particular, reached 67.1%, 45%, and 78.6% for a person, car 1, and car 2, respectively. In another study, by Yahiaoui et al. [57], glare is detected by an image processing algorithm with several processing blocks, including color conversion, adaptive thresholding, geometric filters, and blob detection, combined with a trained CNN.
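As a rough, hypothetical illustration of the kind of pipeline Yahiaoui et al. [57] describe (color conversion, thresholding, geometric filtering, and blob detection), the sketch below flags saturated, washed-out blobs as candidate glare regions; the thresholds are illustrative assumptions, and a learned classifier such as a CNN would normally refine the candidates.

```python
import cv2
import numpy as np

def detect_glare_candidates(bgr_image, value_thresh=240, sat_thresh=40, min_area_px=150):
    """Return bounding boxes of bright, low-saturation blobs that may correspond to sun glare."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    _, s, v = cv2.split(hsv)
    mask = ((v >= value_thresh) & (s <= sat_thresh)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area_px]  # (x, y, width, height) per blob

# frame = cv2.imread("glare_frame.png")   # hypothetical input
# print(detect_glare_candidates(frame))
```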
Sun glints, like sun glare, can blind sensors and lead to erroneous readings in cameras and LiDAR systems. Reflected light can produce false positives in object detection or mask real obstacles. This is especially problematic in environments with highly reflective surfaces, such as wet roads. Vargas et al. [2] clearly illustrate this challenge, showing how sun glint can appear on water surfaces and providing valuable insight into the real-world implications of this phenomenon for AVs.
At the same time, shadows and low-light conditions add another layer of difficulty for AVs. Although these conditions reduce sensor performance, the loss can be compensated for with various computer vision techniques, which improve image quality and object detection capability and help maintain AV performance across different lighting environments. For example, the DriveRetinex-Net algorithm is a popular image enhancement method that increases visibility and sensor accuracy when little light is available [58].

3.1.6. Dust, Sandstorm, and Contamination Effects

Less frequent but more severe weather phenomena, including dust and sandstorms, can also affect AV performance by obscuring sensor lenses, such as those of cameras and LiDAR, reducing visibility and accuracy. These conditions can become a major obstacle in regions such as the Middle East and other desert areas; Sky Harbor International Airport reported that dust storms can reduce visibility to 800 m [2]. In these environments, cameras and LiDAR are the most disrupted sensors, with cameras affected more severely than LiDAR. Trierweiler et al. [59] highlighted this problem: in their tests, near-homogeneous dust particles deposited on the surface of a LiDAR scanning system reduced the system’s maximum detection range by 75%. Dust storm particles with diameters of 2.5–10 µm also cause Mie scattering at the 905 and 1550 nm LiDAR wavelengths, similar to the way fog and rain affect LiDAR [2].
Similarly, smoke, which shares physical characteristics with dust storms, poses comparable performance challenges for AVs. Visual cameras (0.4 to 0.7 μm) and near-infrared instruments such as LiDAR (0.905 µm) face difficulties in dense smoke and cold conditions when visibility is very poor, as reported by Starr and Lattimer [60]. Additionally, Kovalev et al. [61] investigated how LiDAR signals are affected by multiple scattering in dense smoke plumes and explored the additional difficulties associated with sensor operation in such environments.
Contamination, such as dirt, dust, and other opaque or semi-transparent materials, can significantly affect the performance of AV sensors, including backup cameras (as shown in Figure 6). Contaminants can obscure the sensors’ view, posing a severe challenge to the robustness and adaptability of autonomous driving systems. Uřičář et al. [62] developed a novel approach to address lens soiling in automated driving systems. Their study focused on wide-angle fisheye cameras used in parking and low-speed navigation, susceptible to soiling from mud, dust, water, and frost. They introduced a Generative Adversarial Network (GAN)-based algorithm capable of generating unseen patterns of soiled images and corresponding soiling masks, thus eliminating manual annotation. This method significantly improved the accuracy of soiling detection by up to 18%.
Trierweiler et al. [59] further advanced these efforts with an automatic control for windshield wipers and nozzles. Their system discriminates between contaminants (liquid raindrops or solid dust) by observing total internal reflection and intensity distribution and, unlike conventional rain sensors, decides which contaminants should be removed with the cleaning nozzles. Sensor contamination in autonomous driving is therefore as important as other critical problems, and developing new methods to detect and counteract these effects is necessary for autonomous driving systems to operate safely and efficiently.
In summary, adverse weather conditions affect the effectiveness of LiDAR, RADAR, and cameras to differing degrees. GNSS technology, on the other hand, is relatively less sensitive, as it depends on high-frequency satellite signals that can pass through clouds and rain [2,12]. Nonetheless, adverse weather can cause minor signal disturbances. For example, GNSS modules placed inside the vehicle may have reduced signal visibility because of oscillating windshield wiper blades, and those installed outside may suffer from signal fade due to raindrops on the exterior antenna [10]. Ionospheric scintillation, a space weather effect, may also occasionally make signal tracking slightly more difficult for short periods [63].
Research on sensors for AVs in adverse weather conditions is crucial for the advancement of safe and reliable automated transportation systems. This will likely involve the evolution and improvement of multi-sensor fusion methodologies, combining data from improved LiDAR, RADAR, and camera models along with complex environmental modeling. Robust sensor fusion techniques, which combine data from multiple sensors such as radar, LiDAR, and cameras, have recently been developed to improve detection in adverse conditions. For example, fusion architectures based on neural networks have been demonstrated to improve reliability by compensating for the weaknesses of individual sensors, especially in the presence of snow and fog. Furthermore, adaptive domain incremental learning algorithms have shown improvements by dynamically handling sensor data distortions to maintain consistent object detection performance across different weather domains. Artificial intelligence is also essential: AVs must learn to make better choices by using improved algorithms that analyze the weather and its dynamics.
Another interesting direction is the selection of materials and advanced technologies, such as hydrophobic coatings and self-cleaning enclosures, that prevent obstructions from forming on sensors. Sandstorms present special challenges because of their effects on sensor hardware: dust particles scatter LiDAR beams and obscure camera lenses, significantly degrading performance. These limitations are being addressed through innovations such as self-cleaning sensor enclosures and hydrophobic coatings. Cooperation between meteorologists, sensor technologists, and artificial intelligence engineers will also be needed to generate models that drive AVs and assist them in overcoming major precipitation challenges.
Therefore, solving these challenges will require robust sensor hardware, sophisticated algorithms, and interdisciplinary cooperation. Future work has to focus on real-world testing of resilient autonomous vehicle systems, and that testing must be performed in extreme weather conditions. Table 3 summarizes some of the research in this area.
Table 3. Research projects focused on advancing AV performance in adverse weather conditions.

| Category | Sensor and Adverse Weather (References) | Contribution | Challenges |
|---|---|---|---|
| CNNs and Variants | Camera: Rain [64,65,66,67,68,69,70,71,72,73], Snow [74], Rain and Fog [75,76], Haze and Fog [9,77,78,79,80,81,82]; LiDAR: Rain [8], All Weather [83] | Robust feature extraction; adaptability with variants; transfer learning; integration with other sensor data; end-to-end learning; vision augmentation; simplifies high-dimensional data | Susceptibility to noise; computational intensity; overfitting risk; challenges of up-sampling and down-sampling; generalization issues; some algorithms use restricted adverse weather model assumptions; mostly applied to synthetic datasets; small datasets in some cases; radar Doppler information not incorporated in some cases; challenges of large, sparse grid maps in system complexity; information loss |
| RNNs and Variants | Camera: Rain [84,85,86] | Temporal dependency handling; vision augmentation; contextual information processing; flexibility in input sequence length; real-time decision-making; handling sensor fusion data; simplifies high-dimensional data; continuous learning and adaptation; predictive maintenance and anomaly detection | Computational demands; training challenges; overfitting; sequential data dependency; mostly applied to synthetic datasets; small datasets in some cases; challenges of large, sparse grid maps in system complexity; information loss |
| GANs and Related Techniques | Camera: Rain [87,88,89,90,91,92,93,94], Snow [95,96], Haze and Fog [97,98,99,100,101,102,103,104,105], Soil [62] | Data augmentation; image enhancement and restoration; simulating sensor data for radar or LiDAR; improving perception and decision-making; transfer learning; vision augmentation | Training complexity and stability; resource intensity; realism and trustworthiness of synthetic data; overfitting to synthetic features; ethical and safety considerations (ensuring accuracy and reliability in real-world applications); mostly applied to synthetic datasets; small datasets in some cases |
| Fusion and Multi-Modal Networks | Camera: Rain [70,93,106,107,108,109,110,111], Haze [112,113,114,115,116,117,118] | Enhanced perception accuracy; redundancy and reliability; improved object detection and classification; greater situational awareness; enhanced adaptability; vision augmentation | Algorithmic challenges; latency issues; increased computational load; complexity in integration; needs data management and storage solutions; mostly applied to synthetic datasets; small datasets in some cases |
| Domain Adaptation and Unsupervised/Semi-Supervised Learning | LiDAR: Rain [119,120]; Camera: Rain and Haze [121], Haze [122,123,124,125], Rain and Snow [126] | Reducing the need for labeled data; solving the data absence problem; improving generalization; continuous learning; cost-efficiency; enhancing robustness; vision augmentation | Complexity in implementation; potential for reduced accuracy; dependency on the source domain; risk of model drift; evaluation challenges; mostly applied to synthetic datasets; small datasets in some cases |
| Specialized Segmentation and Detection Networks | Camera: Rain [74,127,128,129,130,131,132,133,134,135,136], Snow [137,138], Haze and Fog [108,139,140,141,142,143,144]; LiDAR: Rain and Snow [145] | Customizable for specific scenarios; efficient processing; robust to environmental variabilities; vision augmentation; enhanced detection accuracy | Limited flexibility; high development costs; information loss; resolution degradation; overfitting risk; maintenance and updates; integration complexity; mostly applied to synthetic datasets; small datasets in some cases |
| Simulation and Testing | LiDAR: Rain and Fog [45,146], Snow [147] | Controlled testing environment; cost-effective; enhanced data collection and analysis; rapid iterative testing | Realism gap issues; model accuracy complexity; sensor simulation challenges; scenario design dependency; complacency risk concerns |
| Sensor Fusion | LiDAR + Camera: Rain, Haze, and Fog [148], Rain and Snow [149], Fog [150]; RADAR + Camera: Snow, Fog, and Rain [151]; LiDAR + RADAR: Haze and Fog [152], Rain and Smoke [153] | Comprehensive data analysis; adaptability to environmental changes; robustness to sensor failures; enhanced object detection and classification; increased reliability and redundancy; increased operational range; enhanced decision-making; compensates for the weaknesses of individual sensors | High computational demands; increased system cost; algorithmic challenges; latency issues; complex integration of diverse sensor data; advanced algorithms needed for data fusion |
| Hardware Enhancements | LiDAR: Snow [154] | Improved sensor capabilities; enhanced data processing power; increased robustness and durability; reduced sensor failures; better integration of multi-modal systems | Higher costs; increased complexity; greater power consumption; integration challenges; weight and space issues |

3.2. Complex Environment Conditions

Navigating through complex environments poses substantial challenges to the performance of AVs, especially in terms of GNSS reliability. Dense urban areas with tall buildings (urban canyons) and harsh natural environments such as mountains, forests, tunnels, and airports can hinder AVs’ ability to receive the signals critical for accurate navigation [155,156,157]. Figure 7 illustrates a typical situation within an urban canyon. The direct line-of-sight (LOS) signals from many, sometimes most, satellites are blocked, attenuated, or reflected by obstacles [158], leading to phenomena such as cycle slips [159] and multipath or non-LOS (NLOS) signals [160,161,162]. Signal attenuation results in noisy data, while multipath and NLOS effects caused by signal reflections off various obstacles introduce outliers in the measurements [163]. These challenges substantially impact the accuracy of AV positioning. Figure 7 shows how tall buildings can block signals from satellites at low to medium elevations, highlighting the navigational challenges encountered in these settings.
As additional satellites from other GNSS constellations are deployed, the availability of direct LOS signals for users equipped to handle complex environments is improving [164]. Simultaneously, researchers are exploring innovative solutions to enhance signal reception and processing in challenging areas. These efforts include integrating GNSS with other sensors such as Inertial Navigation System (INS) technologies (GNSS/INS), integrating Dead Reckoning (DR) with GNSS/INS data [155,165,166,167], developing advanced algorithms for detecting and mitigating multipath and NLOS signals [161,168], developing more sophisticated models for urban signal propagation [164], and developing advanced algorithms to improve the resilience and performance of navigation systems [155,169,170,171,172,173,174]. Combining these technological advancements with increased satellite coverage could substantially mitigate the navigation challenges currently faced by AVs, paving the way for more reliable and accurate GNSS-based positioning even in the most demanding environments. Table 4 summarizes recent research studies aimed at improving the performance of AV sensors in complex environments.
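As a highly simplified sketch of the GNSS/DR integration idea (one dimension, constant noise, not any of the cited algorithms), the filter below propagates position with wheel-odometry increments and corrects it only when a GNSS fix is available, so the estimate degrades gracefully during an urban-canyon outage.

```python
# 1-D GNSS + dead-reckoning fusion with a scalar Kalman filter (illustrative sketch).
class GnssDrFilter:
    def __init__(self, x0=0.0, p0=1.0, q_dr=0.04, r_gnss=4.0):
        self.x, self.p = x0, p0              # position estimate and its variance
        self.q_dr, self.r_gnss = q_dr, r_gnss

    def predict(self, odometry_delta_m):
        """Dead-reckoning step: integrate odometry; uncertainty grows with each step."""
        self.x += odometry_delta_m
        self.p += self.q_dr

    def update(self, gnss_position_m):
        """GNSS correction: blend the fix according to the Kalman gain."""
        k = self.p / (self.p + self.r_gnss)
        self.x += k * (gnss_position_m - self.x)
        self.p *= (1.0 - k)

f = GnssDrFilter()
for step in range(10):
    f.predict(odometry_delta_m=1.0)          # vehicle advances roughly 1 m per step
    if step < 5:                             # GNSS available at first, then an outage begins
        f.update(gnss_position_m=1.0 * (step + 1) + 0.5)
    print(f"step {step}: x = {f.x:.2f} m, sigma = {f.p ** 0.5:.2f} m")
```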
Recent research on AVs in complex environments spans several aspects of their development and implementation, from perception and navigation to control, as grouped in Table 4.
Table 4. Research projects focused on advancing AV performance in complex environments.

| Category | References | Contribution | Challenges |
|---|---|---|---|
| Deep Learning Models | [6,175] | Simplifies motion planning tasks; enables real-time decision-making; improves decision-making using local or networked data; ensures accuracy in GNSS-denied environments | Scalability in complex urban and dynamic environments; sensitivity to noise and occlusions; risk of overfitting; struggles with rapidly changing conditions |
| Sensory Data Integration | [7,176,177] | Enhances situational awareness and decision-making with sensor data fusion | Managing large, real-time sensor data; challenges in unpredictable or harsh environments |
| Simulation and Testing | [178,179] | Uses simulations to evaluate AV performance in complex scenarios; allows rapid testing without real-world risks | Issues with simulation realism and trustworthiness; limited power and problems in low-light and cluttered environments |
| Autonomous Navigation | [180,181,182] | Provides navigation techniques for urban and 3D environments; manages dynamic obstacles and complex path planning in real time | Difficulties in pathfinding and obstacle avoidance in 3D environments; needs sophisticated algorithms for real-time changes |
| Reinforcement Learning | [183,184,185] | Develops algorithms for decentralized navigation; adapts to navigation challenges in complex environments | Ensuring model robustness in changing environments; balancing exploration, exploitation, and safety |
| Collaborative Systems | [186,187] | Facilitates cooperative control for tasks in complex settings; enhances task management through vehicle collaboration | Vehicle integration and coordination challenges |
| Sensor Fusion | [188,189] | Improves state estimation and situational awareness with sensor fusion; increases positioning accuracy in GNSS-limited environments | Sensor accuracy limitations and environmental adaptation; signal interference, especially in NLOS conditions |

3.3. Road Infrastructure Conditions

Road signs and markings are critical for the localization and navigation of AVs, which depend heavily on visible curves, road edges, speed limits, and similar cues for their practical interpretation of traffic rules [190,191]. Sensors, cameras, and artificial intelligence are used to identify these elements by detecting markings, colors, shapes, and messages [192]. Still, the existing road infrastructure of markings and signage must be sufficiently adapted or maintained to accommodate the stringent requirements of AV technology. For example, the U.S. road network presents challenges such as varied traffic sign designs, faded road markings, paints with varying reflectivity, and missing markings, all of which create discontinuities. Liu et al. [193] identified further obstacles that can make the use of machine vision systems problematic, such as the low maintenance priority of road signs in the UK, which means they can become unreadable because of dirt; confusion from nonstandard EU road signs; and temporary changes to road layouts caused by incidents and road works. Bruno et al. [192] likewise pointed out that false positives and negatives in sign detection could be reduced through infrastructure improvement and the standardization of AV navigation systems, improving efficiency and safety. Vital signs or markings can also be hidden by obstacles such as vehicles parked along roadsides, vegetation, and existing roadside infrastructure [194].
Inadequate infrastructure severely degrades AV sensor performance and, in turn, localization, navigation, and safe decision-making. Faced with faded road markings or inconsistent signage, camera-based systems may misread lane boundaries or miss key traffic rules. Worn or obscured markings can also hamper accurate environmental modeling with LiDAR and radar systems. These issues are especially critical in adverse weather or nighttime operation, when the lack of high-contrast markings degrades detection accuracy; without consistent, visible markings, AVs struggle to stay in the lane or respond appropriately to dynamic traffic situations. The phenomenon of “ghost markings”, resulting from residual hydro-plastic marking material, adds another layer of complexity and can confuse AVs [194]. Inconsistencies in road design and maintenance, such as abrupt transitions from one pavement type to another or poorly maintained intersections, add further complexity. Road construction and detours introduce temporary disruptions on top of these problems, requiring AVs to use prediction algorithms that may not fully capture the variability of the infrastructure. These challenges highlight the need for standardized road construction and maintenance practices to ensure reliable AV operation. Furthermore, road drainage systems can significantly impact AV performance, especially in areas susceptible to intense rainfall: water on the road and surfaces muddied by runoff due to inadequate drainage can hinder the ability of AVs to discern lane markings and road edges [191], and the problem becomes even more pronounced at night [195]. Rural and remote areas exacerbate these challenges, often lacking the infrastructure and communication networks vital for AVs’ optimal functionality.
These challenges clearly indicate the importance of standardizing and upgrading road infrastructure (road markings, traffic signs, drainage, roadside equipment, etc.) to allow safe operation across different road conditions [193,195]. In recent years, researchers have become increasingly interested in identifying the requirements and conditions that road markings must satisfy for safe AV navigation [196]. To address these needs, innovative solutions such as the smart signs and markings developed by 3M, a US leader in traffic signage and road marking production, are being introduced, using advanced materials and technology to meet the demanding needs of AV navigation [194]. Nevertheless, several studies have recommended that infrastructure be maintained as it is during the early stages of AV integration [193]. Several studies have also developed algorithms for real-time detection of lane boundaries and vehicle guidance [7,197,198,199]. Research has likewise highlighted the need for both physical and digital infrastructure upgrades to facilitate safe and efficient AV operation.
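To make the kind of real-time lane-boundary detection studied in [7,197,198,199] concrete, the sketch below shows a minimal classical pipeline (grayscale conversion, Canny edge detection, a region-of-interest mask, and a probabilistic Hough transform). It is an illustrative simplification, not the method of any particular study cited here; the thresholds, region geometry, and example image path are assumptions.

```python
import cv2
import numpy as np

def detect_lane_boundaries(bgr_frame):
    """Return line segments that likely belong to lane markings.

    Minimal classical pipeline: grayscale -> blur -> Canny edges ->
    trapezoidal region-of-interest mask -> probabilistic Hough transform.
    The thresholds below are illustrative defaults, not tuned values.
    """
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower road region where lane markings are expected.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)),
                         (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform returns candidate marking segments.
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=100)
    return [] if lines is None else [l[0] for l in lines]

# Example usage on a single frame (the file path is hypothetical):
# frame = cv2.imread("road_frame.png")
# segments = detect_lane_boundaries(frame)
```

Learning-based lane detectors replace the hand-tuned edge and Hough stages with trained networks, but the overall input-output structure is similar, which is why marking visibility and contrast remain decisive for both families of methods.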
Overcoming the deficiencies of existing systems requires both physical enhancements to infrastructure and technological improvements, spanning advances in infrastructure design and vehicle control. For instance, curve-passing control models that combine camera-based lane detection with vehicle dynamics have been shown to improve navigation accuracy by adjusting speed and steering based on real-time lane curvature data. These methods enhance vehicle stability and passenger comfort during high-speed cornering or where lane markings are unclear. In addition, the integration of adaptive control strategies, including those based on Model Predictive Control, supports precise trajectory tracking and reliable vehicle performance under changing road and environmental conditions [200].
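To illustrate the curve-passing idea, the following minimal sketch derives a comfort-limited speed set-point from the lane curvature estimated by a camera pipeline, plus a kinematic-bicycle feedforward steering angle. The lateral-acceleration limit, speed cap, and wheelbase are assumed values, and the sketch is far simpler than the Model Predictive Control formulations referenced in [200].

```python
import math

def curve_speed_setpoint(curvature, a_lat_max=2.0, v_max=30.0):
    """Comfort-limited speed for a curve: v = sqrt(a_lat_max / |kappa|),
    capped at the road speed limit. curvature is in 1/m, speeds in m/s.
    a_lat_max and v_max are illustrative assumptions."""
    if abs(curvature) < 1e-6:          # essentially straight road
        return v_max
    return min(v_max, math.sqrt(a_lat_max / abs(curvature)))

def feedforward_steering(curvature, wheelbase=2.7):
    """Kinematic-bicycle feedforward steering angle (rad) for a given
    path curvature: delta = atan(L * kappa). A feedback term on lateral
    error would be added in a real controller."""
    return math.atan(wheelbase * curvature)

# Example: a curve of 200 m radius estimated from camera lane detection.
kappa = 1.0 / 200.0
print(curve_speed_setpoint(kappa))   # ~20 m/s with the assumed limits
print(feedforward_steering(kappa))   # ~0.0135 rad of steering
```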
This body of research examines different aspects of autonomous navigation and control for AVs within roadway infrastructure. Notably, however, most of it has approached the problem through the lens of hardware and software capabilities, particularly image capture devices and detection algorithms. Table 5 summarizes the most recent studies highlighting how infrastructure is a critical enabler for AVs.
Across these different methods for improving AV performance in critical situations, effectiveness is strongly tied to the robustness of the algorithms that power perception, decision-making, and control. A large portion of these algorithms, in particular those based on deep learning and machine learning, are data-driven: they require extensive training on large, diverse datasets to detect and interpret real-world conditions accurately, such as objects in urban environments, road structures, weather conditions, and dynamic obstacles. Their effectiveness therefore depends not only on how they are designed but also on the quality and diversity of the data used to train them. Without adequate datasets, models can fail to generalize to novel, complex scenarios and produce catastrophic failures in critical circumstances.

4. Dataset Availability

To improve AV algorithms for critical situations, several specialized datasets are widely used to train and evaluate perception systems. Dataset availability is important for the development and cross-evaluation of methods: creating large-scale training datasets from scratch is difficult, so pre-existing, publicly available datasets accelerate algorithm development and spare researchers from building their own. In this section, we survey the available datasets, which Table 6 summarizes. These datasets can be broadly categorized into synthetic and real data, each offering unique advantages for improving model performance. Synthetic datasets simulate adverse weather conditions such as rain and haze, allowing controlled, large-scale experimentation in which the ground truth is known. Real-world datasets, on the other hand, capture actual urban environments and natural weather effects, providing valuable data for testing how well algorithms generalize to realistic scenarios. Some datasets, like SPA-Data and BeDDE, focus specifically on urban driving conditions, while others combine synthetic and real data to improve robustness across weather conditions.
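Synthetic haze data of this kind are commonly rendered with the standard atmospheric scattering model, I(x) = J(x)t(x) + A(1 − t(x)), where J is the clear image, t the transmission, and A the airlight. The sketch below applies a uniform-depth version of this model to add haze to a clear image for training-data augmentation; the constant scene depth and parameter values are simplifying assumptions, and real pipelines typically use per-pixel depth maps.

```python
import numpy as np

def add_synthetic_haze(clear_rgb, beta=1.2, depth=0.8, airlight=0.9):
    """Render haze on a clear image with the atmospheric scattering model:
        hazy = clear * t + A * (1 - t),  t = exp(-beta * depth).
    clear_rgb: float array in [0, 1] with shape (H, W, 3).
    beta (scattering coefficient), depth (assumed uniform scene depth), and
    airlight are illustrative constants chosen for this sketch.
    """
    t = np.exp(-beta * depth)                 # scalar transmission
    hazy = clear_rgb * t + airlight * (1.0 - t)
    return np.clip(hazy, 0.0, 1.0)

# Example usage with a random stand-in image:
clear = np.random.rand(240, 320, 3)
hazy = add_synthetic_haze(clear)
```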

5. Future Research Directions

Based on the critical situations described in this review and the strengths and limitations of the various sensors and methods used to improve AV performance, we observe that while many algorithms intended for these scenarios look promising, most have not been validated in real-world conditions. Future research must focus on testing these algorithms in practical environments, particularly through unsupervised learning. With these methods, models can learn directly from large amounts of real-world data without manual labeling. This matters for AVs because real-world environments are highly dynamic and unpredictable, making it infeasible to label all possible scenarios manually. Through unsupervised learning, the system learns to find patterns, adapt to new situations, and continuously improve its performance in real time, narrowing the gap between simulated data and real conditions while reducing the time and cost of building labeled datasets. Such unified models would also significantly improve AV performance by enabling them to cope with multiple critical conditions at once. Furthermore, applying these methodologies to more challenging driving scenarios, such as highway merging and dense urban areas, is necessary to advance this work further.
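As a minimal sketch of what unsupervised learning on unlabeled driving data can look like, the example below trains a small convolutional autoencoder on raw camera frames and uses its reconstruction error as a novelty score for scenes unlike the training distribution. The architecture, training-loop length, and random stand-in tensors are illustrative assumptions, not a method proposed in the works reviewed here.

```python
import torch
import torch.nn as nn

# Minimal convolutional autoencoder trained only on unlabeled frames.
# High reconstruction error on a new frame suggests a scene unlike the
# training distribution (e.g., unusual weather), which can trigger more
# cautious behavior or flag the frame for later analysis.
class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FrameAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a loader of unlabeled camera frames (values in [0, 1]).
unlabeled_frames = torch.rand(8, 3, 64, 64)

for epoch in range(5):                       # short demo loop
    optimizer.zero_grad()
    recon = model(unlabeled_frames)
    loss = loss_fn(recon, unlabeled_frames)  # no labels required
    loss.backward()
    optimizer.step()

# At run time, per-frame reconstruction error acts as a novelty score.
with torch.no_grad():
    new_frame = torch.rand(1, 3, 64, 64)
    novelty = loss_fn(model(new_frame), new_frame).item()
```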
Currently, most available datasets are synthetic. While synthetic datasets have been invaluable for early development, incorporating real-world data from ego vehicle perception systems is a key step toward improving the reliability and robustness of AV algorithms in real driving conditions. Similarly, simulation studies, another dominant approach in AV research, need rigorous real-world validation to ensure their effectiveness. Bridging the gap between simulated models and real-world data is essential for the successful deployment of AVs in unpredictable, dynamic environments.
Autonomous driving research relies heavily on simulated data, which allow controlled, scalable experiments. Real-world data are, however, still required to capture the full spectrum of naturalistic driving scenarios. Research also shows that the challenges of real-world data collection, including high costs, privacy concerns, and the unpredictability of environmental conditions, call for the complementary use of simulation tools [13]. Annotated datasets and advanced simulators mitigate these challenges by enabling more complete testing and modeling of autonomous vehicle systems. Given the complexity of autonomous vehicle operations in diverse and adverse conditions, integrating simulation with real-world data is critical [13].
To overcome these challenges, a hybrid approach that integrates simulation with real-world testing is necessary. Developing modular sensor platforms capable of transitioning seamlessly between simulated and real-world environments is also critical. A collaborative effort among researchers, industry, and municipalities will facilitate such pilot programs and accelerate the transition from research to real-world applications.

6. Discussion, Conclusions, and Future Work

6.1. Discussion

AVs have extensive potential to enhance road safety, traffic flow, and energy utilization and to reduce fatal crashes. These benefits could move our traffic operations toward fully automated, Level 5 traffic. At the same time, we recognize that realizing this vision and earning public trust on a broader scale will be possible only by expanding research on sensor algorithms, perception, and control mechanisms.
Our review shows that AVs still face challenges in adverse weather, complex environments, and roadway infrastructure, while a lack of public trust and unresolved safety concerns remain major obstacles to broader acceptance. Transparency and trust, however, have not improved in step with continuous technological progress. Most people still do not fully trust AVs with the things they value most, such as having their children travel in them. This sentiment underscores the difficulty of producing practical test data comprehensive enough to address safety concerns. Even market leaders like Waymo and Cruise continue to experience delays in mass-producing AVs, indicating that even the leading companies in the industry have not fully resolved these trust-related problems and are not yet confident enough to offer their technologies broadly to the public.
This review suggests that achieving full societal acceptance of AVs will require cooperation among stakeholders in the automotive industry, policymakers, and researchers to close this trust gap. Policymakers should promote AVs in the same manner as solar panels and electric cars, as a solution that improves traffic safety and efficiency, through state incentives such as tax exemptions for their use. In addition, we believe open data sharing with the research community is necessary, especially data from automation Levels 3, 4, and 5. Finally, testing under a wide range of conditions, from adverse weather to complex environments and varied roadway infrastructure, will be essential. We believe such collaboration would invigorate research into more accurate algorithms, improve current technologies, and enhance public understanding and acceptance of AVs.

6.2. Conclusions and Future Work

This review assessed the performance of four types of sensors (LiDAR, RADAR, camera, and GNSS) under three critical situations: adverse weather, complex environments, and road infrastructure conditions.

6.2.1. Adverse Weather Summary

Adverse weather conditions significantly affect the performance and reliability of AVs, particularly impacting key sensors like LiDAR, RADAR, and cameras. Rain causes radar signal attenuation and backscatter, reducing detection capabilities, while snow and hail further complicate radar performance by introducing variability in signal attenuation. These conditions also obscure road markings and alter object appearances, complicating self-localization and object recognition. Fog reduces LiDAR and camera accuracy, but advancements like the use of longer-wavelength LiDAR, infrared cameras, and deep learning techniques show promise in mitigating these effects. Sun glare and dust storms similarly degrade sensor performance, though solutions like thermal cameras and sensor fusion have been proposed to enhance visibility. Despite these challenges, GNSS remains relatively stable but may experience minor disruptions in extreme weather.
Gap Identified: Most research has focused on camera-based solutions, with less attention given to enhancing LiDAR and RADAR technologies for adverse weather conditions. Snow and other extreme weather events, such as sandstorms, strong light, and sensor contamination, remain underexplored in terms of their impact on AV sensors.
Future Work: Future research should focus on improving the performance of sensors in snow, sandstorms, and other challenging conditions. This includes the development of robust sensor fusion algorithms that can integrate data from multiple sources to enhance overall situational awareness and decision-making. Expanding datasets that include more diverse weather conditions will also help improve the generalizability of existing models.
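One simple building block for the robust sensor fusion recommended here is inverse-variance weighting, in which estimates of the same quantity from different sensors are combined according to their current uncertainty, so a sensor degraded by weather is automatically down-weighted. The sketch below is a minimal illustration; the example ranges and variance values are assumptions, and practical systems obtain them from sensor noise models.

```python
def fuse_inverse_variance(estimates, variances):
    """Fuse scalar estimates of the same quantity by inverse-variance
    weighting. Returns the fused value and its variance. A sensor whose
    variance grows (e.g., camera range in fog) contributes less."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    return fused, 1.0 / total

# Example: radar and camera range to a lead vehicle in light fog.
fused_range, fused_var = fuse_inverse_variance(
    estimates=[42.0, 45.5],      # metres: radar, camera
    variances=[0.5, 4.0],        # fog inflates the camera variance
)
print(fused_range, fused_var)    # result is dominated by the radar estimate
```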

6.2.2. Complex Environments Summary

Navigating complex environments poses significant challenges for AVs, particularly with GNSS reliability. In urban canyons and tunnels, satellite signals are often blocked, reflected, or attenuated, leading to positioning errors. To address this, researchers are integrating GNSS with Inertial Navigation Systems (INS) and Dead Reckoning (DR) and developing algorithms to mitigate multipath and non-line-of-sight (NLOS) signal issues. Increased satellite coverage and improved signal processing techniques are also helping to enhance positioning accuracy in these demanding environments, making navigation more reliable for AVs.
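The GNSS/INS (and DR) integration described above is typically built around a Kalman filter that propagates the vehicle state with inertial measurements and corrects it whenever a GNSS fix is available, so positioning degrades gracefully during urban-canyon or tunnel outages. The sketch below shows a one-dimensional, loosely coupled version of that structure; the noise parameters and measurement values are assumptions, and production systems use full three-dimensional error-state filters.

```python
import numpy as np

class GnssInsFilter1D:
    """1-D loosely coupled GNSS/INS-style Kalman filter sketch.
    State x = [position, velocity]; INS acceleration drives the prediction,
    GNSS position fixes drive the update. Noise values are assumptions."""

    def __init__(self, dt=0.1):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.B = np.array([[0.5 * dt**2], [dt]])     # acceleration input
        self.H = np.array([[1.0, 0.0]])              # GNSS measures position
        self.Q = np.diag([0.05, 0.1])                # process noise (assumed)
        self.R = np.array([[4.0]])                   # GNSS variance, m^2
        self.x = np.zeros((2, 1))
        self.P = np.eye(2)

    def predict(self, accel):
        """Propagate the state with an INS acceleration measurement."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, gnss_pos):
        """Correct with a GNSS position fix (skipped during outages)."""
        y = np.array([[gnss_pos]]) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

# Example: INS-only prediction during an urban-canyon outage, then a fix.
kf = GnssInsFilter1D()
for _ in range(20):           # 2 s of dead reckoning at 10 Hz
    kf.predict(accel=0.3)
kf.update(gnss_pos=0.65)      # position fix once satellites reappear
```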
Gap Identified: Despite progress, the integration of GNSS with INS and other systems is still not robust enough to handle extreme environmental conditions consistently. Moreover, the complexity of signal propagation in urban environments remains an open challenge, requiring more advanced models and algorithms.
Future Work: Research should focus on developing more sophisticated algorithms for detecting and mitigating multipath and NLOS signals. Additionally, improving GNSS/INS and Dead Reckoning (DR) integration will be critical for ensuring reliable positioning in dense urban areas, mountainous regions, and other complex environments. Future studies should also aim to improve signal processing in obstructed environments, with an emphasis on real-time adaptation to signal fluctuations.
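One practical ingredient of the multipath/NLOS mitigation called for here is carrier-to-noise-density (C/N0) based weighting (see, e.g., [163]): satellites with low C/N0, which are often received via reflections, are down-weighted or excluded before the position solution is computed. The sketch below illustrates the idea; the threshold and weighting rule are assumptions chosen for illustration rather than values from a specific study.

```python
def weight_gnss_measurements(cn0_values, nlos_threshold=30.0):
    """Assign a weight to each satellite measurement from its C/N0 (dB-Hz).
    Measurements below the threshold are treated as likely NLOS and dropped;
    the rest are weighted proportionally to linear C/N0, so stronger, more
    likely line-of-sight signals dominate the position solution."""
    weights = []
    for cn0 in cn0_values:
        if cn0 < nlos_threshold:
            weights.append(0.0)                   # exclude suspected NLOS
        else:
            weights.append(10.0 ** (cn0 / 10.0))  # linear C/N0 as weight
    total = sum(weights)
    return [w / total for w in weights] if total > 0 else weights

# Example: five tracked satellites, two with weak (likely reflected) signals.
print(weight_gnss_measurements([45.0, 38.0, 27.0, 41.0, 29.0]))
```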

6.2.3. Road Infrastructure Summary

AVs rely heavily on road features like curves, edges, signage, and markings for navigation, but current infrastructure often falls short of meeting their requirements. Issues such as faded markings, inconsistent sign designs, and obstacles like parked vehicles or vegetation can impair AVs’ ability to navigate safely. In areas with heavy rainfall or poor drainage, water and mud can obscure lane markings, particularly at night, while rural and remote areas often lack the necessary infrastructure and communication networks. These challenges highlight the need for standardized and upgraded road infrastructure, including advanced materials and smart signs. Efforts are also underway to develop real-time detection algorithms for lane boundaries and vehicle guidance. While some researchers advocate for preserving existing infrastructure during early AV integration, it is clear that both physical and digital upgrades are essential for safe and efficient AV operations.
Gap Identified: Current road infrastructure does not consistently meet the needs of AV systems. Inadequate standardization and maintenance, especially in rural or underdeveloped areas, create gaps that reduce the effectiveness of machine-vision systems. Additionally, temporary road alterations due to construction or roadworks introduce further complexities.
Future Work: There is a critical need for the standardization and upgrade of road infrastructure, including the consistent use of high-visibility road markings and signs. Future work should also explore innovative solutions like smart signs and advanced materials for road markings, which can improve the interpretability of infrastructure for AV sensors. Furthermore, addressing poor drainage systems, which obscure lane markings during heavy rainfall, should be a key focus in improving road safety and sensor accuracy.

Author Contributions

Conceptualization, S.A.B. and B.B.P.; methodology, S.A.B. and B.B.P.; validation, S.A.B. and B.B.P.; formal analysis, S.A.B.; investigation, S.A.B.; resources, S.A.B.; writing—original draft preparation, S.A.B.; writing—review and editing, S.A.B.; visualization, S.A.B.; supervision, B.B.P.; project administration, B.B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guanetti, J.; Kim, Y.; Borrelli, F. Control of Connected and Automated Vehicles: State of the Art and Future Challenges. Annu. Rev. Control 2018, 45, 18–40. [Google Scholar] [CrossRef]
  2. Vargas, J.; Alsweiss, S.; Toker, O.; Razdan, R.; Santos, J. An Overview of Autonomous Vehicles Sensors and Their Vulnerability to Weather Conditions. Sensors 2021, 21, 5397. [Google Scholar] [CrossRef] [PubMed]
  3. Steinbaeck, J.; Steger, C.; Holweg, G.; Druml, N. Next Generation Radar Sensors in Automotive Sensor Fusion Systems. In Proceedings of the 2017 Sensor Data Fusion: Trends, Solutions, Applications (SDF), Bonn, Germany, 10–12 October 2017; pp. 1–6. [Google Scholar] [CrossRef]
  4. Caltagirone, L.; Bellone, M.; Svensson, L.; Wahde, M. LIDAR–Camera Fusion for Road Detection Using Fully Convolutional Neural Networks. Rob. Auton. Syst. 2019, 111, 125–131. [Google Scholar] [CrossRef]
  5. Choi, J.D.; Kim, M.Y. A Sensor Fusion System with Thermal Infrared Camera and LiDAR for Autonomous Vehicles and Deep Learning Based Object Detection. ICT Express 2023, 9, 222–227. [Google Scholar] [CrossRef]
  6. Cunnington, D.; Manotas, I.; Law, M.; De Mel, G.; Calo, S.; Bertino, E.; Russo, A. A Generative Policy Model for Connected and Autonomous Vehicles. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 1558–1565. [Google Scholar] [CrossRef]
  7. Eskandarian, A.; Wu, C.; Sun, C. Research Advances and Challenges of Autonomous and Connected Ground Vehicles. IEEE Trans. Intell. Transp. Syst. 2020, 22, 683–711. [Google Scholar] [CrossRef]
  8. Heinzler, R.; Piewak, F.; Schindler, P.; Stork, W. CNN-Based Lidar Point Cloud De-Noising in Adverse Weather. IEEE Robot. Autom. Lett. 2020, 5, 2514–2521. [Google Scholar] [CrossRef]
  9. Ullah, H.; Muhammad, K.; Irfan, M.; Anwar, S.; Sajjad, M.; Imran, A.S.; De Albuquerque, V.H.C. Light-DehazeNet: A Novel Lightweight CNN Architecture for Single Image Dehazing. IEEE Trans. Image Process. 2021, 30, 8968–8982. [Google Scholar] [CrossRef]
  10. Zang, S.; Ding, M.; Smith, D.; Tyler, P.; Rakotoarivelo, T.; Kaafar, M.A. The Impact of Adversary Weather Conditions on Autonomous Vehicles. IEEE Veh. Technol. Mag. 2019, 14, 103–111. [Google Scholar] [CrossRef]
  11. Rana, M.M.; Hossain, K. Connected and Autonomous Vehicles and Infrastructures: A Literature Review. Int. J. Pavement Res. Technol. 2023, 16, 264–284. [Google Scholar] [CrossRef]
  12. Yoneda, K.; Suganuma, N.; Yanase, R.; Aldibaja, M. Automated Driving Recognition Technologies for Adverse Weather Conditions. IATSS Res. 2019, 43, 253–262. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and Sensing for Autonomous Vehicles under Adverse Weather Conditions: A Survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
  14. Li, Y.; Ibanez-guzman, J. Lidar for Autonomous Driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar]
  15. Royo, S.; Ballesta-Garcia, M. An Overview of Lidar Imaging Systems for Autonomous Vehicles. Appl. Sci. 2019, 9, 4093. [Google Scholar] [CrossRef]
  16. Electron_one RADAR, LiDAR and Cameras Technologies for ADAS and Autonomous Vehicles. Available online: https://www.onelectrontech.com/radar-lidar-and-cameras-technologies-for-adas-and-autonomous-vehicles/ (accessed on 15 December 2019).
  17. Ignatious, H.A.; El Sayed, H.; Khan, M. An Overview of Sensors in Autonomous Vehicles. Procedia Comput. Sci. 2021, 198, 736–741. [Google Scholar] [CrossRef]
  18. Gade, R.; Moeslund, T.B. Thermal Cameras and Applications: A Survey. Mach. Vis. Appl. 2014, 25, 245–262. [Google Scholar] [CrossRef]
  19. Olmeda, D.; De La Escalera, A.; Armingol, J.M. Far Infrared Pedestrian Detection and Tracking for Night Driving. Robotica 2011, 29, 495–505. [Google Scholar] [CrossRef]
  20. González, A.; Fang, Z.; Socarras, Y.; Serrat, J.; Vázquez, D.; Xu, J.; López, A.M. Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison. Sensors 2016, 16, 820. [Google Scholar] [CrossRef]
  21. Forslund, D.; Bjarkefur, J. Night Vision Animal Detection. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 8–11 June 2014; pp. 737–742. [Google Scholar] [CrossRef]
  22. Yogamani, S.; Witt, C.; Rashed, H.; Nayak, S.; Mansoor, S.; Varley, P.; Perrotton, X.; Odea, D.; Perez, P.; Hughes, C.; et al. WoodScape: A Multi-Task, Multi-Camera Fisheye Dataset for Autonomous Driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9307–9317. [Google Scholar] [CrossRef]
  23. Heng, L.; Choi, B.; Cui, Z.; Geppert, M.; Hu, S.; Kuan, B.; Liu, P.; Nguyen, R.; Yeo, Y.C.; Geiger, A.; et al. Project Autovision: Localization and 3d Scene Perception for an Autonomous Vehicle with a Multi-Camera System. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 4695–4702. [Google Scholar] [CrossRef]
  24. Yahiaoui, M.; Rashed, H.; Mariotti, L.; Sistu, G.; Clancy, I.; Yahiaoui, L.; Kumar, V.R.; Yogamani, S. FisheyeMODNet: Moving Object Detection on Surround-View Cameras for Autonomous Driving. arXiv 2019, arXiv:1908.11789. [Google Scholar] [CrossRef]
  25. O’Mahony, N.; Campbell, S.; Krpalkova, L.; Riordan, D.; Walsh, J.; Murphy, A.; Ryan, C. Computer Vision for 3d Perception a Review; Springer International Publishing: Cham, Switzerland, 2018; Volume 869, ISBN 9783030010577. [Google Scholar]
  26. Altaf, M.A.; Ahn, J.; Khan, D.; Kim, M.Y. Usage of IR Sensors in the HVAC Systems, Vehicle and Manufacturing Industries: A Review. IEEE Sens. J. 2022, 22, 9164–9176. [Google Scholar] [CrossRef]
  27. Dasgupta, S.; Rahman, M.; Islam, M.; Chowdhury, M. A Sensor Fusion-Based GNSS Spoofing Attack Detection Framework for Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2022, 23, 23559–23572. [Google Scholar] [CrossRef]
  28. Raza, S.; Al-Kaisy, A.; Teixeira, R.; Meyer, B. The Role of GNSS-RTN in Transportation Applications. Encyclopedia 2022, 2, 1237–1249. [Google Scholar] [CrossRef]
  29. Wang, J.; Fu, T.; Xue, J.; Li, C.; Song, H.; Xu, W.; Shangguan, Q. Realtime Wide-Area Vehicle Trajectory Tracking Using Millimeter-Wave Radar Sensors and the Open TJRD TS Dataset. Int. J. Transp. Sci. Technol. 2023, 12, 273–290. [Google Scholar] [CrossRef]
  30. Hasanujjaman, M.; Chowdhury, M.Z.; Jang, Y.M. Sensor Fusion in Autonomous Vehicle with Traffic Surveillance Camera System: Detection, Localization, and AI Networking. Sensors 2023, 23, 3335. [Google Scholar] [CrossRef]
  31. Ghandorh, H.; Boulila, W.; Masood, S.; Koubaa, A.; Ahmed, F.; Ahmad, J. Semantic Segmentation and Edge Detection—Approach to Road Detection in Very High Resolution Satellite Images. Remote Sens. 2022, 14, 613. [Google Scholar] [CrossRef]
  32. Islam Minto, M.R.; Tan, B.; Sharifzadeh, S.; Riihonen, T.; Valkama, M. Shallow Neural Networks for MmWave Radar Based Recognition of Vulnerable Road Users. In Proceedings of the 2020 12th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), Porto, Portugal, 20–22 July 2020. [Google Scholar] [CrossRef]
  33. Shopovska, I.; Jovanov, L.; Philips, W. Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility. Sensors 2019, 19, 3727. [Google Scholar] [CrossRef]
  34. Gu, S.; Lu, T.; Zhang, Y.; Alvarez, J.M.; Yang, J.; Kong, H. 3-D LiDAR + Monocular Camera: An Inverse-Depth-Induced Fusion Framework for Urban Road Detection. IEEE Trans. Intell. Veh. 2018, 3, 351–360. [Google Scholar] [CrossRef]
  35. Melotti, G.; Premebida, C.; Goncalves, N.M.M.D.S.; Nunes, U.J.C.; Faria, D.R. Multimodal CNN Pedestrian Classification: A Study on Combining LIDAR and Camera Data. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3138–3143. [Google Scholar] [CrossRef]
  36. Du, X.; Ang, M.H.; Rus, D. Car Detection for Autonomous Vehicle: LIDAR and Vision Fusion Approach through Deep Learning Framework. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 749–754. [Google Scholar] [CrossRef]
  37. Wagner, J.; Fischer, V.; Herman, M.; Behnke, S. Multispectral Pedestrian Detection Using Deep Fusion Convolutional Neural Networks. In Proceedings of the ESANN 2016—24th European Symposium on Artificial Neural Networks, Bruges, Belgium, 27–29 April 2016; pp. 509–514. [Google Scholar]
  38. Schlosser, J.; Chow, C.K.; Kira, Z. Fusing LIDAR and Images for Pedestrian Detection Using Convolutional Neural Networks. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2198–2205. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Zhao, Y.; Lv, H.; Feng, Y.; Liu, H.; Han, C. Adaptive Slicing Method of the Spatiotemporal Event Stream Obtained from a Dynamic Vision Sensor. Sensors 2022, 22, 2614. [Google Scholar] [CrossRef]
  40. Elkholy, M. Radar and INS Integration for Enhancing Land Vehicle Navigation in GNSS-Denied Environment. Doctoral Thesis, University of Calgary, Calgary, AB, Canada, 2024. [Google Scholar]
  41. Annapareddy, N.; Sahin, E.; Abraham, S.; Islam, M.M.; DePiro, M.; Iqbal, T. A Robust Pedestrian and Cyclist Detection Method Using Thermal Images. In Proceedings of the 2021 Systems and Information Engineering Design Symposium (SIEDS), Charlottesville, VA, USA, 29–30 April 2021; pp. 1–6. [Google Scholar] [CrossRef]
  42. Wallace, H.B. Millimeter Wave Propagation Measurements At The Ballistic Research Laboratory. Opt. Eng. 1983, 22, 24–31. [Google Scholar] [CrossRef]
  43. Bertoldo, S.; Lucianaz, C.; Allegretti, M. 77 GHz Automotive Anti-Collision Radar Used for Meteorological Purposes. In Proceedings of the 2017 IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications (APWC), Verona, Italy, 11–15 September 2017; pp. 49–52. [Google Scholar] [CrossRef]
  44. Gourova, R.; Krasnov, O.; Yarovoy, A. Analysis of Rain Clutter Detections in Commercial 77 GHz Automotive Radar. In Proceedings of the 2017 European Radar Conference (EURAD), Nuremberg, Germany, 11–13 October 2017; pp. 25–28. [Google Scholar]
  45. Hadj-bachir, M.; De Souza, P. LIDAR Sensor Simulation in Adverse Weather Condition for Driving Assistance Development. 2019. Available online: https://hal.science/hal-01998668/ (accessed on 15 December 2019).
  46. Fu, X.; Huang, J.; Ding, X.; Liao, Y.; Paisley, J. Clearing the Skies: A Deep Network Architecture for Single-Image Rain Removal. IEEE Trans. Image Process. 2017, 26, 2944–2956. [Google Scholar] [CrossRef]
  47. Kulemin, G.P. Influence of Propagation Effects on Millimeter Wave Radar Operation. Radar Sens. Technol. IV 1999, 3704, 170–178. [Google Scholar]
  48. Battan, L.J. Radar Attenuation by Wet Ice Spheres. J. Appl. Meteorol. Climatol. 1971, 10, 247–252. [Google Scholar]
  49. Lhermitte, R. Attenuation and Scattering of Millimeter Wavelength Radiation by Clouds and Precipitation. J. Atmos. Ocean. Technol. 1990, 7, 464–479. [Google Scholar] [CrossRef]
  50. Meydani, A. State-of-the-Art Analysis of the Performance of the Sensors Utilized in Autonomous Vehicles in Extreme Conditions. In Artificial Intelligence and Smart Vehicles; Ghatee, M., Hashemi, S.M., Eds.; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2023; Volume 1883. [Google Scholar]
  51. Caccia, L.; Hoof, H.V.; Courville, A.; Pineau, J. Deep Generative Modeling of LiDAR Data. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 5034–5040. [Google Scholar] [CrossRef]
  52. Wojtanowski, J.; Zygmunt, M.; Kaszczuk, M.; Mierczyk, Z.; Muzal, M. Comparison of 905 Nm and 1550 Nm Semiconductor Laser Rangefinders’ Performance Deterioration Due to Adverse Environmental Conditions. Opto-Electron. Rev. 2014, 22, 183–190. [Google Scholar] [CrossRef]
  53. Kutila, M.; Pyykonen, P.; Holzhuter, H.; Colomb, M.; Duthon, P. Automotive LiDAR Performance Verification in Fog and Rain. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1695–1701. [Google Scholar] [CrossRef]
  54. Kutila, M.; Pyykönen, P.; Ritter, W.; Sawade, O.; Schäufele, B. Automotive LIDAR Sensor Development Scenarios for Harsh Weather Conditions. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 265–270. [Google Scholar] [CrossRef]
  55. Yadav, G.; Maheshwari, S.; Agarwal, A. Fog Removal Techniques from Images: A Comparative Review and Future Directions. In Proceedings of the 2014 International Conference on Signal Propagation and Computer Technology (ICSPCT 2014), Ajmer, India, 12–13 July 2014; pp. 44–52. [Google Scholar] [CrossRef]
  56. Identification of Lightning Overvoltage in Unmanned Aerial Vehicles. Energies 2022, 15, 6609. [Google Scholar] [CrossRef]
  57. Yahiaoui, L.; Uřičář, M.; Das, A.; Senthil, Y. Let The Sunshine in: Sun Glare Detection on Automotive Surround-View Cameras. Electron. Imaging 2020, 2020, 80–81. [Google Scholar] [CrossRef]
  58. Pham, L.H.; Tran, D.N.-N.; Jeon, J.W. Low-Light Image Enhancement for Autonomous Driving Systems Using DriveRetinex-Net. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), Seoul, Republic of Korea, 1–3 November 2020. [Google Scholar]
  59. Trierweiler, M.; Peterseim, T.; Neumann, C. Automotive LiDAR Pollution Detection System Based on Total Internal Reflection Techniques. Light-Emit. Devices Mater. Appl. XXIV 2020, 11302, 135–144. [Google Scholar]
  60. Starr, J.W.; Lattimer, B.Y. Evaluation of Navigation Sensors in Fire Smoke Environments. Fire Technol. 2014, 50, 1459–1481. [Google Scholar] [CrossRef]
  61. Kovalev, V.A.; Hao, W.M.; Wold, C. Determination of the Particulate Extinction-Coefficient Profile and the Column-Integrated Lidar Ratios Using the Backscatter-Coefficient and Optical-Depth Profiles. Appl. Opt. 2007, 46, 8627–8634. [Google Scholar]
  62. Uricar, M.; Sistu, G.; Rashed, H.; Vobecky, A.; Kumar, V.R.; Krizek, P.; Burger, F.; Yogamani, S. Let’s Get Dirty: GAN Based Data Augmentation for Camera Lens Soiling Detection in Autonomous Driving. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 766–775. [Google Scholar] [CrossRef]
  63. Zhang, L.; Chen, F.; Ma, X.; Pan, X. Fuel Economy in Truck Platooning: A Literature Overview and Directions for Future Research. J. Adv. Transp. 2020, 2020, 2604012. [Google Scholar] [CrossRef]
  64. Wang, C.; Pan, J.; Wu, X.M. Online-Updated High-Order Collaborative Networks for Single Image Deraining. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 28 February–1 March 2022; Volume 36, pp. 2406–2413. [Google Scholar] [CrossRef]
  65. Zhang, H.; Xie, Q.; Lu, B.; Gai, S. Dual Attention Residual Group Networks for Single Image Deraining. Digit. Signal Process. 2021, 116, 103106. [Google Scholar] [CrossRef]
  66. Wang, Z.; Wang, C.; Su, Z.; Chen, J. Dense Feature Pyramid Grids Network for Single Image Deraining. In Proceedings of the ICASSP 2021—2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 2025–2029. [Google Scholar] [CrossRef]
  67. Jiang, Y.; Wang, H.; Wang, Q.; Gao, Q.; Tang, Y. Context-Wise Attention-Guided Network for Single Image Deraining. Electron. Lett. 2022, 58, 148–150. [Google Scholar] [CrossRef]
  68. Zhang, Y.; Liu, Y.; Li, Q.; Wang, J.; Qi, M.; Sun, H.; Xu, H.; Kong, J. A Lightweight Fusion Distillation Network for Image Deblurring and Deraining. Sensors 2021, 21, 5312. [Google Scholar] [CrossRef]
  69. Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Wang, Z.; Lin, C. Progressive Coupled Network for Real-Time Image Deraining. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 1759–1763. [Google Scholar] [CrossRef]
  70. Wang, Q.; Jiang, K.; Wang, Z.; Ren, W.; Zhang, J.; Lin, C. Multi-Scale Fusion and Decomposition Network for Single Image Deraining. IEEE Trans. Image Process. 2024, 33, 191–204. [Google Scholar] [CrossRef]
  71. Su, Z.; Zhang, Y.; Zhang, X.P.; Qi, F. Non-Local Channel Aggregation Network for Single Image Rain Removal. Neurocomputing 2022, 469, 261–272. [Google Scholar] [CrossRef]
  72. Huang, Z.; Zhang, J. Dynamic Multi-Domain Translation Network for Single Image Deraining. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 1754–1758. [Google Scholar]
  73. Khatab, E.; Onsy, A.; Varley, M.; Abouelfarag, A. A Lightweight Network for Real-Time Rain Streaks and Rain Accumulation Removal from Single Images Captured by AVs. Appl. Sci. 2023, 13, 219. [Google Scholar] [CrossRef]
  74. Li, M.; Cao, X.; Zhao, Q.; Zhang, L.; Meng, D. Online Rain/Snow Removal From Surveillance Videos. IEEE Trans. Image Process. 2021, 30, 2029–2044. [Google Scholar] [CrossRef]
  75. Hu, X.; Zhu, L.; Wang, T.; Fu, C.W.; Heng, P.A. Single-Image Real-Time Rain Removal Based on Depth-Guided Non-Local Features. IEEE Trans. Image Process. 2021, 30, 1759–1770. [Google Scholar] [CrossRef] [PubMed]
  76. Ali, A.; Sarkar, R.; Chaudhuri, S.S. Wavelet-Based Auto-Encoder for Simultaneous Haze and Rain Removal from Images. Pattern Recognit. 2024, 150, 110370. [Google Scholar] [CrossRef]
  77. Susladkar, O.; Deshmukh, G.; Nag, S.; Mantravadi, A.; Makwana, D.; Ravichandran, S.; Teja, R.S.C.; Chavhan, G.H.; Mohan, C.K.; Mittal, S. ClarifyNet: A High-Pass and Low-Pass Filtering Based CNN for Single Image Dehazing. J. Syst. Archit. 2022, 132, 102736. [Google Scholar] [CrossRef]
  78. Xu, Y.J.; Zhang, Y.J.; Li, Z.; Cui, Z.W.; Yang, Y.T. Multi-Scale Dehazing Network via High-Frequency Feature Fusion. Comput. Graph. 2022, 107, 50–59. [Google Scholar] [CrossRef]
  79. Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature Fusion Attention Network for Single Image Dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020. [Google Scholar]
  80. Wu, H.; Qu, Y.; Lin, S.; Zhou, J.; Qiao, R.; Zhang, Z.; Xie, Y.; Ma, L. Contrastive Learning for Compact Single Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; Volume 1, pp. 10551–10560. [Google Scholar]
  81. Ren, W.; Pan, J.; Zhang, H.; Cao, X.; Yang, M. Single Image Dehazing via Multi-Scale Convolutional Neural Networks with Holistic Edges. Int. J. Comput. Vis. 2020, 128, 240–259. [Google Scholar] [CrossRef]
  82. Das, S.D. Fast Deep Multi-Patch Hierarchical Network for Nonhomogeneous Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 1–8. [Google Scholar]
  83. Liang, X.; Huang, Z.; Lu, L.; Tao, Z.; Yang, B.; Li, Y. Deep Learning Method on Target Echo Signal Recognition for Obscurant Penetrating Lidar Detection in Degraded Visual Environments. Sensors 2020, 20, 3424. [Google Scholar] [CrossRef]
  84. Wang, C.; Zhu, H.; Fan, W.; Wu, X.M.; Chen, J. Single Image Rain Removal Using Recurrent Scale-Guide Networks. Neurocomputing 2022, 467, 242–255. [Google Scholar] [CrossRef]
  85. Zheng, Y.; Yu, X.; Liu, M.; Zhang, S. Single-Image Deraining via Recurrent Residual Multiscale Networks. IEEE Trans. Neural Networks Learn. Syst. 2022, 33, 1310–1323. [Google Scholar] [CrossRef]
  86. Xue, X.; Meng, X.; Ma, L.; Liu, R.; Fan, X. GTA-Net: Gradual Temporal Aggregation Network for Fast Video Deraining. In Proceedings of the ICASSP 2021—2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 2020–2024. [Google Scholar] [CrossRef]
  87. Matsui, T.; Ikehara, M. GAN-Based Rain Noise Removal from Single-Image Considering Rain Composite Models. IEEE Access 2020, 8, 40892–40900. [Google Scholar] [CrossRef]
  88. Yan, X.; Loke, Y.R. RainGAN: Unsupervised Raindrop Removal via Decomposition and Composition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 14–23. [Google Scholar]
  89. Zhang, H.; Sindagi, V.; Patel, V.M. Image De-Raining Using a Conditional Generative Adversarial Network. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 3943–3956. [Google Scholar] [CrossRef]
  90. Wei, Y.; Zhang, Z.; Wang, Y.; Xu, M.; Yang, Y.; Yan, S. DerainCycleGAN: Rain Attentive CycleGAN for Single Image Deraining and Rainmaking. IEEE Trans. Image Process. 2021, 30, 4788–4801. [Google Scholar] [CrossRef]
  91. Guo, Z.; Hou, M.; Sima, M.; Feng, Z. DerainAttentionGAN: Unsupervised Single-Image Deraining Using Attention-Guided Generative Adversarial Networks. Signal Image Video Process. 2022, 16, 185–192. [Google Scholar] [CrossRef]
  92. Ding, Y.; Li, M.; Yan, T.; Zhang, F.; Liu, Y.; Lau, R.W.H. Rain Streak Removal From Light Field Images. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 467–482. [Google Scholar] [CrossRef]
  93. Yang, F.; Ren, J.; Lu, Z.; Zhang, J.; Zhang, Q. Rain-Component-Aware Capsule-GAN for Single Image de-Raining. Pattern Recognit. 2022, 123, 108377. [Google Scholar] [CrossRef]
  94. Guo, Y.; Chen, J.; Ren, X.; Wang, A.; Wang, W. Joint Raindrop and Haze Removal from a Single Image. IEEE Trans. Image Process. 2020, 29, 9508–9519. [Google Scholar] [CrossRef]
  95. Jaw, D.W.; Huang, S.C.; Kuo, S.Y. Desnowgan: An Efficient Single Image Snow Removal Framework Using Cross-Resolution Lateral Connection and Gans. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 1342–1350. [Google Scholar] [CrossRef]
  96. Sung, T.; Lee, H.J. Removing Snow from a Single Image Using a Residual Frequency Module and Perceptual RaLSGAN. IEEE Access 2021, 9, 152047–152056. [Google Scholar] [CrossRef]
  97. Deng, Q.; Huang, Z.; Tsai, C.-C.; Lin, C.-W. HardGAN: A Haze-Aware Representation Distillation GAN for Single Image Dehazing; Springer International Publishing: Cham, Switzerland, 2020; Volume 12350, ISBN 9783030660956. [Google Scholar]
  98. Kan, S.; Zhang, Y.; Zhang, F.; Cen, Y. Signal Processing: Image Communication A GAN-Based Input-Size Flexibility Model for Single Image Dehazing. Signal Process. Image Commun. 2022, 102, 116599. [Google Scholar] [CrossRef]
  99. Liu, W.; Hou, X.; Duan, J.; Qiu, G. End-to-End Single Image Fog Removal Using Enhanced Cycle Consistent Adversarial Networks. IEEE Trans. Image Process. 2020, 29, 7819–7833. [Google Scholar] [CrossRef]
  100. Mo, Y.; Li, C.; Zheng, Y.; Wu, X. Journal of Visual Communication and Image Representation DCA-CycleGAN: Unsupervised Single Image Dehazing Using Dark Channel Attention Optimized CycleGAN. J. Vis. Commun. Image Represent. 2022, 82, 103431. [Google Scholar] [CrossRef]
  101. Park, J.; Han, D.K. Fusion of Heterogeneous Adversarial Networks for Single Image Dehazing. IEEE Trans. Image Process. 2020, 29, 4721–4732. [Google Scholar] [CrossRef]
  102. Jin, Y.; Gao, G.; Liu, Q.; Wang, Y. Unsupervised Conditional Disentangle Network for Image Dehazing. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020. [Google Scholar]
  103. Dong, Y.; Liu, Y.; Zhang, H.; Chen, S.; Qiao, Y. FD-GAN: Generative Adversarial Networks with Fusion-Discriminator for Single Image Dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020. [Google Scholar]
  104. Fu, M.; Liu, H.; Yu, Y.; Chen, J.; Wang, K. DW-GAN: A Discrete Wavelet Transform GAN for NonHomogeneous Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  105. Wang, P.; Zhu, H.; Huang, H.; Zhang, H.; Wang, N. TMS-GAN: A Twofold Multi-Scale Generative Adversarial Network for Single Image Dehazing. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 2760–2772. [Google Scholar] [CrossRef]
  106. Fu, X.; Qi, Q.; Zha, Z.J.; Zhu, Y.; Ding, X. Rain Streak Removal via Dual Graph Convolutional Network. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 19–21 May 2021; pp. 1352–1360. [Google Scholar] [CrossRef]
  107. Zhang, K.; Luo, W.; Yu, Y.; Ren, W.; Zhao, F.; Li, C.; Ma, L.; Liu, W.; Li, H. Beyond Monocular Deraining: Parallel Stereo Deraining Network Via Semantic Prior. Int. J. Comput. Vis. 2022, 130, 1754–1769. [Google Scholar] [CrossRef]
  108. Zhou, J.; Leong, C.; Lin, M.; Liao, W.; Li, C. Task Adaptive Network for Image Restoration with Combined Degradation Factors. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 1–8. [Google Scholar]
  109. Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Huang, B.; Luo, Y.; Ma, J.; Jiang, J. Multi-Scale Progressive Fusion Network for Single Image Deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 8346–8355. [Google Scholar]
  110. Wang, Q.; Sun, G.; Fan, H.; Li, W.; Tang, Y. APAN: Across-Scale Progressive Attention Network for Single Image Deraining. IEEE Signal Process. Lett. 2022, 29, 159–163. [Google Scholar] [CrossRef]
  111. Yu, Y.; Liu, H.; Fu, M.; Chen, J.; Wang, X.; Wang, K. A Two-Branch Neural Network for Non-Homogeneous Dehazing via Ensemble Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  112. Hu, B. Multi-Scale Feature Fusion Network with Attention for Single Image Dehazing. Pattern Recognit. Image Anal. 2021, 31, 608–615. [Google Scholar] [CrossRef]
  113. Wang, J.; Li, C.; Xu, S. An Ensemble Multi-Scale Residual Attention Network (EMRA-Net) for Image Dehazing. Multimed. Tools Appl. 2021, 80, 29299–29319. [Google Scholar]
  114. Zhao, D.; Xu, L.; Ma, L. Pyramid Global Context Network for Image Dehazing. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3037–3050. [Google Scholar] [CrossRef]
  115. Sheng, J.; Lv, G.; Du, G.; Wang, Z.; Feng, Q. Multi-Scale Residual Attention Network for Single Image Dehazing. Digit. Signal Process. 2022, 121, 103327. [Google Scholar] [CrossRef]
  116. Fan, G.; Hua, Z.; Li, J. Multi-Scale Depth Information Fusion Network for Image Dehazing. Appl. Intell. 2021, 51, 7262–7280. [Google Scholar]
  117. Liu, J.; Wu, H.; Xie, Y.; Qu, Y.; Ma, L. Trident Dehazing Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  118. Jo, E.; Sim, J. Multi-Scale Selective Residual Learning for Non-Homogeneous Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  119. Saleh, K.; Abobakr, A.; Attia, M.; Iskander, J.; Nahavandi, D.; Hossny, M.; Nahavandi, S. Domain Adaptation for Vehicle Detection from Bird’s Eye View LiDAR Point Cloud Data. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  120. Wang, Y.; Yin, J.; Li, W.; Frossard, P.; Yang, R.; Shen, J. SSDA3D: Semi-Supervised Domain Adaptation for 3D Object Detection from Point Cloud. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 19–21 May 2021. [Google Scholar]
  121. Sindagi, V.A.; Oza, P.; Yasarla, R.; Patel, M.V. Prior-Based Domain Adaptive Object Detection for Hazy and Rainy Conditions; Springer International Publishing: Cham, Switzerland, 2020; Volume 12350, ISBN 978-3-030-58557-7. [Google Scholar]
  122. Zhao, S.; Zhang, L.; Shen, Y. RefineDNet: A Weakly Supervised Refinement Framework for Single Image Dehazing. IEEE Trans. Image Process. 2021, 30, 3391–3404. [Google Scholar] [CrossRef]
  123. Li, B.; Gou, Y.; Gu, S.; Zitao, J.; Joey, L.; Zhou, T.; Peng, X. You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network. Int. J. Comput. Vis. 2021, 129, 1754–1767. [Google Scholar] [CrossRef]
  124. Shao, Y.; Li, L.; Ren, W.; Gao, C.; Sang, N. Domain Adaptation for Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2808–2817. [Google Scholar]
  125. Chen, Z.; Wang, Y.; Yang, Y.; Liu, D. PSD: Principled Synthetic-to-Real Dehazing Guided by Physical Priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7180–7189. [Google Scholar]
  126. Wang, Y.; Ma, C.; Liu, J. SmartAssign: Learning A Smart Knowledge Assignment Strategy for Deraining and Desnowing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 3677–3686. [Google Scholar]
  127. Li, Y.; Monno, Y.; Okutomi, M. Single Image Deraining Network with Rain Embedding Consistency and Layered LSTM. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 4060–4069. [Google Scholar]
  128. Rai, S.N.; Saluja, R.; Arora, C.; Balasubramanian, V.N.; Subramanian, A.; Jawahar, C.V. FLUID: Few-Shot Self-Supervised Image Deraining. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022. [Google Scholar]
  129. Jasuja, C.; Gupta, H.; Gupta, D.; Parihar, A.S. SphinxNet—A Lightweight Network for Single Image Deraining. In Proceedings of the 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 25–27 June 2021; pp. 1–6. [Google Scholar] [CrossRef]
  130. Huang, S.C.; Jaw, D.W.; Hoang, Q.V.; Le, T.H. 3FL-Net: An Efficient Approach for Improving Performance of Lightweight Detectors in Rainy Weather Conditions. IEEE Trans. Intell. Transp. Syst. 2023, 24, 4293–4305. [Google Scholar] [CrossRef]
  131. Cho, J.; Kim, S. Memory-Guided Image De-Raining Using Time-Lapse Data. IEEE Trans. Image Process. 2022, 31, 4090–4103. [Google Scholar] [CrossRef]
  132. Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Han, Z.; Lu, T.; Huang, B.; Jiang, J. Decomposition Makes Better Rain Removal: An Improved Attention-Guided Deraining Network. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3981–3995. [Google Scholar] [CrossRef]
  133. Wei, Y.; Zhang, Z.; Xu, M.; Hong, R.; Fan, J.; Yan, S. Robust Attention Deraining Network for Synchronous Rain Streaks and Raindrops Removal. In Proceedings of the 30th ACM International Conference on Multimedia, Lisbon, Portugal, 10–14 October 2022; pp. 6464–6472. [Google Scholar] [CrossRef]
  134. Yang, W.; Tan, R.T.; Feng, J.; Guo, Z.; Yan, S.; Liu, J. Joint Rain Detection and Removal from a Single Image with Contextualized Deep Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1377–1393. [Google Scholar] [CrossRef] [PubMed]
  135. Wang, H.; Xie, Q.; Zhao, Q.; Meng, D. A Model-Driven Deep Neural Network for Single Image Rain Removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3103–3112. [Google Scholar]
  136. Zheng, S.; Lu, C.; Wu, Y.; Gupta, G. SAPNet: Segmentation-Aware Progressive Network for Perceptual Contrastive Deraining. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 52–62. [Google Scholar]
  137. Chen, W.T.; Fang, H.Y.; Hsieh, C.L.; Tsai, C.C.; Chen, I.H.; Ding, J.J.; Kuo, S.Y. ALL Snow Removed: Single Image Desnowing Algorithm Using Hierarchical Dual-Tree Complex Wavelet Representation and Contradict Channel Loss. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 4176–4185. [Google Scholar] [CrossRef]
  138. Chen, W.-T.; Fang, H.-Y.; Ding, J.-J.; Tsai, C.-C.; Kuo, S.-Y. JSTASR: Joint Size and Transparency-Aware Snow Removal Algorithm Based on Modified Partial Convolution and Veiling Effect Removal. In Computer Vision–ECCV 2020: Proceedings of the 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer International Publishing: Cham, Switzerland, 2020. [Google Scholar]
  139. Ju, M.; Ding, C.; Ren, W.; Yang, Y.; Zhang, D.; Guo, Y.J. IDE: Image Dehazing and Exposure Using an Enhanced Atmospheric Scattering Model. IEEE Trans. Image Process. 2021, 30, 2180–2192. [Google Scholar] [CrossRef]
  140. Zhu, Q.; Mai, J.; Song, Z.; Wu, D.; Wang, J.; Wang, L. Mean Shift-Based Single Image Dehazing with Re-Refined Transmission Map. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 4058–4064. [Google Scholar]
  141. Yuan, H.; Liu, C.; Guo, Z.; Sun, Z. A Region-Wised Medium Transmission Based Image Dehazing Method. IEEE Access 2017, 5, 1735–1742. [Google Scholar] [CrossRef]
  142. He, K. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 1956–1963. [Google Scholar] [CrossRef]
  143. Hautière, N.; Tarel, J.P.; Aubert, D. Towards Fog-Free in-Vehicle Vision Systems through Contrast Restoration. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007. [Google Scholar] [CrossRef]
  144. Jiang, N.; Hu, K.; Zhang, T.; Chen, W.; Xu, Y.; Zhao, T. Deep Hybrid Model for Single Image Dehazing and Detail Refinement. Pattern Recognit. 2023, 136, 109227. [Google Scholar] [CrossRef]
  145. Wu, J.; Xu, H.; Zheng, J.; Zhao, J. Automatic Vehicle Detection With Roadside LiDAR Data Under Rainy and Snowy Conditions. IEEE Intell. Transp. Syst. Mag. 2021, 13, 197–209. [Google Scholar] [CrossRef]
  146. Shih, Y.; Liao, W.; Lin, W.; Wong, S.; Wang, C. Reconstruction and Synthesis of Lidar Point Clouds of Spray. IEEE Robot. Autom. Lett. 2022, 7, 3765–3772. [Google Scholar] [CrossRef]
  147. Hahner, M.; Sakaridis, C.; Bijelic, M.; Heide, F.; Yu, F.; Dai, D.; Gool, L. Van LiDAR Snowfall Simulation for Robust 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16364–16374. [Google Scholar]
  148. Godfrey, J.; Kumar, V.; Subramanian, S.C. Evaluation of Flash LiDAR in Adverse Weather Conditions Toward Active Road Vehicle Safety. IEEE Sens. J. 2023, 23, 20129–20136. [Google Scholar] [CrossRef]
  149. Wen, L.; Peng, Y.; Lin, M.; Gan, N.; Tan, R. Multi-Modal Contrastive Learning for LiDAR Point Cloud Rail-Obstacle Detection in Complex Weather. Electronics 2024, 13, 220. [Google Scholar] [CrossRef]
  150. Anh, N.; Mai, M.; Duthon, P.; Khoudour, L.; Crouzil, A.; Velastin, S.A. 3D Object Detection with SLS-Fusion Network in Foggy Weather Conditions. Sensors 2021, 21, 6711. [Google Scholar] [CrossRef]
  151. Liu, Z.; Cai, Y.; Wang, H.; Chen, L.; Gao, H.; Jia, Y.; Li, Y. Robust Target Recognition and Tracking of Self-Driving Cars With Radar and Camera Information Fusion Under Severe Weather Conditions. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6640–6653. [Google Scholar] [CrossRef]
  152. Qian, K.; Zhu, S.; Zhang, X.; Li, L.E. Robust Multimodal Vehicle Detection in Foggy Weather Using Complementary Lidar and Radar Signals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 444–453. [Google Scholar]
  153. Lin, S.L.; Wu, B.H. Application of Kalman Filter to Improve 3d Lidar Signals of Autonomous Vehicles in Adverse Weather. Appl. Sci. 2021, 11, 3018. [Google Scholar] [CrossRef]
  154. Park, J.; Park, J.; Kim, K. Fast and Accurate Desnowing Algorithm for LiDAR Point Clouds. IEEE Access 2020, 8, 160202–160212. [Google Scholar]
  155. Kaygisiz, B.H.; Erkmen, I.; Erkmen, A.M. GPS/INS Enhancement for Land Navigation Using Neural Network. J. Navig. 2004, 57, 297–310. [Google Scholar] [CrossRef]
  156. He, Y.; Li, J.; Liu, J. Research on GNSS INS & GNSS/INS Integrated Navigation Method for Autonomous Vehicles: A Survey. IEEE Access 2023, 11, 79033–79055. [Google Scholar] [CrossRef]
  157. Guilloton, A.; Arethens, J.P.; Escher, A.C.; MacAbiau, C.; Koenig, D. Multipath Study on the Airport Surface. In Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, Myrtle Beach, SC, USA, 23–26 April 2012; pp. 355–365. [Google Scholar] [CrossRef]
  158. Kaplan, E.D.; Hegarty, C.J. Understanding GPS: Principles and Applications; Artech House: Norwood, MA, USA, 2006. [Google Scholar]
  159. Godha, S.; Cannon, M.E. GPS/MEMS INS Integrated System for Navigation in Urban Areas. GPS Solut. 2007, 11, 193–203. [Google Scholar] [CrossRef]
  160. Breßler, J.; Obst, M. GNSS Positioning in Non-Line-of-Sight Context—A Survey for Technological Innovation. Adv. Sci. Technol. Eng. Syst. 2017, 2, 722–731. [Google Scholar] [CrossRef]
  161. Wen, W.W.; Hsu, L.T. 3D LiDAR Aided GNSS NLOS Mitigation in Urban Canyons. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18224–18236. [Google Scholar] [CrossRef]
  162. Angrisano, A.; Vultaggio, M.; Gaglione, S.; Crocetto, N. Pedestrian Localization with PDR Supplemented by GNSS. In Proceedings of the 2019 European Navigation Conference (ENC), Warsaw, Poland, 9–12 April 2019; pp. 1–6. [Google Scholar] [CrossRef]
  163. Groves, P.D.; Jiang, Z. Height Aiding, C/N0 Weighting and Consistency Checking for Gnss Nlos and Multipath Mitigation in Urban Areas. J. Navig. 2013, 66, 653–669. [Google Scholar] [CrossRef]
  164. Groves, P.D.; Wang, L.; Ziebart, M. Shadow Matching: Improved GNSS Accuracy in Urban Canyons. GPS World 2012, 23, 14–18. [Google Scholar]
  165. Zhai, H.Q.; Wang, L.H. The Robust Residual-Based Adaptive Estimation Kalman Filter Method for Strap-down Inertial and Geomagnetic Tightly Integrated Navigation System. Rev. Sci. Instrum. 2020, 91, 104501. [Google Scholar] [CrossRef]
  166. Chen, Q.; Zhang, Q.; Niu, X. Estimate the Pitch and Heading Mounting Angles of the IMU for Land Vehicular GNSS/INS Integrated System. IEEE Trans. Intell. Transp. Syst. 2021, 22, 6503–6515. [Google Scholar] [CrossRef]
  167. Li, W.; Li, W.; Cui, X.; Zhao, S.; Lu, M. A Tightly Coupled RTK/INS Algorithm with Ambiguity Resolution in the Position Domain for Ground Vehicles in Harsh Urban Environments. Sensors 2018, 18, 2160. [Google Scholar] [CrossRef] [PubMed]
  168. Wang, D.; Dong, Y.; Li, Q.; Li, Z.; Wu, J. Using Allan Variance to Improve Stochastic Modeling for Accurate GNSS/INS Integrated Navigation. GPS Solut. 2018, 22, 53. [Google Scholar] [CrossRef]
  169. Ning, Y.; Wang, J.; Han, H.; Tan, X.; Liu, T. An Optimal Radial Basis Function Neural Network Enhanced Adaptive Robust Kalman Filter for GNSS/INS Integrated Systems in Complex Urban Areas. Sensors 2018, 18, 3091. [Google Scholar] [CrossRef] [PubMed]
  170. Liu, H.; Nassar, S.; El-Sheimy, N. Two-Filter Smoothing for Accurate INS/GPS Land-Vehicle Navigation in Urban Centers. IEEE Trans. Veh. Technol. 2010, 59, 4256–4267. [Google Scholar] [CrossRef]
  171. Yao, Y.; Xu, X.; Zhu, C.; Chan, C.Y. A Hybrid Fusion Algorithm for GPS/INS Integration during GPS Outages. Meas. J. Int. Meas. Confed. 2017, 103, 42–51. [Google Scholar] [CrossRef]
  172. Li, Z.; Wang, J.; Li, B.; Gao, J.; Tan, X. GPS/INS/Odometer Integrated System Using Fuzzy Neural Network for Land Vehicle Navigation Applications. J. Navig. 2014, 67, 967–983. [Google Scholar] [CrossRef]
  173. Abdolkarimi, E.S.; Mosavi, M.R.; Abedi, A.A.; Mirzakuchaki, S. Optimization of the Low-Cost INS/GPS Navigation System Using ANFIS for High Speed Vehicle Application. In Proceedings of the 2015 Signal Processing and Intelligent Systems Conference (SPIS), Tehran, Iran, 16–17 December 2015; pp. 93–98. [Google Scholar] [CrossRef]
  174. Sharaf, R.; Noureldin, A.; Osman, A.; El-Sheimy, N. Online INS/GPS Integration with a Radial Basis Function Neural Network. IEEE Aerosp. Electron. Syst. Mag. 2005, 20, 8–14. [Google Scholar] [CrossRef]
  175. Sun, J.; Kousik, S.; Fridovich-Keil, D.; Schwager, M. Connected Autonomous Vehicle Motion Planning with Video Predictions from Smart, Self-Supervised Infrastructure. In Proceedings of the 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), Bilbao, Spain, 24–28 September 2023; pp. 1721–1726. [Google Scholar] [CrossRef]
  176. Nguyen, A.; Nguyen, N.; Tran, K.; Tjiputra, E.; Tran, Q.D. Autonomous Navigation in Complex Environments with Deep Multimodal Fusion Network. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5824–5830. [Google Scholar] [CrossRef]
  177. Martínez, C.; Jiménez, F. Implementation of a Potential Field-Based Decision-Making Algorithm on Autonomous Vehicles for Driving in Complex Environments. Sensors 2019, 19, 3318. [Google Scholar] [CrossRef]
  178. Dehman, A.; Farooq, B. Are Work Zones and Connected Automated Vehicles Ready for a Harmonious Coexistence? A Scoping Review and Research Agenda. Transp. Res. Part C Emerg. Technol. 2021, 133, 103422. [Google Scholar] [CrossRef]
  179. Malik, S.; Khan, M.A.; Aadam; El-Sayed, H.; Iqbal, F.; Khan, J.; Ullah, O. CARLA+: An Evolution of the CARLA Simulator for Complex Environment Using a Probabilistic Graphical Model. Drones 2023, 7, 111. [Google Scholar] [CrossRef]
  180. Beul, M.; Krombach, N.; Zhong, Y.; Droeschel, D.; Nieuwenhuisen, M.; Behnke, S. A High-Performance MAV for Autonomous Navigation in Complex 3D Environments. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 1241–1250. [Google Scholar] [CrossRef]
  181. Dharmadhikari, M.; Nguyen, H.; Mascarich, F.; Khedekar, N.; Alexis, K. Autonomous Cave Exploration Using Aerial Robots. In Proceedings of the 2021 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 15–18 June 2021; pp. 942–949. [Google Scholar] [CrossRef]
  182. Wang, C.; Wang, J.; Shen, Y.; Zhang, X. Autonomous Navigation of UAVs in Large-Scale Complex Environments: A Deep Reinforcement Learning Approach. IEEE Trans. Veh. Technol. 2019, 68, 2124–2136. [Google Scholar] [CrossRef]
  183. Mirowski, P.; Pascanu, R.; Viola, F.; Soyer, H.; Ballard, A.J.; Banino, A.; Denil, M.; Goroshin, R.; Sifre, L.; Kavukcuoglu, K.; et al. Learning to Navigate in Complex Environments. arXiv 2016, arXiv:1611.03673. [Google Scholar]
  184. Lin, J.; Yang, X.; Zheng, P.; Cheng, H. End-to-End Decentralized Multi-Robot Navigation in Unknown Complex Environments via Deep Reinforcement Learning. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; pp. 2493–2500. [Google Scholar] [CrossRef]
  185. Bouton, M.; Nakhaei, A.; Fujimura, K.; Kochenderfer, M.J. Safe Reinforcement Learning with Scene Decomposition for Navigating Complex Urban Environments. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1469–1476. [Google Scholar] [CrossRef]
  186. Zhang, J.; Yu, Z.; Mao, S.; Periaswamy, S.C.G.; Patton, J.; Xia, X. IADRL: Imitation Augmented Deep Reinforcement Learning Enabled UGV-UAV Coalition for Tasking in Complex Environments. IEEE Access 2020, 8, 102335–102347. [Google Scholar] [CrossRef]
  187. Lauzon, M.; Rabbath, C.-A.; Gagnon, E. UAV Autonomy for Complex Environments. Unmanned Syst. Technol. VIII 2006, 6230, 184–195. [Google Scholar] [CrossRef]
  188. Shen, S.; Mulgaonkar, Y.; Michael, N.; Kumar, V. Vision-Based State Estimation for Autonomous Rotorcraft MAVs in Complex Environments. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1758–1764. [Google Scholar] [CrossRef]
  189. Kim, B.; Azhari, M.B.; Park, J.; Shim, D.H. An Autonomous UAV System Based on Adaptive LiDAR Inertial Odometry for Practical Exploration in Complex Environments. J. Field Robot. 2024, 41, 669–698. [Google Scholar] [CrossRef]
  190. Kuutti, S.; Fallah, S.; Katsaros, K.; Dianati, M.; Mccullough, F.; Mouzakitis, A. A Survey of the State-of-the-Art Localization Techniques and Their Potentials for Autonomous Vehicle Applications. IEEE Internet Things J. 2018, 5, 829–846. [Google Scholar] [CrossRef]
  191. Johnson, C. Readiness of the Road Network for Connected and Autonomous Vehicles; RAC Foundation: London, UK, 2017; pp. 1–42. [Google Scholar]
  192. Bruno, D.R.; Sales, D.O.; Amaro, J.; Osorio, F.S. Analysis and Fusion of 2D and 3D Images Applied for Detection and Recognition of Traffic Signs Using a New Method of Features Extraction in Conjunction with Deep Learning. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar] [CrossRef]
  193. Liu, Y.; Tight, M.; Sun, Q.; Kang, R. A Systematic Review: Road Infrastructure Requirement for Connected and Autonomous Vehicles (CAVs). J. Phys. Conf. Ser. 2019, 1187, 042073. [Google Scholar] [CrossRef]
  194. Tengilimoglu, O.; Carsten, O.; Wadud, Z. Implications of Automated Vehicles for Physical Road Environment: A Comprehensive Review. Transp. Res. Part E Logist. Transp. Rev. 2023, 169, 102989. [Google Scholar] [CrossRef]
  195. Lawson, S. Roads That Cars Can Read Report III: Tackling the Transition to Automated Vehicles. 2018. Available online: http://resources.irap.org/Report/2018_05_30_Roads (accessed on 15 December 2019).
  196. Ambrosius, E. Autonomous Driving and Road Markings. IRF & UNECE ITS Event "Governance and Infrastructure for Smart and Autonomous Mobility". 2018, pp. 1–17. Available online: https://unece.org/fileadmin/DAM/trans/doc/2018/wp29grva/s1p5._Eva_Ambrosius.pdf (accessed on 15 December 2019).
  197. Xing, Y.; Lv, C.; Chen, L.; Wang, H.; Wang, H.; Cao, D.; Velenis, E.; Wang, F.Y. Advances in Vision-Based Lane Detection: Algorithms, Integration, Assessment, and Perspectives on ACP-Based Parallel Vision. IEEE/CAA J. Autom. Sin. 2018, 5, 645–661. [Google Scholar] [CrossRef]
  198. Dong, Y.; Patil, S.; van Arem, B.; Farah, H. A Hybrid Spatial–Temporal Deep Learning Architecture for Lane Detection. Comput. Civ. Infrastruct. Eng. 2023, 38, 67–86. [Google Scholar] [CrossRef]
  199. Zhang, Y.; Lu, Z.; Zhang, X.; Xue, J.H.; Liao, Q. Deep Learning in Lane Marking Detection: A Survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 5976–5992. [Google Scholar] [CrossRef]
  200. Wang, B.; Liao, Z.; Guo, S. Adaptive Curve Passing Control in Autonomous Vehicles with Integrated Dynamics and Camera-Based Radius Estimation. Vehicles 2024, 6, 1648–1660. [Google Scholar] [CrossRef]
  201. Sanusi, F.; Choi, J.; Kim, Y.H.; Moses, R. Development of a Knowledge Base for Multiyear Infrastructure Planning for Connected and Automated Vehicles. J. Transp. Eng. Part A Syst. 2022, 148, 03122001. [Google Scholar] [CrossRef]
  202. Khan, S.M.; Chowdhury, M.; Morris, E.A.; Deka, L. Synergizing Roadway Infrastructure Investment with Digital Infrastructure for Infrastructure-Based Connected Vehicle Applications: Review of Current Status and Future Directions. J. Infrastruct. Syst. 2019, 25, 03119001. [Google Scholar] [CrossRef]
  203. Luu, Q.; Nguyen, T.M.; Zheng, N.; Vu, H.L. Digital Infrastructure for Connected and Automated Vehicles. arXiv 2023, arXiv:2401.08613. [Google Scholar]
  204. Gomes Correia, M.; Ferreira, A. Road Asset Management and the Vehicles of the Future: An Overview, Opportunities, and Challenges. Int. J. Intell. Transp. Syst. Res. 2023, 21, 376–393. [Google Scholar] [CrossRef]
  205. Tang, Z.; He, J.; Flanagan, S.K.; Procter, P.; Cheng, L. Cooperative Connected Smart Road Infrastructure and Autonomous Vehicles for Safe Driving. In Proceedings of the 2021 IEEE 29th International Conference on Network Protocols (ICNP), Dallas, TX, USA, 1–5 November 2021; pp. 1–6. [Google Scholar] [CrossRef]
  206. Labi, S.; Saeed, T.U.; Sinha, K.C. Design and Management of Highway Infrastructure to Accommodate CAVs; Purdue University: West Lafayette, IN, USA, 2023; ISBN 3551747105. [Google Scholar]
  207. Sobanjo, J.O. Civil Infrastructure Management Models for the Connected and Automated Vehicles Technology. Infrastructures 2019, 4, 49. [Google Scholar] [CrossRef]
  208. Saeed, T.U.; Alabi, B.N.T.; Labi, S. Preparing Road Infrastructure to Accommodate Connected and Automated Vehicles: System-Level Perspective. J. Infrastruct. Syst. 2021, 27, 2–4. [Google Scholar] [CrossRef]
  209. Ran, B.; Cheng, Y.; Li, S.; Li, H.; Parker, S. Classification of Roadway Infrastructure and Collaborative Automated Driving System. SAE Int. J. Connect. Autom. Veh. 2023, 6, 387–395. [Google Scholar] [CrossRef]
  210. Feng, Y.; Chen, Y.; Zhang, J.; Tian, C.; Ren, R.; Han, T.; Proctor, R.W. Human-Centred Design of Next Generation Transportation Infrastructure with Connected and Automated Vehicles: A System-of-Systems Perspective. Theor. Issues Ergon. Sci. 2024, 25, 287–315. [Google Scholar] [CrossRef]
  211. Rios-Torres, J.; Malikopoulos, A.A. Automated and Cooperative Vehicle Merging at Highway On-Ramps. IEEE Trans. Intell. Transp. Syst. 2017, 18, 780–789. [Google Scholar] [CrossRef]
  212. Ghanipoor Machiani, S.; Ahmadi, A.; Musial, W.; Katthe, A.; Melendez, B.; Jahangiri, A. Implications of a Narrow Automated Vehicle-Exclusive Lane on Interstate 15 Express Lanes. J. Adv. Transp. 2021, 2021, 6617205. [Google Scholar] [CrossRef]
  213. Li, Y.; Chen, Z.; Yin, Y.; Peeta, S. Deployment of Roadside Units to Overcome Connectivity Gap in Transportation Networks with Mixed Traffic. Transp. Res. Part C Emerg. Technol. 2020, 111, 496–512. [Google Scholar] [CrossRef]
  214. Rios-Torres, J.; Malikopoulos, A.A. A Survey on the Coordination of Connected and Automated Vehicles at Intersections and Merging at Highway On-Ramps. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1066–1077. [Google Scholar] [CrossRef]
  215. van Geelen, H.; Redant, K. Connected & Autonomous Vehicles and Road Infrastructure—State of Play and Outlook. Transp. Res. Procedia 2023, 72, 1311–1317. [Google Scholar] [CrossRef]
  216. Fu, X.; Huang, J.; Zeng, D.; Huang, Y.; Ding, X.; Paisley, J. Removing Rain from Single Images via a Deep Detail Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1715–1723. [Google Scholar] [CrossRef]
  217. Zhang, H.; Patel, V.M. Density-Aware Single Image De-Raining Using a Multi-Stream Dense Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 695–704. [Google Scholar] [CrossRef]
  218. Xie, L.; Xiang, C.; Yu, Z.; Xu, G.; Yang, Z.; Cai, D.; He, X. PI-RCNN: An Efficient Multi-Sensor 3D Object Detector with Point-Based Attentive Cont-Conv Fusion Module. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 12460–12467. [Google Scholar] [CrossRef]
  219. Liang, M.; Yang, B.; Wang, S.; Urtasun, R. Deep Continuous Fusion for Multi-Sensor 3D Object Detection. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 663–678. [Google Scholar] [CrossRef]
  220. Liu, L.; He, J.; Ren, K.; Xiao, Z.; Hou, Y. A LiDAR–Camera Fusion 3D Object Detection Algorithm. Information 2022, 13, 169. [Google Scholar] [CrossRef]
  221. Kenk, M.A.; Hassaballah, M. DAWN: Vehicle Detection in Adverse Weather Nature Dataset. arXiv 2020, arXiv:2008.05402. [Google Scholar] [CrossRef]
  222. Ancuti, C.; Ancuti, C.O.; Timofte, R.; De Vleeschouwer, C. I-HAZE: A Dehazing Benchmark with Real Hazy and Haze-Free Indoor Images. In Advanced Concepts for Intelligent Vision Systems: Proceedings of the 19th International Conference, ACIVS 2018, Poitiers, France, 24–27 September 2018; Springer International Publishing: Cham, Switzerland, 2018; pp. 620–631. [Google Scholar] [CrossRef]
  223. Zheng, Z.; Ren, W.; Cao, X.; Hu, X.; Wang, T.; Song, F.; Jia, X. Ultra-High-Definition Image Dehazing via Multi-Guided Bilateral Learning. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 16180–16189. [Google Scholar] [CrossRef]
  224. Zhao, S.; Zhang, L.; Huang, S.; Shen, Y.; Zhao, S. Dehazing Evaluation: Real-World Benchmark Datasets, Criteria, and Baselines. IEEE Trans. Image Process. 2020, 29, 6947–6962. [Google Scholar] [CrossRef]
  225. Zhang, X.; Dong, H.; Pan, J.; Zhu, C.; Tai, Y.; Wang, C.; Li, J.; Huang, F.; Wang, F. Learning to Restore Hazy Video: A New Real-World Dataset and A New Method. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9235–9244. [Google Scholar] [CrossRef]
  226. Li, P.; Yun, M.; Tian, J.; Tang, Y.; Wang, G.; Wu, C. Stacked Dense Networks for Single-Image Snow Removal. Neurocomputing 2019, 367, 152–163. [Google Scholar] [CrossRef]
  227. Pitropov, M.; Garcia, D.E.; Rebello, J.; Smart, M.; Wang, C.; Czarnecki, K.; Waslander, S. Canadian Adverse Driving Conditions Dataset. Int. J. Rob. Res. 2021, 40, 681–690. [Google Scholar] [CrossRef]
  228. Kurup, A.M.; Bos, J.P. Winter Adverse Driving Dataset for Autonomy in Inclement Winter Weather. Opt. Eng. 2023, 62, 031207. [Google Scholar] [CrossRef]
Figure 1. SAE-level definition of driving automation [2].
Figure 2. The basic principle of sensors.
Figure 3. Captured images in artificial, non-foggy, and foggy conditions [12].
Figure 4. (a,c) represent the typical water surface; (b,d) demonstrate the effect of sun glint on the water surface [2].
Figure 5. The impact of intense light on the LiDAR data (point cloud colored by intensity) and the RGB camera image (top right inset). Objects are barely detected: four 3D points for the mannequin (cyan box), three points for the reflective targets (magenta box), and ten points for the black vehicle (green box). Thermal camera (top left inset), thermal multiscale retinex transformation (left middle inset), and thermal bio-inspired retina (left bottom inset) [13].
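The multiscale retinex transformation mentioned in the Figure 5 caption is a classical illumination-compensation technique that subtracts a log-domain estimate of the illumination (a Gaussian-blurred copy of the image) from the log of the image itself, averaged over several scales. The snippet below is a minimal, generic sketch of that idea, not the specific thermal pipeline used in [13]; the function name, Gaussian scales, and file path are illustrative assumptions.

```python
import cv2
import numpy as np

def multiscale_retinex(image, sigmas=(15.0, 80.0, 250.0)):
    """Generic multiscale retinex: average of log(image) - log(blurred image)
    over several Gaussian scales. The sigma values are illustrative choices."""
    img = image.astype(np.float64) + 1.0  # offset avoids log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        # ksize=(0, 0) lets OpenCV derive the kernel size from sigma
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)
    # Rescale the result to a displayable 8-bit range
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
    return (msr * 255.0).astype(np.uint8)

# Example usage (path is a placeholder):
# enhanced = multiscale_retinex(cv2.imread("glare_frame.png"))
```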
Figure 6. Views from AVs' cameras in harsh environmental conditions. (Left): camera lens covered in mud. (Middle): image captured by the mud-soiled camera (as seen on the (left)). (Right): camera lens soiled during heavy rain [62].
Figure 7. Urban satellite signal reception. Satellites S2 and S3 have clear LOS reception, whereas S4 is obstructed, and S1 exhibits NLOS propagation. Modified from [160].
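To illustrate how a receiver can de-weight the obstructed (S4) and NLOS (S1) signals depicted in Figure 7, the sketch below applies an elevation mask plus a C/N0-driven variance model when building the weight matrix for a weighted least-squares position fix, in the spirit of the weighting and consistency-checking approaches surveyed in [163]. The mask angle, noise-model constants, and satellite values are illustrative assumptions, not figures taken from the cited work.

```python
import numpy as np

def measurement_weight(elevation_deg, cn0_dbhz,
                       mask_deg=15.0, a=0.01, b=25.0):
    """Return a pseudorange weight, or 0.0 if the satellite is excluded.

    Satellites below the elevation mask or with very weak signals are
    excluded (more likely blocked or NLOS in urban canyons); the rest are
    weighted by an illustrative model: sigma^2 = a + b * 10^(-C/N0 / 10).
    """
    if elevation_deg < mask_deg or cn0_dbhz < 25.0:
        return 0.0
    sigma2 = a + b * 10.0 ** (-cn0_dbhz / 10.0)
    return 1.0 / sigma2

# Hypothetical satellites matching the figure: (id, elevation deg, C/N0 dB-Hz)
sats = [("S1", 22.0, 28.0),   # NLOS: weak signal, small weight
        ("S2", 65.0, 47.0),   # clear LOS, large weight
        ("S3", 50.0, 44.0),
        ("S4", 8.0, 33.0)]    # below the elevation mask, excluded

weights = np.array([measurement_weight(el, cn0) for _, el, cn0 in sats])
W = np.diag(weights)  # weight matrix for a weighted least-squares solution
print(dict(zip([s[0] for s in sats], weights.round(2))))
```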
Table 1. A comparative overview of sensors. "--" shows that the sensor performs moderately well under specific conditions, "✖" indicates poor performance under the given conditions, and "✔" represents optimal performance under the specified conditions.
| Factors | Camera | LiDAR | RADAR | GNSS |
|---|---|---|---|---|
| Velocity | | | | |
| Object Detection | | | | |
| Resolution | | | | |
| Range | | | | |
| Distance Accuracy | | | | |
| Lane Detection | | | | |
| Obstacle Edge Detection | | | | |
| Weather Conditions | | | | |
| Situation Awareness | | | | |
| Cost | Low | High | Moderate | Moderate |
| Processing Time | Fast | Moderate | Fast | Moderate |
| Maintenance Requirements | Low | Moderate | Low | Low |
| Compatibility | High | Moderate | Moderate | High |
| Durability | Moderate | High | High | High |
| Spatial Coverage | Wide | Moderate | Wide | Wide |
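As a purely illustrative complement to Table 1, the sketch below shows how a fusion layer might encode qualitative sensor ratings of this kind as numeric confidences and re-normalize them when driving conditions change. The ratings, condition names, weights, and function names are hypothetical placeholders, not values taken from the review.

```python
# Hypothetical qualitative ratings mapped to numeric confidences (0..1).
RATING = {"good": 1.0, "moderate": 0.5, "poor": 0.1}

# Illustrative per-condition ratings for each sensor; placeholders only.
SENSOR_PROFILE = {
    "camera": {"clear": "good",     "heavy_rain": "poor",     "fog": "poor"},
    "lidar":  {"clear": "good",     "heavy_rain": "moderate", "fog": "poor"},
    "radar":  {"clear": "moderate", "heavy_rain": "good",     "fog": "good"},
    "gnss":   {"clear": "good",     "heavy_rain": "moderate", "fog": "good"},
}

def fusion_weights(condition: str) -> dict:
    """Convert the qualitative ratings for a given condition into
    normalized fusion weights that sum to 1."""
    raw = {s: RATING[profile[condition]] for s, profile in SENSOR_PROFILE.items()}
    total = sum(raw.values())
    return {s: w / total for s, w in raw.items()}

print(fusion_weights("heavy_rain"))
# Radar and GNSS dominate, while the camera is strongly down-weighted.
```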
Table 5. Research projects addressing challenges and advancements in road infrastructure to support AVs.
| Category | References | Contribution | Challenges |
|---|---|---|---|
| Road Infrastructure Upgrades | [193,201,202] | Created long-term plans for upgrading AV infrastructure; established knowledge bases for AV infrastructure needs. | Major investments and upgrades needed; collaboration between agencies and sectors is needed for standardization. |
| Digital Infrastructure and V2X Communication | [202,203,204,205] | Supports V2V/V2I communication to improve safety and efficiency; enables real-time data for better traffic flow and hazard detection; controls traffic signals dynamically to reduce congestion; enhances AV navigation by integrating real-time infrastructure data. | Challenges in ensuring fast, reliable, and secure V2X communication; high costs and complexity in scaling V2X across large networks. |
| Management and Planning Models | [201,206,207,208] | Developed models for infrastructure upgrades, focusing on reliability-based planning; proposed strategies to integrate AVs into future infrastructure. | Infrastructure management needs to be updated for new technologies; advanced tools required to manage future infrastructure complexity. |
| Human-Centered and Cooperative Design | [205,209,210] | Designed frameworks for smooth AV and HDV interaction; ensured infrastructure works for both human drivers and AVs. | Risk of miscommunication between AVs and HDVs needs careful design; infrastructure must be optimized for both human drivers and AVs. |
| Simulation and Testing Environments | [147,205,211,212] | Developed simulation frameworks to test and validate infrastructure support for AVs; simulation improves AV testing in traffic conditions. | Difficulties in creating realistic simulations that reflect real-world conditions; simulation models must capture both AV and human driver behavior accurately. |
| Sensor Integration and Optimization | [205,208,213,214] | Proposed frameworks for sensor integration in infrastructure; emphasized real-time traffic data for safer driving. | Ongoing need to optimize sensor integration and data processing; connectivity gaps can impact real-time traffic monitoring performance. |
| Infrastructure Impact Studies | [11,206,212,215] | Analyzed AV impact on road infrastructure for planning; explored long-term effects of AVs on road use and maintenance. | Mixed traffic during the transition period complicates infrastructure planning; infrastructure must stay adaptable to evolving AV technology, requiring frequent updates. |
Table 6. List of datasets used for AV research and development.
| Dataset | Type of Data | Reference |
|---|---|---|
| Rain12600 | Synthetic | [216] |
| Rain100L | Synthetic | [65,66,67,68,71,72,73,84,90,91,127,128,129,132,134,135] |
| Rain200H | Synthetic | [64,71,127,133] |
| Rain800 | Real | [76,89,90,91,109,127,129,134] |
| Rain12 | Synthetic | [64,66,68,87,109,134] |
| Test100 | Synthetic | [68,70,73,89,132] |
| Test1200 | Synthetic | [70,73,110] |
| Test2800 | Synthetic | [70] |
| RainTrainH | Synthetic | [217] |
| RainTrainL | Synthetic | [217] |
| Rain100H | Synthetic | [65,67,70,71,72,73,84,91,93,107,109,111,127,129,132,134,135,136] |
| KITTI | Real | [76,107,150,218,219,220] |
| Raindrop | Real | [108,133] |
| Cityscapes | Real | [62,70,107,121,128] |
| RID | Real | [70,109] |
| RIS | Real | [70,109] |
| Rain12000 | Synthetic | [85] |
| Rain1400 | Synthetic | [64,71,72,107,109,131,135] |
| DAWN/Rainy | Real | [221] |
| NTURain | Synthetic | [86,109] |
| SPA-Data | Real | [90,127] |
| RESIDE | Synthetic and Real | [76,77,78,97,139,144] |
| I-HAZE | Synthetic | [77,222] |
| O-HAZE | Synthetic | [77] |
| DENSE-HAZE | Synthetic | [77] |
| NH-HAZE | Synthetic | [77,144] |
| HazeRD | Synthetic | [78,144] |
| SOTS | Synthetic | [78,108] |
| 4KID | Synthetic | [223] |
| BeDDE | Real | [224] |
| REVIDE | Synthetic | [225] |
| Snow-100K | Synthetic and Real | [95] |
| SITD | Synthetic | [226] |
| CADC | Real | [227] |
| WADS | Real | [228] |
| CSD | Synthetic | [137] |
| SRRS | Synthetic and Real | [138] |
| AVPolicy | Synthetic | [6] |
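Many of the synthetic de-raining sets in Table 6 (e.g., Rain100L, Rain1400) are distributed as paired rainy/clean images. The sketch below is a minimal PyTorch-style loader for such pairs; the directory layout, folder names, and file extension are assumptions that vary between datasets, and the class name is hypothetical.

```python
from pathlib import Path
from PIL import Image
import torch
from torch.utils.data import Dataset
from torchvision import transforms

class PairedRainDataset(Dataset):
    """Loads (rainy, clean) image pairs from two parallel folders.

    Assumed layout (varies by dataset):
        root/rainy/0001.png ...  and  root/clean/0001.png ...
    """
    def __init__(self, root: str):
        self.rainy = sorted(Path(root, "rainy").glob("*.png"))
        self.clean = sorted(Path(root, "clean").glob("*.png"))
        assert len(self.rainy) == len(self.clean), "unpaired images"
        self.to_tensor = transforms.ToTensor()

    def __len__(self) -> int:
        return len(self.rainy)

    def __getitem__(self, idx: int):
        rainy = self.to_tensor(Image.open(self.rainy[idx]).convert("RGB"))
        clean = self.to_tensor(Image.open(self.clean[idx]).convert("RGB"))
        return rainy, clean

# Example usage (path is a placeholder):
# loader = torch.utils.data.DataLoader(PairedRainDataset("Rain100L/train"),
#                                      batch_size=8, shuffle=True)
```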
