Search Results (94)

Search Parameters:
Keywords = CARLA simulator

23 pages, 8223 KB  
Article
Evaluating Visual eHMI Formats for Pedestrian Crossing Confirmation in Electric Autonomous Vehicles: A Comprehension-Time Study with Simulation and Preliminary Field Validation
by Nuksit Noomwongs, Natchanon Kitpramongsri, Sunhapos Chantranuwathana and Gridsada Phanomchoeng
World Electr. Veh. J. 2025, 16(9), 485; https://doi.org/10.3390/wevj16090485 - 25 Aug 2025
Abstract
Effective communication between electric autonomous vehicles (EAVs) and pedestrians is critical for safety, yet the absence of a driver removes traditional cues such as eye contact or gestures. While external human–machine interfaces (eHMIs) have been proposed, few studies have systematically compared visual formats across demographic groups and validated findings in both simulation and real-world settings. This study addresses this gap by evaluating various eHMI designs using combinations of textual cues (“WALK” and “CROSS”), symbolic indicators (pedestrian and arrow icons), and display colors (white and green). Twenty simulated scenarios were developed in the CARLA simulator, where 100 participants observed an EAV equipped with eHMIs and responded by pressing a button upon understanding the vehicle’s intention. The results showed that green displays facilitated faster comprehension than white, “WALK” was understood more quickly than “CROSS,” and pedestrian symbols outperformed arrows in clarity. The fastest overall comprehension occurred with the green pedestrian symbol paired with the word “WALK.” A subsequent field experiment using a Level 3 autonomous vehicle with a smaller participant group and differing speed/distance conditions provided preliminary support for the consistency of these observed trends. The novelty of this work lies in combining simulation with preliminary field validation, using comprehension time as the primary metric, and comparing results across four age groups to derive evidence-based eHMI design recommendations. These findings offer practical guidance for enhancing pedestrian safety, comprehension, and trust in EAV–pedestrian interactions.

25 pages, 4810 KB  
Review
Deep Reinforcement and IL for Autonomous Driving: A Review in the CARLA Simulation Environment
by Piotr Czechowski, Bartosz Kawa, Mustafa Sakhai and Maciej Wielgosz
Appl. Sci. 2025, 15(16), 8972; https://doi.org/10.3390/app15168972 - 14 Aug 2025
Abstract
Autonomous driving is a complex and fast-evolving domain at the intersection of robotics, machine learning, and control systems. This paper provides a systematic review of recent developments in reinforcement learning (RL) and imitation learning (IL) approaches for autonomous vehicle control, with a dedicated focus on the CARLA simulator, an open-source, high-fidelity platform that has become a standard for learning-based autonomous vehicle (AV) research. We analyze RL-based and IL-based studies, extracting and comparing their formulations of state, action, and reward spaces. Special attention is given to the design of reward functions, control architectures, and integration pipelines. Comparative graphs and diagrams illustrate performance trade-offs. We further highlight gaps in generalization to real-world driving scenarios, robustness under dynamic environments, and scalability of agent architectures. Despite rapid progress, existing autonomous driving systems exhibit significant limitations. For instance, studies show that end-to-end reinforcement learning (RL) models can suffer from performance degradation of up to 35% when exposed to unseen weather or town conditions, and imitation learning (IL) agents trained solely on expert demonstrations exhibit up to 40% higher collision rates in novel environments. Furthermore, reward misspecification remains a critical issue—over 20% of reported failures in simulated environments stem from poorly calibrated reward signals. Generalization gaps, especially in RL, also manifest in task-specific overfitting, with agents failing up to 60% of the time when faced with dynamic obstacles not encountered during training. These persistent shortcomings underscore the need for more robust and sample-efficient learning strategies. Finally, we discuss hybrid paradigms that integrate IL and RL, such as Generative Adversarial IL, and propose future research directions.
(This article belongs to the Special Issue Design and Applications of Real-Time Embedded Systems)
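Reward-function design, which this review identifies as a recurring weak point, can be illustrated with a toy shaped reward for a CARLA-style lane-keeping agent. All term names and weights below are illustrative assumptions, not values taken from any surveyed paper:

```python
def lane_keeping_reward(speed, target_speed, lateral_dev, collided,
                        w_speed=1.0, w_dev=0.5, crash_penalty=100.0):
    """Toy shaped reward for a lane-keeping agent.

    speed/target_speed in m/s, lateral_dev in metres from the lane centre.
    All weights are illustrative assumptions, not values from the review.
    """
    if collided:
        return -crash_penalty                 # terminal penalty dominates
    # Reward closeness to the target speed, clipped so the term stays in [0, 1].
    speed_term = 1.0 - min(abs(speed - target_speed) / target_speed, 1.0)
    dev_term = -lateral_dev                   # linear penalty for drifting
    return w_speed * speed_term + w_dev * dev_term
```

Shrinking `crash_penalty` relative to the speed term is a minimal example of the reward misspecification failures the review quantifies: the agent can then profit from driving fast even at the cost of occasional collisions.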

22 pages, 1961 KB  
Article
Research and Quantitative Analysis on Dynamic Risk Assessment of Intelligent Connected Vehicles
by Kailong Li, Feng Zhang, Min Li and Li Wang
World Electr. Veh. J. 2025, 16(8), 465; https://doi.org/10.3390/wevj16080465 - 14 Aug 2025
Abstract
Ensuring dynamic risk management for intelligent connected vehicles (ICVs) in complex urban environments is critical as autonomous driving technology advances. This study presents three key contributions: (1) a comprehensive risk indicator system, constructed using entropy-based weighting, extracts 13-dimensional data on abnormal behaviors (e.g., speed, acceleration, position) to enhance safety and efficiency; (2) a multidimensional risk quantification method, simulated under single-vehicle and platooning modes on a CARLA-SUMO co-simulation platform, achieved >98% accuracy; (3) a cloud takeover strategy for high-level autonomous vehicles, directly linking risk assessment to real-time control. Analysis of 56,117 risk data points shows a 32% reduction in safety risks during simulations. These contributions provide methodological innovations and substantial data support for ICV field testing.
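The entropy-based weighting named in this abstract is a standard technique for deriving indicator weights from data dispersion; a generic implementation is sketched below. The normalization details are assumptions — the paper's exact indicator preprocessing is not reproduced here:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (n_samples, n_indicators) matrix of
    non-negative risk indicators: indicators whose values vary more across
    samples carry more information and receive larger weights."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)           # column-wise proportions
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(n)        # per-indicator entropy in [0, 1]
    d = 1.0 - e                                    # divergence degree
    return d / d.sum()                             # weights summing to 1
```

A constant indicator column has maximal entropy and therefore weight zero, which is why the method is popular for pruning uninformative risk indicators.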

21 pages, 10005 KB  
Article
Improved Genetic Algorithm-Based Path Planning for Multi-Vehicle Pickup in Smart Transportation
by Zeyu Liu, Chengyu Zhou, Junxiang Li, Chenggang Wang and Pengnian Zhang
Smart Cities 2025, 8(4), 136; https://doi.org/10.3390/smartcities8040136 - 14 Aug 2025
Abstract
With the rapid development of intelligent transportation systems and online ride-hailing platforms, the demand for promptly responding to passenger requests while minimizing vehicle idling and travel costs has grown substantially. This paper addresses the challenges of suboptimal vehicle path planning and partially connected pickup stations by formulating the task as a Capacitated Vehicle Routing Problem (CVRP). We propose an Improved Genetic Algorithm (IGA)-based path planning model designed to minimize total travel distance while respecting vehicle capacity constraints. To handle scenarios where certain pickup points are not directly connected, we integrate graph-theoretic techniques to ensure route continuity. The proposed model incorporates a multi-objective fitness function, a rank-based selection strategy with adjusted weights, and Dijkstra-based path estimation to enhance convergence speed and global optimization performance. Experimental evaluations on four benchmark maps from the Carla simulation platform demonstrate that the proposed approach can rapidly generate optimized multi-vehicle path planning solutions and effectively coordinate pickup tasks, achieving significant improvements in both route quality and computational efficiency compared to traditional methods.
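The rank-based selection strategy this abstract mentions can be sketched for a simplified single-vehicle routing case. The plain total-distance objective and the omission of capacity constraints and Dijkstra-based estimation are my simplifications, not the paper's method:

```python
import random

def route_length(route, dist):
    """Total length of an open route over a distance matrix."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def rank_select(pop, dist):
    """Rank-based selection: shorter routes get linearly larger weights."""
    ranked = sorted(pop, key=lambda r: route_length(r, dist))
    weights = [len(ranked) - i for i in range(len(ranked))]
    return random.choices(ranked, weights=weights, k=2)

def order_crossover(p1, p2):
    """OX crossover: copy a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    mid = p1[i:j]
    rest = [g for g in p2 if g not in mid]
    return rest[:i] + mid + rest[i:]

def evolve(dist, pop_size=30, gens=100, mut=0.2):
    genes = list(range(len(dist)))
    pop = [random.sample(genes, len(genes)) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rank_select(pop, dist)
            child = order_crossover(a, b)
            if random.random() < mut:          # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda r: route_length(r, dist))
```

Rank-based weights keep selection pressure bounded when route lengths differ by orders of magnitude, which is one motivation for preferring them over raw fitness-proportional selection.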

17 pages, 3359 KB  
Article
Automated Generation of Test Scenarios for Autonomous Driving Using LLMs
by Aaron Agyapong Danso and Ulrich Büker
Electronics 2025, 14(16), 3177; https://doi.org/10.3390/electronics14163177 - 10 Aug 2025
Abstract
This paper introduces an approach that leverages large language models (LLMs) to convert detailed descriptions of an Operational Design Domain (ODD) into realistic, executable simulation scenarios for testing autonomous vehicles. The method combines model-based and data-driven techniques to decompose ODDs into three key components: environmental, scenery, and dynamic elements. It then applies prompt engineering to generate ScenarioRunner scripts compatible with CARLA. The model-based component guides the LLM using structured prompts and a “Tree of Thoughts” strategy to outline the scenario, while a data-driven refinement process, drawing inspiration from red teaming, enhances the accuracy and robustness of the generated scripts over time. Experimental results show that while static components, such as weather and road layouts, are well captured, dynamic elements like vehicle and pedestrian behavior require further refinement. Overall, this approach not only reduces the manual effort involved in creating simulation scenarios but also identifies key challenges and opportunities for advancing safer and more adaptive autonomous driving systems.
(This article belongs to the Special Issue Autonomous and Connected Vehicles)

4 pages, 976 KB  
Proceeding Paper
Developing a Risk Recognition System Based on a Large Language Model for Autonomous Driving
by Donggyu Min and Dong-Kyu Kim
Eng. Proc. 2025, 102(1), 7; https://doi.org/10.3390/engproc2025102007 - 29 Jul 2025
Abstract
Autonomous driving systems have the potential to reduce traffic accidents dramatically; however, conventional modules often struggle to accurately detect risks in complex environments. This study presents a novel risk recognition system that integrates the reasoning capabilities of a large language model (LLM), specifically GPT-4, with traffic engineering domain knowledge. By incorporating surrogate safety measures such as time-to-collision (TTC) alongside traditional sensor and image data, our approach enhances the vehicle’s ability to interpret and react to potentially dangerous situations. Utilizing the realistic 3D simulation environment of CARLA, the proposed framework extracts comprehensive data—including object identification, distance, TTC, and vehicle dynamics—and reformulates this information into natural language inputs for GPT-4. The LLM then provides risk assessments with detailed justifications, guiding the autonomous vehicle to execute appropriate control commands. The experimental results demonstrate that the LLM-based module outperforms conventional systems by maintaining safer distances, achieving more stable TTC values, and delivering smoother acceleration control during dangerous scenarios. This fusion of LLM reasoning with traffic engineering principles not only improves the reliability of risk recognition but also lays a robust foundation for future real-time applications and dataset development in autonomous driving safety.
(This article belongs to the Proceedings of The 2025 Suwon ITS Asia Pacific Forum)
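The surrogate safety measure at the core of this system, time-to-collision, has a simple closed form, and the natural-language reformulation step can be mocked up as below. The prompt wording is hypothetical — the authors' actual prompt format is not reproduced here:

```python
def time_to_collision(gap_m, ego_speed, lead_speed):
    """TTC between the ego and a lead vehicle, in seconds.
    Returns infinity when the ego is not closing the gap."""
    closing = ego_speed - lead_speed        # m/s, positive when approaching
    if closing <= 0:
        return float("inf")
    return gap_m / closing

def risk_prompt(obj, gap_m, ttc):
    """Hypothetical natural-language reformatting of sensor data
    for an LLM risk assessor (illustrative template only)."""
    return (f"A {obj} is {gap_m:.1f} m ahead; "
            f"time-to-collision is {ttc:.1f} s. Rate the risk.")
```

A TTC below roughly 1.5–2 s is a common warning threshold in the traffic-safety literature, which is one reason TTC makes a convenient scalar input for a language model.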

25 pages, 1299 KB  
Article
Quantifying Automotive Lidar System Uncertainty in Adverse Weather: Mathematical Models and Validation
by Behrus Alavi, Thomas Illing, Felician Campean, Paul Spencer and Amr Abdullatif
Appl. Sci. 2025, 15(15), 8191; https://doi.org/10.3390/app15158191 - 23 Jul 2025
Abstract
Lidar technology is a key sensor for autonomous driving due to its precise environmental perception. However, adverse weather and atmospheric conditions involving fog, rain, snow, dust, and smog can impair lidar performance, leading to potential safety risks. This paper introduces a comprehensive methodology to simulate lidar systems under such conditions and validate the results against real-world experiments. Existing empirical models for the extinction and backscattering of laser beams are analyzed, and new models are proposed for dust storms and smog, derived using Mie theory. These models are implemented in the CARLA simulator and evaluated using Robot Operating System 2 (ROS 2). The simulation methodology introduced allowed the authors to set up test experiments replicating real-world conditions, to validate the models against real-world data available in the literature, and to predict the performance of the lidar system in all weather conditions. This approach enables the development of virtual test scenarios for corner cases representing rare weather conditions to improve robustness and safety in autonomous systems.
(This article belongs to the Section Transportation and Future Mobility)
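The extinction models discussed in this abstract build on Beer–Lambert attenuation of the round-trip laser power. A stripped-down version — no backscatter or noise terms, and an illustrative detection threshold of my own choosing — looks like this:

```python
import math

def received_power_ratio(range_m, alpha_per_m):
    """Fraction of the clear-air return power surviving a round trip
    through a homogeneous medium with extinction coefficient alpha
    (Beer-Lambert law; backscatter enhancement is ignored here)."""
    return math.exp(-2.0 * alpha_per_m * range_m)

def max_detection_range(clear_range_m, alpha_per_m, min_ratio=0.01):
    """Largest range (1 m steps) at which the attenuated return stays
    above min_ratio of the clear-air return; illustrative threshold."""
    r = clear_range_m
    while r > 0 and received_power_ratio(r, alpha_per_m) < min_ratio:
        r -= 1.0
    return r
```

Extinction coefficients on the order of 0.01–0.1 per metre correspond to moderate-to-dense fog in the empirical visibility relations such models are usually fitted to.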

31 pages, 28041 KB  
Article
Cyberattack Resilience of Autonomous Vehicle Sensor Systems: Evaluating RGB vs. Dynamic Vision Sensors in CARLA
by Mustafa Sakhai, Kaung Sithu, Min Khant Soe Oke and Maciej Wielgosz
Appl. Sci. 2025, 15(13), 7493; https://doi.org/10.3390/app15137493 - 3 Jul 2025
Abstract
Autonomous vehicles (AVs) rely on a heterogeneous sensor suite of RGB cameras, LiDAR, GPS/IMU, and emerging event-based dynamic vision sensors (DVS) to perceive and navigate complex environments. However, these sensors can be deceived by realistic cyberattacks, undermining safety. In this work, we systematically implement seven attack vectors in the CARLA simulator—salt and pepper noise, event flooding, depth map tampering, LiDAR phantom injection, GPS spoofing, denial of service, and steering bias control—and measure their impact on a state-of-the-art end-to-end driving agent. We then equip each sensor with tailored defenses (e.g., adaptive median filtering for RGB and spatial clustering for DVS) and integrate an unsupervised anomaly detector (EfficientAD from anomalib) trained exclusively on benign data. Our detector achieves clear separation between normal and attacked conditions (mean RGB anomaly scores of 0.00 vs. 0.38; DVS: 0.61 vs. 0.76), yielding over 95% detection accuracy with fewer than 5% false positives. Defense evaluations reveal that GPS spoofing is fully mitigated, whereas RGB- and depth-based attacks still induce 30–45% trajectory drift despite filtering. Notably, our research-focused evaluation of DVS sensors suggests potential intrinsic resilience advantages in high-dynamic-range scenarios, though their asynchronous output necessitates carefully tuned thresholds. These findings underscore the critical role of multi-modal anomaly detection and demonstrate that DVS sensors exhibit greater intrinsic resilience in high-dynamic-range scenarios, suggesting their potential to enhance AV cybersecurity when integrated with conventional sensors.
(This article belongs to the Special Issue Intelligent Autonomous Vehicles: Development and Challenges)
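Two of the ingredients named in this abstract — salt-and-pepper corruption of RGB frames and a median-filter defence — can be sketched in pure NumPy. This is a generic stand-in with arbitrary parameters, not the paper's exact attack model or adaptive filter:

```python
import numpy as np

def salt_and_pepper(img, frac, rng):
    """Corrupt a copy of a uint8 image: roughly frac of the pixels are
    forced to 0 or 255 with equal probability (the attack)."""
    out = img.copy()
    mask = rng.random(img.shape) < frac
    out[mask] = np.where(rng.random(img.shape) < 0.5, 0, 255)[mask]
    return out

def median_filter3(img):
    """3x3 median filter via padded stacking (the defence):
    isolated extreme pixels are replaced by their neighbourhood median."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)
```

A fixed 3x3 median removes sparse impulse noise almost completely, which is why the residual 30–45% trajectory drift the authors report is a notable finding: filtering the pixels does not fully undo the attack's effect on the driving policy.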

15 pages, 4924 KB  
Communication
RGB-to-Infrared Translation Using Ensemble Learning Applied to Driving Scenarios
by Leonardo Ravaglia, Roberto Longo, Kaili Wang, David Van Hamme, Julie Moeyersoms, Ben Stoffelen and Tom De Schepper
J. Imaging 2025, 11(7), 206; https://doi.org/10.3390/jimaging11070206 - 20 Jun 2025
Abstract
Multimodal sensing is essential in order to reach the robustness required of autonomous vehicle perception systems. Infrared (IR) imaging is of particular interest due to its low cost and complementarity with traditional RGB sensors. However, the lack of IR data in many datasets and simulation tools limits the development and validation of sensor fusion algorithms that exploit this complementarity. To address this, we propose an augmentation method that synthesizes realistic IR data from RGB images using gradient-boosting decision trees. We demonstrate that this method is an effective alternative to traditional deep learning methods for image translation such as CNNs and GANs, particularly in data-scarce situations. The proposed approach generates high-quality synthetic IR, i.e., Near-Infrared (NIR) and thermal images from RGB images, enhancing datasets such as MS2, EPFL, and Freiburg. Our synthetic images exhibit good visual quality when evaluated using metrics such as R², PSNR, SSIM, and LPIPS, achieving an R² of 0.98 on the MS2 dataset and a PSNR of 21.3 dB on the Freiburg dataset. We also discuss the application of this method to synthetic RGB images generated by the CARLA simulator for autonomous driving. Our approach provides richer datasets with a particular focus on IR modalities for sensor fusion along with a framework for generating a wider variety of driving scenarios within urban driving datasets, which can help to enhance the robustness of sensor fusion algorithms.
(This article belongs to the Section Computer Vision and Pattern Recognition)

32 pages, 107074 KB  
Article
A Comparative Study of Deep Reinforcement Learning Algorithms for Urban Autonomous Driving: Addressing the Geographic and Regulatory Challenges in CARLA
by Yechan Park, Woomin Jun and Sungjin Lee
Appl. Sci. 2025, 15(12), 6838; https://doi.org/10.3390/app15126838 - 17 Jun 2025
Cited by 2
Abstract
To enable autonomous driving in real-world environments that involve a diverse range of geographic variations and complex traffic regulations, it is essential to investigate Deep Reinforcement Learning (DRL) algorithms capable of policy learning in high-dimensional environments characterized by intricate state–action interactions. In particular, closed-loop experiments, which involve continuous interaction between an agent and its driving environment, serve as a critical framework for improving the practical applicability of DRL algorithms in autonomous driving systems. This study empirically analyzes the capabilities of several representative DRL algorithms—namely DDPG, SAC, TD3, PPO, TQC, and CrossQ—in handling various urban driving scenarios using the CARLA simulator within a closed-loop framework. To evaluate the adaptability of each algorithm to geographical variability and complex traffic laws, scenario-specific reward and penalty functions were carefully designed and incorporated. For a comprehensive performance assessment of the DRL algorithms, we defined several driving performance metrics, including Route Completion, Centerline Deviation Mean, Episode Reward Mean, and Success Rate, which collectively reflect the quality of the driving in terms of its completeness, stability, efficiency, and comfort. Experimental results demonstrate that TQC and SAC, both of which adopt off-policy learning and stochastic policies, achieve superior sample efficiency and learning performance. Notably, the presence of geographically variant features—such as traffic lights, intersections, and roundabouts—and their associated traffic rules within a given town pose significant challenges to driving performance, particularly in terms of Route Completion, Success Rate, and lane-keeping stability. In these challenging scenarios, the TQC algorithm achieved a Route Completion rate of 0.91, substantially outperforming the 0.23 rate observed with DDPG. This performance gap highlights the advantage of approaches like TQC and SAC, which address Q-value overestimation through statistical methods, in improving the robustness and effectiveness of autonomous driving in diverse urban environments.
(This article belongs to the Special Issue Advances in Autonomous Driving and Smart Transportation)
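Two of the driving metrics used in this comparison, Route Completion and Centerline Deviation Mean, reduce to one-line formulas. The definitions below are the common ones and may differ in detail from the paper's implementation:

```python
def route_completion(waypoints_done, waypoints_total):
    """Fraction of the planned route the agent actually covered."""
    return waypoints_done / waypoints_total

def centerline_deviation_mean(lateral_devs):
    """Mean absolute lateral offset from the lane centre, in metres,
    over a sequence of per-step deviation samples."""
    return sum(abs(d) for d in lateral_devs) / len(lateral_devs)
```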

13 pages, 2716 KB  
Article
Analysis of the Influence of Image Resolution in Traffic Lane Detection Using the CARLA Simulation Environment
by Aron Csato, Florin Mariasiu and Gergely Csiki
Vehicles 2025, 7(2), 60; https://doi.org/10.3390/vehicles7020060 - 16 Jun 2025
Abstract
Computer vision is one of the key technologies of advanced driver assistance systems (ADAS), but the incorporation of a vision-based driver assistance system (still) poses a great challenge due to the special characteristics of the algorithms, the neural network architecture, the constraints, and the strict hardware/software requirements that need to be met. The aim of this study is to show the influence of image resolution in traffic lane detection using a virtual dataset from a virtual simulation environment (CARLA) combined with a real dataset (TuSimple), considering four performance parameters: Mean Intersection over Union (mIoU), F1 precision score, Inference time, and processed frames per second (FPS). By using a convolutional neural network (U-Net) specifically designed for image segmentation tasks, the impact of different input image resolutions (512 × 256, 640 × 320, and 1024 × 512) on the efficiency of traffic lane detection and on computational efficiency was analyzed and presented. Results indicate that a resolution of 512 × 256 yields the best trade-off, offering high mIoU and F1 scores while maintaining real-time processing speeds on a standard CPU. A key contribution of this work is the demonstration that combining synthetic and real datasets enhances model performance, especially when real data is limited. The novelty of this study lies in its dual analysis of simulation-based data and image resolution as key factors in training effective lane detection systems. These findings support the use of synthetic environments in training neural networks for autonomous driving applications.
(This article belongs to the Special Issue Intelligent Mobility and Sustainable Automotive Technologies)
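The mIoU metric this study reports can be computed directly from predicted and ground-truth label maps. Below is a minimal NumPy version of the standard definition, not the authors' code:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union for integer label maps of equal shape.
    Classes absent from both prediction and target are skipped so they
    do not drag the mean down."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                      # class unseen in this image
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```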

19 pages, 6852 KB  
Article
Quantitative Analysis of Situation Awareness During Autonomous Vehicle Handover on the Da Vinci Research Kit
by Tamás Levendovics, Dániel A. Drexler, Nikita Ukhrenkov, Árpád Takács and Tamás Haidegger
Sensors 2025, 25(11), 3514; https://doi.org/10.3390/s25113514 - 2 Jun 2025
Abstract
The current trends in the research and development of self-driving technology aim for Level 3+ autonomy, where the vehicle controls both lateral and longitudinal motions of the dynamic driving task, while the driver is permitted to divert their attention, as long as they are able to react properly to a handover request initiated by the vehicle. At this level of autonomy, situation awareness of the human driver has become one of the most important metrics of safety. This paper presents the results of a user study to evaluate handover performance at Level 3 autonomy. The study investigates whether the level of situation awareness during critical handover situations has a direct impact on task performance, with higher situation awareness expected to lead to better outcomes during emergency interventions. The study is performed in a simulated environment, using the CARLA driving simulator and the master console of the da Vinci Surgical System. The test subjects were asked to answer the questions of a questionnaire during the experiment; the answers for those questions and the measured control signals were analyzed to gain further knowledge on the safety of the handover process.
(This article belongs to the Section Sensors and Robotics)

28 pages, 2698 KB  
Article
Comparative Analysis of Machine Learning Methods with Chaotic AdaBoost and Logistic Mapping for Real-Time Sensor Fusion in Autonomous Vehicles: Enhancing Speed and Acceleration Prediction Under Uncertainty
by Mehmet Bilban and Onur İnan
Sensors 2025, 25(11), 3485; https://doi.org/10.3390/s25113485 - 31 May 2025
Abstract
This study presents a novel artificial intelligence-driven architecture for real-time sensor fusion in autonomous vehicles (AVs), leveraging Apache Kafka and MongoDB for synchronous and asynchronous data processing to enhance resilience against sensor failures and dynamic conditions. We introduce Chaotic AdaBoost (CAB), an advanced variant of AdaBoost that integrates a logistic chaotic map into its weight update process, overcoming the limitations of deterministic ensemble methods. CAB is evaluated alongside k-Nearest Neighbors (kNNs), Artificial Neural Networks (ANNs), standard AdaBoost (AB), Gradient Boosting (GB), and Random Forest (RF) for speed and acceleration prediction using CARLA simulator data. CAB achieves a superior 99.3% accuracy (MSE: 0.018 for acceleration, 0.010 for speed; MAE: 0.020 for acceleration, 0.012 for speed; R²: 0.993 for acceleration, 0.997 for speed), a mean Time-To-Collision (TTC) of 3.2 s, and jerk of 0.15 m/s³, outperforming AB (98.5%, MSE: 0.15, TTC: 2.8 s, jerk: 0.22 m/s³), GB (99.1%), ANN (98.2%), RF (97.5%), and kNN (87.0%). This logistic map-enhanced adaptability, reducing MSE by 88% over AB, ensures robust anomaly detection and data fusion under uncertainty, critical for AV safety and comfort. Despite a 20% increase in training time (72 s vs. 60 s for AB), CAB’s integration with Kafka’s high-throughput streaming maintains real-time efficacy, offering a scalable framework that advances operational reliability and passenger experience in autonomous driving.
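The logistic map that drives the Chaotic AdaBoost variant above is a one-liner, and one plausible way to couple its orbit to AdaBoost's sample weights is sketched below. The coupling scheme and the `strength` parameter are my assumptions — the authors' exact update rule is not reproduced here:

```python
def logistic_map(x, r=4.0):
    """One step of the logistic map; chaotic on (0, 1) for r = 4."""
    return r * x * (1.0 - x)

def chaotic_perturb(weights, x0=0.31, strength=0.05):
    """Nudge each sample weight by a value drawn from the logistic-map
    orbit, then renormalize. Illustrative coupling only: the idea is to
    inject bounded, deterministic 'noise' into the boosting weights."""
    x = x0
    out = []
    for w in weights:
        x = logistic_map(x)
        out.append(w * (1.0 + strength * (2.0 * x - 1.0)))  # x mapped to [-1, 1]
    s = sum(out)
    return [w / s for w in out]
```

Because the orbit is deterministic given `x0`, the perturbation is reproducible, which distinguishes chaos-injected variants from simply adding random noise to the weights.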

22 pages, 5002 KB  
Article
Enhancing Autonomous Driving Perception: A Practical Approach to Event-Based Object Detection in CARLA and ROS
by Jingxiang Feng, Peiran Zhao, Haoran Zheng, Jessada Konpang, Adisorn Sirikham and Phuri Kalnaowakul
Vehicles 2025, 7(2), 53; https://doi.org/10.3390/vehicles7020053 - 30 May 2025
Cited by 3 | Correction
Abstract
Robust object detection in autonomous driving is challenged by inherent limitations of conventional frame-based cameras, such as motion blur and limited dynamic range. In contrast, event-based cameras, which operate asynchronously and capture rapid changes with high temporal resolution and expansive dynamic range, offer a promising augmentation. While previous research on event-based object detection has predominantly focused on algorithmic enhancements via advanced preprocessing and network optimizations to improve detection accuracy, the practical engineering and integration challenges of deploying these sensors in real-world systems remain underexplored. To address this gap, our study investigates the integration of event-based cameras as a complementary sensor modality in autonomous driving. We adapted a conventional frame-based detection model (YOLOv8) for event-based inputs by training it on the GEN1 dataset, achieving a mean average precision (mAP) of 70.1%, a significant improvement over previous benchmarks. Additionally, we developed a real-time object detection pipeline optimized for event-based data, integrating it into the CARLA simulation environment and ROS for system prototyping. The model was further refined using transfer learning to better adapt to simulation conditions, and the complete pipeline was validated across diverse simulated scenarios to address practical challenges. These results underscore the feasibility of incorporating event cameras into existing perception systems, paving the way for their broader deployment in autonomous vehicle applications.
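Feeding a frame-based detector such as YOLOv8 with event data requires converting asynchronous events into a dense tensor first. A common two-channel count representation is sketched below; the paper's exact representation may differ:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate (x, y, polarity) events into a 2-channel count image:
    channel 0 counts negative-polarity events, channel 1 positive ones.
    A standard preprocessing step for frame-based detectors."""
    frame = np.zeros((2, height, width), dtype=np.int32)
    for x, y, pol in events:
        frame[1 if pol > 0 else 0, y, x] += 1
    return frame
```

In practice the counts are binned over a fixed time window (e.g. tens of milliseconds) and normalized before being passed to the network; the window length trades temporal resolution against input density.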

13 pages, 436 KB  
Article
AI Training Data Management for Reliable Autonomous Vehicles Using Hashgraph
by Yeonsong Suh, Yoonseo Chung and Younghoon Park
Appl. Sci. 2025, 15(11), 6123; https://doi.org/10.3390/app15116123 - 29 May 2025
Abstract
Autonomous vehicles have attracted considerable attention from researchers and organizations, with artificial intelligence (AI) playing a key role in this technology. For AI models in autonomous vehicles to be reliable, the integrity of the training data is crucial, resulting in the development of various blockchain-based management systems. However, conventional blockchain systems incur significant time delays when processing training data transactions, posing challenges in autonomous vehicle environments that require real-time processing. In this study, we propose a hashgraph-based training data management system for trusted AI. To validate our system, we conducted simulations using the CARLA simulator and compared its performance to a conventional blockchain-based system. The simulation results show that Hedera achieved significantly lower latencies and better scalability than Ethereum, confirming its suitability for secure and efficient AI data verification in autonomous systems.
(This article belongs to the Topic Emerging AI+X Technologies and Applications)
