Search Results (116)

Search Parameters:
Keywords = CARLA simulator

32 pages, 13552 KB  
Article
Closing Sim2Real Gaps: A Versatile Development and Validation Platform for Autonomous Driving Stacks
by J. Felipe Arango, Rodrigo Gutiérrez-Moreno, Pedro A. Revenga, Ángel Llamazares, Elena López-Guillén and Luis M. Bergasa
Sensors 2026, 26(4), 1338; https://doi.org/10.3390/s26041338 - 19 Feb 2026
Viewed by 149
Abstract
The successful transfer of autonomous driving stacks (ADS) from simulation to the real world faces two main challenges: the Reality Gap (RG)—mismatches between simulated and real behaviors—and the Performance Gap (PG)—differences between expected and achieved performance across domains. We propose a Methodology for Closing Reality and Performance Gaps (MCRPG), a structured and iterative approach that jointly reduces RG and PG through parameter tuning, cross-domain metrics, and staged validation. MCRPG comprises three stages—Digital Twin, Parallel Execution, and Real-World—to progressively align ADS behavior and performance. To ground and validate the method, we present an open-source, cost-effective Development and Validation Platform (DVP) that integrates an ROS-based modular ADS with the CARLA simulator and a custom autonomous electric vehicle. We also introduce a two-level metric suite: (i) Reality Alignment via Maximum Normalized Cross-Correlation (MNCC) over multi-modal signals (e.g., ego kinematics, detections), and (ii) Ego-Vehicle Performance covering safety, comfort, and driving efficiency. Experiments in an urban scenario show convergence between simulated and real behavior and increasingly consistent performance across stages. Overall, MCRPG and DVP provide a replicable framework for robust, scalable, and accessible Sim2Real research in autonomous navigation techniques. Full article
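
The Reality Alignment metric above is built on Maximum Normalized Cross-Correlation (MNCC) between simulated and real signal traces. A minimal sketch of that idea for a single 1-D signal — the paper's exact formulation, signal set, and normalization details are not reproduced here, so treat this as an illustrative assumption:

```python
import numpy as np

def mncc(sim: np.ndarray, real: np.ndarray) -> float:
    """Maximum Normalized Cross-Correlation between two 1-D signals.

    Each signal is made zero-mean, the cross-correlation is computed over
    all lags, and the peak is divided by the product of the signal norms.
    A value near 1 means the simulated and real traces align well at some
    lag; a value near 0 means no alignment.
    """
    a = sim - sim.mean()
    b = real - real.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return 0.0
    return float(np.max(np.correlate(a, b, mode="full")) / denom)

# Example: a simulated speed trace vs. a slightly time-shifted "real" copy
t = np.linspace(0, 10, 500)
sim_speed = np.sin(t)
real_speed = np.sin(t - 0.3)  # same behavior, small lag
print(round(mncc(sim_speed, real_speed), 3))
```

Because the peak is taken over all lags, the metric tolerates a constant time offset between domains — useful when simulated and real runs are not perfectly synchronized.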
14 pages, 3826 KB  
Article
Multi-Agent Sensor Fusion Methodology Using Deep Reinforcement Learning: Vehicle Sensors to Localization
by Túlio Oliveira Araújo, Marcio Lobo Netto and João Francisco Justo
Sensors 2026, 26(4), 1105; https://doi.org/10.3390/s26041105 - 8 Feb 2026
Viewed by 366
Abstract
Despite recent major advances in autonomous driving, several challenges remain. Even with modern advanced sensors and processing systems, vehicles are still unable to detect all possible obstacles present in complex urban settings and under diverse environmental conditions. Consequently, numerous studies have investigated artificial intelligence methods to improve vehicle perception capabilities. This paper presents a new methodology using a framework named CarAware, which fuses multiple types of sensor data to predict vehicle positions using Deep Reinforcement Learning (DRL). Unlike traditional DRL applications centered on control, this approach focuses on perception. As a case study, the PPO algorithm was used to train and evaluate the effectiveness of this methodology. Full article
(This article belongs to the Special Issue Cooperative Perception and Control for Autonomous Vehicles)

20 pages, 2671 KB  
Article
Semantic-Aligned Multimodal Vision–Language Framework for Autonomous Driving Decision-Making
by Feng Peng, Shangju She and Zejian Deng
Machines 2026, 14(1), 125; https://doi.org/10.3390/machines14010125 - 21 Jan 2026
Viewed by 333
Abstract
Recent advances in Large Vision–Language Models (LVLMs) have demonstrated strong cross-modal reasoning capabilities, offering new opportunities for decision-making in autonomous driving. However, existing end-to-end approaches still suffer from limited semantic consistency, weak task controllability, and insufficient interpretability. To address these challenges, we propose SemAlign-E2E (Semantic-Aligned End-to-End), a semantic-aligned multimodal LVLM framework that unifies visual, LiDAR, and task-oriented textual inputs through cross-modal attention. This design enables end-to-end reasoning from scene understanding to high-level driving command generation. Beyond producing structured control instructions, the framework also provides natural-language explanations to enhance interpretability. We conduct extensive evaluations on the nuScenes dataset and CARLA simulation platform. Experimental results show that SemAlign-E2E achieves substantial improvements in driving stability, safety, multi-task generalization, and semantic comprehension, consistently outperforming state-of-the-art baselines. Notably, the framework exhibits superior behavioral consistency and risk-aware decision-making in complex traffic scenarios. These findings highlight the potential of LVLM-driven semantic reasoning for autonomous driving and provide a scalable pathway toward future semantic-enhanced end-to-end driving systems. Full article
(This article belongs to the Special Issue Control and Path Planning for Autonomous Vehicles)

26 pages, 6868 KB  
Article
A Novel Human–Machine Shared Control Strategy with Adaptive Authority Allocation Considering Scenario Complexity and Driver Workload
by Lijie Liu, Anning Ni, Linjie Gao, Yutong Zhu and Yi Zhang
Actuators 2026, 15(1), 51; https://doi.org/10.3390/act15010051 - 13 Jan 2026
Viewed by 254
Abstract
Human–machine shared control has been widely adopted to enhance driving performance and facilitate smooth transitions between manual and fully autonomous driving. However, existing authority allocation strategies often neglect real-time assessment of scenario complexity and driver workload. To address this gap, we leverage non-invasive eye-tracking devices and the 3D virtual driving simulator Car Learning to Act (CARLA) to collect multimodal data—including physiological measures and vehicle dynamics—for the real-time classification of scenario complexity and cognitive workload. Feature importance is quantified using the SHAP (SHapley Additive exPlanations) values derived from Random Forest classifiers, enabling robust feature selection. Building upon a Hidden Markov Model (HMM) for workload inference and a Model Predictive Control (MPC) framework, we propose a novel human–machine shared control architecture with adaptive authority allocation. Human-in-the-loop validation experiments under both high- and low-workload conditions demonstrate that the proposed strategy significantly improves driving safety, stability, and overall performance. Notably, under high-workload scenarios, it achieves substantially greater reductions in Time to Collision (TTC) and Time to Lane Crossing (TLC) compared to low-workload conditions. Moreover, the adaptive approach yields lower controller load than alternative authority allocation methods, thereby minimizing human–machine conflict. Full article
(This article belongs to the Section Actuators for Surface Vehicles)

25 pages, 2694 KB  
Article
Minimum Risk Maneuver Strategy for Automated Driving System Under Multiple Conditions of Sensor Failure
by Junjie Tang, Chengxin Yang and Hidekazu Nishimura
Systems 2026, 14(1), 87; https://doi.org/10.3390/systems14010087 - 13 Jan 2026
Viewed by 379
Abstract
To ensure the safety of vehicles and occupants under failures or functional limitations of ego vehicles, a minimum risk maneuver (MRM) has been proposed as a key automated driving system (ADS) function. However, executing an MRM may pose certain potential risks when sensor failures occur. This study proposes an MRM strategy designed to enhance highway-driving safety during MRM execution under multiple sensor-failure conditions. A hazard and operability (HAZOP) analysis, based on an ADS behavior model, is conducted to systematically identify hazards, determine potential hazardous events, and categorize the associated safety risks arising from sensor failures. Within the proposed strategy, virtual objects are generated to account for potential hazards and support risk assessments. Adaptive MRM behavior is determined in real time by analyzing surrounding objects and evaluating time-to-collision and time headway. The strategy is verified using a MATLAB–CARLA co-simulation environment across three representative highway scenarios with combined sensor failures. The results demonstrate that the proposed MRM strategy can mitigate collision risk in hazardous scenarios while effectively leveraging the remaining functional sensors to guide the ego vehicle toward an appropriate minimum risk condition during MRM execution. Full article
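
The two risk indicators used in the abstract above, time-to-collision (TTC) and time headway (THW), have standard definitions that can be sketched as follows; the paper's actual thresholds and implementation are not given here, so the example values are assumptions:

```python
def time_to_collision(gap_m: float, ego_speed: float, lead_speed: float) -> float:
    """TTC: seconds until the ego vehicle reaches the lead vehicle,
    assuming both keep their current speeds. Infinite if not closing."""
    closing = ego_speed - lead_speed
    return gap_m / closing if closing > 0 else float("inf")

def time_headway(gap_m: float, ego_speed: float) -> float:
    """THW: seconds for the ego vehicle to traverse the current gap."""
    return gap_m / ego_speed if ego_speed > 0 else float("inf")

# Ego at 30 m/s, lead at 20 m/s, 50 m apart:
print(time_to_collision(50.0, 30.0, 20.0))  # 5.0
print(time_headway(50.0, 30.0))             # ~1.67 s
```

TTC captures the urgency of an active conflict (it is infinite when the gap is opening), while THW penalizes close following even at equal speeds — which is why MRM-style strategies typically evaluate both.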
(This article belongs to the Special Issue Application of the Safe System Approach to Transportation)

27 pages, 914 KB  
Article
Reinforcement Learning for Lane-Changing Decision Making in Autonomous Vehicles: A Survey
by Ammar Khaleel and Áron Ballagi
Smart Cities 2026, 9(1), 9; https://doi.org/10.3390/smartcities9010009 - 3 Jan 2026
Viewed by 899
Abstract
Autonomous lane-changing is one of the most critical and complex tasks in automated driving. Recent progress in reinforcement learning (RL) has shown strong potential to help autonomous vehicles (AVs) make safe and flexible lane-change decisions in real time under uncertain traffic conditions. In the current studies, there is a lack of a common structure that links RL algorithms, simulation tools, and performance evaluation methods. This paper presents a detailed examination of RL-based lane-changing systems in AVs, tracing their development from early rule-based models to modern learning-based approaches. It introduces a clear classification of lane-changing types—discretionary, mandatory, cooperative, and emergency—and connects each to the most suitable RL methods, including value-based, policy-based, actor–critic, model-based, and hybrid algorithms. Each method is examined for its performance, safety, and computational demands. Furthermore, it reviews major simulation environments, such as SUMO, CARLA, and SMARTS, and summarizes key evaluation measures related to safety, efficiency, comfort, and real-time performance. The comparison shows open research challenges, including model adaptation, safety assurance, and transfer from simulation to real-world driving. Finally, it outlines promising directions for future work, such as cooperative decision-making, safe and explainable RL, and lightweight models for real-time use. This review provides a clear foundation and practical guide for developing reliable and understandable RL-based lane-changing systems for future intelligent transportation. Full article
(This article belongs to the Section Smart Urban Mobility, Transport, and Logistics)

19 pages, 1680 KB  
Article
A Hybrid Decision-Making Framework for Autonomous Vehicles in Urban Environments Based on Multi-Agent Reinforcement Learning with Explainable AI
by Ameni Ellouze, Mohamed Karray and Mohamed Ksantini
Vehicles 2026, 8(1), 8; https://doi.org/10.3390/vehicles8010008 - 2 Jan 2026
Viewed by 866
Abstract
Autonomous vehicles (AVs) are expected to operate safely and efficiently in complex urban environments characterized by dynamic and uncertain elements such as pedestrians, cyclists and adverse weather. Although current neural network-based decision-making algorithms, fuzzy logic and reinforcement learning have shown promise, they often struggle to handle ambiguous situations, such as partially hidden road signs or unpredictable human behavior. This paper proposes a new hybrid decision-making framework combining multi-agent reinforcement learning (MARL) and explainable artificial intelligence (XAI) to improve robustness, adaptability and transparency. Each agent of the MARL architecture is specialized in a specific sub-task (e.g., obstacle avoidance, trajectory planning, intention prediction), enabling modular and cooperative learning. XAI techniques are integrated to provide interpretable rationales for decisions, facilitating human understanding and regulatory compliance. The proposed system will be validated using the CARLA simulator, combined with reference data, to demonstrate improved performance in safety-critical and ambiguous driving scenarios. Full article
(This article belongs to the Special Issue AI-Empowered Assisted and Autonomous Driving)

29 pages, 39850 KB  
Article
MTP-STG: Spatio-Temporal Graph Transformer Networks for Multiple Future Trajectory Prediction in Crowds
by Zichen Zhang, Xingwen Cao, Yi Song, Wenjie Gong, Liyu Zhang, Yanzhen Zhang, Yingxiang Li and Haoran Zhang
Sensors 2025, 25(24), 7466; https://doi.org/10.3390/s25247466 - 8 Dec 2025
Viewed by 700
Abstract
Predicting multiple future pedestrian trajectories is a challenging task for real-world applications like autonomous driving and robotic motion planning. Existing methods primarily focus on immediate spatial interactions among pedestrians, often overlooking the impact of distant spatial environments on their future trajectory choices. Additionally, aligning trajectory smoothness and temporal consistency remains challenging. We propose a multimodal trajectory prediction model that utilizes spatio-temporal graphical attention networks for crowd scenarios. Our method begins by generating simulated multiview pedestrian trajectory data using CARLA. It then combines original and selected multiview trajectories using a convex function to create augmented adversarial trajectories. This is followed by encoding pedestrian historical data with a multitarget detection and tracking algorithm. Using the augmented trajectories and encoded historical information as inputs, our spatio-temporal graph Transformer models scaled spatial interactions among pedestrians. We also integrate a trajectory smoothing method with a Memory Storage Module to predict multiple future paths based on historical crowd movement patterns. Extensive experiments demonstrate that our proposed MTP-STG model achieves state-of-the-art performance in predicting multiple future trajectories in crowds. Full article
(This article belongs to the Section Remote Sensors)

15 pages, 1071 KB  
Article
Analysis of Automotive Lidar Corner Cases Under Adverse Weather Conditions
by Behrus Alavi, Thomas Illing, Felician Campean, Paul Spencer and Amr Abdullatif
Electronics 2025, 14(23), 4695; https://doi.org/10.3390/electronics14234695 - 28 Nov 2025
Viewed by 909
Abstract
The validation of sensor systems, particularly lidar, is crucial in advancing autonomous vehicle technology. Despite their robust perception capabilities, certain weather conditions and object characteristics can challenge detection performance, leading to potential safety concerns. This study investigates corner cases where object detection may fail due to physical constraints. Utilizing virtual testing environments like Carla and ROS2, simulations analyze how reflection characteristics affect detectability by implementing weather models into a real-time simulation. Results reveal challenges in detecting black objects compared to white ones, particularly in adverse weather conditions. A time-sensitive corner case was analyzed, revealing that while bad weather and wet roads restrict the safe driving speed range, complete deactivation of the driving assistant at certain speeds may be unnecessary despite current manufacturer practices. The study underscores the importance of considering such factors in future safety protocols to mitigate accidents and ensure reliable autonomous driving systems. Full article
(This article belongs to the Special Issue Autonomous Vehicles: Sensing, Mapping, and Positioning)

26 pages, 8517 KB  
Article
Seeing the City Live: Bridging Edge Vehicle Perception and Cloud Digital Twins to Empower Smart Cities
by Hafsa Iqbal, Jaime Godoy, Beatriz Martin, Abdulla Al-kaff and Fernando Garcia
Smart Cities 2025, 8(6), 197; https://doi.org/10.3390/smartcities8060197 - 25 Nov 2025
Viewed by 1115
Abstract
This paper presents a framework that integrates a real-time onboard (ego-vehicle) perception module with edge processing capabilities and a cloud-based digital twin for intelligent transportation systems (ITSs) in smart city applications. The proposed system combines onboard 3D object detection and tracking with low-latency edge-to-cloud communication, achieving an average end-to-end latency below 0.02 s at a 10 Hz update frequency. Experiments conducted on a real autonomous vehicle platform demonstrate a mean Average Precision (mAP@40) of 83.5% for the 3D perception module. The proposed system enables real-time traffic visualization while supporting scalable data management by reducing communication overhead. Future work will extend the system to multi-vehicle deployments and incorporate additional environmental semantics such as traffic signal states, road conditions, and predictive Artificial Intelligence (AI) models to enhance decision support in dynamic urban environments. Full article

23 pages, 4771 KB  
Article
Validating DVS Application in Autonomous Driving with Various AEB Scenarios in CARLA Simulator
by Jingxiang Feng, Peiran Zhao, Jessada Konpang, Adisorn Sirikham, Haoran Zheng, Phuri Kalnaowakul and Jia Wang
World Electr. Veh. J. 2025, 16(11), 634; https://doi.org/10.3390/wevj16110634 - 20 Nov 2025
Viewed by 894
Abstract
Predicting potential collisions with leading vehicles is a fundamental capability of autonomous and assisted driving systems. In particular, automatic emergency braking (AEB) demands reaction times on the order of microseconds. A key limitation of existing approaches lies in their update rate, which is constrained by the sampling speed of conventional sensors. Event-based Dynamic Vision Sensors (DVSs), with their microsecond temporal resolution and high dynamic range, offer a promising alternative to frame-based cameras in challenging driving environments. In this work, we investigate the integration of DVS into autonomous driving pipelines, focusing specifically on AEB scenarios. Building on our earlier work, where a YOLO-based detection model was trained on real-world DVS data, we extend the approach to CARLA’s simulated DVS environment. We publish a CARLA-compatible 2-channel DVS dataset aligned with our detection model, bridging the gap between real-world recordings and simulation. Through a series of simulated AEB scenarios, we demonstrate how DVS enables earlier and more reliable detection compared to RGB cameras, resulting in improved braking performance. Full article

9 pages, 1557 KB  
Proceeding Paper
XAI-Interpreter: A Dual-Attention Framework for Transparent and Explainable Decision-Making in Autonomous Vehicles
by Candaş Ünal, Pelin Öksüz, Tolga Bodrumlu and Musa Yazar
Eng. Proc. 2025, 118(1), 84; https://doi.org/10.3390/ECSA-12-26531 - 7 Nov 2025
Viewed by 230
Abstract
Autonomous vehicles need to explain their actions to improve reliability and build user trust. This study focuses on enhancing the transparency and explainability of the decision-making process in such systems. A module named XAI-Interpreter is developed to identify and highlight the most influential factors in driving decisions. The module combines two complementary methods: Learned Attention Weights (LAW) and Object-Level Attention (OLA). In the LAW method, images captured from the ego vehicle’s front and rear cameras in the CARLA simulation environment are processed using the Faster R-CNN model for object detection. GRAD-CAM is then applied to generate visual attention heatmaps, showing which regions and objects in the images affect the model’s decisions. The OLA method analyzes nearby dynamic objects, such as other vehicles, based on their size, speed, position, and orientation relative to the ego vehicle. Each object receives a normalized attention score between 0 and 1, indicating its influence on the vehicle’s behavior. These scores can be used in downstream modules such as planning, control, and safety. The module is currently tested in simulation. Future work will involve deploying the system on real vehicles. By helping the vehicle focus on the most critical elements in its surroundings, the Explainable Artificial Intelligence (XAI)-Interpreter supports more transparent and explainable autonomous driving systems. Full article
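
The Object-Level Attention (OLA) scores described above are normalized to [0, 1] per object. A toy sketch of such scoring — the distance/speed heuristic and the feature weighting here are hypothetical, not the paper's actual OLA function:

```python
import math

def attention_scores(objects: list[dict]) -> list[float]:
    """Score each nearby dynamic object with a simple heuristic (closer
    and faster objects matter more), then min-max normalize to [0, 1].
    The raw score is an illustrative assumption; the paper combines size,
    speed, position, and orientation relative to the ego vehicle."""
    raw = []
    for obj in objects:
        dist = math.hypot(obj["x"], obj["y"])   # distance to ego vehicle
        raw.append(obj["speed"] / (1.0 + dist)) # hypothetical raw score
    lo, hi = min(raw), max(raw)
    if hi == lo:
        return [1.0 for _ in raw]               # all objects equally salient
    return [(r - lo) / (hi - lo) for r in raw]

nearby = [
    {"x": 5.0, "y": 0.0, "speed": 10.0},   # close and fast -> high attention
    {"x": 40.0, "y": 3.0, "speed": 8.0},   # far away -> low attention
    {"x": 12.0, "y": -2.0, "speed": 6.0},
]
print(attention_scores(nearby))
```

Normalized scores like these are what downstream planning, control, and safety modules can consume directly, since every object's influence is expressed on a common [0, 1] scale.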

15 pages, 7933 KB  
Article
A Framework for Testing and Evaluation of Automated Valet Parking Using OnSite and Unity3D Platforms
by Ouchan Chen, Lei Chen, Junru Yang, Hao Shi, Lin Xu, Haoran Li, Weike Lu and Guojing Hu
Machines 2025, 13(11), 1033; https://doi.org/10.3390/machines13111033 - 7 Nov 2025
Viewed by 788
Abstract
Automated valet parking (AVP) is a key component of autonomous driving systems. Its functionality and reliability need to be thoroughly tested before road application. Current testing technologies are limited by insufficient scenario coverage and lack of comprehensive evaluation indices. This study proposes an AVP testing and evaluation framework using OnSite (Open Naturalistic Simulation and Testing Environment) and Unity3D platforms. Through scenario construction based on field-collected data and model reconstruction, a testing scenario library is established, complying with industry standards. A simplified kinematic model, balancing simulation accuracy and operational efficiency, is applied to describe vehicle motion. A multidimensional evaluation system is developed with completion rate as a primary index and operation performance as a secondary index, which considers both parking efficiency and accuracy. Over 500 AVP algorithms are tested on the OnSite platform, and the testing results are evaluated through the Unity3D platform. The performance of the top 10 algorithms is analyzed. The evaluation platform is compared with CARLA simulation platform and field vehicle testing. This study finds that the framework provides an effective tool for AVP testing and evaluation; a variety of high-level AVP algorithms are developed, but their flexibility in complex dynamic scenarios has limitations. Future research should focus on exploring more sophisticated learning-based algorithms to enhance AVP adaptability and performance in complex dynamic environment. Full article
(This article belongs to the Special Issue Control and Path Planning for Autonomous Vehicles)

9 pages, 1420 KB  
Proceeding Paper
Simulation of Environment Recognition Systems for Autonomous Vehicles in CARLA Simulator
by Dávid Gorza, Gábor Saly and Dániel Csikor
Eng. Proc. 2025, 113(1), 30; https://doi.org/10.3390/engproc2025113030 - 3 Nov 2025
Viewed by 1965
Abstract
As autonomous vehicles move toward deployment, studying their functionality is becoming increasingly important. Detecting the environment in a self-driving vehicle is a very complex issue. The combination of different sensors is essential for safe and reliable operation. Detection enables the vehicle to accurately recognize and track surrounding objects, understand changes in the dynamic environment, and adapt to different situations. Improving environmental sensing and object recognition is essential for the widespread deployment of self-driving vehicles. In addition to real-world tests, simulation environments provide an opportunity to investigate the operation of autonomous vehicles. Simulations are cost-effective methods for examining the processing of information from the vehicle environment and identifying the current limitations and problems of these technologies. In the CARLA simulator environment, object detection is reproduced in realistic traffic situations. Based on the results, the detection performance was analyzed using confusion matrices, F1 scores, accuracy, and coverage metrics. Full article
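
The confusion-matrix metrics mentioned above follow the standard detection-evaluation definitions (coverage is used here in the sense of recall), which can be sketched as:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall (coverage), and F1 from per-class detection counts:
    tp = true positives, fp = false positives, fn = missed detections.
    Zero-count edge cases return 0.0 rather than dividing by zero."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 80 correct detections, 10 false alarms, 20 missed objects:
print(detection_metrics(80, 10, 20))  # precision ~0.889, recall 0.8, F1 ~0.842
```

F1 is the harmonic mean of precision and recall, so a detector cannot score well by trading one entirely for the other — which is why it is the headline metric in evaluations like this one.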
(This article belongs to the Proceedings of The Sustainable Mobility and Transportation Symposium 2025)

23 pages, 17007 KB  
Article
Demonstrating a Scenario-Based Safety Assurance Framework in Practice
by Martin Skoglund, Anders Thorsén, Ramana Reddy Avula, Karl Lundgren and Fredrik Warg
Vehicles 2025, 7(4), 124; https://doi.org/10.3390/vehicles7040124 - 29 Oct 2025
Cited by 1 | Viewed by 1221
Abstract
Automated driving systems (ADSs) have the potential to make mobility services both safer and more accessible. The New Assessment/Test Method (NATM) from the UNECE establishes a multi-pillar framework for ADS safety assessment, centred on comprehensive scenario-based testing of the operational design domain (ODD). While NATM sets out the vision, it leaves unresolved how such assessments can be scaled and applied in practice. The SUNRISE safety assurance framework (SAF) addresses this challenge by offering a concrete and scalable pathway for operationalising NATM principles. The core contribution of this paper is the successful execution of the SAF process. Rather than validating the performance of a specific automated driving function, the work demonstrates how the SAF can be applied end-to-end: starting from external requirements for the system under test (SUT), through scenario generation based on ODD, dynamic driving task (DDT), and test objectives to the allocation of scenarios across heterogeneous test environments and the consolidation of outcomes into a structured safety argument. The approach is exemplified through the use case of automated truck docking in confined logistics environments. Simulation (CARLA), a scaled model truck, and a full-size truck are employed not to validate the ADS function itself, but to show that the SAF enables consistent, traceable, and defensible execution of NATM-aligned safety assessment. This walk-through highlights the scalability, practicality, and applicability of the SAF to real-world ADS features. Full article