Review

Virtual Tools for Testing Autonomous Driving: A Survey and Benchmark of Simulators, Datasets, and Competitions

1 College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
2 Department of Engineering Mechanics, Dalian University of Technology, Dalian 116024, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(17), 3486; https://doi.org/10.3390/electronics13173486
Submission received: 2 July 2024 / Revised: 17 August 2024 / Accepted: 29 August 2024 / Published: 2 September 2024

Abstract: Traditional road testing of autonomous vehicles faces significant limitations, including long testing cycles, high costs, and substantial risks. Consequently, autonomous driving simulators and dataset-based testing methods have gained attention for their efficiency, low cost, and reduced risk. Simulators can efficiently test extreme scenarios and provide quick feedback, while datasets offer valuable real-world driving data for algorithm training and optimization. However, existing research often provides brief and limited overviews of simulators and datasets. Additionally, while the role of virtual autonomous driving competitions in advancing autonomous driving technology is recognized, comprehensive surveys on these competitions are scarce. This survey paper addresses these gaps by presenting an in-depth analysis of 22 mainstream autonomous driving simulators, focusing on their accessibility, physics engines, and rendering engines. It also compiles 35 open-source datasets, detailing key features of their scenes and data-collection sensors. Furthermore, the paper surveys 10 notable virtual competitions, highlighting essential information on the simulators, datasets, and test scenarios involved. Additionally, this review analyzes the challenges in developing autonomous driving simulators, datasets, and virtual competitions. The aim is to provide researchers with a comprehensive perspective, aiding in the selection of suitable tools and resources to advance autonomous driving technology and its commercial implementation.

1. Introduction

Autonomous vehicles integrate environmental perception, decision-making [1], and motion control technologies, offering advantages such as enhanced driving safety, alleviated traffic congestion, and reduced traffic accidents [2]. They hold substantial potential in future intelligent transportation systems [3]. However, in real-world driving scenarios, rare but high-risk events, such as sudden roadblocks, unexpected pedestrian or animal crossings, and emergency maneuvers under complex conditions, can lead to severe traffic accidents [4]. Therefore, before autonomous vehicles can be commercialized, they must undergo rigorous and comprehensive testing to ensure reliability across various scenarios [5].
Validating the reliability of autonomous vehicles requires billions of miles of testing on public roads [6]. Traditional road-testing methods present several limitations. Firstly, the testing cycle is prolonged due to the need to evaluate performance under diverse weather, road, and traffic conditions, requiring substantial time and resources [7]. Secondly, the costs are high, encompassing vehicle and equipment investments, personnel wages, insurance, and potential accident liabilities [8]. Lastly, testing carries significant risks, as accidents can occur even under strict testing conditions, potentially harming test personnel and damaging the reputation of autonomous vehicle suppliers [9]. Thus, relying solely on road tests is insufficient for efficiently validating autonomous vehicle performance [10].
New testing methods and strategies are being explored to more effectively assess the reliability of autonomous vehicles [11]. Currently, companies widely use autonomous driving simulators and datasets to evaluate the performance of autonomous driving modules [12]. These simulators are capable of offering high-fidelity virtual environments, accurate vehicle dynamics and sensor models, and realistic simulations of weather conditions, lighting variations, and traffic scenarios [13]. Furthermore, simulators can rapidly generate and repeatedly test extreme scenarios that are difficult and risky to replicate in reality, significantly enhancing testing efficiency and addressing the lengthy, resource-intensive, and less reproducible nature of road tests [14]. Therefore, simulators can provide a safe and cost-effective environment for performance evaluation [15].
Simulators serve as efficient virtual testing platforms, but they require specific data to accurately simulate the authentic environment surrounding automated vehicles. In this context, datasets comprising sensor data collected from real driving scenarios are essential to reconstruct the diverse situations a vehicle may encounter. This approach enables the validation and optimization of autonomous driving hardware and software in realistic simulated environments [16]. Under such circumstances, simulations involving specific datasets enhance the performance of autonomous driving systems in real-world applications, ensuring algorithms can operate reliably under complex and variable driving conditions [17]. Furthermore, they help identify potential defects and limitations in algorithms, providing researchers with valuable insights for improvement [18].
Simulators and datasets significantly enhance the efficiency of research and development in autonomous driving systems, particularly within enterprises. Road testing typically serves as the final validation stage for these systems. However, research groups in academic institutions often lack the personnel and resources to perform extensive road tests. Consequently, virtual autonomous driving competitions have emerged, allowing research groups to pit their research outputs, particularly autonomous driving algorithms, against those from enterprises [19]. These competitions provide a practical platform for technology development, foster technical exchange and progress, and rely on virtual testing platforms and realistic data from existing datasets [20]. Based on real and critical driving and traffic accident scenarios, virtual competitions allow participants to demonstrate their algorithms in a controlled environment. Furthermore, compared to real-vehicle competitions, virtual competitions do not require significant investment in or maintenance of physical vehicles, nor do they incur related operational costs [21].

1.1. Related Works

This subsection surveys previous reviews related to autonomous driving simulators, datasets, and competitions.
In terms of simulators, Rosique et al. [22] focused on the simulation of perception systems in autonomous vehicles, highlighting key elements and briefly comparing 11 simulators based on open-source status, physics/graphics engines, scripting languages, and sensor simulation. However, their study lacks detailed introductions to the functional features of each simulator and a comprehensive analysis of these dimensions. Yang et al. [23] categorized common autonomous driving simulators into point cloud-based and 3D engine-based types, providing a brief functional overview. Unfortunately, their survey was incomplete and did not thoroughly cover the basic functions or highlight distinctive features of the simulators, limiting readers’ understanding of their specific characteristics and advantages. Kaur et al. [24] provided detailed descriptions of six common simulators: Matlab/Simulink, CarSim, PreScan, CARLA, Gazebo, and LGSVL. They compared CARLA and LGSVL across simulation environment, sensor simulation, map generation, and software architecture. Similarly, Zhou et al. [25] summarized the features of seven common simulators and compared them across several dimensions, such as environment and scenario generation, sensor setup, and traffic participant control. Despite their detailed functional descriptions and comparative analyses, these studies surveyed a limited number of simulators and provided only practical functional comparisons without deeply exploring the differences in simulation principles.
In the domain of autonomous driving datasets, Janai et al. [26] surveyed datasets related to computer vision applications in autonomous driving. Yin et al. [27] provided a summary of several open-source datasets primarily collected on public roads. Kang et al. [28] expanded this work, including more datasets and simulators, and offered comparative summaries based on key influencing factors. Guo et al. [29] introduced the concept of drivability in autonomous driving and categorized an even greater number of open-source datasets. Liu et al. [30] provided an overview of over a hundred datasets, classifying them by sensing domains and use cases, and proposed an innovative evaluation metric to assess their impact. While these studies have made significant progress in collecting, categorizing, and summarizing datasets, they often present key information in a simplified form. Although these summaries are useful for quick reference, they often omit essential details about data collection methods, labeling, and usage restrictions.
Regarding autonomous driving competitions, most existing surveys focus on physical autonomous racing. For instance, Li et al. [31] reviewed governance rules of autonomous racing competitions to enhance their educational value and proposed recommendations for rule-making. Betz et al. [32] conducted a comprehensive survey of algorithms and methods used in perception, planning, control, and end-to-end learning in autonomous racing, and they summarized various autonomous racing competitions and related vehicles. These studies primarily concentrate on the field of autonomous racing, while virtual autonomous driving competitions using simulators and datasets are scarcely addressed.

1.2. Motivations and Contributions

The effective evaluation and selection of autonomous driving simulators require a comprehensive understanding of their functional modules and underlying technologies. However, existing studies often lack a thorough explanation of these functional modules and overlook in-depth discussions of core components such as physics engines and rendering engines. To address these gaps, this study expands on the content of simulator surveys, providing a detailed exploration of their physics and rendering engines. Furthermore, a deep understanding of key functions is crucial for the practical application of simulators. Existing research frequently falls short in its discussion of these functions. Thus, this study focuses on key functionalities such as scenario simulation, sensor simulation, and the implementation of vehicle dynamics simulation. Additionally, we highlight simulators that excel in these functional domains, showcasing their specific applications.
Existing surveys on autonomous driving datasets either cover a limited number of datasets or omit essential information. This study provides a comprehensive introduction to datasets, encompassing various aspects such as dataset types, collection periods, covered scenarios, usage restrictions, and data scales.
Finally, while virtual autonomous driving competitions play a significant role in advancing simulators and datasets, reviews in this area are scarce. This study investigates and summarizes existing virtual autonomous driving competitions.
Our work presents the following contributions:
  • This survey is the first to systematically investigate autonomous driving simulators by providing a deep analysis of their physics and rendering engines to support informed simulator selection.
  • This survey provides an in-depth analysis of three key functions in autonomous driving simulators: scenario simulation, sensor simulation, and the implementation of vehicle dynamics simulation.
  • This survey is the first to systematically review virtual autonomous driving competitions that are valuable for virtually testing autonomous driving systems.

1.3. Organizations

In the rest of the article, Section 2 introduces the main functions of 22 mainstream autonomous driving simulators, providing key information on their physics and rendering engines and key functions. Section 3 presents a timeline-based summary of 35 datasets, categorizing them according to specific task requirements and analyzing their construction methods, key features, and applicable scenarios. Section 4 reviews 10 virtual autonomous driving competitions. Section 5 addresses perspectives in the development of autonomous driving simulators, datasets, and virtual competitions. Finally, Section 6 concludes the survey.

2. Autonomous Driving Simulators

This section provides an overview of 22 autonomous driving simulators, categorized into two groups, open-source and non-open-source, which are presented in Section 2.1 and Section 2.2, respectively. Additionally, Section 2.3 discusses four critical aspects: accessibility, physics engines, rendering engines, and the essential capabilities required for a comprehensive autonomous driving simulator.

2.1. Open Source Simulators

  • AirSim
AirSim (v 1.8.1, Figure 1a) [33], engineered by Microsoft (Redmond, WA, USA) on the Unreal Engine framework, is specifically tailored for the development and testing of autonomous driving algorithms. AirSim is distinguished by its modular architecture, which encompasses comprehensive environmental, vehicular, and sensor models.
The simulator is designed for high extensibility, featuring dedicated interfaces for rendering, a universal API layer, and interfaces for vehicle firmware. This facilitates seamless adaptation to a diverse range of vehicles, hardware configurations, and software protocols. Additionally, AirSim’s plugin-based structure allows for straightforward integration into any project utilizing Unreal Engine 4. This platform further enhances the simulation experience by offering hundreds of test scenarios, predominantly crafted through photogrammetry to achieve a high-fidelity replication of real-world environments.
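To illustrate this API layer, the sketch below drives the simulated car and queries a camera through AirSim's Python client; the camera name "0" and the specific control values are illustrative assumptions that depend on the configured vehicle and settings file.

```python
import airsim

# Connect to the AirSim car simulation running inside Unreal Engine.
client = airsim.CarClient()
client.confirmConnection()
client.enableApiControl(True)

# Issue a simple throttle/steering command to the simulated vehicle.
controls = airsim.CarControls()
controls.throttle = 0.5
controls.steering = 0.1
client.setCarControls(controls)

# Query vehicle state and request one scene image from camera "0".
state = client.getCarState()
print("speed (m/s):", state.speed)

responses = client.simGetImages(
    [airsim.ImageRequest("0", airsim.ImageType.Scene, False, True)]
)
print("received", len(responses), "image(s)")
```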
  • Autoware
Autoware (v 1.0, Figure 1b) [41], developed on the Robot Operating System (ROS), was created by a research team at Nagoya University. It is designed for scalable deployment across a variety of autonomous applications, with a strong emphasis on practices and standards to ensure quality and safety in real-world implementations. The platform exhibits a high degree of modularity, featuring dedicated modules for perception, localization, planning, and control, each equipped with clearly defined interfaces and APIs. Autoware facilitates direct integration with vehicle hardware, focusing on both the deployment and testing of autonomous systems on actual vehicles.
  • Baidu Apollo
Baidu Apollo (v 9.0, Figure 1c) [42] was designed for autonomous driving simulation and testing. It offers standards for drive-by-wire vehicles and open interfaces for vehicle hardware, such as the response time of the chassis with control signal inputs. Additionally, it includes specifications for sensors and computing units with functional, performance, and safety indicators to ensure they meet the necessary performance and reliability standards for autonomous driving. The software suite includes modules for environmental perception, decision-making, planning, and vehicle control, allowing for customizable integration based on project needs. Apollo also features high-precision mapping, simulation, and a data pipeline for autonomous driving cloud services. These capabilities enable the mass training of models and vehicle setups in the cloud, which can be directly deployed to vehicles post-simulation.
  • CARLA
CARLA (v 0.9.12, Figure 1d) [43] is designed for the development, training, and validation of autonomous driving systems. It was developed jointly by Intel Labs in Santa Clara, USA and the Computer Vision Center in Barcelona, Spain. With the support of Unreal Engine 4 for simulation testing in CARLA, users can create diverse simulation environments with 3D models, including buildings, vegetation, traffic signs, infrastructure, vehicles, and pedestrians. Pedestrians and vehicles in CARLA exhibit realistic behavior patterns, allowing for testing of autonomous driving systems under various weather and lighting conditions to assess performance across diverse environments.
CARLA also offers a variety of sensor models matched with real sensors for generating machine learning training data, which users can flexibly configure according to their needs. Furthermore, CARLA supports integration with the ROS and provides a robust API, enabling users to control the simulation environment through scripting.
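As an illustration of this scripting interface, the following sketch uses CARLA's Python API to connect to a server, change the weather, and spawn an ego vehicle with an attached RGB camera; the host/port, spawn choices, camera mounting, and output path are illustrative assumptions rather than recommended settings.

```python
import random
import carla

# Connect to a CARLA server assumed to be listening on localhost:2000.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Switch to rainy, low-sun conditions to stress perception algorithms.
world.set_weather(carla.WeatherParameters(
    cloudiness=80.0, precipitation=60.0, sun_altitude_angle=10.0))

# Spawn an ego vehicle at a random predefined spawn point.
blueprints = world.get_blueprint_library()
vehicle_bp = random.choice(blueprints.filter("vehicle.*"))
spawn_point = random.choice(world.get_map().get_spawn_points())
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach an RGB camera and stream its frames to disk.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))

# Let CARLA's built-in traffic manager drive the ego vehicle.
vehicle.set_autopilot(True)
```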
  • Gazebo
Gazebo (v 11.0, Figure 1e) [44], initially developed by Andrew Howard and Nate Koenig at the University of Southern California, provides a 3D virtual environment for simulating and testing robotic systems. Users can easily create and import various robot models in Gazebo. It supports multiple physics engines, such as DART [45], ODE [46], and Bullet [47], to simulate the motion and behavior of robots in the real world. Additionally, Gazebo supports the simulation of various sensors, such as cameras and LiDAR.
Gazebo seamlessly integrates with ROS, offering a robust simulation platform for developers using ROS for robotics development. Additionally, Gazebo also provides rich plugin interfaces and APIs.
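As a minimal example of this ROS integration, the sketch below subscribes to laser scans published by a Gazebo sensor plugin; the topic name /scan is an assumption and depends entirely on how the robot model and its gazebo_ros plugins are configured.

```python
#!/usr/bin/env python
# Minimal ROS node that consumes simulated LiDAR scans published by a
# Gazebo sensor plugin. The topic name "/scan" is an assumption.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # Keep only range readings inside the sensor's valid interval.
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo("closest obstacle: %.2f m", min(valid))

if __name__ == "__main__":
    rospy.init_node("gazebo_scan_listener")
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.spin()
```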
  • 51Sim-One
51Sim-One (v 2.2.3, Figure 1f) [38], developed by 51WORLD (Beijing, China), features a rich library of scenarios and supports the import of both static and dynamic data. Furthermore, the scenario generation function in 51Sim-One follows the OpenX standard.
For perception simulation, the platform supports sensor simulation including physical-level cameras, LiDAR, and mmWave radar, and can generate virtual datasets. For motion planning and tracking control simulation, 51Sim-One provides data interfaces, traffic flow simulation, vehicle dynamics simulation, driver models, and an extensive library of evaluation metrics for user configuration. Traffic flow simulation supports integration with software such as SUMO and PTV Vissim, while the vehicle dynamics simulation module includes a 26-degree-of-freedom model for fuel and electric vehicles, supporting co-simulation with software like CarSim, CarMaker, and VI-grade. Moreover, 51Sim-One supports large-scale cloud simulation testing, with daily accumulated test mileage reaching up to one hundred thousand kilometers.
  • LGSVL
LGSVL (v 2021.03, Figure 1g) [39] was developed by LG Silicon Valley Lab (Santa Clara, CA, USA). Leveraging Unity physics engine technology, LGSVL supports scenario, sensor, vehicle dynamics, and tracking control simulation, enabling the creation of high-fidelity simulation testing scenarios. These generated testing scenarios include various factors, such as time, weather, road conditions, as well as the distribution and movement of pedestrians and vehicles. Additionally, LGSVL provides a Python API, allowing users to modify scene parameters according to their specific requirements. Furthermore, LGSVL allows users to customize the platform’s built-in 3D HD maps, vehicle dynamics models, and sensor models. LGSVL also offers interfaces with open-source autonomous driving system platforms, such as Apollo and Autoware.
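A minimal sketch of this Python API is shown below; the map name, vehicle asset name, and port are assumptions that must match the assets installed in a particular LGSVL deployment.

```python
import lgsvl

# Connect to a running LGSVL instance; the default API port 8181 is assumed.
sim = lgsvl.Simulator("127.0.0.1", 8181)

# Load a map (asset name assumed to be available in this installation).
if sim.current_scene == "BorregasAve":
    sim.reset()
else:
    sim.load("BorregasAve")

# Adjust lighting by changing the time of day to early evening.
sim.set_time_of_day(19.0)

# Spawn an ego vehicle at one of the map's predefined spawn points.
spawns = sim.get_spawn()
state = lgsvl.AgentState()
state.transform = spawns[0]
ego = sim.add_agent("Lincoln2017MKZ (Apollo 5.0)", lgsvl.AgentType.EGO, state)

# Advance the simulation for 10 seconds of simulated time.
sim.run(10.0)
```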
  • Waymax
Waymax (v 0.1.0, Figure 1h) [40], developed by Waymo (Mountain View, CA, USA), focuses on data-driven multi-agent autonomous driving simulation. It primarily targets large-scale simulation and testing of the decision-making and planning modules. Rather than generating high-fidelity scenes, Waymax uses deliberately simple scene representations, consisting mainly of straight lines, curves, or blocks; its realism and complexity lie instead in the interaction of multiple agents.
Waymax is implemented using a machine learning framework named JAX, allowing for simulations to fully exploit the powerful computing capabilities of hardware accelerators such as GPUs and TPUs, thereby speeding up simulation execution. Furthermore, Waymax utilizes Waymo’s real driving dataset WOMD to construct simulation scenarios, and it provides data loading and processing capabilities for other datasets. Additionally, Waymax offers general benchmarks and simulation agents to facilitate users in evaluating the effectiveness of planning methods. Finally, Waymax can train the behavior of interactive agents in different scenarios using methods such as imitation learning and reinforcement learning.

2.2. Non-Open-Source Simulators

  • Ansys Autonomy
Ansys Autonomy (v 2024R2, Figure 2a) [48] is capable of performing real-time closed-loop simulations with multiple sensors, traffic objects, scenarios, and environments. It features a rich library of scene resources and utilizes physically accurate 3D scene modeling. Users can customize and generate realistic simulation scenes through the scenario editor module. Moreover, Ansys Autonomy supports the import of 3D high-precision maps such as OpenStreetMap, automatically generating road models that match the map.
For traffic scenario simulation, Ansys Autonomy provides semantic-based road traffic flow design, enabling simulation of pedestrian, vehicle, and traffic sign behaviors, as well as interaction behaviors between pedestrians and vehicles, and pedestrians and the environment [49]. Ansys Autonomy also features a built-in script language interface, supporting interface control with languages such as Python and C++ to define traffic behaviors. Furthermore, various vehicle dynamics models and sensor models are pre-installed in Ansys Autonomy, allowing users to customize parameters and import external vehicle dynamics models (such as CarSim) and user-developed models.
Figure 2. User interfaces of the non-open-source simulators: (a) Ansys Autonomy [50]; (b) CarCraft [51]; (c) Cognata [52]; (d) CarSim [53]; (e) CarMaker [54]; (f) HUAWEI Octopus [55]; (g) Matlab Automated Driving Toolbox [56]; (h) NVIDIA DRIVE Constellation [57]; (i) Oasis Sim [58]; (j) PanoSim [59]; (k) PreScan [60]; (l) PDGaiA [61]; (m) SCANeR Studio [62]; (n) TAD Sim 2.0 [63].
  • CarCraft
Developed by the autonomous driving team at Waymo (Mountain View, CA, USA), CarCraft (Figure 2b) is a simulation platform used exclusively by Waymo for training autonomous vehicles [64]. First revealed in 2017, it has not been made open-source or commercialized. CarCraft is primarily employed for the large-scale training of autonomous vehicles. It enhances the vehicles' capability to navigate various scenarios by constructing traffic scenes within the platform and continuously altering the motion states of pedestrians and vehicles within these scenes. On CarCraft, 25,000 or more virtual autonomous vehicles can operate continuously, accumulating over eight million miles of training daily [65].
  • Cognata
Cognata (Figure 2c) [52], developed by the Israeli company Cognata, enables large-scale simulations in the cloud, covering a vast array of driving scenarios to accelerate the commercialization of autonomous vehicles. The platform employs digital twin technology and deep neural networks to generate high-fidelity simulation scenarios and includes a customizable library of scenes that users can modify or augment with their own custom scenarios. Additionally, Cognata provides extensive traffic models and a database for AI interactions with scenarios, allowing for accurate simulation of drivers, cyclists, and pedestrians and their behavioral habits within these settings. Moreover, the platform offers a variety of sensors and sensor fusion models, including RGB HD cameras, fisheye cameras with lens distortion correction, mmWave radar, LiDAR, long-wave infrared cameras, and ultrasonic sensors, along with a deep neural network-based radar sensor model; it also provides a toolkit for rapid sensor creation. Cognata features robust data analytics and visualization capabilities, offers ready-to-use autonomous driving verification standards, and allows users to customize rule writing. Users can create specific rules or conditions, such as rules based on specific data ranges, thresholds, or other parameters, to filter, classify, or analyze data. Integration with model interfaces (e.g., Simulink, ROS) and tool interfaces (e.g., SUMO), along with support for OpenDrive and OpenScenario standards, facilitates convenient interaction across different platforms.
  • CarSim
CarSim (v 2024.2, Figure 2d) [66], developed by Mechanical Simulation Corporation (Ann Arbor, USA), features a solver developed based on extensive real vehicle data. It supports various testing environments, including software-in-the-loop (SIL), model-in-the-loop (MIL), and hardware-in-the-loop (HIL). CarSim features a set of vehicle dynamics models, and users can modify component parameters according to specific requirements. The software also boasts significant extensibility; it is configured with specific data interfaces for real-time data exchange with external software, facilitating multifunctional simulation tests, such as joint simulations with PreScan and Matlab/Simulink.
  • CarMaker
CarMaker (v 13.0, Figure 2e) [67], developed by IPG Automotive in Karlsruhe, Germany, focuses on vehicle dynamics, Advanced Driver Assistance Systems (ADAS), and autonomous driving simulation. The platform provides vehicle dynamics models with extensive customization options, allowing users to replace components with specific models or hardware as required. The platform includes two specialized modules for traffic scenario construction: IPG Road and IPG Traffic. The IPG Road module enables the editing of various road types and conditions, while IPG Traffic offers selections of traffic elements. Additionally, CarMaker is equipped with MovieNX, a visualization tool capable of generating realistic environmental changes such as daily weather or lighting variations. The platform’s extensibility supports integration with Matlab/Simulink, Unreal Engine, and dSPACE, suitable for use in MIL, SIL, HIL, and Vehicle in the Loop (VIL) configurations. CarMaker also excels in data analytics, not only storing vast amounts of data but also enabling real-time viewing, comparison, and analysis through the integrated IPG Control tool.
  • HUAWEI Octopus
HUAWEI Octopus (Figure 2f) [68] was developed by Huawei (Shenzhen, China) to address challenges in data collection, processing, and model training in the autonomous driving development process. The platform offers three main services to users: data services, training services, and simulation services. The data service processes data collected from autonomous vehicles and refines new training datasets through a labeling platform. The training service provides online algorithm management and model training, continually iterating labeling algorithms to produce datasets and enhance model accuracy. This service also accelerates training efficiency through both software and hardware enhancements. The simulation service includes a preset library of 20,000 simulation scenarios, offers online decision-making and rule-control algorithm simulation testing, and conducts large-scale parallel simulations in the cloud to accelerate iteration speed.
  • Matlab
Matlab (v R2024a, Figure 2g) [69] Automated Driving Toolbox [70] and Deep Learning Toolbox [71] were developed by MathWorks (Natick, MA, USA). The primary functionalities of the Matlab Automated Driving Toolbox include the design and testing of perception systems based on computer vision, LiDAR, and radar; providing object tracking and multi-sensor fusion algorithms; accessing high-definition map data; and offering reference applications for common ADAS and autonomous driving functions. The Deep Learning Toolbox, on the other hand, focuses on the design, training, analysis, and simulation of deep learning networks. It features a deep network designer, support for importing various pre-trained models, flexible API configuration and model training, multi-GPU accelerated training, automatic generation of optimized code, visualization tools, and methods for interpreting network results.
  • NVIDIA DRIVE Constellation
NVIDIA DRIVE Constellation (Figure 2h) [72] is a real-time HIL simulation testing platform built on two distinct servers designed for rapid and efficient large-scale testing and validation. On one server, NVIDIA DRIVE Sim software simulates real-world scenarios and the sensors of autonomous vehicles. This software generates data streams to create test scenarios capable of simulating various weather conditions, changes in environmental lighting, complex road and traffic scenarios, and sudden or extreme events during driving. The other server hosts the NVIDIA DRIVE Pegasus AI automotive computing platform, which runs a complete autonomous vehicle software stack and processes the sensor data output from the NVIDIA DRIVE Sim software. DRIVE Sim creates the test scenarios and feeds the sensor data generated by the vehicle models within these scenarios to the target vehicle hardware in NVIDIA DRIVE Pegasus AI. After processing the data, NVIDIA DRIVE Pegasus AI sends the decision-making and planning results back to NVIDIA DRIVE Sim, thereby validating the developed autonomous driving software and hardware systems.
  • Oasis Sim
Oasis Sim (v 3.0, Figure 2i) [58] is an autonomous driving simulation platform developed by SYNKROTRON (Xi’an, China) based on CARLA. This platform offers digital twin environments that can simulate changes in lighting and weather conditions. Oasis Sim features a multi-degree-of-freedom vehicle dynamics model and supports co-simulation with vehicle dynamics software such as CarSim and CarMaker.
Additionally, Oasis Sim provides various AI-enhanced, physics-based sensor models, including cameras and LiDAR, driven by real-world data. For scenario construction, Oasis Sim includes a rich library of scenarios, such as annotated regulatory scenarios, natural driving scenarios, and hazardous condition scenarios. It also features intelligent traffic flow and AI driver models.
Oasis Sim supports AI-driven automated generation of edge-case scenarios and includes a graphical scenario editing tool. For large-scale simulation testing, Oasis Sim supports cloud-based large-scale concurrency, enabling automated testing of massive scenarios. The platform is equipped with extensive interfaces, supporting ROS, C++, Simulink, and Python, and is compatible with OpenDrive and OpenScenario formats.
  • PanoSim
PanoSim (v 5.0, Figure 2j) [73], developed by the company PanoSim (Jiaxing, China), offers a variety of vehicle dynamics models, driving scenarios, traffic models, and a wide range of sensor models, with rich testing scenarios. PanoSim can also conduct joint simulations with Matlab/Simulink, providing an integrated solution that includes offline simulation, MIL, SIL, HIL, and VIL.
  • PreScan
Originally developed by TASS International (Helmond, The Netherlands), PreScan (v 2407, Figure 2k) [74] offers a wide range of sensor models, such as radar, LiDAR, cameras, and ultrasonic sensors, which users can customize in terms of parameters and installation locations. PreScan also features an extensive array of scenario configuration resources, including environmental elements like weather and lighting, as well as static objects such as buildings, vegetation, roads, traffic signs, and infrastructure, plus dynamic objects like pedestrians, motor vehicles, and bicycles.
Additionally, PreScan provides various vehicle models and dynamic simulation capabilities and supports integration with Matlab/Simulink and CarSim for interactive simulations. The platform is also compatible with MIL, SIL, and HIL.
  • PDGaiA
Developed by Peidai Automotive (Shanghai, China), PDGaiA (v 7.0, Figure 2l) [61] provides cloud-based simulations and computational services, allowing users to access cloud data through independent tools and perform local tests or to engage in parallel processing and algorithm validation using cloud computing centers.
PDGaiA includes a suite of full-physics sensor models, including mmWave radar, ground-truth sensors for algorithm validation, cameras, LiDAR, and GPS. These models support the simulation of various visual effects and dynamic sensor performance.
Additionally, PDGaiA enables the creation of static and dynamic environments, weather conditions, and a diverse library of scenarios. The platform also includes tool modules for event logging and playback during tests and features that allow for the importation of real-world data.
  • SCANeR Studio
SCANeR Studio (v 2024.1, Figure 2m) [62], developed by AVSimulation (Boulogne-Billancourt, France), is organized into five modules: TERRAIN, VEHICLE, SCENARIO, SIMULATION, and ANALYSIS. The TERRAIN module enables road creation, allowing users to adjust road segment parameters and combine various segments to construct different road networks. The VEHICLE module facilitates the construction of vehicle dynamics models. The SCENARIO module, dedicated to scenario creation, offers a resource library for setting up road facilities, weather conditions, pedestrians, vehicles, and interactions among dynamic objects. The SIMULATION module executes simulations. The ANALYSIS module saves 3D animations of the simulation process, supports playback, and provides curves of various parameter changes during the simulation for result analysis. Additionally, SCANeR Studio includes script control capabilities, enabling users to manipulate scene changes through scripting, such as initiating rain at specific times or turning off streetlights. The platform also integrates interfaces for closed-loop simulations with external software, such as Matlab/Simulink.
  • TAD Sim 2.0
TAD Sim 2.0 (v 2.0, Figure 2n) [23], developed by Tencent (Shenzhen, China), is built on the Unreal Engine 4 physics engine, virtual reality, and cloud gaming technologies. The platform employs a data-driven approach to facilitate closed-loop simulation testing of autonomous driving modules including perception, decision-making, planning, and control. TAD Sim 2.0 incorporates various industrial-grade vehicle dynamics models and supports simulations of sensor models such as LiDAR, mmWave radar, and cameras, offering detailed test scenarios. These scenarios include simulations of real-world conditions like weather and sunlight. In addition to supporting conventional scene editing and traffic flow simulation, TAD Sim 2.0 uses Agent AI technology to train traffic flow AI (including pedestrians, vehicles, etc.) based on collected road data. Additionally, the platform features the TAD Viz visualization component and standard international interface formats (e.g., OpenDrive, OpenCRG, OpenSCENARIO).

2.3. Discussions of Autonomous Driving Simulators

Table 1 summarizes key information on the autonomous driving simulators discussed above from six aspects: accessibility, operating system, programming language, physics engine, rendering engine, and sensor models. The discussion of these simulators in terms of accessibility, simulation engines, and key functions is as follows.

2.3.1. Accessibility

Accessibility is a prime consideration when selecting an autonomous driving simulator. Open-source simulators provide a freely accessible environment and, due to their cost-free nature, benefit from extensive community support. This community can continuously improve and optimize the simulators, allowing them to rapidly adapt to new technologies and market demands. However, open-source simulators may have drawbacks, such as a lack of professional technical support, training services, and customized solutions, which can make it difficult for users to receive timely assistance when facing complex issues [28]. Moreover, open-source simulators often prioritize generality and flexibility, potentially falling short in providing customized solutions for specific application scenarios.
In contrast, proprietary simulators often excel in stability and customization. They are typically developed by professional teams and come with comprehensive technical support, training, and consulting services, making them more advantageous for addressing complex problems and meeting specific needs [13]. For example, CarCraft was developed by the Waymo and Google teams specifically for their private testing activities [64]. However, proprietary simulators also have clear disadvantages. They are often expensive, which can be a significant burden for academic and research projects. Additionally, as proprietary software, they may not adapt as easily to new research or development needs as open-source simulators.
When choosing between these types of simulators, it is essential to carefully consider the specific requirements and goals of the project [75]. For academic and research projects, open-source simulators might be more appealing due to their cost-effectiveness and flexibility. For instance, the CARLA simulator is a popular open-source option, offering a wide range of tools and community support that allows researchers to build and test autonomous driving systems more easily [76]. On the other hand, for enterprises, proprietary simulators may be more attractive because they offer higher stability and customization services [77]. For example, PanoSim provides Nezha Automobile with an integrated simulation system for vehicle-in-the-loop development and testing [59], and Autoware provides ADASTEC with a Level 4 full-size commercial vehicle automation platform [78].

2.3.2. Physics Engines

The physics engine is a core component of a simulator, responsible for simulating physical phenomena in the virtual environment, such as gravity, collisions, and rigid body dynamics, to ensure that the motion and behavior of objects in the virtual environment conform to real-world physical laws. An overview of several common physics engines used by the simulators above is presented as follows.
  • ODE
The Open Dynamics Engine (ODE, v 0.5) [46] is an open-source library for simulating rigid body dynamics, developed by Russell Smith with contributions from several other developers. It is notable for its performance, flexibility, and robustness, making it a prominent tool in game development, virtual reality, robotics, and other simulation applications [79]. ODE simulates rigid body dynamics and performs collision detection with primitive shapes. It uses constraint equations to model joint and contact constraints, determining the new position and orientation of each rigid body at every simulation step [80]. Each step proceeds as follows: external control forces or torques are applied to each rigid body to model external physical forces; collision detection identifies contacts between rigid bodies; contact joints are created for the detected collisions to model contact forces; Jacobian matrices are built for the constraints; the resulting linear complementarity problem (LCP [81]) is solved to determine contact forces that satisfy all constraints; and numerical integration then computes the next velocities, angular velocities, positions, and rotation states. ODE provides two LCP solvers: an iterative Projected Gauss–Seidel solver [82] and a Lemke algorithm-based solver [83].
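To make the contact-solving step concrete, the sketch below implements a plain projected Gauss–Seidel iteration for an LCP of the form w = Mz + q with z ≥ 0, w ≥ 0, and zᵀw = 0; it only illustrates the idea, whereas ODE's actual solver adds friction limits, relaxation, and warm starting.

```python
import numpy as np

def projected_gauss_seidel(M, q, iterations=50):
    """Solve the LCP  w = M z + q,  z >= 0,  w >= 0,  z^T w = 0
    with a plain projected Gauss-Seidel sweep. This mirrors the kind of
    iterative contact-force solve ODE performs each step, but omits the
    friction handling and warm starting used in practice."""
    n = q.shape[0]
    z = np.zeros(n)
    for _ in range(iterations):
        for i in range(n):
            # Row residual excluding the diagonal contribution of z[i].
            r = q[i] + M[i, :].dot(z) - M[i, i] * z[i]
            # Unconstrained update, then projection onto z_i >= 0.
            z[i] = max(0.0, -r / M[i, i])
    return z

# Tiny worked example: two contacts with a symmetric positive-definite M.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([-1.0, 0.3])
z = projected_gauss_seidel(M, q)
print("contact impulses:", z, "residual w:", M.dot(z) + q)
```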
  • Bullet
Bullet (v 3.25) [47] is an open-source physics simulation engine developed by Erwin Coumans in 2005. Bullet supports GPU acceleration, enhancing the performance and efficiency of physical simulations through parallel computation, especially suitable for large-scale scenes and complex objects. It offers functionalities for rigid-body dynamics simulation, collision detection, constraint solving, and soft body simulation. Bullet uses a time-stepping scheme to solve rigid body systems [84], with a simulation process similar to ODE but provides a broader range of LCP solvers, including Sequential Impulse [85], Lemke, Dantzig [86], and Projected Gauss–Seidel solvers.
  • DART
The Dynamic Animation and Robotics Toolkit (DART, v 6.14.5) [45] is an open-source C++ library designed for 3D physics simulation and robotics. Developed by the Graphics Lab and Humanoid Robotics Lab at the Georgia Institute of Technology, it supports platforms such as Ubuntu, Archlinux, FreeBSD, macOS, and Windows, and it seamlessly integrates with Gazebo. DART supports URDF and SDF model formats and offers various extensible APIs, including default and additional numerical integration methods (semi-implicit Euler and RK4), 3D visualization APIs using OpenGL and OpenSceneGraph with ImGui support, nonlinear programming and multi-objective optimization APIs for handling optimization problems, and collision detection using FCL, Bullet, and ODE collision detectors. DART also supports constraint solvers based on Lemke, Dantzig, and PGS methods.
  • PhysX
The PhysX engine (v 5.4.1) [87], initially developed by AGEIA and later acquired by Nvidia (Santa Clara, USA), has been upgraded to its fifth generation. PhysX can simulate various physical phenomena in real time, including rigid body dynamics, soft body dynamics, fluid dynamics, particle systems, joint dynamics, collision detection, and response. It leverages GPU processing power to accelerate physical simulations and computations [88]. For solving the LCP problem, PhysX uses the sequential impulse method. Both the Unity Engine [89] for 3D physical simulation and the Unreal Engine prior to its fifth generation have used PhysX as their physics engine.
  • Unigine Engine
The Unigine Engine [90], developed by the company UNIGINE (Clemency, Luxembourg), supports high-performance physical simulation, making it suitable for various game development genres. Currently, it has been iterated to version 2.18.1. Unigine Engine utilizes its built-in physics module for physical simulation, excelling in scenarios like collision detection, elastic collision simulation, and basic physical phenomena simulation. For high-precision physical simulation, flight dynamics simulation, and gravitational field simulation, it relies on external physical engines or alternative methods [91].
  • Chaos Physics
Chaos Physics [92] was introduced in Unreal Engine starting from version 4.23, and by Unreal Engine 5, it became the default physics engine, replacing PhysX. Developed by Epic Games (Cary, NC, USA), Chaos Physics is a technology capable of modeling complex physical phenomena such as collisions, gravity, friction, and fluid dynamics. Chaos Physics also offers powerful particle simulation capabilities for creating realistic fire, smoke, water flow, and other effects. It also supports multi-threaded computation and GPU acceleration.
  • Selection of Physics Engines
In autonomous driving simulations, selecting the appropriate physics engine requires careful consideration of simulation accuracy, computational resources, real-time performance, and the need to simulate physical phenomena. For early development stages or simpler scenarios, ODE offers certain advantages due to its stability and ease of use. However, ODE’s use of the maximum coordinate method may result in inefficiencies when dealing with highly complex systems or when high-precision simulations are required, particularly in large-scale scenarios or complex vehicle dynamics [93]. Additionally, the lack of GPU acceleration support may render ODE less suitable for modern high-performance simulations. Nevertheless, its open-source nature makes it attractive for resource-limited small projects.
Bullet’s GPU acceleration and soft body simulation capabilities make it excel in multi-object systems. However, while Bullet supports soft body physics simulation, its effectiveness and precision may be limited when simulating complex deformations or high-precision soft body interactions [94]. Additionally, the Bullet physics engine’s documentation is not always intuitive, which may require developers to spend more time familiarizing themselves with its principles and API in the initial stages.
DART offers significant advantages in robotics and complex dynamics simulations, making it suitable for autonomous driving simulations related to robotics technology [95]. However, DART’s high computational overhead may impact real-time simulation performance, limiting its application in scenarios where efficient real-time response is critical for autonomous driving simulation [96].
The Unigine engine comes with a powerful built-in physics module capable of handling high-performance simulations. However, achieving a high precision in certain scenarios often requires the use of external engines or alternative methods, adding complexity to the development process, which may increase both development time and costs and demand higher hardware resources [91].
The PhysX and Chaos Physics engines perform exceptionally well in handling complex environments and high-precision physical simulations, particularly in applications with high real-time and performance requirements. PhysX effectively accelerates physical simulation and computation processes by employing GPUs, making it effective in simulating large-scale scenarios [97]. Additionally, PhysX’s collision detection algorithms and constraint solvers excel in complex scenarios involving vehicle dynamics, collision response, and multi-body system interactions [98].
Chaos Physics excels in simulating destruction effects, fracturing, and particle systems. By utilizing multi-threading and GPU acceleration technologies, Chaos Physics also enhances the performance in handling large-scale concurrent computations [92]. This makes Chaos Physics suitable for generating realistic scenarios.

2.3.3. Rendering Engines

Rendering engines focus on the display and rendering of 3D graphics. Through a series of complex calculations and processes, they transform 3D models into realistic images displayed on screens. In this section, we introduce the rendering features of Unigine Engine, Unreal Engine, and Unity Engine, as well as two rendering engines supported by Gazebo: OGRE and OptiX.
  • Unigine Engine
Unigine Engine [90] is a powerful cross-platform real-time 3D engine known for its photo-realistic graphics and rich rendering features. It supports physically based rendering (PBR) and provides advanced lighting technologies, including SSRTGI and voxel-based GI, detailed vegetation effects, and cinematic post-processing effects.
CarMaker integrated the Unigine Engine in version 10.0, making use of Unigine's PBR capabilities. CarMaker's visualization tool, MovieNX, uses real-world camera parameters (exposure, ISO, etc.) for scene rendering [99].
  • Unreal Engine
Unreal Engine [100] offers rendering capabilities, animation systems, and a comprehensive development toolchain. Currently, it has been upgraded to its fifth generation [101]. Unreal Engine’s rendering capabilities integrate PBR, real-time ray tracing, Nanite virtualized micro-polygon geometry, and Lumen real-time global illumination [102]. Additionally, optimization strategies like multi-threaded rendering, level of detail control, and occlusion culling are also integrated.
Several simulators adopt Unreal Engine. For example, TAD Sim 2.0 employs Unreal Engine to simulate lighting conditions, weather changes, and real-world physical laws [63].
  • Unity Engine
The Unity Engine (v 6000.0.17) [103] is known for the flexibility and efficiency of its rendering. It supports multiple rendering pipelines, including the Built-in Render Pipeline, Universal Render Pipeline, and High-Definition Render Pipeline. Unity offers graphics rendering features and effects such as high-quality 2D and 3D graphics, real-time lighting, shadows, particle systems, and PBR. Its rendering performance is enhanced through multi-threaded rendering and dynamic batching. Additionally, Unity boasts a rich asset store and plugin ecosystem, providing developers with extensive resources.
  • OGRE
OGRE (Object-Oriented Graphics Rendering Engine, v 14.2.6) [104], developed by OGRE Team, is an open-source, cross-platform real-time 3D graphics rendering engine. It can run on multiple operating systems, including Windows, Linux, and Mac OS X. By abstracting underlying graphics APIs like OpenGL, Direct3D, and Vulkan, OGRE achieves efficient 3D graphics rendering. The OGRE engine provides scene and resource management systems and supports plugin architecture and scripting systems. OGRE has a stable and reliable codebase and a large developer community, offering extensive tutorials, sample code, and third-party plugins.
  • OptiX
OptiX (v 8.0) [105] was developed by NVIDIA (Santa Clara, USA) for achieving optimal ray tracing performance on GPUs. The OptiX engine can handle complex scenes and light interactions. OptiX provides a programmable GPU-accelerated ray-tracing pipeline. It scales transparently across multiple GPUs, supporting large-scene rendering through an NVLink and GPU memory combination. OptiX is optimized for NVIDIA GPU architectures. Starting with OptiX 5.0, it includes an AI-based denoiser that uses GPU-accelerated artificial intelligence technology to reduce rendering iteration requirements.
  • Selection of Rendering Engines
In the application of autonomous driving simulators, Unigine and Unreal Engine excel in scenarios requiring high realism and complex lighting simulations. For instance, when simulating autonomous driving in urban environments, these engines can reproduce light reflections, shadow variations, and environmental effects under various weather conditions. The Unigine engine is famous for its global illumination technology and physics-based rendering capabilities [106], while Unreal Engine is notable for its Nanite virtualized geometry and Lumen real-time global illumination technology [107].
In contrast, Unity Engine is better suited for medium-scale autonomous driving simulation projects, especially when multi-platform support or high development efficiency is required. While Unity’s high-definition rendering pipeline can deliver reasonably good visual quality, it may not match the realism of Unigine or Unreal Engine in simulation scenarios with high precision requirements, such as complex urban traffic simulations. This makes Unity more suitable for autonomous driving simulation applications that prioritize flexibility and quick development cycles [108].
The OGRE engine, with its open-source and straightforward architecture, is well-suited for small-scale or highly customized simulation projects. In small simulation systems that require specific functionality customization, OGRE’s flexibility and extensibility allow developers to quickly implement specific rendering needs [104]. However, OGRE’s limitations in rendering capabilities make it challenging to tackle large-scale autonomous driving simulation tasks.
OptiX stands out in autonomous driving simulation scenarios that require precise ray tracing and high-performance computing. When simulating light reflections during nighttime driving or under extreme weather conditions, OptiX’s ray tracing capabilities can deliver exceptionally realistic visual effects [105]. However, due to its complexity and high computational resource requirements, OptiX is more suitable for advanced autonomous driving simulation applications rather than general simulation tasks. In these specialized applications, OptiX’s high-precision ray tracing can enhance the realism of simulations.

2.3.4. Critical Functions

This subsection reviews the critical functions (as shown in Figure 3) of a complete autonomous driving simulator, including scenario simulation, sensor simulation, and the implementation of vehicle dynamics simulation.
  • Scenario Simulation
The scenario simulation is divided into two parts: static and dynamic scenario simulations.
The static scenario simulation involves constructing and simulating the static elements within different road environments, such as road networks, traffic signs, buildings, streetlights, and vegetation [109]. The realism of these static scenes is critical, as it directly affects the quality of sensor data during simulation. If the scene modeling is not detailed or realistic enough, it can lead to discrepancies between simulated sensor data and real-world data, negatively impacting the training and testing of perception algorithms. The primary method currently used to build static scenarios involves creating a library of scene assets with professional 3D modeling software, reconstructing road elements based on vectorized high-definition maps, and then importing and optimizing these static elements in physics and rendering engines such as Unity or Unreal Engine to produce visually realistic and physically accurate scenarios [110]. In addition, Waymo is exploring another approach that leverages neural rendering techniques, such as SurfelGAN and Block-NeRF, to transform real-world camera and LiDAR data into 3D scenes [111].
The dynamic scenario simulation, on the other hand, refers to the simulation of varying environmental conditions (e.g., weather changes, lighting variations) and dynamic objects (e.g., pedestrians, vehicles, non-motorized vehicles, animals), ensuring that these elements’ actions and impacts strictly adhere to real-world physical laws and behavioral logic [112]. This type of simulation depends on high-precision physics engines and Agent AI technology [113]. Physics engines assign realistic physical properties to each object in the scene, ensuring that dynamic elements like weather, lighting changes, and the movement, collision, and deformation of vehicles and pedestrians are consistent with real-world behavior. While many simulators support predefined behaviors before simulation begins, some simulators have already integrated Agent AI technology. This technology is used to simulate the decision-making [114] and behavior of traffic participants, such as vehicles and pedestrians [115]. For example, when the autonomous vehicle under test attempts to overtake, the vehicles controlled by Agent AI can respond with realistic avoidance or other strategic behaviors. At present, TAD Sim 2.0 uses Unreal Engine to simulate realistic weather and lighting changes, and it leverages Agent AI technology to train pedestrians, vehicles, and other scene elements using real-world data [116].
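At its simplest, the behavior of such simulated traffic participants can be approximated by classical car-following rules; the sketch below evaluates one step of the Intelligent Driver Model with textbook default parameters, purely as a baseline illustration of rule-based agents (Agent-AI-based simulators learn far richer behaviors from data).

```python
import math

def idm_acceleration(v, v_lead, gap, v_desired=30.0, t_headway=1.5,
                     a_max=1.5, b_comfort=2.0, s0=2.0, delta=4.0):
    """One step of the Intelligent Driver Model (IDM), a classic rule for
    simulated traffic agents following a lead vehicle. Parameter values are
    typical textbook defaults, not those of any specific simulator."""
    dv = v - v_lead                                  # closing speed
    s_star = s0 + v * t_headway + v * dv / (2.0 * math.sqrt(a_max * b_comfort))
    return a_max * (1.0 - (v / v_desired) ** delta - (max(s_star, 0.0) / gap) ** 2)

# An agent at 25 m/s closing on a slower lead vehicle 40 m ahead brakes:
print(idm_acceleration(v=25.0, v_lead=18.0, gap=40.0))
```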
  • Sensor Simulation
Sensor simulation involves the virtual replication of various sensors (e.g., LiDAR, radar, cameras) equipped on vehicles. The goal is to closely replicate real-world sensor performance to ensure the reliability and safety of autonomous systems in actual environments. Sensor simulation can be categorized into three types: geometric-level simulation, physics-based simulation, and data-driven simulation.
Geometric-level simulation is the most basic approach, primarily using simple geometric models to simulate sensor field of view, detection range, and coverage area [117]. While geometric-level simulation is fast, it often overlooks complex physical phenomena, limiting its ability to accurately simulate sensor performance in complex environments.
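A geometric-level model can be as simple as a range and field-of-view test, as in the sketch below; note that it deliberately ignores occlusion, reflectivity, and noise, which is exactly the limitation described above.

```python
import math

def in_sensor_fov(sensor_xy, sensor_yaw, target_xy, max_range, fov_deg):
    """Geometric-level sensor model: a target is 'detected' if it lies
    within the sensor's range and horizontal field of view. No occlusion,
    reflectivity, or noise is modeled."""
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    distance = math.hypot(dx, dy)
    if distance > max_range:
        return False
    bearing = math.atan2(dy, dx) - sensor_yaw
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return abs(bearing) <= math.radians(fov_deg) / 2.0

# Example: a forward-facing radar with 120 m range and a 90 degree FOV.
print(in_sensor_fov((0.0, 0.0), 0.0, (40.0, 10.0), 120.0, 90.0))   # True
print(in_sensor_fov((0.0, 0.0), 0.0, (40.0, 60.0), 120.0, 90.0))   # False
```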
Physics-based simulation introduces physical principles from optics, electronics, and electromagnetics to perform more detailed sensor modeling [118]. It can simulate phenomena such as light reflection, refraction, and scattering, multipath effects of electromagnetic waves, and the impact of complex weather conditions (e.g., rain, fog, snow) on sensors [119]. This approach yields more realistic sensor behavior in different environments. For example, the PDGaiA simulator uses proprietary PlenRay technology to achieve full physics-based sensor modeling, achieving a reported simulation fidelity of 95% [120]. Ansys Autonomy also provides physics-based sensor simulation, including LiDAR, radar, and camera sensors [121]; this generally relies on Ansys' multiphysics solutions for physics-level signal replication [122]. Similarly, 51Sim-One provides physics-based models for cameras, LiDAR, and mmWave radar, calibrated using real sensor data [38].
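The sketch below is a toy illustration of the physical effects such models capture for a single LiDAR return: Lambertian reflectivity, incidence angle, inverse-square range loss, and Beer–Lambert atmospheric attenuation; it is not the PlenRay or Ansys formulation, and the parameter values are arbitrary.

```python
import math

def lidar_return_power(p_emit, reflectivity, incidence_deg, rng_m,
                       extinction_per_m=0.0):
    """Toy physics-based LiDAR return model: Lambertian target reflectivity,
    cosine dependence on incidence angle, inverse-square range loss, and a
    Beer-Lambert term for two-way atmospheric attenuation (fog/rain).
    Real physics-based sensor models (e.g., ray-traced multi-bounce returns)
    are far more detailed; this only illustrates the governing effects."""
    geometric = reflectivity * math.cos(math.radians(incidence_deg)) / (rng_m ** 2)
    attenuation = math.exp(-2.0 * extinction_per_m * rng_m)
    return p_emit * geometric * attenuation

# Same target and range, clear air vs. moderate fog.
clear = lidar_return_power(1.0, 0.8, 20.0, 50.0)
foggy = lidar_return_power(1.0, 0.8, 20.0, 50.0, extinction_per_m=0.02)
print("relative return power, clear vs. fog:", clear, foggy)
```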
Data-driven simulation leverages actual sensor data for modeling and simulation, combining real-world sensor data with virtual scenarios to generate realistic simulation results [123]. For example, TAD Sim’s sensor simulation combines collected real data with 3D reconstruction technology to build models, which are dynamically loaded into testing scenarios through the Unreal Engine.
  • Implementation of Vehicle Dynamics Simulation
The vehicle dynamics simulation is a crucial part of autonomous driving simulators: the quality of the vehicle dynamics model directly determines how accurately a simulator can predict vehicle behavior under various driving conditions. Therefore, using high-precision vehicle dynamics models is essential for improving the accuracy and reliability of a simulator. Since the modeling itself has been investigated in depth elsewhere, only the implementation of vehicle dynamics simulation is discussed here. There are two main approaches for autonomous driving simulators: developing models in-house or relying on third-party software.
Self-developed vehicle dynamics modeling still has a high industry threshold because it relies on experimental data accumulated over long-term engineering practice, which is why traditional simulation providers continue to offer precise and stable vehicle dynamics models [124]. A typical vehicle dynamics simulator is CarSim, whose solver was developed from real vehicle data and is highly reliable [125]. Users can select the vehicle model they need and modify critical parameters accordingly. Many autonomous driving simulators reserve interfaces for joint simulation with CarSim, so importing CarSim’s vehicle dynamics models into a specific simulator has become common practice. Additionally, the PanoSim simulator offers a high-precision 27-degree-of-freedom 3D nonlinear vehicle dynamics model [59]. The 51Sim-One simulator features an in-house-developed vehicle dynamics simulation engine while also supporting joint simulation with third-party dynamics modules [38]. However, among the simulators surveyed, some neither provide vehicle dynamics models nor reserve interfaces for joint simulation. For example, CARLA and AirSim do not have built-in vehicle dynamics models, but they can integrate CarSim’s vehicle dynamics models through the Unreal plugin [126,127].
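For readers unfamiliar with what a dynamics module exposes to the rest of a simulator, the kinematic bicycle model below is a minimal sketch of that interface (control input in, updated state out). It is far simpler than the 27-degree-of-freedom or CarSim-class models discussed above and is intended only to illustrate the integration pattern.

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float = 0.0    # position [m]
    y: float = 0.0
    yaw: float = 0.0  # heading [rad]
    v: float = 0.0    # longitudinal speed [m/s]

def step(state, accel, steer, wheelbase=2.7, dt=0.01):
    """Advance a kinematic bicycle model by one explicit-Euler step."""
    state.x += state.v * math.cos(state.yaw) * dt
    state.y += state.v * math.sin(state.yaw) * dt
    state.yaw += state.v / wheelbase * math.tan(steer) * dt
    state.v += accel * dt
    return state

# Five seconds of gentle acceleration with a constant 2-degree steering angle.
s = VehicleState()
for _ in range(500):
    s = step(s, accel=1.0, steer=math.radians(2.0))
print(round(s.x, 1), round(s.y, 1), round(s.v, 1))
```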

3. Autonomous Driving Datasets

The datasets used in the field of autonomous driving are categorized by task, including perception, mapping, prediction, and planning (as shown in Figure 4). They are summarized in Section 3.1, covering aspects such as their sources, scale, functions, and characteristics, and their application scenarios are discussed in Section 3.2.

3.1. Datasets

  • CamVid Dataset
The CamVid dataset (Figure 5) [128], collected and publicly released by the University of Cambridge, is the first video collection to include semantic labels for object classes. It provides 32 ground truth semantic labels, associating each pixel with one of these categories. This dataset addresses the need for experimental data to quantitatively evaluate emerging algorithms. The footage was captured from the driver’s perspective, comprising approximately 86 min of video, which includes 55 min recorded during daylight and 31 min during dusk.
  • Caltech Pedestrian Dataset
The Caltech Pedestrian dataset (Figure 6) [129], collected and publicly released by the California Institute of Technology, was designed for pedestrian detection research. It comprises approximately 10 h of 30 Hz video captured from vehicles traveling through urban environments. This dataset features annotated videos recorded from moving vehicles, including images that are low-resolution and often obscured, presenting challenging conditions for analysis. Annotations are provided for 250,000 frames across 137 segments of approximately a minute in length each, totaling 350,000 labeled bounding boxes and 2300 unique pedestrians.
  • KITTI Dataset
The KITTI dataset [130], developed collaboratively by the Karlsruhe Institute of Technology in Germany and the Toyota Technological Institute at Chicago in the US, includes extensive data from car-mounted sensors like LiDAR, cameras, and GPS/IMU. It is used for evaluating technologies such as stereo vision, 3D object detection, and 3D tracking.
  • Cityscapes Dataset
The Cityscapes dataset [131], collected by researchers from the Mercedes-Benz Research Center and the Technical University of Darmstadt, provides a large-scale benchmark for training and evaluating methods for pixel-level and instance-level semantic labeling. It captures complex urban traffic scenarios from 50 different cities, with 5000 images having high-quality annotations and 20,000 with coarse annotations.
  • Oxford RobotCar Dataset
The Oxford RobotCar dataset [132], collected by the Oxford Mobile Robotics Group, features data collected over a year by navigating a consistent route in Oxford, UK. This extensive dataset encompasses 23 TB of multimodal sensory information, including nearly 20 million images, LiDAR scans, and GPS readings. It captures a diverse array of weather and lighting conditions, such as rain, snow, direct sunlight, overcast skies, and nighttime settings, providing detailed environmental data for research purposes.
  • SYNTHIA Dataset
The SYNTHIA dataset (Figure 7) [133], collected by researchers from the Autonomous University of Barcelona and the University of Vienna, consists of frames rendered from a virtual city to facilitate semantic segmentation research in driving scenes. It provides pixel-accurate semantic annotations for 13 categories including sky, buildings, roads, sidewalks, fences, vegetation, lane markings, poles, cars, traffic signs, pedestrians, cyclists, and others. Frames are captured from multiple viewpoints at each location, with up to eight different views available. Each frame also includes a corresponding depth map. This dataset comprises over 213,400 synthetic images, featuring a mix of random snapshots and video sequences from virtual cityscapes. The images were generated to simulate various seasons, weather conditions, and lighting scenarios, enhancing the dataset’s utility for testing and developing computer vision algorithms under diverse conditions.
  • Mapillary Vistas Dataset
The Mapillary Vistas dataset (Figure 8) [134] is a large and diverse collection of street-level images aimed at advancing the research and training of computer vision models across varying urban, suburban, and rural scenes. This dataset contains 25,000 high-resolution images, split into 18,000 for training, 2000 for validation, and 5000 for testing. The dataset covers a wide range of environmental settings and weather conditions such as sunny, rainy, and foggy days. It includes extensive annotations for semantic segmentation, enhancing the understanding of visual road scenes.
  • Bosch Small Traffic Lights Dataset
The Bosch Small Traffic Lights dataset [135], collected by researchers at Bosch, focuses on the detection, classification, and tracking of traffic lights. It includes 5000 training images and a video sequence of 8334 frames for testing, labeled with 10,756 traffic lights at a resolution of 1280 × 720. The test set includes 13,493 traffic lights annotated with four states: red, green, yellow, and off.
  • KAIST Urban Dataset
The KAIST Urban dataset (Figure 9) [136] caters to tasks such as simultaneous localization and mapping (SLAM), featuring diverse urban features captured from various complex urban environments. It includes LiDAR and image data from city centers, residential complexes, and underground parking lots. It provides not only raw sensor data but also reconstructed 3D points and vehicle positions estimated using SLAM techniques.
  • ApolloScape Dataset
The ApolloScape dataset [137], collected by Baidu, is distinguished by its extensive and detailed annotations, which include semantic dense point clouds, stereo imagery, per-pixel semantic annotations, lane marking annotations, instance segmentation, and 3D car instances. These features are provided for each frame across a variety of urban sites and daytime driving videos, enriched with high-precision location data. ApolloScape’s continuous development has led to a dataset that integrates multi-sensor fusion and offers annotations across diverse scenes and weather conditions.
  • CULane Dataset
The CULane dataset (Figure 10) [138], created by the Multimedia Laboratory at the Chinese University of Hong Kong, is a large-scale dataset designed for road lane detection. Cameras were mounted on six different vehicles, driven by different drivers, to collect more than 55 h of video at various times of day in Beijing, one of the world’s largest and busiest cities. This collection resulted in 133,235 frames of image data. The dataset is uniquely annotated: each frame was manually marked with cubic splines highlighting the four most critical lane markings, while other lane markings are left unannotated. It includes 88,880 frames for training, 9675 for validation, and 34,680 for testing, covering a wide range of traffic scenes including urban areas, rural roads, and highways.
  • DBNet Dataset
The DBNet dataset [139], released by Shanghai Jiao Tong University and Xiamen University, provides a vast collection of LiDAR point clouds and recorded video data captured on vehicles driven by experienced drivers. This dataset is designed for driving behavior strategy learning and evaluating the gap between model predictions and expert driving behaviors.
  • HDD Dataset
The Honda Research Institute Driving Dataset (HDD) (Figure 11) [140], collected by Honda Research Institute USA, contains 104 h of video from sensor-equipped vehicles navigating the San Francisco Bay Area, aiming to study human driver behaviors and interactions with traffic participants. The dataset includes 137 sessions, each averaging 45 min, corresponding to different navigation tasks. Additionally, a novel annotation method breaks down driver behaviors into four layers: goal-oriented, stimulus-driven, causal, and attentional.
  • KAIST Multispectral Dataset
The KAIST Multispectral dataset (Figure 12) [141], collected by the RCV Lab at KAIST, provides large-scale, varied data of drivable areas, capturing scenes in urban, campus, and residential areas under both well-lit and poorly lit conditions. The dataset features GPS measurements, IMU accelerations, and object annotations including type, size, location, and occlusion, addressing the lack of multispectral data suitable for diverse lighting conditions.
  • IDD Dataset
The IDD dataset (Figure 13) [142] was collected by researchers from the International Institute of Information Technology, Hyderabad, Intel Bangalore, and the University of California, San Diego. It is designed to address autonomous navigation challenges in unstructured driving conditions commonly found on Indian roads. It includes 10,004 images from 182 driving sequences, meticulously annotated across 34 categories. Unlike datasets focused on structured road environments, IDD offers a broader range of labels. The dataset introduces a four-level hierarchical labeling system. The first level consists of seven broad categories: drivable, non-drivable, living things, vehicles, roadside objects, far objects, and sky. The second level refines these into 16 more specific labels, such as breaking down “living things” into person, animal, and rider. The third level provides even more detail with 26 labels. The fourth and most detailed level contains 30 labels. Each level in the hierarchy groups ambiguous or hard-to-classify labels into the subsequent, more detailed level. Additionally, the IDD dataset supports research into domain adaptation, few-shot learning, and behavior prediction in road scenes.
  • NightOwls Dataset
The NightOwls dataset [143], collected by researchers from the University of Oxford, Nanjing University of Science and Technology, and the Max Planck Institute for Informatics, addresses the significant gap in pedestrian detection datasets for nighttime conditions, which are typically underrepresented compared to those available for daytime. Captured using automotive industry-standard cameras with a resolution of 1024 × 640, this dataset includes 279,000 frames, each fully annotated with details such as occlusions, poses, difficulty levels, and tracking information. The dataset provides diverse scenarios across three European countries (the UK, Germany, and the Netherlands) under various lighting conditions at dawn and night. It also covers all four seasons and includes adverse weather conditions such as rain and snow.
  • EuroCity Persons Dataset
The EuroCity Persons dataset [144], developed in a collaboration between the Intelligent Vehicles Research Group at Delft University of Technology and the Environment Perception Research Group at Daimler AG, is a diverse pedestrian detection dataset gathered from 31 cities across 12 European countries, annotated with 238,200 human instances in over 47,300 images under different lighting and weather conditions.
  • BDD100K Dataset
The BDD100K dataset (Figure 14) [145], collected by the Berkeley Artificial Intelligence Research lab at the University of California, Berkeley, is a large-scale driving video dataset comprising 100,000 videos covering 10 different tasks. It features extensive annotations for heterogeneous task learning, aimed at facilitating research into multi-task learning and understanding the impact of domain differences on object detection. The dataset is organized into categories based on time of day and scene types, with urban street scenes during daylight hours used for validation. The data were crowd-sourced from drivers, and each annotated image is linked to a video sequence.
  • DR(eye)VE Dataset
The DR(eye)VE dataset [146], collected by the ImageLab group at the University of Modena and Reggio Emilia, is the first publicly released dataset focused on predicting driver attention. It comprises 555,000 frames, each captured using precise eye-tracking devices to record the driver’s gaze, correlated with external views from rooftop cameras. The dataset encompasses diverse environments such as city centers, rural areas, and highways under varying weather conditions (sunny, rainy, cloudy) and times of day (day and night).
  • Argoverse Dataset
The Argoverse dataset [147], jointly collected by Argo AI, Carnegie Mellon University, and the Georgia Institute of Technology, offers extensive mapping details, including lane centerlines, ground elevation, and drivable areas, with 3D object tracking annotations. It features a significant amount of data collected using LiDAR, 360-degree cameras, and stereo cameras. Additionally, the dataset establishes a large-scale trajectory prediction benchmark that captures complex driving scenarios like intersection turns, nearby traffic, and lane changes. This dataset is notable as the first to gather full panoramic, high-frame-rate large-scale data on outdoor vehicles, facilitating new approaches to photometric urban reconstruction using direct methods.
  • nuScenes Dataset
The nuScenes dataset (Figure 15) [148], provided by Motional (Boston, MA, USA), is an all-weather, all-lighting dataset. It is the first dataset collected from autonomous vehicles approved for testing on public roads and includes a complete 360-degree sensor suite (LiDAR, cameras, and radar). The data were gathered in Singapore and Boston, cities known for their busy urban traffic, including 1000 driving scenes, each approximately 20 s long, with nearly 1.4 million RGB images. nuScenes marks a significant advancement in dataset size and complexity, being the first to offer 360-degree coverage from a full sensor suite. It is also the first dataset to include modalities for nighttime and rainy conditions, with annotations for object types, locations, attributes, and scene descriptions.
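As an example of how such multimodal datasets are typically consumed, the sketch below uses the official nuscenes-devkit to load one scene and retrieve the synchronized front-camera frame, LiDAR sweep, and one 3D box annotation for a single keyframe; the data root path and the use of the v1.0-mini split are illustrative assumptions.

```python
from nuscenes.nuscenes import NuScenes

# Assumes the nuscenes-devkit package and the v1.0-mini split downloaded locally.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

scene = nusc.scene[0]                                      # one ~20 s driving scene
sample = nusc.get('sample', scene['first_sample_token'])   # one annotated keyframe (2 Hz)

# Each sample bundles synchronized sensor records and 3D box annotations.
cam_front = nusc.get('sample_data', sample['data']['CAM_FRONT'])
lidar_top = nusc.get('sample_data', sample['data']['LIDAR_TOP'])
first_box = nusc.get('sample_annotation', sample['anns'][0])
print(cam_front['filename'], lidar_top['filename'], first_box['category_name'])
```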
  • Waymo Open Dataset
The Waymo Open dataset [149], released in 2019 by the company Waymo (Mountain View, CA, USA), is a large-scale multimodal camera-LiDAR dataset, surpassing all similar existing datasets in size, quality, and geographic diversity at the time of its release. It includes data collected from Waymo vehicles over millions of miles in diverse environments such as San Francisco, Phoenix, Mountain View, and Kirkland. This dataset captures a wide range of conditions, including day and night, dawn and dusk, and various weather conditions like sunny and rainy days, across both urban and suburban settings.
  • Unsupervised Llamas Dataset
The Unsupervised Llamas dataset [150], collected by Bosch N.A. Research, is a large high-quality dataset for lane markings, including 100,042 marked images from approximately 350 km of highway driving. This dataset utilizes high-precision maps and additional optimization steps to automatically annotate lane markings in images, enhancing annotation accuracy. It offers a variety of information, including 2D and 3D lines, individual dashed lines, pixel-level segmentation, and lane associations.
  • D2-City Dataset
The D2-City dataset [151] is a comprehensive collection of dashcam video footage gathered from vehicles on the DiDi platform, featuring over 10,000 video segments. Approximately 1000 of these videos provide frames annotated with bounding boxes and tracking labels for 12 types of objects, while the key frames of the remaining videos offer detection annotations. This dataset was collected across more than ten cities in China under various weather conditions and traffic scenarios, capturing the diversity and complexity of real-world Chinese traffic environments.
  • Highway Driving Dataset
The Highway Driving dataset (Figure 16) [152], collected by KAIST (Daejeon, Republic of Korea), is a semantic video dataset composed of 20 video sequences captured in highway settings at a frame rate of 30 Hz. It features annotations for ten categories including roads, lanes, sky, barriers, buildings, traffic signs, cars, trucks, vegetation, and unspecified objects. Each frame is densely annotated in both spatial and temporal dimensions, with attention to the coherence between consecutive frames. This dataset addresses the gap in semantic segmentation research, which had previously focused predominantly on images rather than videos.
  • CADC Dataset
The CADC dataset [153], jointly collected by the Toronto-Waterloo AI Institute and the Waterloo Intelligent Systems Engineering Lab at the University of Waterloo, comprises 7000 annotated frames collected under various winter weather conditions, covering 75 driving sequences and over 20 km of driving distance in traffic and snow. This dataset is specifically designed for studying autonomous driving under adverse driving conditions, enabling researchers to test their object detection, localization, and mapping technologies in challenging winter weather scenarios.
  • Mapillary Traffic Sign Dataset
The Mapillary Traffic Sign dataset (Figure 17) [154], collected by Mapillary AB (Malmö, Sweden), is a large-scale, diverse benchmark dataset for traffic signs. It contains 100,000 high-resolution images, with over 52,000 fully annotated and about 48,000 partially annotated images. This dataset includes 313 different traffic sign categories from around the world, covering all inhabited continents with 20% from North America, 20% from Europe, 20% from Asia, 15% from South America, 15% from Oceania, and 10% from Africa. It features data from various settings such as urban and rural roads across different weather conditions, seasons, and times of day.
  • A2D2 Dataset
The A2D2 dataset [155], collected by Audi (Ingolstadt, Germany), consists of images and 3D point clouds, 3D bounding boxes, semantic segmentation, instance segmentation, and data extracted from the vehicle bus, totaling 2.3 TB of data across various road scenes such as highways, rural areas, and urban areas. It includes 41,227 annotated non-sequential frames with semantic segmentation labels and point cloud annotations, of which 12,497 frames also carry 3D bounding boxes for objects within the field of view of the front camera; in addition, the dataset contains 392,556 frames of continuous, unannotated sensor data. This dataset provides rich annotation details, including 3D bounding boxes and semantic and instance segmentation, which are valuable for research and development in multiple visual tasks.
  • nuPlan Dataset
The nuPlan dataset [156], collected by the company Motional (Boston, MA, USA), is a closed-loop planning dataset oriented toward machine-learning-based planners, collected in four cities (Las Vegas, Boston, Pittsburgh, and Singapore), each known for unique driving challenges. This dataset includes 1500 h of driving data and provides LiDAR point clouds, camera images, positioning data, and steering inputs, along with semantic maps and APIs for efficient map querying. The dataset is precisely labeled using an offline perception system to ensure high accuracy across the large-scale dataset. nuPlan aims to serve as a public benchmark to advance machine-learning-based planning methodologies.
  • AutoMine Dataset
The AutoMine dataset (Figure 18) [157], jointly collected by research teams from Hong Kong Baptist University and the Beijing Normal University-Hong Kong Baptist University United International College, is the first dataset designed for autonomous driving in mining environments, featuring real mining scene data. Data were collected using various platforms including SUVs, wide-body trucks, and mining trucks, each equipped with at least one forward-facing LiDAR, an inertial navigation system, and two monocular cameras. This dataset includes over 18 h of driving data from 70 different open-pit mining scenes across five locations in Inner Mongolia and Shaanxi Province, China. It includes 18,000 annotated frames of LiDAR and camera images, primarily used for unmanned 3D perception in various mining environments.
  • AIODrive Dataset
The AIODrive dataset [158], collected by researchers at Carnegie Mellon University, is a synthetic, large-scale dataset that includes eight sensor modalities: RGB, Stereo, Depth, LiDAR, SPAD-LiDAR, radar, IMU, and GPS. It provides annotations for all major perception tasks such as detection, tracking, prediction, segmentation, and depth estimation, corresponding to scenarios that exceed typical driving conditions, including adverse weather, challenging lighting, congested scenes, high-speed driving, traffic rule violations, and vehicle collisions. Additionally, this dataset offers high-density, long-range point cloud data from LiDAR and SPAD-LiDAR sensors for advanced remote sensing research.
  • SHIFT Dataset
The SHIFT dataset [159], collected by the Visual Intelligence and Systems Group at ETH Zurich, is a synthetic, multi-task dataset for autonomous driving, presenting variable conditions such as cloud cover, rain, fog intensity, and the density of vehicles and pedestrians over the course of a day. It offers complete annotations and environmental setups to support a wide range of conditions. SHIFT includes a complete sensor suite and supports a variety of perception tasks including semantic/instance segmentation, monocular/stereo depth regression, 2D/3D object detection, 2D/3D multi-object tracking, optical flow estimation, point cloud alignment, visual odometry, trajectory prediction, and human pose estimation.
  • OPV2V Dataset
The OPV2V dataset (Figure 19) [160], collected by the Mobility Lab at UCLA, is the first large-scale open dataset for vehicle-to-vehicle perception simulation, featuring 73 diverse scenarios, 11,464 frames of LiDAR point clouds and RGB images, and 232,913 annotated 3D vehicle bounding boxes; most data are derived from eight standard towns provided by the CARLA simulator. Additionally, the dataset includes six different types of roads to simulate common driving scenarios encountered in real life. The OPV2V dataset also allows connected automated vehicles to share perception information, providing multiple perspectives on the same obstacle.
  • TAS-NIR Dataset
The TAS-NIR dataset [161], collected by the Institute of Autonomous Systems Technology at the Bundeswehr University Munich, is designed for fine-grained semantic segmentation in unstructured outdoor scenes using Visible and Near-Infrared (VIS-NIR) imaging. It consists of 209 paired VIS-NIR images collected with two cameras during the spring, summer, and autumn seasons across various unstructured outdoor environments. However, due to the limited number of images, the TAS-NIR dataset is not suitable for training algorithms but is instead used for validation and testing purposes.
  • OpenLane-V2 Dataset
The OpenLane-V2 dataset [162], collected by the OpenDriveLab team at the Shanghai AI Laboratory, China, focuses on the structural topology inference of traffic scenes to enhance perception and reasoning about scene structure. This dataset includes three main sub-tasks: 3D lane detection, traffic element recognition, and topology recognition. OpenLane-V2 is developed based on Argoverse 2 [163] and nuScenes, featuring images from 2000 scenes collected under various lighting conditions, weather, and in different cities globally. It includes validated annotations for 2.1 million instances and 1.9 million accurate topological relationships.

3.2. Discussions of Autonomous Driving Datasets

Table 2 provides an overview of the key information for these datasets, summarized by year, region, scenario, sensor, and data type.
The specific applications of the datasets are highlighted as follows:
  • The CADC dataset focuses on autonomous driving in adverse weather conditions.
  • The CULane dataset is designed for lane detection.
  • The KAIST Multispectral dataset is suitable for low-light environments.
  • The DR(eye)VE dataset addresses driver attention prediction.
  • The Caltech Pedestrian, NightOwls, and EuroCity Persons datasets focus on pedestrian detection.
  • The HDD and DBNet datasets are centered on human driver behavior.
  • The Oxford RobotCar dataset emphasizes long-term autonomous driving.
  • The Complex Urban, D2-City, and Cityscapes datasets are aimed at urban scenarios.
  • The KAIST Urban dataset is mainly for SLAM tasks.
  • The Argoverse dataset targets 3D tracking and motion prediction.
  • The Mapillary Traffic Sign dataset focuses on traffic signs.
  • The SYNTHIA, SHIFT, and OPV2V datasets originate from virtual worlds.
The evolution of open-source datasets for autonomous driving continues. From the CamVid dataset introduced in 2009 to the OpenLane-V2 dataset released in 2023, these datasets have grown in complexity, scale, and diversity, expanding from initial perception tasks to decision-making and planning.

4. Virtual Autonomous Driving Competitions

Section 4.1 surveys virtual autonomous driving competitions, focusing primarily on those that utilize the simulators and datasets reviewed in Section 2 and Section 3. Section 4.2 summarizes the key information of these virtual autonomous driving competitions.

4.1. Virtual Competitions

  • Baidu Apollo Starfire Autonomous Driving Competition
The Baidu Apollo Starfire Autonomous Driving Competition (Figure 20) [164], organized by Baidu in collaboration with the China Robotics and Artificial Intelligence Competition Committee, is a technical competition based on industry practice cases. The competition mandates the use of the Baidu Apollo platform to complete a specified number of tasks, each with specific scenarios, requirements, and scoring standards within a time limit. For example, the 2023 trajectory planning task on dead-end roads requires vehicles to replan their route to complete a left turn when reaching a dead-end area; failing to complete the left turn results in a score of zero.
  • China Intelligent and Connected Vehicle Algorithm Competition
China Intelligent and Connected Vehicle Algorithm Competition (CIAC) (Figure 21) [165], jointly initiated by the Chinese Association for Artificial Intelligence, the China Society of Automotive Engineers, the National Innovation Center of Intelligent and Connected Vehicles, and other entities, is an annual event held in collaboration with key universities, research institutions, and technology companies. The competition consists of two categories: perception and regulation/control. The perception tasks typically involve target detection in specific scenarios and provide datasets for participants. The regulation/control tasks, also known as simulation tasks, focus on current technological hotspots and typical application scenarios such as Formula Student scenarios, highway scenarios, urban intersection scenarios, or parking scenarios, and they must be completed on the PanoSim simulation platform.
  • CVPR Autonomous Driving Challenge
The CVPR Autonomous Driving Challenge (Figure 22) [166] is held to explore the tasks and challenges faced by autonomous driving perception and decision-making systems. The competition in 2023 consists of four tracks: the OpenLane Topology Challenge, the Online HD Map Construction Challenge, the 3D Occupancy Grid Prediction Challenge, and the nuPlan Planning Challenge. The OpenLane Topology Challenge requires participants to provide perception results of lanes and traffic elements and their topological relationships using multi-view images covering the entire panoramic view. The Online HD Map Construction Challenge requires models to construct a local high-definition map from multi-view images and submit vectorized outputs. The 3D Occupancy Grid Prediction Challenge requires participants to predict the semantic state of each voxel in a 3D grid based on a large-scale 3D occupancy grid dataset and multi-camera images covering a 360-degree view. The nuPlan Planning Challenge requires participants to plan the trajectory of a vehicle based on the semantic representation of traffic participants and static obstacles from a bird’s eye view.
  • Waymo Open Dataset Challenge
The Waymo Open Dataset Challenge (Figure 23) [167] is organized by Waymo and associated with the CVPR Autonomous Driving Workshop. The challenges use Waymo’s open datasets, featuring various challenge tracks. For instance, the 2023 challenge included four tracks: 2D Video Panoptic Segmentation, Pose Estimation, Motion Prediction, and Simulated Agent. The 2D Video Panoptic Segmentation track involves generating panoptic segmentation labels for each pixel in panoramic videos captured by cameras and ensuring continuous tracking of objects across cameras over time. The Pose Estimation track requires inferring 3D key points for pedestrians and cyclists based on LiDAR and camera image data. The Motion Prediction track involves predicting the future positions of multiple agents in a scene based on the previous second’s data. The Simulated Agent track focuses on designing agent models in virtual scenarios, evaluated based on the similarity of their behavior to human behavior. Winners of each track receive a cash prize and the opportunity to present their work at the CVPR Autonomous Driving Workshop.
  • Argoverse Challenge
The Argoverse Challenge (Figure 24) [169] is an international competition based on the Argoverse dataset. The 2023 challenge included four tracks: multi-agent prediction using the Argoverse 2 dataset; detection, tracking, and prediction of 26 object classes using the Argoverse 2 dataset; evaluation of LiDAR scene flow using the Argoverse 2 dataset; and prediction of 3D occupancy using the Argoverse 2 dataset. Teams with outstanding contributions are invited to present their results at the CVPR 2023 Autonomous Driving Workshop.
  • BDD100K Challenge
The BDD100K Challenge (Figure 25) [170], initiated in 2022, is based on the BDD100K dataset. The competition includes two primary tracks: Multi-Object Tracking (MOT) and Multi-Object Tracking and Segmentation (MOTS). The MOT track involves predicting and tracking objects in video sequences captured by cameras. The MOTS track extends this by also requiring participants to predict segmentation masks for each object. Winners receive cash prizes and the opportunity to present their research at the CVPR Autonomous Driving Workshop.
  • CARLA Autonomous Driving Challenge
The CARLA Autonomous Driving Challenge (Figure 26) [20], initiated in February 2019, aims to advance autonomous driving technology. The competition is conducted on the CARLA simulation platform, where teams must navigate virtual vehicles through a set of predefined routes. Each route requires the agent to drive from start to finish, with various lighting and weather conditions (e.g., daytime, dusk, rain, fog, night) and traffic scenarios (e.g., lane merging, lane changing, traffic signs, traffic lights) introduced along the way. Scores are based on route completion and adherence to traffic rules, with penalties for violations such as running red lights, encroaching on sidewalks, and colliding with pedestrians or vehicles.
  • CARSMOS International Autonomous Driving Algorithm Challenge
The CARSMOS International Autonomous Driving Algorithm Challenge (Figure 27) [171] is organized by the Ministry of Industry and Information Technology of the People’s Republic of China and co-hosted by the OpenAtom Foundation, IEEE, CARSMOS (under the OpenAtom Foundation), and CARLA. The competition consists of two stages: an online simulation competition and a final defense. All online competitions are conducted on the Oasis Sim simulation platform provided by Shenzhen Xinchuang. The maps include various types of urban roads, such as two-lane and four-lane bidirectional roads, straight and curved non-intersection sections, and intersections with traffic signals. Participants are provided with 28 basic scenarios for training and debugging their algorithms, such as vehicle cut-ins from the left under foggy night conditions and intersection passage under clear daylight conditions. Participants can also use scenario generalization functions to generate more scenarios from the provided training scenarios to meet the needs of autonomous driving algorithm development. The test scenarios comprise 10 complex traffic scenarios created by combining 2–4 basic scenarios, evaluating each team’s perception, planning, and decision-making capabilities.
After participants upload their algorithms, the system scores them against a set of evaluation metrics covering scenario completion time, adherence to traffic rules, and the ability to avoid collisions and other infractions such as running red lights and lane departures. Each test scenario is scored, and the final score is the average of all scenario scores. Crucially, failing to reach the endpoint or colliding results in a zero score for that scenario.
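The scoring rules described above can be summarized in the short sketch below, where any scenario that ends in a collision or without reaching the endpoint scores zero and the final result is the unweighted average over all test scenarios. The point values and penalty magnitudes are illustrative assumptions, not the official weighting.

```python
def scenario_score(completed, collided, base_points, time_penalty, infraction_penalty):
    """Score one test scenario; endpoint failure or any collision zeroes the scenario."""
    if not completed or collided:
        return 0.0
    return max(base_points - time_penalty - infraction_penalty, 0.0)

def final_score(results):
    """Final score is the average over all test scenarios."""
    scores = [scenario_score(**r) for r in results]
    return sum(scores) / len(scores)

results = [
    dict(completed=True,  collided=False, base_points=100, time_penalty=12, infraction_penalty=5),
    dict(completed=True,  collided=True,  base_points=100, time_penalty=8,  infraction_penalty=0),
    dict(completed=False, collided=False, base_points=100, time_penalty=0,  infraction_penalty=0),
]
print(round(final_score(results), 1))  # (83 + 0 + 0) / 3 = 27.7
```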
In the final defense, invited teams present their solutions, and the jury ranks them based on their performance in both the online competition and the final defense.
  • The Competition of Trajectory Planning for Automated Parking
The Competition of Trajectory Planning for Automated Parking (TPCAP) (Figure 28) [172] is a global online challenge for autonomous parking trajectory planning, hosted by the Institute of Automation, Chinese Academy of Sciences, Qingdao Institute of Intelligent Industries, Tsinghua University, and Hunan University, under the IEEE Intelligent Transportation Systems Conference. The competition focuses solely on trajectory planning technology, with no restrictions on programming languages, requiring only that participants submit planned trajectory results in a standard format. The competition consists of preliminary and final rounds. The preliminary round features 20 tasks, including parallel, perpendicular, and angled parking scenarios, often with obstacles and narrow parking spaces, each reflecting real-world problems with clear design intentions. Rankings are updated daily during the preliminary round, with participants allowed to upload one trajectory planning solution per day and receive their scores the next day. The final round provides eight tasks, and finalists must upload their solutions within 24 h, with only one submission allowed.
  • OnSite Autonomous Driving Challenge
The OnSite Autonomous Driving Algorithm Challenge (Figure 29) [173] has been held twice based on the OnSite autonomous driving simulation test platform. The first edition was conducted entirely online, with four major tracks: basic highway segment, highway merging and diverging segment, intersection, and comprehensive scenarios (covering the first three tracks). The second edition included regular and playoff stages. The regular stage, conducted online, featured four tracks: urban and highway scenarios, unstructured road scenarios, parking scenarios, and a perception-planning-control comprehensive challenge. Playoffs combined virtual and real vehicle testing. Each track in the regular stage had A, B, and C sets. The A set, released earliest, allowed teams to develop and debug algorithms. The B and C sets were official competition tasks contributing to 70% and 30% of the final score, respectively. The B set was revealed at the start of the competition, while the C set remained undisclosed until after the B set tests. Each track had a daily updated leaderboard, where participants uploaded their Docker-packaged autonomous driving algorithms. Multiple submissions were allowed during the regular stage, with the highest score among all versions displayed on the leaderboard.

4.2. Discussions of Virtual Autonomous Driving Competitions

Table 3 summarizes the key information regarding these virtual autonomous driving competitions, including initial year, simulators and datasets employed, and scenarios involved.

5. Perspectives of Simulators, Datasets, and Competitions

  • Closing the Gap Between Simulators and the Real World
The viability of using autonomous driving simulation platforms to test the performance of autonomous vehicles is contingent upon the consistency of test results with those from real-world scenarios. To ensure the reliability and accuracy of these test results, simulation platforms must strive to minimize discrepancies with real environments, enhancing the realism and precision of the simulations. This involves accurately simulating real-world scenarios. For instance, the impact of rain or snow on road surface friction, reduced visibility in foggy conditions, and changes in vehicle dynamic response due to strong winds must be meticulously replicated in simulations, as these factors directly affect the perception and decision-making capabilities of autonomous systems.
In terms of vehicle modeling, the models must precisely reflect the dynamics and driving performance of actual vehicles, since their accuracy is crucial for evaluating the perception and decision-making capabilities of autonomous systems [174].
Additionally, the simulation should be capable of accurately reproducing the interactions among pedestrians, vehicles, and the environment in real traffic scenarios. For example, scenarios like pedestrians suddenly crossing the road or non-motorized vehicles running red lights, although difficult to predict in reality, have a significant impact on the response speed and decision-making accuracy of autonomous systems. Therefore, simulation platforms should be capable of generating these complex interactions to ensure that autonomous systems can make safe and reliable decisions when confronted with such scenarios.
  • Modeling Sensors Accurately
Modeling sensors is a complex and critical module in simulations, fraught with both technical and practical challenges. One of the primary issues is balancing the trade-off between rendering accuracy and efficiency. Sensor modeling involves intricate tasks such as ray tracing and material simulation, all of which are necessary to replicate the optical characteristics of real-world environments as accurately as possible. However, achieving high rendering speeds often comes at the expense of accuracy. Moreover, the diversity and complexity of sensors exacerbate the challenges of modeling. Sensor modeling faces a “trilemma” where accuracy, efficiency, and generality are difficult to achieve simultaneously. High-precision models can lead to a significant increase in computational load, reducing efficiency, while enhancing generality often compromises the detailed representation of specific sensors. Thus, balancing these demands across multiple sensors in large-scale simulations presents a significant challenge [175].
Additionally, sensor modeling is constrained by the physical properties of target objects and the complexity of environmental parameters. In urban environments, various factors such as buildings, road signs, and vehicles affect sensor signals, especially in situations involving occlusions and reflections, significantly increasing the complexity of modeling. The physical models of sensors must account for the material properties, reflectivity, and other factors of target objects, as the accuracy of these parameters directly influences the credibility of the simulation. However, obtaining these detailed physical parameters is often difficult, particularly in multi-object, multi-scenario environments [176].
Noise processing also poses a major challenge in sensor modeling. In the real world, sensor noise is highly random, and simulations should introduce noise into the ideal physical model in a way that closely mimics the output of real sensors [177]. However, accurately simulating this noise remains an unsolved challenge.
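A minimal noise-injection sketch is shown below: zero-mean Gaussian range noise plus random point dropout applied to ideal LiDAR ranges. Real sensor noise is range-, reflectivity-, and weather-dependent, so this should be read as a placeholder for the far richer noise models that the above discussion calls for.

```python
import numpy as np

def add_lidar_noise(ranges_m, sigma_m=0.02, dropout_prob=0.01, rng=None):
    """Perturb ideal LiDAR ranges with Gaussian noise and randomly drop points."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = ranges_m + rng.normal(0.0, sigma_m, size=ranges_m.shape)
    keep = rng.random(ranges_m.shape) > dropout_prob
    return noisy[keep]

ideal = np.linspace(1.0, 80.0, 1000)   # ideal ranges from a ray-cast LiDAR model
measured = add_lidar_noise(ideal)
print(ideal.size, measured.size)       # a few points are dropped, the rest are perturbed
```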
Resource limitations further complicate sensor modeling. Simulation companies often struggle to obtain the underlying data from sensor manufacturers, which is crucial for building accurate sensor models. The internal complexity of sensors and the black-box nature of their algorithms make it difficult for external developers to fully understand their operational mechanisms, thereby affecting the accuracy of the models.
  • Generating Critical Scenarios
Although simulation platforms can generate a wide range of scenarios, they often involve repetitive testing of numerous low-risk situations, while small-probability, high-risk scenarios are not fully covered. This gap can leave autonomous vehicles unprepared for handling such incidents. To improve the safety of autonomous vehicles, simulation platforms should enhance their coverage of critical scenarios, effectively identifying and constructing low-probability but high-risk event scenarios so that potential flaws in autonomous systems are exposed during testing. For example, simulation platforms should be capable of modeling scenarios such as a vehicle ahead suddenly losing control or a pedestrian unexpectedly crossing a highway.
  • Enhancing Data Diversity
For autonomous vehicles to operate safely and reliably across various scenarios and conditions, datasets must exhibit diversity. Different weather conditions affect vehicle perception and decision-making, necessitating the inclusion of driving data under varying weather conditions in datasets. Autonomous vehicles must also adapt to diverse road environments, including urban streets, rural roads, and highways, each presenting unique driving challenges. Consequently, datasets should encompass driving data from these different road types. Moreover, autonomous vehicles may need to navigate different countries with distinct traffic laws and road signs; to ensure compliance with local traffic rules, datasets should reflect these differences. Additionally, testing the reliability of autonomous vehicles under extreme conditions requires datasets that include scenarios such as heavy rain, fog, and icy roads. Collecting such diverse data is a challenging task, requiring significant human and financial resources and carefully designed collection strategies.
  • Enhancing Privacy Protection
In the process of collecting autonomous driving datasets, privacy concerns have become a significant challenge. These datasets often contain sensitive personal information. For instance, images and video data captured by cameras may include pedestrians’ facial features, license plate numbers, and other identifiable details [178]. Although such information is crucial for the perception and decision-making processes of autonomous driving systems, it also poses a risk of being leaked or misused, potentially leading to identity recognition, tracking, or even more severe privacy violations. Additionally, data collection typically occurs on public roads and at various times and locations, meaning that the datasets may contain information that reveals specific individuals’ activity patterns, daily routines, travel habits, or even private activities. Furthermore, the collection of geolocation information introduces additional privacy risks. While geolocation data are vital for navigation and route planning, when combined with other sensory information, they can disclose sensitive details such as an individual’s residence, workplace, and other private locations.
Data anonymization and de-identification techniques are commonly used to address privacy issues. However, in autonomous driving systems, complete anonymization and de-identification might result in the loss of valuable information, which can negatively impact algorithm training and system performance. Balancing the protection of personal privacy while maintaining the validity and usefulness of the data remains a complex technical challenge.
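As a simple illustration of de-identification applied before data release, the sketch below blurs detected face regions in a camera frame using OpenCV’s bundled Haar cascade. Production pipelines generally rely on stronger learned detectors and also mask license plates; the file names are hypothetical.

```python
import cv2

# OpenCV ships a pretrained Haar cascade; its detection quality here is illustrative only.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def anonymize_faces(image_bgr):
    """Blur detected face regions before a frame enters a published dataset."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = image_bgr[y:y + h, x:x + w]
        image_bgr[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return image_bgr

frame = cv2.imread('frame_000123.png')  # hypothetical dataset frame
if frame is not None:
    cv2.imwrite('frame_000123_anon.png', anonymize_faces(frame))
```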
  • Enhancing Competitiveness in Competitions
Current autonomous driving competitions based on simulation platforms predominantly focus on evaluating single-vehicle algorithms, with virtual vehicles completing tasks for scoring and ranking. This format lacks competitiveness and engagement, limiting both the promotion of autonomous driving technology and the scope of its evaluation. A significant challenge for simulation-based autonomous driving competitions is designing formats that highlight multi-vehicle interactions and competition, and creating rules that emphasize the strategic decision-making advantages of autonomous vehicles [179].
  • Optimizing Algorithm Reliability Verification
The reliability verification of algorithms in autonomous driving competitions faces two main challenges. Firstly, the design of the competition tasks is crucial. The challenge lies in accurately and comprehensively simulating the various technical difficulties that autonomous vehicles may encounter in real-world road environments. This requires the competition tasks to cover diverse road conditions, complex traffic scenarios, and the ability to handle emergencies. Only in this way can the robustness and adaptability of autonomous driving algorithms be thoroughly tested during the competition. Secondly, the design of evaluation metrics for algorithms is equally important. A scientific and reasonable evaluation system needs to consider multiple dimensions, including but not limited to safety, efficiency, comfort, and accuracy of path planning. These metrics should not only have clear quantitative standards to objectively and fairly assess the performance of different algorithms but also reflect the algorithms’ overall performance in handling various complex situations. By addressing these challenges, autonomous driving competitions can more effectively validate the reliability and practicality of the algorithms developed.

6. Conclusions

This survey provides a comprehensive overview of 22 mainstream autonomous driving simulators, 35 open-source datasets, and 10 virtual autonomous driving competitions. The simulators are surveyed in terms of their key information, followed by an analysis of their accessibility, as well as the physics and rendering engines they employ. Based on this analysis, specific recommendations are provided for selecting either open-source or proprietary simulators and choosing appropriate physics and rendering engines. Furthermore, the paper examines three key functions of comprehensive autonomous driving simulators: scenario generation, sensor simulation, and the integration of vehicle dynamics simulation.
In addition, the datasets are classified according to task requirements into categories such as perception, mapping, prediction, and planning, with a detailed review of their applicable scenarios. This survey also summarizes key information about the simulators, datasets, and scenarios involved in virtual autonomous driving competitions.
Finally, this survey discusses the perspectives for autonomous driving simulators, datasets, and virtual competitions. While simulators offer a safe and efficient environment for validating autonomous driving systems, they are still constrained by simulation accuracy and the difficulty of replicating real-world dynamics. Autonomous driving datasets provide researchers with extensive data for training models and validating algorithms, but challenges persist in collecting more diverse data and addressing privacy concerns during data collection. Virtual autonomous driving competitions positively contribute to the development of simulators and datasets, but further exploration is needed to enhance the competitiveness of these competitions and improve the algorithm reliability verification methods.

Author Contributions

Conceptualization, T.Z. and X.W.; methodology, T.Z. and H.L.; investigation, H.L., W.W. and X.W.; writing—original draft preparation, H.L. and W.W.; writing—review and editing, T.Z. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Yuelushan Center for Industrial Innovation, grant number 2023YCII0126.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, B.; Ouyang, Y.; Li, L.; Zhang, Y. Autonomous Driving on Curvy Roads without Reliance on Frenet Frame: A Cartesian-Based Trajectory Planning Method. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15729–15741. [Google Scholar] [CrossRef]
  2. Bimbraw, K. Autonomous Cars: Past, Present and Future a Review of the Developments in the Last Century, the Present Scenario and the Expected Future of Autonomous Vehicle Technology. In Proceedings of the 2015 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Colmar, France, 21–23 July 2015; IEEE: New York, NY, USA, 2015; Volume 1, pp. 191–198. [Google Scholar]
  3. Bathla, G.; Bhadane, K.; Singh, R.K.; Kumar, R.; Aluvalu, R.; Krishnamurthi, R.; Kumar, A.; Thakur, R.N.; Basheer, S. Autonomous Vehicles and Intelligent Automation: Applications, Challenges, and Opportunities. Mob. Inf. Syst. 2022, 2022, 7632892. [Google Scholar] [CrossRef]
  4. Wang, J.; Zhang, L.; Huang, Y.; Zhao, J. Safety of Autonomous Vehicles. J. Adv. Transp. 2020, 2020, 8867757. [Google Scholar] [CrossRef]
  5. Alghodhaifi, H.; Lakshmanan, S. Autonomous Vehicle Evaluation: A Comprehensive Survey on Modeling and Simulation Approaches. IEEE Access 2021, 9, 151531–151566. [Google Scholar] [CrossRef]
  6. Kalra, N.; Paddock, S.M. Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability? Transp. Res. Part A Policy Pract. 2016, 94, 182–193. [Google Scholar] [CrossRef]
  7. Feng, S.; Sun, H.; Yan, X.; Zhu, H.; Zou, Z.; Shen, S.; Liu, H.X. Dense Reinforcement Learning for Safety Validation of Autonomous Vehicles. Nature 2023, 615, 620–627. [Google Scholar] [CrossRef]
  8. Huang, Z.; Arief, M.; Lam, H.; Zhao, D. Synthesis of Different Autonomous Vehicles Test Approaches. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; IEEE: New York, NY, USA, 2018; pp. 2000–2005. [Google Scholar]
  9. Huang, W.; Wang, K.; Lv, Y.; Zhu, F. Autonomous Vehicles Testing Methods Review. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; IEEE: New York, NY, USA, 2016; pp. 163–168. [Google Scholar]
  10. Schöner, H.-P. Simulation in Development and Testing of Autonomous Vehicles. In 18 Internationales Stuttgarter Symposium; Bargende, M., Reuss, H.-C., Wiedemann, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2018; pp. 1083–1095. [Google Scholar]
  11. Li, B.; Zhang, Y.; Zhang, T.; Acarman, T.; Ouyang, Y.; Li, L.; Dong, H.; Cao, D. Embodied Footprints: A Safety-Guaranteed Collision-Avoidance Model for Numerical Optimization-Based Trajectory Planning. IEEE Trans. Intell. Transp. Syst. 2023, 25, 2046–2060. [Google Scholar] [CrossRef]
  12. Stadler, C.; Montanari, F.; Baron, W.; Sippl, C.; Djanatliev, A. A Credibility Assessment Approach for Scenario-Based Virtual Testing of Automated Driving Functions. IEEE Open J. Intell. Transp. Syst. 2022, 3, 45–60. [Google Scholar] [CrossRef]
  13. Li, Y.; Yuan, W.; Zhang, S.; Yan, W.; Shen, Q.; Wang, C.; Yang, M. Choose Your Simulator Wisely: A Review on Open-Source Simulators for Autonomous Driving. IEEE Trans. Intell. Veh. 2024, 9, 4861–4876. [Google Scholar] [CrossRef]
  14. Chance, G.; Ghobrial, A.; Lemaignan, S.; Pipe, T.; Eder, K. An Agency-Directed Approach to Test Generation for Simulation-Based Autonomous Vehicle Verification. In Proceedings of the 2020 IEEE International Conference on Artificial Intelligence Testing (AITest), Oxford, UK, 3–6 August 2020; IEEE: New York, NY, USA, 2020; pp. 31–38. [Google Scholar]
  15. Li, L.; Huang, W.-L.; Liu, Y.; Zheng, N.-N.; Wang, F.-Y. Intelligence Testing for Autonomous Vehicles: A New Approach. IEEE Trans. Intell. Veh. 2016, 1, 158–166. [Google Scholar] [CrossRef]
  16. Chen, L.; Li, Y.; Huang, C.; Li, B.; Xing, Y.; Tian, D.; Li, L.; Hu, Z.; Na, X.; Li, Z. Milestones in Autonomous Driving and Intelligent Vehicles: Survey of Surveys. IEEE Trans. Intell. Veh. 2022, 8, 1046–1056. [Google Scholar] [CrossRef]
  17. Wang, J.; Wang, X.; Shen, T.; Wang, Y.; Li, L.; Tian, Y.; Yu, H.; Chen, L.; Xin, J.; Wu, X. Parallel Vision for Long-Tail Regularization: Initial Results from IVFC Autonomous Driving Testing. IEEE Trans. Intell. Veh. 2022, 7, 286–299. [Google Scholar] [CrossRef]
  18. Wang, Y.; Han, Z.; Xing, Y.; Xu, S.; Wang, J. A Survey on Datasets for the Decision Making of Autonomous Vehicles. IEEE Intell. Transp. Syst. Mag. 2024, 16, 23–40. [Google Scholar] [CrossRef]
  19. Zhang, T.; Sun, Y.; Wang, Y.; Li, B.; Tian, Y.; Wang, F.-Y. A Survey of Vehicle Dynamics Modeling Methods for Autonomous Racing: Theoretical Models, Physical/Virtual Platforms, and Perspectives. IEEE Trans. Intell. Veh. 2024, 9, 4312–4334. [Google Scholar] [CrossRef]
  20. Rosero, L.A.; Gomes, I.P.; da Silva, J.A.R.; dos Santos, T.C.; Nakamura, A.T.M.; Amaro, J.; Wolf, D.F.; Osório, F.S. A Software Architecture for Autonomous Vehicles: Team Lrm-b Entry in the First Carla Autonomous Driving Challenge. arXiv 2020, arXiv:2010.12598. [Google Scholar]
  21. Leathrum, J.F.; Mielke, R.R.; Shen, Y.; Johnson, H. Academic/Industry Educational Lab for Simulation-Based Test & Evaluation of Autonomous Vehicles. In Proceedings of the 2018 Winter Simulation Conference (WSC), Gothenburg, Sweden, 9–12 December 2018; IEEE: New York, NY, USA, 2018; pp. 4026–4037. [Google Scholar]
  22. Rosique, F.; Navarro, P.J.; Fernández, C.; Padilla, A. A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research. Sensors 2019, 19, 648. [Google Scholar] [CrossRef] [PubMed]
  23. Yang, G.; Xue, Y.; Meng, L.; Wang, P.; Shi, Y.; Yang, Q.; Dong, Q. Survey on Autonomous Vehicle Simulation Platforms. In Proceedings of the 2021 8th International Conference on Dependable Systems and Their Applications (DSA), Yinchuan, China, 11–12 September 2021; IEEE: New York, NY, USA, 2021; pp. 692–699. [Google Scholar]
  24. Kaur, P.; Taghavi, S.; Tian, Z.; Shi, W. A Survey on Simulators for Testing Self-Driving Cars. In Proceedings of the 2021 Fourth International Conference on Connected and Autonomous Driving (MetroCAD), Detroit, MI, USA, 28–29 April 2021; IEEE: New York, NY, USA, 2021; pp. 62–70. [Google Scholar]
  25. Zhou, J.; Zhang, Y.; Guo, S.; Guo, Y. A Survey on Autonomous Driving System Simulators. In Proceedings of the 2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), Charlotte, NC, USA, 31 October–3 November 2022; IEEE: New York, NY, USA, 2022; pp. 301–306. [Google Scholar]
  26. Janai, J.; Güney, F.; Behl, A.; Geiger, A. Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art. Found. Trends® Comput. Graph. Vis. 2020, 12, 1–308. [Google Scholar] [CrossRef]
  27. Yin, H.; Berger, C. When to Use What Data Set for Your Self-Driving Car Algorithm: An Overview of Publicly Available Driving Datasets. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; IEEE: New York, NY, USA, 2017; pp. 1–8. [Google Scholar]
  28. Kang, Y.; Yin, H.; Berger, C. Test Your Self-Driving Algorithm: An Overview of Publicly Available Driving Datasets and Virtual Testing Environments. IEEE Trans. Intell. Veh. 2019, 4, 171–185. [Google Scholar] [CrossRef]
  29. Guo, J.; Kurup, U.; Shah, M. Is It Safe to Drive? An Overview of Factors, Metrics, and Datasets for Driveability Assessment in Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2019, 21, 3135–3151. [Google Scholar] [CrossRef]
  30. Liu, M.; Yurtsever, E.; Zhou, X.; Fossaert, J.; Cui, Y.; Zagar, B.L.; Knoll, A.C. A Survey on Autonomous Driving Datasets: Data Statistic, Annotation, and Outlook. arXiv 2024, arXiv:2401.01454. [Google Scholar]
  31. Li, B.; Fang, Y.; Ma, S.; Wang, H.; Wang, Y.; Li, X.; Zhang, T.; Bian, X.; Wang, F.-Y. Toward Fair and Thrilling Autonomous Racing: Governance Rules and Performance Metrics for Autonomous One. IEEE Trans. Intell. Veh. 2023, 8, 3974–3982. [Google Scholar] [CrossRef]
  32. Betz, J.; Zheng, H.; Liniger, A.; Rosolia, U.; Karle, P.; Behl, M.; Krovi, V.; Mangharam, R. Autonomous Vehicles on the Edge: A Survey on Autonomous Vehicle Racing. IEEE Open J. Intell. Transp. Syst. 2022, 3, 458–488. [Google Scholar] [CrossRef]
  33. Shah, S.; Dey, D.; Lovett, C.; Kapoor, A. Airsim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles. In Proceedings of the Field and Service Robotics: Results of the 11th International Conference, Zurich, Switzerland, 12–15 September 2017; Springer: Berlin/Heidelberg, Germany, 2018; pp. 621–635. [Google Scholar]
  34. Autoware—The World’s Leading Open-Source Software Project for Autonomous Driving. Available online: https://github.com/autowarefoundation/autoware (accessed on 2 July 2024).
  35. Feng, M.; Zhang, H. Application of Baidu Apollo Open Platform in a Course of Control Simulation Experiments. Comput. Appl. Eng. Educ. 2022, 30, 892–906. [Google Scholar] [CrossRef]
  36. CARLA Simulator. Available online: https://carla.org/ (accessed on 2 July 2024).
  37. Cook, D.; Vardy, A.; Lewis, R. A Survey of AUV and Robot Simulators for Multi-Vehicle Operations. In Proceedings of the 2014 IEEE/OES Autonomous Underwater Vehicles (AUV), Oxford, MS, USA, 6–9 October 2014; IEEE: New York, NY, USA, 2014; pp. 1–8. [Google Scholar]
  38. 51Sim-One. Available online: https://wdp.51aes.com/news/27 (accessed on 2 June 2024).
  39. Rong, G.; Shin, B.H.; Tabatabaee, H.; Lu, Q.; Lemke, S.; Možeiko, M.; Boise, E.; Uhm, G.; Gerow, M.; Mehta, S. Lgsvl Simulator: A High Fidelity Simulator for Autonomous Driving. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; IEEE: New York, NY, USA, 2020; pp. 1–6. [Google Scholar]
  40. Gulino, C.; Fu, J.; Luo, W.; Tucker, G.; Bronstein, E.; Lu, Y.; Harb, J.; Pan, X.; Wang, Y.; Chen, X. Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research. In Proceedings of the 37th Conference on Neural Information Processing Systems Track on Datasets and Benchmarks, New Orleans, LA, USA, 10–16 December 2023. [Google Scholar]
  41. Autoware. Available online: https://autoware.org/ (accessed on 2 June 2024).
  42. Baidu Apollo. Available online: https://apollo.baidu.com/ (accessed on 2 June 2024).
  43. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. In Proceedings of the 1st Conference on Robot Learning (CoRL), Mountain View, CA, USA, 13–15 November 2017; pp. 1–16. [Google Scholar]
  44. Koenig, N.; Howard, A. Design and Use Paradigms for Gazebo, an Open-Source Multi-Robot Simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), Sendai, Japan, 28 September–2 October 2004; IEEE: New York, NY, USA, 2004; Volume 3, pp. 2149–2154. [Google Scholar]
  45. DART. Available online: https://dartsim.github.io./ (accessed on 2 July 2024).
  46. Smith, R. Open Dynamics Engine. 2005. Available online: https://ode.org/ode-latest-userguide.pdf (accessed on 28 August 2024).
  47. Bullet. Available online: https://github.com/bulletphysics/bullet3 (accessed on 2 July 2024).
  48. Sovani, S. Simulation Accelerates Development of Autonomous Driving. ATZ Worldw. 2017, 119, 24–29. [Google Scholar] [CrossRef]
  49. ANSYS Autonomous Driving Simulation Verification Platform. Available online: http://www.app17.com/supply/offerdetail/9574940.html (accessed on 22 June 2024).
  50. Ansys Autonomy: Designing and Validating Safe Automated Driving Systems. Available online: https://www.ansys.com/products/av-simulation/ansys-avxcelerate-autonomy (accessed on 2 July 2024).
  51. Welcome to Simulation City, the Virtual World Where Waymo Tests Its Autonomous Vehicles. Available online: https://www.theverge.com/2021/7/6/22565448/waymo-simulation-city-autonomous-vehicle-testing-virtual (accessed on 23 June 2024).
  52. Autonomous and ADAS Vehicles Simulation Software. Available online: https://www.cognata.com/simulation/ (accessed on 23 June 2024).
  53. CarSim Overview. Available online: https://www.carsim.com/products/carsim/ (accessed on 23 June 2024).
  54. CarMaker. Available online: https://www.ipg-automotive.com/cn/products-solutions/software/carmaker/ (accessed on 2 July 2024).
  55. Why Is Huawei’s Autonomous Driving Cloud Service Named “Huawei Octopus”? Available online: https://baijiahao.baidu.com/s?id=1660135510834912717&wfr=spider&for=pc (accessed on 23 June 2024).
  56. Automated Driving Toolbox. Available online: https://ww2.mathworks.cn/products/automated-driving.html (accessed on 13 August 2024).
  57. NVIDIA DRIVE Constellation. Available online: https://www.nvidia.com/content/dam/en-zz/Solutions/self-driving-cars/drive-constellation/nvidia-drive-constellation-datasheet-2019-oct.pdf (accessed on 23 June 2024).
  58. OASIS SIM Simulation Platform. Available online: https://www.synkrotron.ai/sim.html (accessed on 23 June 2024).
  59. PanoSim. Available online: https://www.panosim.com/ (accessed on 30 June 2024).
  60. Simcenter Prescan Software Simulation Platform. Available online: https://plm.sw.siemens.com/en-US/simcenter/autonomous-vehicle-solutions/prescan/ (accessed on 30 June 2024).
  61. PDGaiA. Available online: http://www.pd-automotive.com/pc/#/ (accessed on 30 June 2024).
  62. SCANeR Studio. Available online: https://www.avsimulation.com/scaner/ (accessed on 30 June 2024).
  63. TAD Sim 2.0. Available online: https://tadsim.com/ (accessed on 30 June 2024).
  64. Li, W.; Pan, C.W.; Zhang, R.; Ren, J.P.; Ma, Y.X.; Fang, J.; Yan, F.L.; Geng, Q.C.; Huang, X.Y.; Gong, H.J. AADS: Augmented Autonomous Driving Simulation Using Data-Driven Algorithms. Sci. Robot 2019, 4, eaaw0863. [Google Scholar] [CrossRef]
  65. Yao, S.; Zhang, J.; Hu, Z.; Wang, Y.; Zhou, X. Autonomous-driving Vehicle Test Technology Based on Virtual Reality. J. Eng. 2018, 2018, 1768–1771. [Google Scholar] [CrossRef]
  66. Zhang, S.; Li, G.; Wang, L. Trajectory Tracking Control of Driverless Racing Car under Extreme Conditions. IEEE Access 2022, 10, 36778–36790. [Google Scholar] [CrossRef]
  67. Hong, C.J.; Aparow, V.R. System Configuration of Human-in-the-Loop Simulation for Level 3 Autonomous Vehicle Using IPG CarMaker. In Proceedings of the 2021 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS), Bandung, Indonesia, 23–24 November 2021; IEEE: New York, NY, USA, 2021; pp. 215–221. [Google Scholar]
  68. HUAWEI Octopus. Available online: https://developer.huaweicloud.com/techfield/car/oct.html (accessed on 30 June 2024).
  69. MATLAB; The MathWorks, Inc.: Natick, MA, USA, 2012. Available online: https://itb.biologie.hu-berlin.de/~kempter/Teaching/2003_SS/gettingstarted.pdf (accessed on 27 August 2024).
  70. Vergara, P.F.E.; Malla, E.E.G.; Paillacho, E.X.M.; Arévalo, F.D.M. Object Detection in a Virtual Simulation Environment with Automated Driving Toolbox. In Proceedings of the 2021 16th Iberian Conference on Information Systems and Technologies (CISTI), Chaves, Portugal, 23–26 June 2021; IEEE: New York, NY, USA, 2021; pp. 1–6. [Google Scholar]
  71. Beale, M.H.; Hagan, M.T.; Demuth, H.B. Deep Learning Toolbox. In R2018b User’s Guide; The MathWorks, Inc.: Natick, MA, USA, 2018. [Google Scholar]
  72. Vukić, M.; Grgić, B.; Dinčir, D.; Kostelac, L.; Marković, I. Unity Based Urban Environment Simulation for Autonomous Vehicle Stereo Vision Evaluation. In Proceedings of the 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 20–24 May 2019; IEEE: New York, NY, USA, 2019; pp. 949–954. [Google Scholar]
  73. Zhang, L.; Du, Z.; Zhao, S.; Zhai, Y.; Shen, Y. Development and Verification of Traffic Confrontation Simulation Test Platform Based on PanoSim. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 12–14 June 2020; IEEE: New York, NY, USA, 2020; Volume 1, pp. 1814–1818. [Google Scholar]
  74. Ortega, J.; Lengyel, H.; Szalay, Z. Overtaking Maneuver Scenario Building for Autonomous Vehicles with PreScan Software. Transp. Eng. 2020, 2, 100029. [Google Scholar] [CrossRef]
  75. Kusari, A.; Li, P.; Yang, H.; Punshi, N.; Rasulis, M.; Bogard, S.; LeBlanc, D.J. Enhancing SUMO Simulator for Simulation Based Testing and Validation of Autonomous Vehicles. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 5–9 June 2022; IEEE: New York, NY, USA, 2022; pp. 829–835. [Google Scholar]
  76. Barbour, E.; McFall, K. Autonomous Vehicle Simulation Using Open Source Software Carla. J. UAB ECTC 2019, 18, 51–57. [Google Scholar]
  77. Staranowicz, A.; Mariottini, G.L. A Survey and Comparison of Commercial and Open-Source Robotic Simulator Software. In Proceedings of the 4th International Conference on PErvasive Technologies Related to Assistive Environments, Crete, Greece, 25–27 May 2011; pp. 1–8. [Google Scholar]
  78. Leading by Example. Available online: https://autoware.org/case-studies/ (accessed on 2 July 2024).
  79. Arslan, E.; Yıldırım, Ş. ODE (Open Dynamics Engine) Based Walking Control Algorithm for Six Legged Robot. J. New Results Sci. 2018, 7, 35–46. [Google Scholar]
  80. Hsu, J.M.; Peters, S.C. Extending Open Dynamics Engine for the DARPA Virtual Robotics Challenge. In Proceedings of the International Conference on Simulation, Modeling, and Programming for Autonomous Robots, Bergamo, Italy, 20–23 October 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 37–48. [Google Scholar]
  81. Drumwright, E.; Shell, D.A. Extensive Analysis of Linear Complementarity Problem (Lcp) Solver Performance on Randomly Generated Rigid Body Contact Problems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; IEEE: New York, NY, USA, 2012; pp. 5034–5039. [Google Scholar]
  82. Liu, T.; Wang, M.Y. Computation of Three-Dimensional Rigid-Body Dynamics with Multiple Unilateral Contacts Using Time-Stepping and Gauss-Seidel Methods. IEEE Trans. Autom. Sci. Eng. 2005, 2, 19–31. [Google Scholar] [CrossRef]
  83. Shapley, L.S. A Note on the Lemke-Howson Algorithm. In Pivoting and Extension: In Honor of AW Tucker; Springer: Berlin/Heidelberg, Germany, 2009; pp. 175–189. [Google Scholar]
  84. Izadi, E.; Bezuijen, A. Simulating Direct Shear Tests with the Bullet Physics Library: A Validation Study. PLoS ONE 2018, 13, e0195073. [Google Scholar] [CrossRef] [PubMed]
  85. Catto, E. Fast and Simple Physics Using Sequential Impulses. In Proceedings of the Game Developer Conference, San Jose, CA, USA, 20–24 March 2006. [Google Scholar]
  86. Cottle, R.W.; Dantzig, G.B. Complementary Pivot Theory of Mathematical Programming. Linear Algebra Its Appl. 1968, 1, 103–125. [Google Scholar] [CrossRef]
  87. NVIDIA PhysX. Available online: https://developer.nvidia.com/physx-sdk (accessed on 2 July 2024).
  88. Rieffel, J.; Saunders, F.; Nadimpalli, S.; Zhou, H.; Hassoun, S.; Rife, J.; Trimmer, B. Evolving Soft Robotic Locomotion in PhysX. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, Montreal, QC, Canada, 8–12 July 2009; pp. 2499–2504. [Google Scholar]
  89. Hussain, A.; Shakeel, H.; Hussain, F.; Uddin, N.; Ghouri, T.L. Unity Game Development Engine: A Technical Survey. Univ. Sindh J. Inf. Commun. Technol. 2020, 4, 73–81. [Google Scholar]
  90. UNIGINE: Real-Time 3D Engine. Available online: https://unigine.com/ (accessed on 2 July 2024).
  91. Unigine Physic. Available online: https://developer.unigine.com/ch/docs/latest/ (accessed on 2 July 2024).
  92. Overview of Chaos Physics. Available online: https://docs.unrealengine.com/4.27/zh-CN/InteractiveExperiences/Physics/ChaosPhysics/Overview/ (accessed on 2 July 2024).
  93. Yoon, J.; Son, B.; Lee, D. Comparative Study of Physics Engines for Robot Simulation with Mechanical Interaction. Appl. Sci. 2023, 13, 680. [Google Scholar] [CrossRef]
  94. Erez, T.; Tassa, Y.; Todorov, E. Simulation Tools for Model-Based Robotics: Comparison of Bullet, Havok, Mujoco, Ode and Physx. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; IEEE: New York, NY, USA, 2015; pp. 4397–4404. [Google Scholar]
  95. Shafikov, A.; Tsoy, T.; Lavrenov, R.; Magid, E.; Li, H.; Maslak, E.; Schiefermeier-Mach, N. Medical Palpation Autonomous Robotic System Modeling and Simulation in Ros/Gazebo. In Proceedings of the 2020 13th International Conference on Developments in eSystems Engineering (DeSE), Virtual, 14–17 December 2020; IEEE: New York, NY, USA, 2020; pp. 200–205. [Google Scholar]
  96. Fields, M.; Brewer, R.; Edge, H.L.; Pusey, J.L.; Weller, E.; Patel, D.G.; DiBerardino, C.A. Simulation Tools for Robotics Research and Assessment. In Proceedings of the Unmanned Systems Technology XVIII, Baltimore, MD, USA, 20–21 April 2016; SPIE: Bellingham, WA, USA, 2016; Volume 9837, pp. 156–171. [Google Scholar]
  97. Kumar, K. Learning Physics Modeling with PhysX; Packt Publishing: Birmingham, UK, 2013; ISBN 1849698147. [Google Scholar]
  98. Maciel, A.; Halic, T.; Lu, Z.; Nedel, L.P.; De, S. Using the PhysX Engine for Physics-based Virtual Surgery with Force Feedback. Int. J. Med. Robot. Comput. Assist. Surg. 2009, 5, 341–353. [Google Scholar] [CrossRef] [PubMed]
  99. CarMaker 10.0 Release By IPG Automotive. Available online: https://unigine.com/news/2021/carmaker-10-0-release-by-ipg-automotive (accessed on 2 July 2024).
  100. Šmíd, A. Comparison of Unity and Unreal Engine. Czech Technical University in Prague, Prague, Czech Republic, 2017; pp. 41–61. Available online: https://core.ac.uk/download/pdf/84832291.pdf (accessed on 27 August 2024).
  101. Unreal Engine: The Most Powerful Real-Time 3D Creation Tool. Available online: https://www.unrealengine.com/zh-CN (accessed on 2 July 2024).
  102. What Is Real-Time Ray Tracing, and Why Should You Care? Available online: https://www.unrealengine.com/en-US/explainers/ray-tracing/what-is-real-time-ray-tracing (accessed on 2 July 2024).
  103. Unity Real-Time Development Platform. Available online: https://unity.com/cn (accessed on 2 July 2024).
  104. OGRE—Open Source 3D Graphics Engine. Available online: https://www.ogre3d.org/ (accessed on 2 July 2024).
  105. NVIDIA OptiX Ray Tracing Engine. Available online: https://developer.nvidia.com/rtx/ray-tracing/optix (accessed on 2 July 2024).
  106. Shergin, D. Unigine Engine Render: Flexible Cross-Api Technologies. In ACM SIGGRAPH 2012 Computer Animation Festival; Association for Computing Machinery: New York, NY, USA, 2012; p. 85. [Google Scholar]
  107. Sanders, A. An Introduction to Unreal Engine 4; AK Peters/CRC Press: Natick, MA, USA, 2016; ISBN 1315382555. [Google Scholar]
  108. Haas, J. A History of the Unity Game Engine. 2014. Available online: http://www.daelab.cn/wp-content/uploads/2023/09/A_History_of_the_Unity_Game_Engine.pdf (accessed on 27 August 2024).
  109. Feng, Y.; Xia, Z.; Guo, A.; Chen, Z. Survey of Testing Techniques of Autonomous Driving Software. J. Image Graph. 2021, 26, 13–27. [Google Scholar]
  110. Zhong, Z.; Tang, Y.; Zhou, Y.; Neves, V.D.O.; Liu, Y.; Ray, B. A Survey on Scenario-Based Testing for Automated Driving Systems in High-Fidelity Simulation. arXiv 2021, arXiv:2112.00964. [Google Scholar]
  111. Tancik, M.; Casser, V.; Yan, X.; Pradhan, S.; Mildenhall, B.; Srinivasan, P.P.; Barron, J.T.; Kretzschmar, H. Block-Nerf: Scalable Large Scene Neural View Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–20 June 2022; pp. 8248–8258. [Google Scholar]
  112. Li, Y.; Guan, H.; Jia, X.; Duan, C. Decision-Making Model for Dynamic Scenario Vehicles in Autonomous Driving Simulations. Appl. Sci. 2023, 13, 8515. [Google Scholar] [CrossRef]
  113. Wen, M.; Park, J.; Sung, Y.; Park, Y.W.; Cho, K. Virtual Scenario Simulation and Modeling Framework in Autonomous Driving Simulators. Electronics 2021, 10, 694. [Google Scholar] [CrossRef]
  114. Li, B.; Wang, Y.; Ma, S.; Bian, X.; Li, H.; Zhang, T.; Li, X.; Zhang, Y. Adaptive Pure Pursuit: A Real-Time Path Planner Using Tracking Controllers to Plan Safe and Kinematically Feasible Paths. IEEE Trans. Intell. Veh. 2023, 8, 4155–4168. [Google Scholar] [CrossRef]
  115. Duan, J.; Yu, S.; Tan, H.L.; Zhu, H.; Tan, C. A Survey of Embodied Ai: From Simulators to Research Tasks. IEEE Trans. Emerg. Top Comput. Intell. 2022, 6, 230–244. [Google Scholar] [CrossRef]
  116. How Tencent TAD Sim Can Improve Gaming Productivity. Available online: https://www.leiphone.com/category/transportation/QyMdqw9BMeQdAvwN.html (accessed on 13 August 2024).
  117. Deng, W.; Zeng, S.; Zhao, Q.; Dai, J. Modelling and Simulation of Sensor-Guided Autonomous Driving. Int. J. Veh. Des. 2011, 56, 341–366. [Google Scholar] [CrossRef]
  118. Negrut, D.; Serban, R.; Elmquist, A. Physics-Based Sensor Models for Virtual Simulation of Connected and Autonomous Vehicles. 2020. Available online: https://rosap.ntl.bts.gov/view/dot/60196 (accessed on 27 August 2024).
  119. Blasband, C.; Bleak, J.; Schultz, G. High Fidelity, Physics-based Sensor Simulation for Military and Civil Applications. Sens. Rev. 2004, 24, 151–155. [Google Scholar] [CrossRef]
  120. PilotD Automotive Provides Sensor Physical Level Simulation for Autonomous Driving. Available online: https://letschuhai.com/automated-driving-system-development-and-validation-services (accessed on 14 August 2024).
  121. Ansys AVxcelerate Sensors Test and Validate Sensor Perception for Autonomous Vehicles. Available online: https://www.ansys.com/products/av-simulation/ansys-avxcelerate-sensors#tab1-2 (accessed on 14 August 2024).
  122. Autonomous Driving Sensor Development. Available online: https://www.ansys.com/zh-cn/applications/autonomous-sensor-development (accessed on 14 August 2024).
  123. Lindenmaier, L.; Aradi, S.; Bécsi, T.; Törő, O.; Gáspár, P. Object-Level Data-Driven Sensor Simulation for Automotive Environment Perception. IEEE Trans. Intell. Veh. 2023, 8, 4341–4356. [Google Scholar] [CrossRef]
  124. Schramm, D.; Hiller, M.; Bardini, R. Vehicle Dynamics: Modeling and Simulation; Springer: Berlin/Heidelberg, Germany, 2018; pp. 6–11. [Google Scholar]
  125. Yang, Y.; Dogara, B.T.; He, M. The Research of Dynamic Stability Control System for Passenger Cars Using CarSim and Matlab-Simulink. In Proceedings of the 2016 International Conference on Advanced Electronic Science and Technology (AEST 2016), Shenzhen, China, 19–21 August 2016; Atlantis Press: Amsterdam, The Netherlands, 2016; pp. 706–710. [Google Scholar]
  126. VehicleSim Dynamics. Available online: https://www.unrealengine.com/marketplace/en-US/product/carsim-vehicle-dynamics (accessed on 15 August 2024).
  127. VehicleSim Dynamics Plugin for Unreal Engine. Available online: https://www.carsim.com/products/supporting/unreal/index.php (accessed on 16 August 2024).
  128. Brostow, G.J.; Shotton, J.; Fauqueur, J.; Cipolla, R. Segmentation and Recognition Using Structure from Motion Point Clouds. In Proceedings of the Computer Vision–ECCV 2008: 10th European Conference on Computer Vision, Part I 10, Marseille, France, 12–18 October 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 44–57. Available online: http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid (accessed on 14 August 2024).
  129. Dollár, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian Detection: A Benchmark. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: New York, NY, USA, 2009; pp. 304–311. Available online: https://data.caltech.edu/records/f6rph-90m20 (accessed on 14 August 2024).
  130. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision Meets Robotics: The Kitti Dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
  131. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes Dataset for Semantic Urban Scene Understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223. Available online: https://www.cityscapes-dataset.com/ (accessed on 14 August 2024).
  132. Maddern, W.; Pascoe, G.; Linegar, C.; Newman, P. 1 Year, 1000 Km: The Oxford Robotcar Dataset. Int. J. Robot. Res. 2017, 36, 3–15. [Google Scholar] [CrossRef]
  133. Ros, G.; Sellart, L.; Materzynska, J.; Vazquez, D.; Lopez, A.M. The Synthia Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3234–3243. Available online: https://synthia-dataset.net/ (accessed on 14 August 2024).
  134. Neuhold, G.; Ollmann, T.; Rota Bulo, S.; Kontschieder, P. The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4990–4999. Available online: https://www.mapillary.com/dataset/vistas (accessed on 16 August 2024).
  135. Behrendt, K.; Novak, L.; Botros, R. A Deep Learning Approach to Traffic Lights: Detection, Tracking, and Classification. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; IEEE: New York, NY, USA, 2017; pp. 1370–1377. Available online: https://zenodo.org/records/12706046 (accessed on 16 August 2024).
  136. Jeong, J.; Cho, Y.; Shin, Y.-S.; Roh, H.; Kim, A. Complex Urban Dataset with Multi-Level Sensors from Highly Diverse Urban Environments. Int. J. Robot. Res. 2019, 38, 642–657. [Google Scholar] [CrossRef]
  137. Huang, X.; Cheng, X.; Geng, Q.; Cao, B.; Zhou, D.; Wang, P.; Lin, Y.; Yang, R. The Apolloscape Dataset for Autonomous Driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 954–960. Available online: https://apolloscape.auto/ (accessed on 16 August 2024).
  138. Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as Deep: Spatial Cnn for Traffic Scene Understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. Available online: https://xingangpan.github.io/projects/CULane.html (accessed on 16 August 2024).
  139. Chen, Y.; Wang, J.; Li, J.; Lu, C.; Luo, Z.; Xue, H.; Wang, C. Lidar-Video Driving Dataset: Learning Driving Policies Effectively. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5870–5878. Available online: https://github.com/driving-behavior/DBNet. (accessed on 16 August 2024).
  140. Ramanishka, V.; Chen, Y.-T.; Misu, T.; Saenko, K. Toward Driving Scene Understanding: A Dataset for Learning Driver Behavior and Causal Reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7699–7707. Available online: https://usa.honda-ri.com/hdd (accessed on 16 August 2024).
  141. Choi, Y.; Kim, N.; Hwang, S.; Park, K.; Yoon, J.S.; An, K.; Kweon, I.S. KAIST Multi-Spectral Day/Night Data Set for Autonomous and Assisted Driving. IEEE Trans. Intell. Transp. Syst. 2018, 19, 934–948. [Google Scholar] [CrossRef]
  142. Varma, G.; Subramanian, A.; Namboodiri, A.; Chandraker, M.; Jawahar, C.V. IDD: A Dataset for Exploring Problems of Autonomous Navigation in Unconstrained Environments. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; IEEE: New York, NY, USA, 2019; pp. 1743–1751. Available online: https://idd.insaan.iiit.ac.in/ (accessed on 16 August 2024).
  143. Neumann, L.; Karg, M.; Zhang, S.; Scharfenberger, C.; Piegert, E.; Mistr, S.; Prokofyeva, O.; Thiel, R.; Vedaldi, A.; Zisserman, A. Nightowls: A Pedestrians at Night Dataset. In Proceedings of the Computer Vision–ACCV 2018: 14th Asian Conference on Computer Vision, Revised Selected Papers, Part I 14, Perth, Australia, 2–6 December 2018; Springer: Berlin/Heidelberg, Germany, 2019; pp. 691–705. Available online: https://www.nightowls-dataset.org/ (accessed on 16 August 2024).
  144. Braun, M.; Krebs, S.; Flohr, F.; Gavrila, D.M. Eurocity Persons: A Novel Benchmark for Person Detection in Traffic Scenes. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1844–1861. [Google Scholar] [CrossRef] [PubMed]
  145. Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. Bdd100k: A Diverse Driving Dataset for Heterogeneous Multitask Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2636–2645. Available online: https://dl.cv.ethz.ch/bdd100k/data/ (accessed on 16 August 2024).
  146. Palazzi, A.; Abati, D.; Solera, F.; Cucchiara, R. Predicting the Driver’s Focus of Attention: The Dr (Eye) ve Project. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 1720–1733. [Google Scholar] [CrossRef]
  147. Chang, M.-F.; Lambert, J.; Sangkloy, P.; Singh, J.; Bak, S.; Hartnett, A.; Wang, D.; Carr, P.; Lucey, S.; Ramanan, D. Argoverse: 3d Tracking and Forecasting with Rich Maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8748–8757. Available online: https://www.argoverse.org/av1.html (accessed on 16 August 2024).
  148. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. Nuscenes: A Multimodal Dataset for Autonomous Driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11621–11631. Available online: https://www.nuscenes.org/nuscenes (accessed on 16 August 2024).
  149. Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B. Scalability in Perception for Autonomous Driving: Waymo Open Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2446–2454. Available online: https://waymo.com/open/ (accessed on 16 August 2024).
  150. Behrendt, K.; Soussan, R. Unsupervised Labeled Lane Markers Using Maps. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019; Available online: https://unsupervised-llamas.com/llamas/ (accessed on 16 August 2024).
  151. Che, Z.; Li, G.; Li, T.; Jiang, B.; Shi, X.; Zhang, X.; Lu, Y.; Wu, G.; Liu, Y.; Ye, J. D2-City: A Large-Scale Dashcam Video Dataset of Diverse Traffic Scenarios. arXiv 2019, arXiv:1904.01975. Available online: https://www.scidb.cn/en/detail?dataSetId=804399692560465920 (accessed on 16 August 2024).
  152. Kim, B.; Yim, J.; Kim, J. Highway Driving Dataset for Semantic Video Segmentation. arXiv 2020, arXiv:2011.00674. Available online: https://arxiv.org/abs/2011.00674 (accessed on 16 August 2024).
  153. Pitropov, M.; Garcia, D.E.; Rebello, J.; Smart, M.; Wang, C.; Czarnecki, K.; Waslander, S. Canadian Adverse Driving Conditions Dataset. Int. J. Rob. Res. 2021, 40, 681–690. [Google Scholar] [CrossRef]
  154. Ertler, C.; Mislej, J.; Ollmann, T.; Porzi, L.; Neuhold, G.; Kuang, Y. The Mapillary Traffic Sign Dataset for Detection and Classification on a Global Scale. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 68–84. Available online: https://www.mapillary.com/dataset/trafficsign (accessed on 16 August 2024).
  155. Geyer, J.; Kassahun, Y.; Mahmudi, M.; Ricou, X.; Durgesh, R.; Chung, A.S.; Hauswald, L.; Pham, V.H.; Mühlegg, M.; Dorn, S. A2d2: Audi Autonomous Driving Dataset. arXiv 2020, arXiv:2004.06320. Available online: https://a2d2.audi/a2d2/en.html (accessed on 16 August 2024).
  156. Caesar, H.; Kabzan, J.; Tan, K.S.; Fong, W.K.; Wolff, E.; Lang, A.; Fletcher, L.; Beijbom, O.; Omari, S. Nuplan: A Closed-Loop Ml-Based Planning Benchmark for Autonomous Vehicles. arXiv 2021, arXiv:2106.11810. Available online: https://www.nuscenes.org/nuplan (accessed on 16 August 2024).
  157. Li, Y.; Li, Z.; Teng, S.; Zhang, Y.; Zhou, Y.; Zhu, Y.; Cao, D.; Tian, B.; Ai, Y.; Xuanyuan, Z. AutoMine: An Unmanned Mine Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 21308–21317. Available online: https://automine.cc/ (accessed on 16 August 2024).
  158. Weng, X.; Man, Y.; Park, J.; Yuan, Y.; O’Toole, M.; Kitani, K.M. All-in-One Drive: A Comprehensive Perception Dataset with High-Density Long-Range Point Clouds. 2021. Available online: https://opendatalab.com/OpenDataLab/AIOdrive (accessed on 16 August 2024).
  159. Sun, T.; Segu, M.; Postels, J.; Wang, Y.; Van Gool, L.; Schiele, B.; Tombari, F.; Yu, F. SHIFT: A Synthetic Driving Dataset for Continuous Multi-Task Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 21371–21382. Available online: https://www.vis.xyz/shift/ (accessed on 16 August 2024).
  160. Xu, R.; Xiang, H.; Xia, X.; Han, X.; Li, J.; Ma, J. Opv2v: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; IEEE: New York, NY, USA, 2022; pp. 2583–2589. Available online: https://mobility-lab.seas.ucla.edu/opv2v/ (accessed on 16 August 2024).
  161. Mortimer, P.; Wuensche, H.-J. TAS-NIR: A VIS+ NIR Dataset for Fine-Grained Semantic Segmentation in Unstructured Outdoor Environments. arXiv 2022, arXiv:2212.09368. Available online: https://mucar3.de/iros2022-ppniv-tas-nir/ (accessed on 16 August 2024).
  162. Wang, H.; Li, T.; Li, Y.; Chen, L.; Sima, C.; Liu, Z.; Wang, B.; Jia, P.; Wang, Y.; Jiang, S. Openlane-v2: A Topology Reasoning Benchmark for Unified 3d Hd Mapping. In Proceedings of the Thirty-Seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, New Orleans, LA, USA, 10–16 December 2023. [Google Scholar]
  163. Wilson, B.; Qi, W.; Agarwal, T.; Lambert, J.; Singh, J.; Khandelwal, S.; Pan, B.; Kumar, R.; Hartnett, A.; Pontes, J.K. Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting. arXiv 2023, arXiv:2301.00493. [Google Scholar]
  164. Baidu Apollo Starfire Autonomous Driving Competition. Available online: https://apollo.baidu.com/community/competition/13 (accessed on 2 July 2024).
  165. China Intelligent and Connected Vehicle Algorithm Competition. Available online: https://www.panosim.com/h-col-188.html (accessed on 2 July 2024).
  166. CVPR Autonomous Driving Challenge. Available online: https://www.shlab.org.cn/news/5443385 (accessed on 2 July 2024).
  167. Waymo Open Dataset Challenge. Available online: https://waymo.com/open/challenges/ (accessed on 2 July 2024).
  168. Workshop on Autonomous Driving. Available online: https://cvpr2023.wad.vision/ (accessed on 2 July 2024).
  169. Argoverse Challenge. Available online: https://www.argoverse.org/tasks.html (accessed on 2 July 2024).
  170. BDD100K Challenge. Available online: https://www.vis.xyz/bdd100k/challenges/ (accessed on 2 July 2024).
  171. CARSMOS International Autonomous Driving Algorithm Challenge. Available online: https://www.carsmos.cn/Race2023/ (accessed on 2 July 2024).
  172. Li, B.; Fan, L.; Ouyang, Y.; Tang, S.; Wang, X.; Cao, D.; Wang, F.-Y. Online Competition of Trajectory Planning for Automated Parking: Benchmarks, Achievements, Learned Lessons, and Future Perspectives. IEEE Trans. Intell. Veh. 2022, 8, 16–21. [Google Scholar] [CrossRef]
  173. OnSite Autonomous Driving Algorithm Challenge. Available online: https://www.onsite.com.cn/#/dist/home (accessed on 2 July 2024).
  174. Li, B.; Ouyang, Y.; Li, X.; Cao, D.; Zhang, T.; Wang, Y. Mixed-Integer and Conditional Trajectory Planning for an Autonomous Mining Truck in Loading/Dumping Scenarios: A Global Optimization Approach. IEEE Trans. Intell. Veh. 2022, 8, 1512–1522. [Google Scholar] [CrossRef]
  175. What Is so Difficult about Sensor Simulation. Available online: https://www.51fusa.com/client/knowledge/knowledgedetail/id/3168.html (accessed on 16 August 2024).
  176. Heiden, E.; Liu, Z.; Ramachandran, R.K.; Sukhatme, G.S. Physics-Based Simulation of Continuous-Wave Lidar for Localization, Calibration and Tracking. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Virtual, 31 May–1 August 2020; IEEE: New York, NY, USA, 2020; pp. 2595–2601. [Google Scholar]
  177. Wei, K.; Fu, Y.; Zheng, Y.; Yang, J. Physics-Based Noise Modeling for Extreme Low-Light Photography. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8520–8537. [Google Scholar] [CrossRef] [PubMed]
  178. Xiong, Z.; Cai, Z.; Han, Q.; Alrawais, A.; Li, W. ADGAN: Protect Your Location Privacy in Camera Data of Auto-Driving Vehicles. IEEE Trans. Ind. Inform. 2020, 17, 6200–6210. [Google Scholar] [CrossRef]
  179. Li, B.; Gao, T.; Ma, S.; Zhang, Y.; Acarman, T.; Cao, K.; Zhang, T.; Wang, F.-Y. From Formula One to Autonomous One: History, Achievements, and Future Perspectives. IEEE Trans. Intell. Veh. 2023, 8, 3217–3223. [Google Scholar] [CrossRef]
Figure 1. User interfaces of the open-source simulators: (a) AirSim [33]; (b) Autoware [34]; (c) Baidu Apollo [35]; (d) CARLA [36]; (e) Gazebo [37]; (f) 51Sim-One [38]; (g) LGSVL [39]; (h) Waymax [40].
Figure 3. Critical functions of autonomous driving simulators [36,52,63,99].
Figure 4. Datasets categorized based on tasks [128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162].
Figure 5. CamVid dataset sample segmentation results [128]: (a) Test image; (b) Ground truth.
Figure 6. Caltech Pedestrian dataset example image and corresponding annotations [129].
Figure 7. Example images from SYNTHIA dataset [133]: (a) Sample frame; (b) Semantic labels.
Figure 8. Qualitative labeling examples from Mapillary Vistas dataset [134]: (a) Original image; (b) Labeled image.
Figure 9. Sampled highway toll station information from the KAIST Urban dataset [136]: (a) Stereo image; (b) 3D point clouds.
Figure 10. Example images from the CULane dataset [138]: (a) Normal; (b) Crowded.
Figure 11. Example video frames from the HDD dataset [140]: (a) Cyclists; (b) Pedestrians crossing the street.
Figure 12. Example images in different scenes from the KAIST Multispectral dataset [141]: (a) Campus; (b) Residential.
Figure 13. Example image of an unstructured scene from the IDD dataset [142].
Figure 14. Example driving videos in different scenes from the BDD100K dataset [145]: (a) Urban street in the day; (b) Urban street at night.
Figure 15. Example images from the nuScenes dataset [148]: (a) Daylight; (b) Night.
Figure 16. Sample image from the Highway Driving dataset [152].
Figure 17. Sample image from Mapillary Traffic Sign dataset [154].
Figure 18. Example images from the AutoMine dataset [157]: (a) Intense lighting; (b) Rugged unstructured roads.
Figure 19. Example image of point clouds from OPV2V dataset [160].
Figure 20. Simulator interface used in Baidu Apollo Starfire Autonomous Driving Competition [42].
Figure 21. Scenarios in the 2022 China Intelligent and Connected Vehicle Algorithm Competition [165]: (a) Highway; (b) Intersection.
Figure 22. Examples of the four tracks in the 2023 CVPR Autonomous Driving Challenge [166]: (a) OpenLane; (b) Online HD map construction; (c) 3D occupancy grid prediction; (d) nuPlan planning.
Figure 23. Example scenario in the 2023 Waymo Open Dataset Challenge [168].
Figure 24. Example scenario in the 2023 Argoverse Challenge [168].
Figure 25. Example of the instance segmentation problem in the BDD100K Challenge [170].
Figure 26. Example scenarios in the CARLA Autonomous Driving Challenge [20]: (a) Obstacle avoidance; (b) Highways.
Figure 27. Example of simulation image in the CARSMOS International Autonomous Driving Algorithm Challenge [58].
Figure 28. Example problems in TPCAP [172]: (a) Parallel parking; (b) Perpendicular parking.
Figure 29. Example parking scene in the Onsite Autonomous Driving Challenge [173].
Table 1. Features of autonomous driving simulators.
| Simulator | Accessibility | Operating Systems | Languages | Engines | Sensor Models Included |
|---|---|---|---|---|---|
| AirSim [33] | Open-Source | Windows, Linux, macOS | C++, Python, C#, Java, Matlab | Unreal Engine | Accelerometer, gyroscope, barometer, magnetometer, GPS |
| Autoware [34] | Open-Source | Linux | C++, Python | Unity Engine | Camera, LiDAR, IMU, GPS |
| Baidu Apollo [42] | Open-Source | Linux | C++ | Unity Engine | Camera, LiDAR, GNSS, radar |
| CARLA [43] | Open-Source | Windows, Linux, macOS | C++, Python | Unreal Engine | LiDAR, multiple cameras, depth sensor, GPS |
| Gazebo [44] | Open-Source | Linux, macOS | C++, Python | ODE, Bullet, DART, OGRE, OptiX | Monocular camera, depth camera, LiDAR, IMU, contact, altimeter, magnetometer sensors |
| 51Sim-One [38] | Open-Source | Windows, Linux | C++, Python | Unreal Engine | Physical-level camera, LiDAR, mmWave radar |
| LGSVL [39] | Open-Source | Windows, Linux | Python, C# | Unity Engine | Camera, LiDAR, radar, GPS, IMU |
| Waymax [40] | Open-Source | Windows, Linux, macOS | Python | N/A | N/A |
| Ansys Autonomy [48] | Commercial | Windows, Linux, macOS | C++, Python | Self-developed | Physical-level camera, LiDAR, mmWave radar |
| CarCraft [64] | Private | N/A | N/A | Self-developed | N/A |
| Cognata [52] | Commercial | N/A | N/A | Self-developed | RGB HD camera, LiDAR, mmWave radar |
| CarSim [66] | Commercial | Windows | C++, Matlab | Self-developed | N/A |
| CarMaker [67] | Commercial | Windows, Linux | C, C++, Python, Matlab | Unigine Engine | Camera, LiDAR, radar, GPS |
| HUAWEI Octopus [68] | Commercial | N/A | C++, Python | N/A | N/A |
| Matlab [56] | Commercial | Windows, Linux, macOS | Matlab, C++, Python, Java | Unreal Engine | Camera, LiDAR, radar |
| NVIDIA DRIVE Constellation [72] | Commercial | Linux | C++, Python | Self-developed | N/A |
| Oasis Sim [58] | Commercial | Windows, Linux | C++, Simulink, Python | Unreal Engine | Object-level camera, LiDAR, ultrasonic, mmWave radar, GNSS, IMU |
| PanoSim [73] | Commercial | Windows | C++, Simulink, Python | Unity Engine | Camera, LiDAR, ultrasonic, mmWave radar, GNSS, IMU |
| PreScan [74] | Commercial | Windows | C++, Simulink, Python | Self-developed | Camera, LiDAR, ultrasonic radar |
| PDGaiA [61] | Commercial | N/A | C++, Python | Unity Engine | Camera, LiDAR, mmWave radar, GPS |
| SCANeR Studio [62] | Commercial | Windows, Linux | C++, Python | Unreal Engine | GPS, IMU, radar, LiDAR, camera |
| TAD Sim 2.0 [23] | Commercial | N/A | N/A | Unreal Engine | Camera, LiDAR, mmWave radar |
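To make the comparison in Table 1 concrete, the snippet below shows the typical scripting workflow for one of the open-source simulators, CARLA [36,43]: connect to a running server, spawn a vehicle, and attach an RGB camera. This is a minimal sketch rather than an official benchmark setup; it assumes a CARLA 0.9.x server already listening on localhost:2000, the carla Python package installed, and illustrative choices for the vehicle blueprint and output path.

```python
import carla

# Connect to a CARLA server assumed to be running on localhost:2000.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn a vehicle at the first predefined spawn point of the current map.
blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)  # let the built-in traffic manager drive

# Attach an RGB camera above the hood and stream frames to disk.
camera_bp = blueprint_library.find("sensor.camera.rgb")
camera_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_transform, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk("_out/%06d.png" % image.frame))
```

The commercial platforms in the same table expose comparable entry points (Simulink blocks, C++ SDKs, or Python bindings), so the practical selection criteria are usually licensing, operating-system support, and sensor-model fidelity rather than scripting effort.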
Table 2. Features of autonomous driving datasets.
| Dataset | Year | Area | Scenes | Sensors | Data Coverage |
|---|---|---|---|---|---|
| CamVid [128] | 2008 | Cambridge, UK | Daytime, dusk, urban, residential, mixed-use roads | Camera | 86 min of video |
| Caltech Pedestrian [129] | 2009 | America | Urban | Camera | 350,000 labeled bounding boxes, 2300 unique pedestrians |
| KITTI [130] | 2012 | Germany | Daytime, urban, rural, highway | Camera, LiDAR, GPS/IMU | Images, LiDAR data, GPS/IMU data, bounding box labels |
| Cityscapes [131] | 2016 | Primarily Germany and neighboring countries | Urban street | Camera, GPS | 5000 images with high-quality pixel-level annotations, 20,000 images with coarse annotations |
| Oxford RobotCar [132] | 2016 | Oxford | All light conditions, urban | Camera, LiDAR, GPS/IMU | Almost 20 million images, LiDAR data, GPS/IMU data |
| SYNTHIA [133] | 2016 | Virtual city | Urban | Camera, LiDAR | More than 213,400 synthetic images |
| Mapillary Vistas [134] | 2017 | Global | Daytime, urban, countryside, off-road | Camera | 25,000 high-resolution images, 66 object categories |
| Bosch Small Traffic Lights [135] | 2017 | America | N/A | N/A | 5000 images for training, a video sequence of 8334 frames for evaluation |
| KAIST Urban [136] | 2017 | Korea | Urban | Camera, LiDAR, GPS, IMU, FOG | 3D LiDAR data, 2D LiDAR data, GPS data, IMU data, stereo images, FOG data |
| ApolloScape [137] | 2018 | China | Daytime, urban | Camera, GPS, IMU/GNSS | Images, LiDAR data |
| CULane [138] | 2018 | Beijing, China | Urban, rural, highway | Camera | 133,235 frames of images |
| DBNet [139] | 2018 | China | A variety of traffic conditions | Camera, LiDAR | Point clouds, videos |
| HDD [140] | 2018 | San Francisco | Suburban, urban, highway | Camera, LiDAR, GPS, IMU | 104 h of real human driving data |
| KAIST Multispectral [141] | 2018 | N/A | From urban to residential, campus, day to night | RGB/thermal camera, RGB stereo, LiDAR, GPS/IMU | Images, GPS/IMU data |
| IDD [142] | 2018 | India | Residential areas, country roads, city roads | Camera | 10,004 images, 34 labels |
| NightOwls [143] | 2018 | England, Germany, The Netherlands | Dawn, night, various weather conditions, four seasons | Camera | 279,000 fully annotated frames |
| EuroCity Persons [144] | 2018 | 12 European countries | Day to night, four seasons | Camera | 238,200 person instances manually labeled in over 47,300 images |
| BDD100K [145] | 2018 | New York, San Francisco Bay | Urban, suburban, highway | Camera, LiDAR, GPS/IMU | High-resolution images, high-frame-rate images, GPS/IMU data |
| DR(eye)VE [146] | 2019 | N/A | Day to night, various weather, highway, downtown, countryside | Eye-tracking glasses, camera, GPS/IMU | 555,000 annotated frames of driving sequences |
| Argoverse [147] | 2019 | Pittsburgh, Miami | Urban | Camera, LiDAR, stereo camera, GNSS | Sensor data, 3D tracking annotations, 300k vehicle trajectories, rich semantic maps |
| nuScenes [148] | 2019 | Boston, Singapore | Urban, day to night | Camera, LiDAR, radar, GPS, IMU | 1000 scenes, 1.4 million images |
| Waymo Open [149] | 2019 | America | Urban, suburban | Camera, LiDAR | 1150 scenes that each span 20 s |
| Unsupervised Llamas [150] | 2019 | California | Highway | Camera | 100,042 labeled lane-marker images |
| D2-City [151] | 2019 | China | Urban | Camera | More than 10,000 driving videos |
| Highway Driving [152] | 2019 | N/A | Highway | Camera | 20 video sequences with a 30 Hz frame rate |
| CADC [153] | 2020 | Waterloo, Canada | Urban, winter | Camera, LiDAR, GNSS/IMU | 7k frames of point clouds, 56k images |
| Mapillary Traffic Sign [154] | 2020 | Global | City, countryside, diverse weather | Camera | 100,000 high-resolution images |
| A2D2 [155] | 2020 | Germany | Urban, highway, rural | Camera, LiDAR, GPS/IMU | Camera, LiDAR, vehicle bus data |
| nuPlan [156] | 2021 | Pittsburgh, Las Vegas, Singapore, Boston | Urban | Camera, LiDAR | LiDAR point clouds, images, localization information, steering inputs |
| AutoMine [157] | 2022 | 2 provinces in China | Mine | Camera, LiDAR, IMU/GPS | Over 18 h of driving data, 18k annotated LiDAR frames, 18k annotated image frames |
| AIODrive [158] | 2022 | CARLA simulator | Adverse weather, adverse lighting, crowded scenes, people running, etc. | RGB, stereo, and depth cameras, LiDAR, radar, IMU/GPS | 500,000 annotated images, 100,000 annotated frames |
| SHIFT [159] | 2022 | CARLA simulator | Diverse weather, day to night, urban, village | Comprehensive sensor suite | Rain intensity, fog intensity, vehicle density, pedestrian density |
| OPV2V [160] | 2022 | CARLA simulator, Los Angeles | 73 divergent scenes with various numbers of connected vehicles | LiDAR, GPS/IMU, RGB | LiDAR point clouds, RGB images, annotated 3D vehicle bounding boxes |
| TAS-NIR [161] | 2022 | N/A | Unstructured outdoor driving scenarios | Camera | 209 VIS+NIR image pairs |
| OpenLane-V2 [162] | 2023 | Global | Urban, suburban | N/A | 2k annotated road scenes, 2.1M instance-level annotations, 1.9M positive topology relationships |
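Most datasets in Table 2 are distributed with a development kit that follows a similar access pattern: load a scene index, then query keyframes and per-sensor records. The sketch below illustrates this pattern for nuScenes [148]; it is a minimal example that assumes the nuscenes-devkit package is installed and that the v1.0-mini split has been extracted to /data/sets/nuscenes (both the split and the path are illustrative assumptions, not requirements of the dataset).

```python
from nuscenes.nuscenes import NuScenes

# Load the dataset index; dataroot is assumed to contain the extracted v1.0-mini split.
nusc = NuScenes(version="v1.0-mini", dataroot="/data/sets/nuscenes", verbose=True)

# Take the first scene and its first keyframe (sample).
scene = nusc.scene[0]
sample = nusc.get("sample", scene["first_sample_token"])

# Each sample links to per-sensor records; fetch the front-camera image metadata.
cam_front = nusc.get("sample_data", sample["data"]["CAM_FRONT"])
print(cam_front["filename"], cam_front["width"], cam_front["height"])

# Iterate over the 3D box annotations attached to this keyframe.
for ann_token in sample["anns"]:
    ann = nusc.get("sample_annotation", ann_token)
    print(ann["category_name"], ann["translation"], ann["size"])
```

Comparable development kits are published for Waymo Open, Argoverse, and nuPlan, so benchmark code can often be ported across datasets by swapping only this indexing layer while keeping the downstream training and evaluation pipeline unchanged.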
Table 3. Features of virtual autonomous driving competitions.
| Competition | Initial Year | Simulators | Datasets | Scenarios |
|---|---|---|---|---|
| Baidu Apollo Starfire Autonomous Driving Competition [164] | 2020 | Apollo | - | Traffic-light intersections with pedestrians, intersections, changing lanes due to road construction, etc. |
| CIAC [165] | 2022 | PanoSim | - | Highways, intersections, parking lots, etc. |
| CVPR Autonomous Driving Challenge [166] | 2023 | - | OpenLane-V2 dataset, nuPlan dataset | Urban traffic |
| Waymo Open Dataset Challenge [167] | 2020 | - | Waymo Open dataset | N/A |
| Argoverse Challenge [169] | 2020 | - | Argoverse dataset, Argoverse 2 dataset | N/A |
| BDD100K Challenge [170] | 2022 | - | BDD100K dataset | N/A |
| CARLA Autonomous Driving Challenge [20] | 2019 | CARLA | - | Intersections, traffic congestion, highways, obstacle avoidance, etc. |
| CARSMOS International Autonomous Driving Algorithm Challenge [171] | 2023 | Oasis Sim | - | Foggy conditions, intersections, etc. |
| TPCAP [172] | 2022 | - | - | Parallel parking, perpendicular parking, angled parking, parking with multiple obstacles, etc. |
| OnSite Autonomous Driving Challenge [173] | 2023 | OnSite | - | Highways, entering and exiting parking spaces in mining areas, parking in parking lots, etc. |