1. Introduction
As the automotive industry becomes increasingly sophisticated, the need for a greater number of sensors and electronic devices has led to increased complexity in the internal configurations of vehicles [1]. At the same time, vehicles must ensure safety, stability, and security, as well as the real-time transmission and reception of data. Not only has the number of Electronic Control Units (ECUs) within vehicles grown, but the complexity of the required software has also increased exponentially. Modern vehicle architectures are challenging to manage and may connect over 100 ECUs [2]. This signifies the need for a transformation in the traditional electrical and electronic (E/E) architecture of vehicles.
In response to these advancements, the automotive industry launched the Automotive Open System Architecture (AUTOSAR), an international consortium of automotive OEMs, suppliers, and other industry stakeholders [3]. Founded in 2003, AUTOSAR introduced the Classic Platform, a solution for resource-constrained embedded systems that offers a high level of safety and meets the highest Automotive Safety Integrity Level (ASIL-D). The Classic Platform relies on signal-based communication over buses such as the Controller Area Network (CAN) or FlexRay. However, as highly advanced technologies such as autonomous driving and connected cars are developed, and high-performance sensors that continuously produce large volumes of data are incorporated into vehicles, traditional buses such as CAN are beginning to exhibit bandwidth problems. Furthermore, architectural flexibility in making sensor data available to various software applications has become increasingly critical, and high-performance processors for the smooth acquisition and processing of sensor data have become more important.
To meet these technological demands, AUTOSAR introduced Adaptive AUTOSAR in 2017, which is based on a POSIX-compliant operating system [4]. Adaptive AUTOSAR supports the high-bandwidth, Ethernet-based Scalable service-Oriented MiddlewarE over IP (SOME/IP) protocol and enables a shift from traditional signal-based communication to service-based communication. In addition, its support for High-Performance Computing makes it easier to obtain the computing performance necessary for autonomous driving and to integrate various high-performance sensors and algorithms. This advancement is aimed at the development of next-generation mobility, such as autonomous driving, and the automotive industry, alongside AUTOSAR, has already achieved substantial progress in this direction [5].
In contrast to the efforts within the automotive industry, most research and development for autonomous driving has been conducted on the robotics platform known as the Robot Operating System (ROS), which is managed and developed by Open Robotics [6]. Initially designed as middleware for research within universities and research institutions, ROS was released as an open-source project, which allowed a large number of users to develop and distribute a wide variety of packages and libraries. This openness remarkably accelerated development by making available numerous drivers for the sensors necessary for autonomous driving, as well as sensor processing algorithms. Moreover, ROS supported powerful simulation environments such as Gazebo [7] and visualization tools such as Rviz [8], which added to the convenience of development. However, ROS lacked real-time control capabilities and required high computing power. It also used a custom communication method, TCPROS [9], which depended on a single point of failure, the ROS Master [10]. TCPROS was unsuitable for industrial use because of the severe security risks associated with exposing the Master's IP address and port. To address these issues, a second version, ROS2 [11], was developed.
ROS2 addresses the limitations associated with the research-focused nature of ROS1 by incorporating the Data Distribution Service (DDS) [12], a standard for Service-Oriented Communication (SOC) used in the military and aviation industries. Additionally, the DDS Real-Time Publish Subscribe (DDS-RTPS) wire protocol enables real-time control under the assumption of well-structured code. These innovations demonstrate the potential of ROS2 to move beyond the limitations of ROS1 and progress into industrial use. For instance, Apex successfully developed a high-safety solution based on ROS2 that met the requirements of the ASIL-D level of the functional safety standard for electrical and/or electronic systems (ISO 26262) [13,14]. This trend has led various ROS1 autonomous driving projects to transition to ROS2; Autoware [15] is a prominent example. Autoware is an open-source project that encompasses essential autonomous driving functionalities such as perception, decision making, and control [16]. It is currently being tested on various testbeds, including autonomous buses, shuttles, and Autonomous Valet Parking, with ongoing development toward the commercialization of autonomous driving technologies. However, for consumers to use autonomous vehicles in daily life, stringent conditions must be met. Almost all vehicle companies currently adhere to the AUTOSAR standard, which indicates a notable gap between the research and development of autonomous driving technologies and their application in actual vehicles. To leverage the strengths of each platform and mitigate its weaknesses, interoperability between the Adaptive AUTOSAR and ROS2 platforms must be ensured in a form that can realistically be applied in the automotive industry.
Two data communication methods exist to ensure such interoperability: DDS and SOME/IP. Although Adaptive AUTOSAR is being actively researched and applied, it can exhibit unpredictable behavior due to non-determinism issues [17]. Therefore, for stability in the actual vehicle development and production stages, a combined architecture with Classic AUTOSAR, which can achieve deterministic execution, is used. Classic AUTOSAR supports SOME/IP but not DDS. Additionally, from a cost perspective, the semiconductor chips used with Classic AUTOSAR possess very limited hardware resources. DDS specifies a significantly broader feature set and, because of its various Quality of Service (QoS) options, demands much more memory than SOME/IP. Consequently, DDS relies far more heavily on the hardware resources of the vehicle's network infrastructure, and implementing and using DDS on microcontrollers is highly limited in functional terms.
For these reasons, we propose a method for integrating Adaptive AUTOSAR and ROS2 via the Ethernet-based SOME/IP protocol to ensure safety while maintaining a flexible environment for the development and testing of autonomous vehicles. Adaptive AUTOSAR, which follows the standard architecture long used by vehicle companies and is a validated platform, focuses on high reliability and safety for automotive systems. ROS2, in contrast, which is widely used in the field of robotics, lacks comparable safety guarantees and real-time control but offers substantial advantages in flexible communication, development convenience through an active open-source community, the availability of sensor drivers, and more. ROS2 also has notable strengths in powerful visualization, development, and simulation tools. Integrating Adaptive AUTOSAR and ROS2 combines the safety of the automotive field with the flexibility of robotics to simplify and accelerate the development and testing of autonomous vehicles. Moreover, interoperability via the SOME/IP protocol enables faster adoption in the current automotive industry than DDS would. To integrate these two distinct platforms, this study proposes the design and implementation of an interoperable architecture named the Autonomous Driving System with Integrated ROS2 and Adaptive AUTOSAR (ASIRA). Using ASIRA within a Linux environment enables the exchange of data between Autoware, a ROS2-based autonomous driving project, and the Adaptive AUTOSAR Platform, which facilitates the operation of autonomous vehicles.
The structure of this paper is as follows: Section 2 introduces related research and background knowledge. Section 3 describes the system architecture, its components, and the method of system implementation. Section 4 validates the developed system through simulation scenarios and verifies the capability of the two platforms to exchange data and achieve autonomous driving. Finally, Section 5 concludes this paper and presents future research directions.
4. System Validation
Section 4 describes the procedure for validating the ASIRA architecture built in this paper. Section 4.1 describes the configuration of the verification environment. Section 4.2 describes the verification scenario and simulation method. Section 4.3 shows, based on actual simulation results, that the ASIRA architecture built in this study achieves interoperability.
4.1. System Verification Environment
To verify the system implemented in this study, we created a Point Cloud Map and a Vector Map for use on the ROS2 Autonomous Driving Platform. The red square in Figure 5 marks the area in which the robotic platform shown in Figure 6 was driven to collect LiDAR data. While collecting 16-channel 3D LiDAR data, the robot constructed a Point Cloud Map by performing GICP/NDT scan matching and graph-based Simultaneous Localization and Mapping (SLAM) [27].
The results of the SLAM are shown in Figure 7. We created a three-dimensional map of points, which was used in the Localization and Perception processes in Autoware. Along with the Point Cloud Map, a Vector Map was constructed, and it is shown in the figure. The Vector Map uses the Lanelet2 format, which contains the location information for the road on which the vehicle can drive, such as left and right lanes, stop lines, and traffic lights, together with the various constraints required for driving. Using TIER IV's Vector Map Builder [28], we constructed a Vector Map containing road information for use in the verification of the simulation. In Figure 8, the yellow-colored part overlapping the Point Cloud is the area in which the vehicle can drive.
4.2. Validation Scenarios
Figure 9 shows the scenario used to validate the system proposed in this study. Below, we describe the implementation method used to verify each part.
Dummy Odometry Data File: The validation scenario assumes that the vehicle knows its Position, Orientation, and Linear and Angular Velocities. The Adaptive AUTOSAR Platform is therefore equipped with Dummy Odometry Data. The dummy location information was obtained by operating the actual data acquisition robot shown in Figure 6 in the scenario environment. The odometry results obtained during this process were recorded at a 20 ms interval along with Timestamps and then converted into a CSV file for use in the verification of ASIRA. Figure 10 shows some of the acquired Odometry Data converted to a CSV file with Timestamps; some Linear Velocities and accelerations are omitted for readability. When a request is received from the ROS2 SOME/IP Bridge, the CSV file is read row by row to simulate the acquisition of the vehicle's location information. The X, Y, and Z positions of the vehicle are based on the UTM coordinate system, using a latitude of 37.5422 and a longitude of 127.0785 as the origin. The Odometry Data also contain Orientation X, Y, Z, and W values that indicate the Orientation of the vehicle, as well as the Linear and Angular Velocities.
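The request-driven, row-by-row replay of the CSV file described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the column names and sample values are hypothetical stand-ins for the layout shown in Figure 10.

```python
import csv
import io

# Hypothetical column layout and values for the Dummy Odometry CSV;
# the actual file described in the paper may differ.
SAMPLE_CSV = """timestamp_ms,pos_x,pos_y,pos_z,ori_x,ori_y,ori_z,ori_w
0,12.41,3.07,0.00,0.0,0.0,0.131,0.991
20,12.63,3.11,0.00,0.0,0.0,0.133,0.991
40,12.85,3.15,0.00,0.0,0.0,0.135,0.991
"""

class DummyOdometrySource:
    """Serves one pre-recorded 20 ms odometry sample per RPC request."""

    def __init__(self, csv_text: str):
        self._rows = list(csv.DictReader(io.StringIO(csv_text)))
        self._index = 0

    def next_sample(self) -> dict:
        # Each bridge request advances to the next recorded sample;
        # once the recording is exhausted, the last row is held.
        row = self._rows[min(self._index, len(self._rows) - 1)]
        self._index += 1
        return {key: float(value) for key, value in row.items()}

source = DummyOdometrySource(SAMPLE_CSV)
first = source.next_sample()
second = source.next_sample()
```

Replaying a timestamped recording this way keeps the simulated vehicle state deterministic across verification runs, which is what makes the scenario repeatable.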
ROS2 SOME/IP Bridge: As described in Section 3.2, the ROS2 SOME/IP Bridge is responsible for requesting and receiving vehicle location data and publishing it as a ROS2 Topic. It also receives the Vehicle Command Topic from Autoware and sends it to the Adaptive AUTOSAR Platform via an RPC request. In addition, it requests location information from the Adaptive AUTOSAR Platform every 2.5 ms to match the rate of the odometry information used by Autoware.
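The bridge's request/convert/publish cycle can be sketched with the middleware calls injected as plain functions. This is a simplified sketch: `request_odometry` stands in for the SOME/IP RPC call, `publish_topic` for the ROS2 publisher, and the field names are hypothetical placeholders for the actual response payload and ROS2 Odometry message.

```python
def bridge_cycle(request_odometry, publish_topic):
    """One bridge cycle: RPC-request odometry, convert it, publish it.

    Both callables are injected so the conversion logic can be
    exercised without either middleware actually running.
    """
    sample = request_odometry()
    message = {
        "position": (sample["pos_x"], sample["pos_y"], sample["pos_z"]),
        "orientation": (sample["ori_x"], sample["ori_y"],
                        sample["ori_z"], sample["ori_w"]),
        "linear_velocity": sample["vel_lin"],
        "angular_velocity": sample["vel_ang"],
    }
    publish_topic(message)
    return message

# Exercise one cycle with a canned RPC response.
published = []
sample = {"pos_x": 1.0, "pos_y": 2.0, "pos_z": 0.0,
          "ori_x": 0.0, "ori_y": 0.0, "ori_z": 0.1, "ori_w": 0.995,
          "vel_lin": 0.5, "vel_ang": 0.01}
bridge_cycle(lambda: sample, published.append)
```

Separating the transport from the conversion in this way also mirrors the bridge's role in the architecture: it owns only the mapping between the two data representations, not the middleware on either side.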
Autoware: When Autoware receives odometry information from the ROS2 SOME/IP Bridge, it uses this information for Localization. It then performs Perception using the Point Cloud Map, performs Planning using the Perception and Vector Map results, and generates Trajectory information, which the Control Node uses to finally publish the Vehicle Control Command Topic. In addition to the algorithmic part, the simulation environment is configured through visualization using ROS2 Rviz.
Figure 11 shows the application of the Point Cloud Map and Vector Map for system validation using Rviz, the visualization tool used with Autoware. In the Vector Map, the green Lanelet2 components represent the routes that the vehicle can drive. The shortest route to the destination is calculated and displayed as a green line: the Global Path is displayed in light green, and the drivable route based on the current vehicle position is displayed in dark green. The initial position of the vehicle can be set using the 2D Pose Estimation button, and the destination can be set using the 2D Goal Pose button.
4.3. System Verification
Figure 12 shows a simulation of the entire system. The top left terminal is the Kinematic State Subscriber and the top right terminal is the Vehicle Command Publisher. The bottom terminal is the Adaptive AUTOSAR Platform. On the right side of the figure is the Rviz screen for visualizing the Autoware simulation.
Figure 13 shows the change in the trajectory and path over time. As the Adaptive AUTOSAR Platform transmits the Dummy Odometry Information to ROS2 over time, the vehicle continuously drives toward the destination, and its path is continuously recalculated and updated accordingly. This means that ROS2 and the Adaptive AUTOSAR Platform are working together and exchanging data in real time. (Odometry Data must be delivered within 20 ms and Vehicle Control Commands within 50 ms to match Autoware's cycles.)
Figure 14 displays the terminals outputting the data exchanged during a simulation. In the top left of the figure is the Kinematic State Subscriber, whose data exchange is shown on Terminal 1. It receives the vehicle's Position, Orientation, Linear Velocity, and Angular Velocity from the Adaptive AUTOSAR Platform and converts them into ROS2 messages for publishing. As shown on Terminal 1, it receives packets containing the vehicle's Position X, Y, and Z; Orientation X, Y, Z, and W; and Linear and Angular Velocities, and parses the payload to produce the displayed results.
In the top right of the figure is the Vehicle Command Publisher, which receives the Vehicle Control Command Topic from Autoware and sends it to the Adaptive AUTOSAR Platform. As displayed on Terminal 2, it captures the Steering Tire Angle, Rotation Rate, Speed, Acceleration, and Jerk information included in the Control Command Topic before sending the packet.
The bottom of the figure represents the Adaptive AUTOSAR Platform, which interfaces with Autoware's autonomous driving software through the ROS2 SOME/IP Bridge for data transmission. Terminal 3 shows the platform sending Odometry Data packaged in the RPC Response Payload and receiving Vehicle Command packets through the Request Payload. The RPC Handler parses these packets to display the Steering Angle, Rotation Rate, Speed, Acceleration, and Jerk information in double format. It also illustrates the process of receiving the current location information from the Odometry Dummy Data, serializing it, and transmitting it in the Response Payload.
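The serialize-then-respond flow shown on Terminal 3 can be illustrated with a minimal SOME/IP packet sketch. The header layout follows the SOME/IP protocol specification (Message ID, Length, Request ID, Protocol/Interface Version, Message Type, Return Code, all big-endian), but the Service and Method IDs below are hypothetical, since the paper does not list them, and real payload serialization in ara::com involves more than a flat array of doubles.

```python
import struct

# SOME/IP on-wire header (big-endian): Service ID, Method ID, Length,
# Client ID, Session ID, Protocol Version, Interface Version,
# Message Type, Return Code.
HEADER = struct.Struct(">HHIHHBBBB")
MT_RESPONSE = 0x80  # message type value for an RPC response

# Hypothetical IDs for the odometry service.
SERVICE_ID, METHOD_ID = 0x1234, 0x0001

def pack_odometry_response(session_id: int, values: tuple) -> bytes:
    """Serialize odometry doubles into a SOME/IP response packet."""
    payload = struct.pack(f">{len(values)}d", *values)
    # The Length field counts everything after itself:
    # the 8 remaining header bytes plus the payload.
    return HEADER.pack(SERVICE_ID, METHOD_ID, 8 + len(payload),
                       0x0000, session_id, 0x01, 0x01,
                       MT_RESPONSE, 0x00) + payload

def parse_odometry_response(packet: bytes) -> tuple:
    """Recover the doubles from the payload, as an RPC Handler would."""
    payload = packet[HEADER.size:]
    return struct.unpack(f">{len(payload) // 8}d", payload)
```

A round trip through these two helpers mirrors what Terminals 1 and 3 display: the platform packs position, orientation, and velocity values into the Response Payload, and the subscriber parses the same doubles back out.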
Table 7 documents the details of the two types of data transmitted in the scenario, measured over a 3 min driving session. The delay was measured from the moment the data were generated, through serialization and transmission via the ROS2 SOME/IP Bridge, to the conversion of the packet into a usable data format. The Odometry Data transmitted from the Adaptive AUTOSAR Platform to the Kinematic State Subscriber were aligned with the 50 Hz (20 ms) transmission cycle required by the Autoware autonomous driving platform. The average and peak delays were recorded at 10.95 ms and 13.45 ms, respectively. These values are significantly below the 20 ms cycle, indicating that they are sufficiently low for use in the autonomous driving platform proposed in this study. Data transmission from the Vehicle Command Publisher to the Adaptive AUTOSAR Platform also exhibited very low delays compared to its cycle, confirming its suitability for use in the autonomous driving platform described in this research.
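The budget check behind Table 7 reduces to comparing the worst observed delay against one transmission cycle. A small sketch of that evaluation, using synthetic sample delays rather than the measured series:

```python
def delay_statistics(delays_ms, cycle_ms):
    """Return (average, peak, within_budget) for a series of one-way delays."""
    average = sum(delays_ms) / len(delays_ms)
    peak = max(delays_ms)
    # The transfer is usable at the given rate only if even the worst
    # observed delay fits inside a single transmission cycle.
    return average, peak, peak < cycle_ms

# Synthetic example against the 50 Hz (20 ms) odometry cycle.
avg, peak, ok = delay_statistics([10.0, 12.0, 14.0], 20.0)
```

Judging by the peak rather than the average is the conservative choice here: a single late odometry sample stalls Localization for that cycle, so the average alone would understate the risk.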
5. Conclusions and Future Improvements
In this study, we built an architecture that allows Adaptive AUTOSAR and ROS2 to be interconnected through the Ethernet-based SOME/IP protocol and presented the simulation results. We showed that it is possible to connect a ROS2-based autonomous driving system and an Adaptive AUTOSAR-based vehicle architecture, which have been studied in different fields. The vehicle location information from the Adaptive AUTOSAR Platform is transmitted to the ROS2 Bridge Node using the SOME/IP protocol and converted into a ROS2 Topic that can be subscribed to by the autonomous driving system. In addition, instead of using only one-way data transmission, the ROS2 Platform uses the received data to perform the perception, judgment, and control required for autonomous driving, and transmits the results to the Adaptive AUTOSAR Platform through the ROS2 Bridge. Using this architecture, various robotics systems, sensor drivers, and sensor processing technologies based on ROS2 that are in development, as well as various other software currently available in the open-source community, can be brought into the vehicle. It is also useful for rapid prototyping, because the tests required for vehicle development and the use of various sensors can be handled on the ROS2 Platform, where many drivers and sensor processing algorithms are already available, and thus do not have to be developed on the Adaptive AUTOSAR Platform. There are even open-source implementations of LiDAR- and vision-based machine learning object detection that are being actively researched and applied to autonomous driving, enabling more advanced autonomous driving implementations [29,30]. Because the connection uses SOME/IP, the architecture has the potential to work not only with Adaptive AUTOSAR, but also with Classic AUTOSAR, which continues to be used in the automotive industry for safety. This means that it can be quickly adapted to the existing automotive industry and implemented at a low cost, because its hardware requirements are lower than those of other communication protocols. Furthermore, as research on integrating ROS2 into the automotive industry continues and safety standards are met, it can be applied to real vehicles, using the strengths of ROS2 and Adaptive AUTOSAR beyond testing and prototyping.
The limitations and future development challenges of this study can be summarized as follows.
1. The Adaptive AUTOSAR Platform used in this study is an open-source platform that only partially satisfies the AUTOSAR standard and is not software used in the actual vehicle industry. SOME/IP, which was the focus of this study, satisfies the AUTOSAR standard and is implemented in ara::com. However, in actual vehicles, various other factors not implemented in this open-source platform may cause unexpected conflicts; more testing and research are needed.
2. The validation in this study was performed via simulation only. Hardware constraints, such as computing power, should be considered when the integration of ROS2 and the Adaptive AUTOSAR Platform for autonomous driving is deployed on real devices.
3. In this study, we did not deeply consider network topology selection or traffic optimization [31,32], focusing solely on implementing and verifying interoperability through SOME/IP between Adaptive AUTOSAR and ROS2. Further consideration should be given to more suitable QoS settings and traffic optimization in actual communication environments.
Through this study and other extended research, the automotive industry and the autonomous driving technologies developing in different areas will be combined. Ultimately, the ROS2 Platform and the Adaptive AUTOSAR Platform will complement each other, helping to reduce the time required for testing and prototyping during development. This will lead to faster advancements in autonomous driving and, hopefully, a better transportation environment, reducing traffic congestion and paving the way for mitigating the negative aspects of transportation, such as energy consumption and emissions [33].