**2. Methodology**

For the HAV model, the proposed functionality (HWC) is defined as a conditionally automated driving function (SAE Level 3, Conditional Automation) for standard driving, based on the requirements and conditions defined by PEGASUS in [13]. PEGASUS is a research project that aims to define a standardized procedure for testing and experimenting with automated vehicle systems in simulation and in real environments. Regarding conditional automation on the guidance level, the HWC function shall be capable of controlling the vehicle in the longitudinal and lateral directions if the current vehicle state allows it. Additionally, the CAV model is defined by a lane change warning system built on top of HWC, which ascertains the motion states of surrounding vehicles via V2V communication. The safe distance between the ego vehicle and the rear vehicle in the target lane is analyzed with the dual goals of collision avoidance and safe vehicle following.

#### *2.1. Simulation Platform*

Traffic analysis, modeling, and simulation (TAMS) is a mature field, and several simulators are available. Each simulator has its own strengths in simulating real-world traffic, based on different car-following models. Typical TAMS tools applied by traffic engineers are PTV VISSIM [14], Simulation of Urban MObility (SUMO) [15], CORSIM [16], and Paramics [17]. Vissim is a microscopic road traffic simulator developed by the PTV Group. Owing to its comprehensive range of simulation modules (motor vehicle, bicycle, pedestrian, public transportation, traffic signal timing, etc.), as well as the breadth and efficiency of its traffic simulation parameters, it is widely used in consulting firms, academia, and the public sector for road traffic simulation.

Vissim provides a user-friendly Graphical User Interface (GUI), which means the user does not need to write programs manually to call the different simulation modules and set up simulations. In addition to visual applications, Vissim also offers script-based modeling, which is very useful when users aim to dynamically access and control Vissim objects during simulation. This can be achieved through the COM (Component Object Model) interface, a technology that realizes inter-process communication between software written in various programming languages (e.g., C++, Python, Visual Basic, Java, Matlab, etc.).

The Vissim COM interface defines a hierarchical object model whose root, IVissim, represents the Vissim application. Below IVissim are objects through which the functions and parameters originally provided by the GUI can be controlled programmatically. In the co-simulation framework, Vissim-COM programming is carried out through Matlab scripts. For this Vissim-COM interface, [18] describes the development of the simulation environment in detail. It is capable of performing all simulation sequences, gives the user the flexibility to calibrate parameters, and finally generates statistical plots automatically.

#### *2.2. External Driving Model*

The models described in [18,19] are implemented in Vissim using the External Driving Model interface (DLL). This interface makes it possible to implement driver models with defined driving behavior. Similar functionality is available in SUMO through a Python interface called the Traffic Control Interface (TraCI) [20]. The whole DLL driving model is illustrated in Figure 1 and consists of three models. Since traffic participants in Vissim cannot individually be equipped with a sensor model to perceive surrounding obstacles, the DLL provides a way to obtain specific parameters of the surrounding vehicles (e.g., relative distance and velocity, heading angle, etc.) passed from Vissim. This perception information is gathered in the sensor model and then sent to the driving model (HAV and CAV) as input. In parallel, the driving models receive sensor fusion data and current vehicle dynamics data from Vissim to calculate lateral and longitudinal control commands, which are fed back to Vissim so that the movement of the traffic vehicle is completed in the loop. The three models are described in more detail in the following subsections.

**Figure 1.** Overview of the DLL driving model.

#### *2.3. Sensor Model*

A modern car contains several different sensors that assist the driver or even enable autonomous driving. The sensor model therefore plays an important role in the adaptive control of the ego car and in object perception. Since Vissim itself does not provide sensor models, a simplified sensor model is presented here. In [21], an advanced driver assistance system from Toyota is introduced, which was commercially deployed in Japan in 2021. This system carries multi-modal sensors covering the complete 360-degree periphery. Accordingly, the sensor model should be able to detect surrounding objects, especially traffic participants upstream and downstream in the adjacent lane. The most important parameter is the effective sensor detection range: only traffic participants within this range are considered. With the development of sensor technology, long-range radar offers a range capability of up to 150–200 m, as presented in [22]. Therefore, a maximum detection range of 200 m is defined in the sensor model. The main function of the sensor model is to receive the specific parameters of the traffic participants from Vissim and to transmit them to the driving function model by combining sensory data and fusion algorithms. Time to Collision (TTC), the lane change decision, and other perception signals are introduced in the following subsections.
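The range gating described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the `pos` key and the list-of-dicts object representation are assumptions for the example.

```python
import math

SENSOR_RANGE = 200.0  # maximum detection range [m], per the long-range radar capability in [22]

def detectable(objects, ego_pos):
    """Keep only traffic participants inside the sensor's effective range.

    objects: list of dicts with a hypothetical 'pos' key (x, y) in meters
    ego_pos: (x, y) position of the ego car
    """
    return [o for o in objects if math.dist(ego_pos, o['pos']) <= SENSOR_RANGE]
```

Only the objects returned by this filter would be forwarded to the driving function model.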

#### 2.3.1. Time to Collision Calculation

TTC is the time difference between the current moment and a future moment at which a potential crash would happen. It is a snapshot of the currently prevailing conditions and is only valid if those conditions remain stable. Nevertheless, it is useful for predicting potential crashes and for classifying the time-based safety distance to other traffic participants. The TTC calculation starts from the position of a car as a function of driving velocity and acceleration, described by Equation (1), where *s* and *v* are the relative distance and velocity between the ego and target car acquired from Vissim, respectively, and *a* is taken from the current driving dynamics of the ego car. The TTC *t* is therefore the solution (Equation (2)) of Equation (1).

$$s = \frac{a}{2} \cdot t^2 + v \cdot t \tag{1}$$

$$t\_{1,2} = \frac{-v \pm \sqrt{v^2 + 2 \cdot a \cdot s}}{a} \tag{2}$$

$$D = v^2 + 2 \cdot a \cdot s \tag{3}$$

Decisive for the solution of Equation (2) for the TTC *t*<sub>1,2</sub> is the term under the square root, the discriminant *D* in Equation (3). There are three possible cases, shown in Figure 2.


**Figure 2.** TTC outcome possibility.

A positive TTC means that the ego car is faster than the target car, i.e., the ego car is closing in on the target car. Conversely, a negative TTC means the ego car is falling back relative to the target vehicle. In addition, the TTC is set to its maximum value when there is no vehicle within the front detection range or when the ego car maintains the same speed as the target car. Finally, a TTC approaching zero indicates a very hazardous situation: the distance between the vehicles is shrinking and a potential collision may occur. TTC, as an important output of the sensor model, is therefore used as an essential condition for deciding whether a lane change occurs.
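Equations (1)–(3) and the case distinction above can be condensed into a small function. This is a sketch under stated assumptions, not the authors' code: the sentinel `TTC_MAX` for "no relevant target" and the choice of the smallest positive root are assumptions.

```python
import math

TTC_MAX = 99.0  # assumed sentinel for "no vehicle in range / no predicted collision"

def time_to_collision(s, v, a):
    """TTC from Eq. (1): s = a/2 * t^2 + v * t.

    s: relative distance to the target car [m]
    v: relative (closing) velocity [m/s]
    a: relative acceleration [m/s^2]
    Returns the smallest positive root of Eq. (2), or TTC_MAX when the
    discriminant D (Eq. (3)) is negative, i.e., no collision is predicted.
    """
    if abs(a) < 1e-6:                       # constant relative speed: s = v * t
        return s / v if v > 0 else TTC_MAX
    d = v * v + 2.0 * a * s                 # discriminant D, Eq. (3)
    if d < 0:                               # no real root: the gap never closes
        return TTC_MAX
    roots = [(-v + math.sqrt(d)) / a,       # Eq. (2)
             (-v - math.sqrt(d)) / a]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else TTC_MAX
```

For example, a 50 m gap closing at a constant 10 m/s gives a TTC of 5 s.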

#### 2.3.2. Decision Making

Decision making is another important function of the sensor model. With the TTC determined in the previous section, the decision to change lanes, based on the left-hand overtaking rule of the road, is initialized according to TTC thresholds predefined in the decision-making algorithm. Figure 3 illustrates the decision-making process for choosing the lane change direction. The resulting commands are transmitted as internal signals to the driving function model. Before the vehicle reaches cruising speed, if the TTC detected in the same lane is highest, the target vehicle is far enough away to be safe; the ego car then continues in the current lane unless the TTC in the adjacent lane is higher and the lane change conditions are satisfied, in which case the sensor model sends a lane change decision signal to the driving model. Even during the lane change execution, the sensor model keeps calculating the TTC between the ego car and the surrounding cars in every simulation iteration, so that the lane change decision can be revised at any time.
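The branching in Figure 3 can be sketched as a TTC comparison with a left-lane preference. This is a hypothetical rule for illustration only; the threshold value and the sentinel `TTC_MAX` are assumptions, not the calibrated values of the decision-making algorithm.

```python
TTC_MAX = 99.0  # assumed sentinel for "no relevant target ahead"

def lane_change_decision(ttc_front, ttc_left, ttc_right, ttc_threshold=6.0):
    """Choose a lane change direction from the TTCs of the current and
    adjacent lanes, preferring the left lane (left-hand overtaking rule).

    Returns 'keep', 'left', or 'right'. The 6 s threshold is an assumption.
    """
    if ttc_front >= TTC_MAX or ttc_front > ttc_threshold:
        return 'keep'                        # target car far enough away
    if ttc_left > ttc_front and ttc_left > ttc_threshold:
        return 'left'                        # overtake on the left
    if ttc_right > ttc_front and ttc_right > ttc_threshold:
        return 'right'                       # right lane as fallback
    return 'keep'                            # no safe gap: follow the leader
```

Because the sensor model re-evaluates the TTCs every simulation iteration, this function would be called again during the maneuver, allowing the decision to change at any time.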

#### 2.3.3. Other Relevant Signals

Other signals related to sensor perception can be read directly from the Vissim simulation environment via the DLL interface, without additional computation. These signals are likewise used in the driving function model.


**Figure 3.** Process of decision making based on TTC calculation.

#### *2.4. Highly Automated Vehicle Model*

The HAV model is implemented based on the HWC function defined in [13], which represents the most advanced vehicle automation technology operating on motorways only. The HAV model should adapt to all types of traffic conditions. The physical environment and the driving function are virtually reconstructed and simulated in Vissim, and the vehicles can carry out maneuvers in a fully autonomous and safe manner. The HAV model is primarily responsible for the longitudinal and lateral control of the ego car, which are introduced in the following subsections.

#### 2.4.1. Longitudinal Control

As one of the already series-produced and common longitudinal controllers, Adaptive Cruise Control (ACC) has the task of maintaining a desired longitudinal speed or distance to a preceding vehicle. The International Organization for Standardization (ISO) standard 22179:2009 [23] defines the Full Speed Range Adaptive Cruise Control (FSRA), which allows control not only in free-flowing but also in congested traffic conditions. The system regulates the velocity of the ego vehicle depending on the vehicles in front and other traffic objects. Furthermore, an FSRA-type system attempts to stop behind an already tracked vehicle within its limited deceleration capabilities. The presented FSRA algorithm is developed based on a longitudinal vehicle model and the speed and distance controllers introduced in [24]. The overall control scheme of the FSRA implementation is depicted in Figure 4. The input signals are separated into three types: sensor inputs, ego vehicle states, and desired vehicle states. The sensor inputs are the relative speed *δv* and distance *δs* to a target vehicle, provided by the sensor model and transmitted to the distance controller. As shown in Figure 1, the current traffic vehicle states can be read directly from Vissim; the longitudinal ego vehicle velocity *v<sub>x</sub>* is therefore transmitted to the distance controller and the speed controller. For the desired states, the desired velocity *v<sub>d</sub>* and safe distance *s<sub>d</sub>* are predefined by the user. Additionally, acceleration-based control is used for the longitudinal control algorithms: the distance and speed controllers generate the desired accelerations *a<sub>s,d</sub>* and *a<sub>c,d</sub>*, respectively. The control logic calculates a final desired acceleration *a<sub>d</sub>*, which is forwarded through the DLL interface to control a Vissim traffic vehicle. All model parameter values are set according to [25], which draws on established literature.

**Figure 4.** Longitudinal control schema in the driving model.
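The control logic of Figure 4 can be sketched with simple proportional controllers. This is a minimal illustration under stated assumptions: the gains, limits, and the min-selection between the two desired accelerations are generic ACC practice, not the calibrated values of [24,25].

```python
def fsra_acceleration(v_x, v_d, delta_s, delta_v, s_d,
                      k_s=0.2, k_v=0.5, k_speed=0.3,
                      a_min=-3.5, a_max=2.0):
    """Final desired acceleration a_d of the FSRA control logic.

    v_x:     ego longitudinal velocity [m/s]
    v_d:     desired cruising velocity [m/s]
    delta_s: gap to the target vehicle [m] (None if no target in range)
    delta_v: relative velocity to the target [m/s] (target minus ego)
    s_d:     desired safe gap [m]
    All gains and limits are assumptions for this sketch.
    """
    a_c_d = k_speed * (v_d - v_x)                      # speed controller: a_c,d
    if delta_s is None:
        a_d = a_c_d                                    # free flow: pure cruising
    else:
        a_s_d = k_s * (delta_s - s_d) + k_v * delta_v  # distance controller: a_s,d
        a_d = min(a_c_d, a_s_d)                        # control logic: take the safer demand
    return max(a_min, min(a_max, a_d))                 # actuator limits
```

With no target in range the speed controller alone sets the demand; when following too closely, the distance controller's (negative) demand dominates through the min-selection.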

#### 2.4.2. Lateral Control

The lateral control of the HAV mainly simulates the lane change function of a vehicle. For an autonomous lane change at SAE Level 3, no ISO standard has been defined yet. However, ISO 21202:2020 [26] deals with Partially Automated Lane Change Systems (PALS). It describes basic control strategies, basic driver interface elements, minimum requirements for reaction to failure, minimum functionality requirements, and performance test procedures for a PALS. Such systems are only possible on roads where no pedestrians or other non-motorized road users take part in the traffic. For the autonomous lane change of the HAV, the vehicle has to observe its position within the lane as well as the adjacent lanes and obstacles in the vicinity. Meanwhile, the ego vehicle can initiate a lane change on its own, as defined by PEGASUS [13]. Figure 5 presents the complete lateral control logic. The sensor model determines the lane change decision based on the target car ahead of the ego car in the same lane and the surrounding driving situation in the adjacent lane. It sends the fused TTC calculation and the lane change indication *f<sub>lc</sub>* to the Trajectory Planning Block (TPB). Meanwhile, the TPB receives the time consumed by the whole lane change process *t<sub>lc</sub>*, the vehicle speed *v<sub>m</sub>* at the moment the lane change is triggered, and the lateral displacement *h* in real time from Vissim. Consider, however, a corner case in which an accelerating car suddenly approaches from behind during a lane change. In this case, the TPB should abort the lane change and re-plan a path back to the initial lane based on the rear TTC Δ*t<sub>r</sub>* provided by the sensor model. In the end, the TPB converts its inputs into the heading angle of the vehicle, the lane change active command, and the lane change direction. These signals are transmitted via the DLL interface to control the Vissim traffic vehicle until the lane change is finished.
The lane change trajectory generation and the back-planning trajectory generation in the TPB are therefore described as two important functions in the following:

**Figure 5.** Lateral control schema in the driving model.

Regarding lane change trajectory generation, the algorithm for the lane change behavior and its simulation implementation are presented in [27,28]. From these investigations it is known that the lane change is a time-based behavior. Therefore, the displacement of the vehicle from the center of the road can be derived from the trajectory equation for the lane change. The lateral *y<sub>lat</sub>*(*t<sub>lc</sub>*) and longitudinal *y<sub>long</sub>*(*t<sub>lc</sub>*) trajectories are calculated by the polynomial Equations (4) and (5), respectively. With this method, a smooth trajectory is composed of only a few points. Meanwhile, the acceleration profile is calculated by Equation (6) in order to match the lateral motion more realistically, where *t<sub>m</sub>* is the maneuver time, calibrated to 6 s, which corresponds to an average lane change time according to [27]. Referring to the description of the longitudinal behavior in [28], the maximum acceleration is set to *a<sub>max</sub>* = 1.2 m/s². For a complete lane change, *h<sub>l</sub>* = 3.5 m, which represents the displacement from one lane center line to the other (the lane width). The lane change process accelerates in the beginning phase and gradually decreases in speed after the maneuver is completed. As shown in Figure 6a, the ego car detects a slower car ahead and that the adjacent lane is available for a lane change; thus, a trajectory is generated.

$$y\_{lat}(t\_{lc}) = \left(\frac{6h\_l}{t\_m^5}\right) \cdot t\_{lc}^5 - \left(\frac{15h\_l}{t\_m^4}\right) \cdot t\_{lc}^4 + \left(\frac{10h\_l}{t\_m^3}\right) \cdot t\_{lc}^3 \tag{4}$$

$$y\_{long}(t\_{lc}) = v\_m \cdot t\_{lc} \tag{5}$$

$$a(t\_{lc}) = a\_{max} \cdot \sin\left(\frac{2\pi}{t\_m} \cdot t\_{lc}\right) \tag{6}$$
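Equations (4)–(6) can be evaluated directly; the lateral profile is the standard minimum-jerk quintic in normalized time. This is a sketch for illustration, using the parameter values quoted in the text (*t<sub>m</sub>* = 6 s, *h<sub>l</sub>* = 3.5 m, *a<sub>max</sub>* = 1.2 m/s²).

```python
import math

T_M = 6.0     # maneuver time t_m [s], average lane change duration per [27]
H_L = 3.5     # lane width h_l [m], center line to center line
A_MAX = 1.2   # maximum acceleration [m/s^2], per [28]

def y_lat(t_lc, h=H_L, t_m=T_M):
    """Lateral displacement, Eq. (4), as a quintic in normalized time."""
    tau = t_lc / t_m
    return h * (6 * tau**5 - 15 * tau**4 + 10 * tau**3)

def y_long(t_lc, v_m):
    """Longitudinal displacement, Eq. (5): constant speed v_m at trigger time."""
    return v_m * t_lc

def accel_profile(t_lc, a_max=A_MAX, t_m=T_M):
    """Acceleration profile, Eq. (6): one sine period over the maneuver."""
    return a_max * math.sin(2 * math.pi / t_m * t_lc)
```

The quintic starts and ends with zero lateral velocity and acceleration, crosses half the lane width at *t<sub>m</sub>*/2, and reaches the full 3.5 m offset at *t<sub>m</sub>*.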

In back-planning trajectory generation, Δ*t<sub>r</sub>* is continuously checked as a safety criterion until the lane change is completed; the lane change is aborted if the criterion fails. Figure 6b presents a typical scenario: the ego car plans to change lanes to pass the slower Car 1 in front of it, but a fast-approaching Car 2 forces it to abandon the lane change and move back to the initial lane. At this moment, *h* in Equation (4) is adjusted to the lateral displacement between the current vehicle position and the initial point, and the lane change abortion path is generated by Equations (4)–(6). The HAV then drives back to its original lane along the abortion path. With this abortion mechanism, the lane change safety of the HAV can be guaranteed.
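The re-planning step can be sketched by reusing the quintic shape of Equation (4) with the amplitude replaced by the current lateral offset. This is a hypothetical reading of the abort mechanism, not the authors' implementation; the returned-closure design and the rear-TTC threshold are assumptions.

```python
def abort_trajectory(h_current, t_m=6.0):
    """Back-planning path: from the current lateral offset h_current
    back to the initial lane, using the same quintic shape as Eq. (4).

    Returns a function of maneuver time t that starts at h_current
    and decays smoothly to zero.
    """
    def y_back(t):
        tau = t / t_m
        return h_current * (1.0 - (6 * tau**5 - 15 * tau**4 + 10 * tau**3))
    return y_back

def must_abort(ttc_rear, ttc_min=3.0):
    """Safety check on the rear TTC (threshold is an assumed value)."""
    return 0.0 < ttc_rear < ttc_min
```

A vehicle aborting at a 2 m offset would follow `abort_trajectory(2.0)` back to zero offset over one maneuver period.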

**Figure 6.** Lane change trajectory generation (**a**) a complete lane change trajectory planning; (**b**) an abortion trajectory generation.

#### *2.5. Connected and Automated Vehicle Model*

Based on the HAV model introduced in the previous section, the CAV model is proposed to simulate a realistic environment with a V2V cooperative lane change function. Wireless technologies are evolving rapidly, which creates opportunities to use them in support of advanced vehicle safety applications and crash avoidance countermeasures [29]. Compared to the HAV model, CAVs share their positions, speeds, accelerations, and states with each other, which has a greater impact on the upstream vehicles in the adjacent lane. The CAV model in this section therefore focuses on the cooperation and communication between vehicles. The preliminary application communication scenario requirements are defined in [30]: the lane change warning function shall support a maximum distance of up to 150 m.

As the CAV has the basic functions of the HAV, as illustrated in Figure 7, it uses the same lateral and longitudinal control mechanism and sends the same command signals *cmd* to the DLL interface. Meanwhile, the V2V block receives state signals from the HAV model and from Vissim (i.e., the current ego car speed *v* and acceleration *a*, the rear TTC Δ*t<sub>rear</sub>*, and the lateral displacement *h*). Additionally, to simulate communication between vehicles, the ego car broadcasts an encapsulated lane change warning message *COM<sub>message</sub>* within a radius of 150 m once the lane change flag *f<sub>active</sub>* is triggered. All CAVs within the coverage area receive this signal, and the relevant vehicles can adjust their speed based on the received information. A typical scenario is shown in Figure 8: Car 3 is a CAV with V2V communication and follows Car 2. When the ego car detects Car 1 ahead and the adjacent lane meets the lane change conditions, a lane change warning message is broadcast before the lane change is triggered. After Car 3 receives the warning message from the ego car, it switches its longitudinal control target from Car 2 to the ego car. The ego car keeps broadcasting V2V messages to Car 3 in real time during the lane change, and Car 3 adjusts its longitudinal movement according to the V2V signals to reserve a safe space for the ego car.
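The broadcast step can be sketched as a range-limited message delivery. This is a simplified illustration of the V2V block, not a communication-stack model; the dictionary message format and the `pos`/`connected`/`inbox` keys are assumptions for the example.

```python
import math

V2V_RANGE = 150.0  # maximum lane change warning distance [m], per [30]

def broadcast_warning(ego_pos, ego_state, vehicles):
    """Deliver a lane change warning message to every connected vehicle
    within V2V_RANGE of the ego car.

    ego_pos:   (x, y) position of the ego car [m]
    ego_state: dict of shared states (e.g., speed, acceleration, offset h)
    vehicles:  list of dicts with hypothetical keys 'pos', 'connected', 'inbox'
    """
    message = {'type': 'lane_change_warning', **ego_state}
    for veh in vehicles:
        if veh['connected'] and math.dist(ego_pos, veh['pos']) <= V2V_RANGE:
            veh['inbox'].append(message)   # receiver retargets its longitudinal control
    return message
```

A receiving vehicle such as Car 3 would read the message from its inbox and switch its ACC target to the ego car for the duration of the maneuver.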

**Figure 7.** CAV model in the driving model.

**Figure 8.** Cooperative lane change for CAV.
