Article

An Approach to Guide the Search for Potentially Hazardous Scenarios for Autonomous Vehicle Safety Validation

1 Stellantis, Université de Lorraine, 54000 Nancy, France
2 Research Team on Innovative Processes (ERPI Laboratory), Université de Lorraine, 8 rue Bastien Lepage, 54000 Nancy, France
3 Research Centre for Automatic Control of Nancy (CRAN Laboratory UMR CNRS 7039), Université de Lorraine, 54506 Vandœuvre-lès-Nancy, France
4 Stellantis, 78140 Vélizy-Villacoublay, France
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(11), 6717; https://doi.org/10.3390/app13116717
Submission received: 3 April 2023 / Revised: 27 May 2023 / Accepted: 28 May 2023 / Published: 31 May 2023

Abstract:
Safety validation of Autonomous Vehicles (AV) requires simulation. Automotive manufacturers need to generate scenarios used during this simulation-based validation process. Several approaches have been proposed to master scenario generation. However, none have proposed a method to measure the potential hazardousness of the scenarios with regard to the performance limitations of AV. In other words, there is no method offering a metric to guide the search for potentially critical scenarios within the infinite space of scenarios. However, designers have knowledge of the functional limitations of AV components depending on the situations encountered. The more sensitive the AV is to a situation, the more safety experts consider it to be critical. In this paper, we present a new method to help estimate the sensitivity of AV to logical situations and events before their use for the generation of concrete scenarios submitted to simulators. We propose a characterization of the inputs used for sensitivity analysis (definition of the context of the automation function, generation of functional and logical situations with their associated events). We then propose an approach to set up a distribution function that will make it possible to select situations and events according to their importance in terms of sensitivity. We illustrate this approach by implementing it on the Traffic Jam Chauffeur (TJC) function. Finally, we compare the obtained sensitivity rank with expert judgment to demonstrate its relevance. This approach has been shown to be a promising method to guide the search for potentially hazardous scenarios that are relevant to the simulation-based safety validation process for AV.

1. Introduction

The standard ISO 26262 [1] is the international reference used in the automotive field for functional safety. This standard deals with the electrical and electronic malfunctions that vehicles may face, and which can lead to safety issues. However, in the context of Autonomous Vehicles (AV), ISO 26262 is limited. Indeed, it does not address safety violations which may be caused by performance limitations in the absence of failure. It has also been observed that, with AV, the driver may be out of the decision loop, in contrast to human-controlled vehicles. Consequently, we cannot rely on the driver to deal with the deviations resulting from the behavior of other road users or unusual road conditions. However, automotive manufacturers need to ensure that AVs are sufficiently safe before their deployment on public roads.
For this reason, another standard known as ISO 21448 [2] was recently proposed. Its objectives are (i) to consider the AV’s functional performance limitations, which may occur in the absence of failures, and (ii) to cover the reasonably foreseeable misuse of the vehicle. This new standard ISO 21448 is to be used complementarily with the ISO 26262 standard. It is commonly called SOTIF (Safety Of the Intended Functionality), which means: “absence of unreasonable risk due to hazards resulting from functional insufficiencies of the intended functionality or from reasonably foreseeable misuse by persons” (ISO 21448, 2022).
The technologies used to develop an AV are specific: sensors, localization systems, communication systems and intelligent control systems (potentially including Artificial Intelligence based algorithms). These technologies are subject to limitations which are mainly due to the features of the environment in which the vehicle will operate, e.g., rain, fog, snow, degradation of pavements, puddles, curvature of roads, lack of road markings [3,4]. This environment is very complex and its description involves a large number of parameters which must be identified. Therefore, the activities detailed in the standard are intended to define an efficient process for identifying and dealing with scenarios that will be critical for the AV during its operational phase. According to the standard, critical scenarios relate to two areas: the first refers to “known unsafe scenarios” (area 2) and the second to “unknown unsafe scenarios” (area 3). To explore these two areas, simulation-based validation approaches need to be used to complement driving validation [5]. This is all the more true given that Kalra and Paddock [6] have demonstrated that the validation of AV by driving would require covering billions of kilometers. The use of simulation has many advantages, including testing controllability and path planning strategies, simulating different ranges of operational parameters, ensuring reproducibility and efficiency of the tests [7,8], testing specific collision avoidance strategies [9], etc. Previous authors have worked on the validation of automotive models and proposed an assessment process that is not only focused on safety validation, but also on the quantification of modeling errors and uncertainties of the simulation compared to real driving [10].
However, the process of implementing simulation-based safety validation must allow sufficient coverage of the operational environment of the AV and meet the “miles of driving” requirement. Thus, there is a need for an approach which can identify the set of test scenarios and contribute to an efficient simulation-based validation process [11]. Such an approach would make it possible to select the scenarios that the AV has to face, in particular those that are considered to be challenging, to achieve the validation objectives more quickly. The main issue is how to identify these scenarios. On the road, frequent and often unchallenging scenarios (considered as nominal) will be encountered, but rare and challenging ones may not be. The simulation must therefore be oriented to test the AV with critical scenarios that are particularly interesting in order to demonstrate the safety of the vehicle.
Some approaches have been suggested in the industrial domain to help identify these test scenarios [12]: feedback from experience, feedback from users, accidentology databases, information extracted from driving, consortiums and projects such as MOOVE or PEGASUS.
However, despite the efforts made by the car manufacturers, it is difficult to predict all the real-life situations that AV will face, due to many variations in environmental conditions (traffic conditions, weather conditions, infrastructure, behaviors of other road users). As a result, other recommendations to control the scenario generation process have been proposed. The first is to deploy the AV by level of automation. In this case, the vehicle is limited to a number of tactical maneuvers and can carry out its mission in a dedicated and controlled environment called ODD (Operational Design Domain). Another proposal is to structure scenarios into three levels of abstraction in order to gradually identify them: functional, logical and concrete [13]. The functional scenario describes all entities and their relationships in a human-understandable linguistic form. The second level, the logical scenario, uses the functional scenario to describe the state space using parameter ranges. Finally, the concrete scenario allows concrete values to be assigned to the previously defined parameters. This is obtained by choosing a value for each parameter from the range of values defined in the corresponding logical scenario. Structuring scenarios in three levels of abstraction is interesting because it allows a scenario-based approach for the development of AV, while offering a hierarchical approach to structuring the generation process within the infinite space of scenarios.
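To make the three abstraction levels of Menzel et al. more tangible, the short sketch below shows how the same scenario element could be represented at each level. It is only an illustration under assumed parameter names and bounds; none of these values come from the paper.

```python
# Illustrative sketch of the three abstraction levels (functional, logical, concrete).
# Parameter names and bounds are assumptions, not taken from the paper.
import random

functional_scenario = "Ego follows a lead vehicle on a motorway in rainy weather"

logical_scenario = {                      # state space described by parameter ranges
    "Speed_Ego_kmh": (0, 60),
    "Rainfall_mm_per_h": (1, 7),
    "Distance_to_lead_m": (5, 50),
}

concrete_scenario = {                     # one concrete value drawn from each range
    name: round(random.uniform(low, high), 1)
    for name, (low, high) in logical_scenario.items()
}
print(functional_scenario, concrete_scenario)
```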
These approaches seem interesting and helpful in the scenario identification process. However, to the best of our knowledge, there is no approach which proposes a metric to identify potentially critical scenarios in advance, in order to guide scenario generation before the simulation. In the literature, several performance indicators (criteria and metrics) have been proposed for the validation of AV [14,15,16]. Most of these indicators are used during and after simulation or driving to evaluate the safety level of AV. This means that the proposed indicators can only be used once the scenarios have been run and the measurements made. In other words, these indicators cannot be used to guide scenario generation, as they only qualify the situation after the simulation.
Therefore, in this paper, we propose an approach to analyze and estimate the sensitivity of AV to logical situations and events, in order to guide the search for scenarios that are relevant to the simulation-based safety validation process.
The rest of the paper is organized as follows. Section 2 is a more detailed review of the state of the art. Firstly, it reviews the performance indicators for scenario generation and simulation-based validation of the safety level of AV. Secondly, it recalls the concepts related to scenario generation strategies in the context of AV. Section 3 provides a quick overview of the proposed approach. Then, as previously mentioned, a structuring of scenarios into functional, logical and concrete levels is used, as proposed by Menzel et al. [13]. Since the sensitivity metric that we want to implement requires knowledge of the operational situations at stake, the ranges of values of the parameters, at the logical level of abstraction, are specified. In Section 4 we include a step to achieve the generation of situations and potentially associated events at the functional and logical levels. Even if we have this structural framework (functional, logical, concrete) for scenario generation, its operationalization is not immediate. Unresolved questions include: which elements are relevant for functional level characterization, how can value ranges of the logical scenarios be defined, and how can these ranges be combined to obtain logical situations? We also identify and characterize the usage context of an automation function in Section 4, as this is necessary for the characterization of functional and logical scenarios. Once logical situations have been defined, Section 5 proposes a sensitivity analysis of AV to logical situations and events in order to prioritize the exploration of those scenarios that pose risks for the AV. The proposed sensitivity analysis is then applied in Section 6. Section 7 summarizes the results, compares our results with those of an expert and addresses some discussion points. In Section 8, we conclude this paper and propose further work that can be addressed in the future. The Abbreviations section sums up the acronyms and variables used in this paper.

2. Related Work

Different metrics have been proposed in the literature to measure the performance of tests performed for the safety validation of AV. The most widely mentioned metric is test coverage, which is commonly encountered in software engineering. This is defined as follows: “degree, expressed as a percentage, to which specified test coverage items have been exercised by a test case or test cases” (ISO/IEC/IEEE 29119-1 [17]). Indeed, it is necessary to justify that the scenarios used during simulation for the validation of the AV cover its entire operational environment. Such a justification can be based on the fact that driving in real conditions contains “dead periods” which are not useful for the validation. On the other hand, the use of scenarios allows focusing only on what is relevant from a validation point of view. However, the large number of parameters influencing the behavior of the AV creates a combinatorial explosion that makes it impossible to test all possible scenarios. For this reason, Amersbach and Winner [15] proposed to determine another indicator: the number of scenarios to be tested. This provides an equivalence to the distance indicator proposed by Kalra and Paddock [6]. Indeed, in order to cover a target distance for the validation of the AV (depending on the level of automation), the corresponding number of scenarios required for testing is determined. Zhou and Re [16] prefer to directly use the coverage indicator of the number of critical scenarios by using databases in which thousands of scenarios are recorded for reference. The reference database used by the authors is the Second Strategic Highway Research Program Naturalistic Driving Study (SHRP 2 NDS). This database includes 775 critical highway events. The idea is to check what percentage of the critical scenarios in the baseline were covered by the scenarios generated by Zhou and Re.
In recent years, advances have also been made regarding the effectiveness of Advanced Driver Assistance System (ADAS) functions in the case of possible road crashes. Authors have provided risk metrics for scenarios where a crash may be inevitable. For instance, Vangi et al. [18] proposed criteria to intervene on braking and steering, based on the injury risk to vehicle occupants. According to Alvarez et al. [19], to estimate the safety benefits of ADAS and active safety systems, “the most frequently used metrics are the number (or reduction in the number expressed as percentage) of avoided accidents and the number of avoided injuries”. Gulino et al. [20] defined a Crash Momentum Index (CMI) and demonstrated its effectiveness in assessing the performance of ADAS in critical road scenarios. Smit et al. [21] pointed out the interest of developing appropriate testing methods to assess the safety capabilities of ADAS (avoiding accidents or mitigating the injury severity). They proposed an impact model which aims to predict collision parameters (here delta-v). These are correlated with the risk of becoming injured and based on the re-simulation of real accidents. Such a model is useful to assess the effects of ADAS in terms of injury severity in accidents. However, even if the sensitivity of the model is evaluated for some input parameters, it is not sufficient for Automated Driving Systems (ADS). In fact, the scenario space to be covered for the safety validation of ADS is too large. In addition, there is little feedback on accidents related to these ADS and their behavior, as they have not yet been introduced on the road.
Furthermore, since safety validation is required to ensure that the AV can safely react in all the scenarios it will face, another important consideration is how to measure and assess the hazardousness of a scenario during validation. ISO 26262 assesses the risk associated with a scenario through a combination of severity, exposure and controllability: “combination of the probability of occurrence of harm (§1.56) and the severity (§1.120) of that harm” (ISO 26262-1, 2011). Feng et al. [22] also proposed a definition of critical scenario: “The criticality of a scenario measures the importance in evaluating a performance metric”. Here, the performance is related to the following indicators “Safety, Functionality, Mobility, Rider’s comfort”. Another definition is given by Hallerbach et al. [23], in these words: “Critical scenarios are defined as scenarios that need to be tested, regardless, whether the requirements are functional or non-functional”.
However, the criticality referred to by these authors can only be effectively measured once the simulation has been carried out and the reaction of the AV observed. Furthermore, the number of scenarios that need to be tested is very high and the identification of all critical scenarios can be slow. Yet we have not found in the literature a metric to guide the generation of scenarios before simulation by identifying the potentially critical scenarios beforehand. In this paper, we thus propose a sensitivity metric to fill this gap. The notion of “sensitivity” mentioned here is quite different from the usual meaning, which consists in analyzing how the uncertainty in a system’s output (e.g., a failure probability, an injury severity probability) can be attributed to the uncertainty in its input variables [24,25].
Four key concepts appear in the context of scenario generation for the safety validation of AVs: scenario, scene, situation and event. Definitions have been proposed for each of these by several authors [26,27,28,29]. The desire to establish a consensus has given rise to reflections, notably in the framework of ISO 21448. The definitions proposed by Ulbrich et al. were considered as references in the preliminary version of this standard. However, Koné et al. [30] found that some terms, such as “scenery” and “self-representations”, were still under discussion. They also noted that the definition proposed by Ulbrich et al. for the concept of scene, used to define a scenario, remains unclear: “A scene describes a snapshot of the environment including the scenery and dynamic elements, as well as all actors’ and observers’ self-representations, and the relationships among those entities. Only a scene representation in a simulated world can be all-encompassing (objective scene, ground truth). In the real world it is incomplete, incorrect, uncertain, and from one or several observers’ points of view (subjective scene)”.
This definition highlights that a scene can contain a great deal of information and that its implementation can be very subjective and vary from person to person. Furthermore, the term “snapshot” deprives the elements of the environment of their stable character (no notion of movement or lifespan duration).
As an alternative, Koné et al. [30] took a closer look at the concept of situation. They found that the term “operational situation” is often considered in scenarios for describing accidents [31] or in methods for determining “hazardous events” [32].
Let us remember that we seek knowledge of the state of the AV and the elements of the situation [33]. We need to know which conditions must be true to solicit a reaction from the AV [28]. As a result, Koné et al. point out that focusing the scenario description on the situation makes it possible to highlight the influencing factors and interactions between the elements of the situation and the AV. This is particularly relevant because the situation also indicates a restrictiveness related to the relevant elements of the scene and is therefore a more interesting level of abstraction than the scene.
In this work, the concepts that will be of interest are scenario, operational situation and event. The definitions proposed by Koné et al. serve as a reference to define these concepts [30]:
Operational situation: set of states of the AV and other entities in an operational environment present for a period of time, defined by the stability of these states.
A scenario: description, in a given time interval, of the temporal sequence of operational situations. The transition from one (initial) situation to another (intermediate) is caused by one or more events.
Event: modification of the state of one or more environmental entities (AV and others) which induces, can induce or must induce a new action from the AV (new behavior). Action is the reaction of the AV following an event (definition proposed from the perspective of the AV).
In summary, this section pointed out the limitations of existing approaches to generate test scenarios and defined key concepts that will be used in the following sections.
The rest of this document develops the steps necessary to set up the AV sensitivity analysis to logical situations and events. The first step is to define the usage context of an automation function and the process for obtaining functional and logical situations, with their associated events.

3. Overview of the Proposed Method

In this section, we present the general structure of the method, which results in the analysis of the sensitivity of AV to situations and events. It is structured according to the following steps:
  • Define the usage context of the automation function to be tested (Section 4.1)
  • Generate situations and events at the functional level (Section 4.2)
  • Generate situations at the logical level (Section 4.3)
  • Analyze the sensitivity of the AV to logical situations and define the sensitivity distribution function (Section 5.1)
  • Analyze the sensitivity to events and define the sensitivity distribution function (Section 5.2)
These different steps are detailed in the following sections.

4. Context Definition of an Automation Function, Generation of Situations and Events at the Functional and Logical Abstraction Levels

In this section, we present the steps for generating scenarios at the functional and logical levels. The first step consists in defining the scope of the generation by choosing the automation function concerned and defining its usage context.

4.1. Define the Usage Context of the Automation Function to Be Tested

Once the automation function and its automation level have been identified, we need to describe the operational environment in which it will be used. This operational environment is called ODD (Operational Design Domain). It is described in such a way as to emphasize the concept of operational situation. This provides knowledge of the state of the AV (i.e., the expected properties, the level of automation and the corresponding speed range), and of the other entities (i.e., the physical infrastructure, the weather conditions and the other road users). This activity is carried out in collaboration with the functional designer, as he/she has knowledge of the specification of the function and the mission profile related to customer needs.
The expected properties of the AV, such as its functional performances, are among the elements needed to define the usage context. These functional performances must be identified in order to deepen the knowledge of the function and to know what to test during the validation.
Likewise, we also need to specify the operational modes in which the AV may find itself and the maneuvers it may perform during its use. This information is useful to accurately qualify the behavior of the vehicle at the occurrence of any event. Then, these modes and maneuvers must be characterized by specifying their validity criteria. This information helps to qualify the events whose occurrence generates changes or transitions between operational modes and maneuvers.

4.1.1. Example: Choice of the Automation Function

The contributions of this paper will be illustrated on the “Traffic Jam Chauffeur (TJC)” function. The TJC is a level 3 automation function, designed to support autonomous driving on divided lanes (motorway or expressway), in congested conditions, where the speed is limited to a maximum speed Vmax (in the range of 60–70 km/h). The Ego vehicle (the vehicle equipped with the automation function, corresponding here to the AV) follows the vehicle in front of it, keeps to its lane, manages acceleration and braking within the authorized speed limit, and keeps a safe distance. The driver can perform other activities but must remain alert and be able to regain control of the vehicle within 10 s when prompted by the system. The system should allow the driver to activate the function and to intervene at any time, and should give them enough time to do so.

4.1.2. Example of TJC Context Description

Table 1 gives the entities of the ODD, i.e., the type of infrastructure in which the function is to be used, the other users and the weather conditions. Functional constraints, corresponding to the TJC function, are defined in the column “Ego Vehicle”. They concern the speed limit for the function (in our example, the speed is limited to 60 km/h), the need for a driver and examples of unauthorized operational maneuvers.
As regards operational modes, several are valid for TJC, including Activation, Give Back and Minimum Risk Maneuver (MRM) modes. The system is in Activation mode when the automation function is activated. The system switches to Give Back mode when the function can no longer guarantee its nominal operation, either because the availability conditions are no longer met or a failure is detected. The MRM mode corresponds to a reduction of risk in the case where the driver does not respond to a request to take over. This may be a vehicle deceleration phase. Moreover, “Car Following” is the main maneuver selected for the TJC function. Its validity conditions are as follows: the function must be in Activation mode; a vehicle must be in front of the Ego (this will be named lead vehicle or OtherVehicle.1); the inter-distance between Ego and the lead vehicle is greater than or equal to the minimum safe distance. In addition, several functional performances are expected from the function in terms of situation perception, trajectory control and reaction to the situation. For perception, the system should, for example, have the following functionalities: identify the appropriate type of infrastructure; identify the specified weather conditions; identify other relevant objects. Otherwise, certain performance limitations may be observed, including incomplete/limited perception of the situation (e.g., the vehicle does not see all elements of the situation) or misperception of the elements of the situation (e.g., a motorbike is seen as a car).
All the information gathered during the definition of the usage context will allow the generation of scenarios according to the three abstraction levels: functional, logical and concrete.

4.2. Generate Situations and Events at the Functional Level

Operational maneuvers allow the description of vehicle behaviors and can be identified so as to be mutually exclusive. Since the aim is to test the behavior of the AV, the operational maneuver seems to be a relevant starting point to address the scenario generation problem. Thus, like most work in the context of the AV, we propose to characterize functional scenarios by the corresponding operational maneuvers. For example, one can have a functional scenario for “car following” or “lane changing”. In order to give more details for this characterization, it is necessary to define the functional situations and events which can be found in a functional scenario.

4.2.1. Generate the Functional Situations

Although we defined the situation as describing the state of the AV and of the other entities of its operational environment, when generating the functional situations, we have to remain consistent with the definition of the functional level given by Menzel et al. At this stage of the identification, the detailed information on the parameters and their values is not to be filled in, the aim being simply to know the configurations of situations that will have to be considered when generating the test scenarios. Therefore, the identification of the functional situations proposed here is mainly based on the information collected during the definition of the automation function, its usage context or the operational maneuvers.
Figure 1 shows the process for obtaining the functional situations. These are obtained by combining the information provided by the ODD (type of traffic density, type of infrastructure), and the operational modes and maneuvers.
Examples for the TJC function: Traffic density can be fluid, congested or accordion-like. It is related to the number of users present in the traffic (i.e., around the AV). If Number_of_users < 3, then traffic is fluid, otherwise traffic is congested or accordion-like.
For the infrastructure, several types can be encountered: divided lanes (motorway, expressway), toll area, work zone, road tunnel, etc.
By considering the TJC function and the description of its usage context, we can deduce the functional situations which need to be considered for the generation. Table 2 illustrates the six functional situations obtained.
In the remainder of this example, we focus on a specific functional situation: Ego in a “Car following” situation in the “Activation” mode on a motorway and in congested traffic.

4.2.2. Types of Events That Can Occur within a Functional Situation

An event is defined as a change in the state of one or more entities in the environment (AV and other entities) that induces, may induce or must induce a new action by the AV (i.e., new behavior). In other words, it means that a parameter acquires a new value in a range different from the range of its initial value. Consequently, the change of the value of a parameter modelling the operational environment of the AV corresponds to the generation of an event. Some events are critical and may increase the risk associated with the scenario, and others are not. We have identified all the factors that can cause limitations in the performance of the AV. They can be organized into events occurring within the ODD and those occurring outside the ODD. The first category includes events related to performance limitations of the automation function, deviations in the behaviors of other road users and transitions in the operational maneuvers of road users. The second category mainly includes events related to the constraints of the use of the function and the transitions between the operational modes of the AV.
Furthermore, it is important to note that the concept of event considered here covers the notion of “triggering condition” defined in the Standard 21448 SOTIF as follows: “Triggering conditions: specific conditions of a scenario (3.23) that serve as an initiator for a subsequent system reaction, possibly leading to a hazardous behavior”. This standard also states that an identification of triggering conditions can be supported by a detailed description of the environment model.
Once the functional situations and event types have been identified, the next section defines the various parameters and identifies their value ranges so they can be used in the rest of the generation process.

4.3. Generate Situations at the Logical Level

The aim is to identify the value ranges of the parameters of all the entities (Infrastructure, Other users, Drivers/Passengers, Atmospheric conditions, AV) observed in the functional situation, or which may be subject to an event. Depending on the way in which the variation spaces are divided into ranges, there will be a greater or reduced number of logical situations and events to be tested at the end of the combinations.
We make the following modelling hypothesis: expert knowledge allows us to partition parameter variation spaces and to define a minimum number of ranges for each parameter, depending on its impact on the AV. The AV will be more or less sensitive to each range.

4.3.1. Define Parameters and Value Ranges

Different parameters (variables) have to be considered for the generation of the scenarios and, more specifically, for the description of the scenarios. For example, we have qualitative variables (nominal, ordinal, binary) such as “Type of users (pedestrian, cyclist, vehicle)” or “Level of brightness (Low, Medium, High, Very high)”, and quantitative variables (discrete or continuous) such as “Number of lanes (1, 2, 3 and more)” or “Vehicle_speed (0–30 km/h)”.
We define the sensitivity of the AV to a value range as follows: the degree to which the value range can lure the automation function.
As a situation is defined by a combination of different parameters, the more sensitive the AV is to the value range of each parameter, the more sensitive it is to this situation.
As these parameters are numerous and most of them have continuous value ranges, we thought it appropriate to distinguish the value ranges which contribute significantly to estimating whether the AV safety performance is achieved.
This distinction is made with the help of a safety expert. Partitioning the value ranges into classes (i.e., exclusive but complementary value ranges) is intended to ensure that each possible situation that the vehicle may encounter, and to which it may be sensitive, is properly defined. For each parameter, the safety expert is asked to define different classes of variation. For each class, he/she examines the impact of that class on each AV technology. If two close classes have a similar impact, they may be combined into a single class. The number of ranges for each parameter must be determined according to the different sensitivities of the AV technologies, but the safety expert must reduce this number as much as possible.
Consider the example of the “rainfall level” parameter. Its variation space goes from 1 to 100 mm/h. We propose to divide it into two classes of variation: Light = 1–7 mm/h and Heavy = 8–100 mm/h. As some sensor technologies are sensitive to heavy rainfall, such a partition makes it possible to highlight the range of heavy rainfall. When combined with other parameters, this range increases the likelihood of generating logical scenarios that have a high impact on criticality.
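To make this partitioning concrete, the following minimal sketch shows one way the labelled value ranges could be encoded and queried. The ValueRange structure and the classify helper are illustrative assumptions, not part of the paper's tooling; only the Light/Heavy boundaries mirror the example above.

```python
# Minimal sketch (illustrative): encoding the partition of a parameter's
# variation space into labelled value ranges, mirroring the rainfall example.
from dataclasses import dataclass

@dataclass(frozen=True)
class ValueRange:
    parameter: str      # e.g., "Rainfall_level"
    label: str          # e.g., "Light" or "Heavy"
    low: float          # inclusive lower bound (mm/h)
    high: float         # inclusive upper bound (mm/h)

RAINFALL_RANGES = [
    ValueRange("Rainfall_level", "Light", 1.0, 7.0),
    ValueRange("Rainfall_level", "Heavy", 8.0, 100.0),
]

def classify(value: float, ranges: list[ValueRange]) -> str:
    """Return the label of the range containing `value` (assumes ranges cover the space)."""
    for r in ranges:
        if r.low <= value <= r.high:
            return r.label
    raise ValueError(f"{value} is outside the defined variation space")

print(classify(12.0, RAINFALL_RANGES))  # -> "Heavy"
```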

4.3.2. Generate the Set of Logical Situations Associated with Each Functional Situation

Logical situations are obtained by combining the value ranges of the parameters describing each functional situation. A systematic combination method is used, i.e., each of the subspaces of variation of each parameter appears in each combination obtained. In addition, special care must be taken when combining value ranges. Constraints between the value ranges may be defined in order to ensure that inconsistencies in the obtained situations are avoided.
Suppose the following parameters are used for the description of the scenarios: Number_lanes, Curvature, Speed_Ego (km/h), Rainfall level and Speed_OtherVehicle.1 (km/h). The value ranges for each parameter are shown in Table 3.
By combining the ranges of these five parameters in a systematic way (2 ranges × 2 ranges × 1 range × 2 ranges × 1 range), we obtain the eight logical situations described in Table 4.
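The systematic combination itself can be expressed as a Cartesian product of the value ranges, as in the sketch below. The Ax_y labels follow the naming used later in the paper, but the bracketed descriptions and exact bounds are assumptions for illustration.

```python
# Minimal sketch (illustrative): systematic combination of value ranges into
# logical situations, in the spirit of Table 4.
from itertools import product

PARAMETER_RANGES = {
    "Number_lanes":         ["A1_1 (2 lanes)", "A1_2 (3+ lanes)"],
    "Curvature":            ["A2_1 (Low)", "A2_2 (High)"],
    "Speed_Ego":            ["A3_1 (0-60 km/h)"],
    "Rainfall_level":       ["A4_1 (Light)", "A4_2 (Heavy)"],
    "Speed_OtherVehicle.1": ["A5_1 (Low)"],
}

# Each logical situation is one combination of ranges (2 x 2 x 1 x 2 x 1 = 8 here).
logical_situations = [
    dict(zip(PARAMETER_RANGES.keys(), combo))
    for combo in product(*PARAMETER_RANGES.values())
]
print(len(logical_situations))  # -> 8

# Constraints between ranges (e.g., inherited from the ODD) can then be applied
# as filters to discard inconsistent combinations.
```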
The number of logical situations obtained depends on the number of parameters considered and the ranges of values associated with each of them. A combinatorial explosion can be observed as soon as the parameters are numerous and the space of variation of each one is broken down into two or three ranges. This means that the decision to split or not a space of variation into ranges of values, and the identification of the constraints inherited from the ODD and the function specification, must be done carefully, and in collaboration with experts. The number of ranges for each parameter is determined according to the difference of sensitivity of the AV technologies. This procedure aims to limit the number of combinations and then the number of logical situations generated.
Obtaining logical situations is the phase which precedes the generation of concrete scenarios. The high number of concrete scenarios that can be obtained led us to propose a sensitivity analysis of the AV to logical situations and events in order to prioritize the exploration of those that are at risk for the AV.

5. Sensitivity Analysis of AV to Situations and Events for the Generation of Concrete Scenarios

In order to identify the potentially critical scenarios necessary for the simulation-based validation of the AV, we will proceed to an analysis of its sensitivity to logical situations and events. The aim of this analysis is to focus on the logical situations and events most likely to induce a risk for the AV, and to accelerate the validation process.

5.1. Analyze the Sensitivity of AV to Logical Situations and Define the Sensitivity Distribution Function

In this part, we start by analyzing the sensitivity of AV to logical situations. Then, we will establish a distribution function on the sensitivities of the logical situations.

5.1.1. Definition of the Sensitivity of AV to Logical Situations

The calculation of the a priori sensitivity of the AV to logical situations is obtained in two steps. First, we consider the a priori sensitivity of the AV to each value range characterizing the generation parameters. This is done with the functional architecture of the AV composed as follows: perception, decision and actuation. Five sensor technologies ensure perception functionalities and are grouped as follows: radar (long-range and short-range), ultrasonic, lidar, stereoscopic (3D vision) and monocular cameras. The proposed process for determining sensitivity is not tied to specific components. It offers the possibility of being adaptable to the specificities of the components that will be used on a given vehicle. Thus, for each considered range, with the help of the expert, we answer the following question: is the sensor technology under consideration (or decision or actuation) sensitive in this range?
The expected answer from the expert is either yes or no. Then, the scale described in Table 5 makes it possible to determine the level of sensitivity of the AV to the range according to the impacted AV component(s). Next, all the responses obtained will be used to determine the a priori sensitivity of the AV to each logical situation. Indeed, the logical situation is a combination of ranges of values. We propose to calculate the a priori sensitivity of the AV to a combination of ranges (and therefore to a logical situation) by aggregating the sensitivities associated with the ranges.
We assume that each component (perception, decision, actuation) makes a maximum contribution of 1 to the overall sensitivity of a logical situation. Since the AV has five main types of sensor technologies, each sensor technology is considered to make a contribution of 0.2 towards the sensitivity to the logical situation if it is impacted by at least one range of variation. The decision made by the AV (in the absence of any lure of the sensors) or the actuation makes, respectively, a contribution of 1 to the sensitivity of the logical situation when it is misled by at least one of the ranges of values composing the logical situation. Table 6 synthesizes the contribution of each component in obtaining the sensitivity of a logical situation.
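This aggregation rule can be summarized by the short sketch below, assuming the per-range impact sets come from the expert analysis. The specific sensors named in the example are an illustrative assumption, not the paper's Appendix tables; only the resulting value of 1.6 matches the Y3 case discussed later.

```python
# Minimal sketch of the aggregation rule (Table 6): each of the five sensor
# technologies contributes 0.2, decision and actuation contribute 1, whenever
# at least one value range of the logical situation impacts it.
SENSORS = ["radar", "ultrasonic", "lidar", "stereo_camera", "mono_camera"]

def situation_sensitivity(range_impacts: dict[str, set[str]]) -> float:
    """range_impacts maps each value range of the situation (e.g., 'A4_2')
    to the set of AV components it can lure."""
    impacted = set().union(*range_impacts.values()) if range_impacts else set()
    s = 0.2 * sum(1 for tech in SENSORS if tech in impacted)
    s += 1.0 if "decision" in impacted else 0.0
    s += 1.0 if "actuation" in impacted else 0.0
    return round(s, 1)

# Hypothetical example in the spirit of Y3: Rain.Heavy (A4_2) lures three
# sensor technologies and the actuation (which three sensors is assumed here).
print(situation_sensitivity({
    "A1_1": set(), "A2_1": set(),
    "A4_2": {"lidar", "stereo_camera", "mono_camera", "actuation"},
}))  # -> 1.6
```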
Furthermore, when generating scenarios, the randomness of their occurrences must be considered. We, therefore, propose the implementation of a sensitivity distribution function.

5.1.2. Sensitivity Distribution Function on Logical Situations

The sensitivity distribution function $q(Y_j)$ is obtained by normalizing the AV sensitivities to the logical situations. This distribution function makes it possible to draw, one after the other, the logical situations which will be used to generate the concrete scenarios for subsequent simulations. With such a distribution, the logical situations with high sensitivity values will also have high probability values and will occur more often during the sampling. Thus, the sensitivity distribution function is based on an importance function that allows us to prioritize the sampling of logical situations $Y_j$ for which the a priori sensitivity value of the AV is high.
The probability distribution $q(Y_j)$ is defined such that:
$q(Y_j) \in [0, 1]$ and $\sum_{Y_j} q(Y_j) = 1$
Let $S_j$ be the sensitivity value of the AV to the logical situation $Y_j$ and let $W = \sum_{Y_j} S_j$ be the normalization factor. Thus, the sensitivity or importance distribution function can be defined as follows:
$q(Y_j) = \dfrac{S_j}{W}$
We can check that $\sum_{Y_j} q(Y_j) = \sum_{Y_j} \dfrac{S_j}{W} = 1$.
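A minimal sketch of this normalization and of the importance-based drawing is given below; the sensitivity values are placeholders, and random.choices simply performs the weighted sampling.

```python
# Minimal sketch of q(Y_j) = S_j / W and of importance-based drawing of
# logical situations. Sensitivity values below are illustrative placeholders.
import random

def sensitivity_distribution(sensitivities: dict[str, float]) -> dict[str, float]:
    W = sum(sensitivities.values())            # normalization factor W = sum_j S_j
    return {yj: s / W for yj, s in sensitivities.items()}

S = {"Y1": 0.0, "Y2": 0.4, "Y3": 1.6, "Y4": 1.6}   # illustrative values
q = sensitivity_distribution(S)
assert abs(sum(q.values()) - 1.0) < 1e-9            # sum_j q(Y_j) = 1

# Draw logical situations with probability proportional to their sensitivity:
draws = random.choices(list(q.keys()), weights=list(q.values()), k=10)
print(q, draws)
```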
We have defined how to conduct a sensitivity analysis on the logical situations. However, in order to generate scenarios, it is also necessary to create events. In this respect, as for the logical situations, a sensitivity analysis of the events will be introduced in order to draw events according to their potential impact on the safety of the AV.

5.2. Analyze the Sensitivity to Events and Define the Sensitivity Distribution Function

An event is the modification of the state of one or more entities in the environment (the AV and others) which induces, can induce or must induce a new action from the AV. At the logical level, situations are obtained by combining ranges of variation. Thus, the event is formulated at the level of the change of value range for a parameter.
The entry point for identifying events is the initial logical situation. The event can be a change in the value of a parameter remaining in the same range of values with a certain level of sensitivity (the AV remains a priori in the same logical situation). It can also be a change in the value of a parameter from the initial range of values (of the initial logical situation) to another range (of another logical situation).

5.2.1. Sensitivity Analysis and Gradient

We have identified three possible levels of impact of the event:
  • Functional impact: once generated, this event leads to the transition from one functional situation to another.
  • Logical impact: once generated, this event leads to the transition from one logical situation to another, within the same functional situation, fixed at the beginning.
  • Concrete impact: once generated, this event leads to the transition from one concrete situation to another, within the same logical situation set at the start.
Figure 2 illustrates the three impact levels. Consider the logical situation LS1 belonging to the functional situation FS1. The functional impact is the modification of a range of values of LS1 that switches to a logical situation of another functional situation, for instance, FS3. With the logical impact, the modification of a range of values of LS1 causes a switch to another logical situation (for instance, LS3 also belonging to FS1). Finally, at the level of the concrete impact, no range of LS1 is modified (the values that are modified by the event remain in the same value ranges).
A range of LS1 values is considered relevant as an event if it has a non-zero sensitivity level. The drawing of a concrete event value within this range causes the AV to move from one concrete situation to another within LS1 (for instance, from CS1 to CS2). Thus, by assumption, we consider that the other events (which concern the other ranges with a null sensitivity value) are not relevant because the AV is not sensitive to these events.
These three levels of impact of the event suggest the existence of a variation in the associated sensitivity level. When the event concerns a modification of the value of a parameter within the same range of values (impact at the concrete level), its sensitivity is given by the a priori sensitivity value of the AV to the range, as defined by the scale in Table 5. Such an event is denoted Δ(Ax_y), with Ax_y the value range.
When the event is a modification with a change of value range (logical or functional impact), it is denoted Δ(Ax_y, Ax_z). Its sensitivity is then calculated using a concept we call the “sensitivity gradient”. The sensitivity gradient is defined as the absolute value of the difference in sensitivity between two logical situations. It is thus associated with the event that causes the transition from the initial logical situation to an intermediate situation.
The sensitivity gradient of an event impacting the logical level is obtained as the absolute value of the difference in sensitivity between the initial and the intermediate logical situation (within the same functional situation). Only the level 1 neighborhood of the logical situations is considered, i.e., starting from an initial logical situation (for instance, LS1), the intermediate logical situations to be considered are those which differ by only one range of values. This means that we only consider here the generation of a “single event”.
When generated, an event impacting the functional level takes the AV out of the FS under analysis. If this is the case, it means that the new range of values is either outside the validity conditions of the operational maneuver characterizing the FS or outside the ODD. To obtain the sensitivity gradient of this event, we are interested in the new range obtained as a result of the modification. We first calculate the sensitivity associated with this range, using the scale in Table 5, and then add +1 to this value. Indeed, given that such an event induces a change to another FS, an additional difficulty is added to the AV decision component. To account for this, we add +1 to the sensitivity obtained for the change in the range of values.
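The three sensitivity rules can be summarized in the following sketch; the function names and the numerical example are illustrative placeholders.

```python
# Minimal sketch of the event sensitivity rules described above.

def concrete_event_sensitivity(range_sensitivity: float) -> float:
    # Delta(Ax_y): value change within the same range -> a priori sensitivity of that range.
    return range_sensitivity

def logical_event_sensitivity(s_initial: float, s_intermediate: float) -> float:
    # Delta(Ax_y, Ax_z) within the same functional situation -> sensitivity gradient,
    # i.e., |S(intermediate logical situation) - S(initial logical situation)|.
    return abs(s_intermediate - s_initial)

def functional_event_sensitivity(new_range_sensitivity: float) -> float:
    # Range change that leaves the functional situation (or the ODD):
    # sensitivity of the new range, plus 1 for the added difficulty on the decision component.
    return new_range_sensitivity + 1.0

# Illustrative values: two neighbouring logical situations with equal sensitivity
# give a zero gradient; a new range luring nothing still yields a sensitivity of 1.
print(logical_event_sensitivity(2.6, 2.6))   # -> 0.0
print(functional_event_sensitivity(0.0))     # -> 1.0
```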

5.2.2. Sensitivity Distribution Function on Events

To obtain the sensitivity distribution function, the same principles as those used for the logical situations apply. The sensitivity distribution function is obtained by normalizing the sensitivities or sensitivity gradients of the events. As already noted, the considered logical situation makes it possible to identify the events that can be associated with it. Thus, for each logical situation $Y_j$, there is a list $E_j$ of events $e_{ji}$. The sensitivity value of each event $e_{ji}$ belonging to this list is normalized to obtain a sensitivity distribution function $q(e_{ji})$.
The probability distribution $q(e_{ji})$ is defined such that:
$q(e_{ji}) \in [0, 1]$ and $\sum_{e_{ji} \in E_j} q(e_{ji}) = 1$
Let $S(e_{ji})$ be the sensitivity value of an event $e_{ji}$ and let $W_{E_j} = \sum_{e_{ji} \in E_j} S(e_{ji})$ be the normalization factor for the list $E_j$ of events $e_{ji}$. Thus, the sensitivity distribution function can be defined as follows:
$q(e_{ji}) = \dfrac{S(e_{ji})}{W_{E_j}}, \quad \forall e_{ji} \in E_j$
We can check that $\sum_{e_{ji} \in E_j} q(e_{ji}) = \sum_{e_{ji} \in E_j} \dfrac{S(e_{ji})}{W_{E_j}} = 1$.
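The two distribution functions can then be chained, as in the sketch below: a logical situation is drawn first, then one of its associated events. The situation and event sensitivities used here are placeholders, not the values of Section 6.

```python
# Minimal sketch of the two-stage drawing: q(Y_j) for logical situations,
# then q(e_ji) for the events of the drawn situation. Values are placeholders.
import random

def normalize(values: dict[str, float]) -> dict[str, float]:
    w = sum(values.values())
    return {k: v / w for k, v in values.items()}

situation_sensitivity = {"Y7": 2.6, "Y8": 2.6, "Y3": 1.6}          # S_j
event_sensitivity = {                                               # S(e_ji) per situation
    "Y7": {"e7.1": 0.4, "e7.2": 1.0},
    "Y8": {"e8.1": 0.4, "e8.2": 0.6, "e8.3": 1.0},
    "Y3": {"e3.1": 1.6},
}

q_Y = normalize(situation_sensitivity)
Yj = random.choices(list(q_Y.keys()), weights=list(q_Y.values()), k=1)[0]

q_E = normalize(event_sensitivity[Yj])
eji = random.choices(list(q_E.keys()), weights=list(q_E.values()), k=1)[0]

print(f"selected situation {Yj}, event {eji}")   # inputs for one concrete scenario
```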
The proposals made in this section on the sensitivity of the AV to logical situations and events aim to provide a layer of risk analysis when declining the scenarios according to the three levels of abstraction: functional, logical and concrete. Such a consideration helps to guide the generation of scenarios by prioritizing the situations to which the AV has a high sensitivity value. In Section 6, we illustrate this principle of sensitivity analysis of the AV to logical situations and events.

6. Results: Application of the Sensitivity Analysis on the “Traffic Jam Chauffeur”

This example consists first in studying the sensitivity of the AV to the logical situations and then in using the result to establish a distribution function of the sensitivity over the logical situations. The types of events faced by the AV, the usage constraints imposed by its ODD and the ranges of variations characterizing the logical situations are defined. Thanks to this information, an identification of the events according to the three types of impact is made for each logical situation. Finally, a sensitivity study is carried out on the events of each logical situation in order to have a sensitivity distribution function per list of events associated with a logical situation.

6.1. Analyze the Sensitivity of the AV to Logical Situations and Define the Sensitivity Distribution Function

Let us consider the eight logical situations obtained in Table 4.

6.1.1. Sensitivity Analysis of AV to the Logical Situations

The sensitivity of the AV to these logical situations is presented in Appendix A. When a sensor technology (or the decision or actuation component) is sensitive to a value range Ax_y, this is indicated by the corresponding value using the scale in Table 5. Otherwise, there is no sensitivity. Then, Table 6 provides the sensitivity value of a logical situation by aggregating the range sensitivity values.
None of the AV components is sensitive to any of the value ranges that characterize the logical situation Y1 (Table A1).
For Y2 (Table A2), the AV design experts judge that the A2_2 range (Curvature.High) has an impact on two sensors (contribution of 0.4 to the sensitivity).
In the case of Y3 (Table A3), we can note that the A4_2 range (Rain.Heavy) can mislead three sensor technologies (contribution of 0.6 to the sensitivity) and the actuation (contribution of 1 to the sensitivity), which leads to a sensitivity of 1.6.
In the case of the logical situation Y4 (Table A4), the same sensitivity is obtained: the A4_2 range makes the same contribution as for Y3 and, since the A2_2 range may mislead the same sensors as A4_2, this leads to the same sensitivity of 1.6.
In the case of Y5 (Table A5), we have a sensitivity of 1.4. This is due to the range A1_2 (number of lanes = 3 and more), which can lure two sensor technologies (contribution of 0.4 to the sensitivity) and the decision (contribution of 1 to the sensitivity calculation). A sensitivity of 1.4 is also obtained for Y6 (Table A6) since, in addition to A1_2, the range A2_2 has an impact on the same two sensors.
Finally, Y7 (Table A7) and Y8 (Table A8) have the highest sensitivity value. They both include the A1_2 (number of lanes = 3 and more) and A4_2 (Rain.Heavy) ranges, while Y8 additionally includes A2_2. The respective effects of each range on the AV components are cumulative, resulting in a sensitivity of 2.6.
In addition, the randomness of the occurrence of the scenarios in relation to the presence of uncertainty in the operational environment of the AV has to be considered when generating the scenarios. Thus, the sensitivity of the AV to a logical situation will be used to set up a sensitivity distribution function. This will allow a random selection of logical situations while prioritizing the occurrence of those to which the AV is the most sensitive.

6.1.2. Sensitivity Distribution Function on Logical Situations

The distribution of the sensitivity of the AV to logical situations, as presented in Section 5.1, is obtained as follows:
$q(Y_j) = \dfrac{S_j}{W}$ and $\sum_{Y_j} q(Y_j) = \sum_{Y_j} \dfrac{S_j}{W} = 1$, with $W = \sum_{Y_j} S_j$ the normalization factor, $Y_j$ the logical situation $j$ and $S_j$ its sensitivity.
Considering the eight logical situations and their sensitivity, we obtain
$W = \sum_{j=1}^{8} S_j = 0 + 0.4 + 1.6 + 1.6 + 1.4 + 1.4 + 2.6 + 2.6 = 11.6$
Thus, for each of the above logical situations, the sensitivity distribution is:
  • Logical situation Y1: $q(Y_1) = 0/11.6 = 0$
  • Logical situation Y2: $q(Y_2) = 0.4/11.6 = 3.45 \times 10^{-2}$
  • Logical situations Y5 and Y6: $q(Y_5) = q(Y_6) = 1.4/11.6 = 12.07 \times 10^{-2}$
  • Logical situations Y3 and Y4: $q(Y_3) = q(Y_4) = 1.6/11.6 = 13.79 \times 10^{-2}$
  • Logical situations Y7 and Y8: $q(Y_7) = q(Y_8) = 2.6/11.6 = 22.41 \times 10^{-2}$
Obviously, the value obtained for $q(Y_7)$ and $q(Y_8)$ is greater because the sensitivity of the AV to Y7 or Y8 is stronger. Note also that the sum of the probabilities over the eight situations is equal to 1 by definition.
Therefore, we have five classes, which in terms of priority can be considered as follows:
  • Priority 1: Y8 = Y7
  • Priority 2: Y4 = Y3
  • Priority 3: Y6 = Y5 (rather close to priority 2)
  • Priority 4: Y2
  • Priority 5: Y1
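As a cross-check, the short sketch below reproduces the normalization and the priority ordering from the sensitivities obtained in Section 6.1.1.

```python
# Cross-check of the worked example above (sensitivities taken from Section 6.1.1).
S = {"Y1": 0.0, "Y2": 0.4, "Y3": 1.6, "Y4": 1.6,
     "Y5": 1.4, "Y6": 1.4, "Y7": 2.6, "Y8": 2.6}
W = sum(S.values())
print(round(W, 1))                                   # -> 11.6
q = {y: round(s / W, 4) for y, s in S.items()}
print(q)                                             # e.g., q(Y7) = q(Y8) = 0.2241
ranking = sorted(S, key=S.get, reverse=True)         # descending sensitivity;
print(ranking)                                       # ties (Y7/Y8, Y3/Y4, Y5/Y6) form one priority class
```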

6.2. Sensitivity Analysis of AV to Events and Distribution Function

The events associated with each logical situation are of three types: those impacting the functional level, those impacting the logical level and those impacting the concrete level.

6.2.1. Sensitivity Analysis of AV to Events

For each logical situation, a list of events is associated with it. In order to identify this list, we will rely on the types of events that the AV may face in TJC conditions, the limits of the associated ODD and the value ranges of the logical situations. To illustrate this, let us consider Table 7, which shows the sensitivity of the logical situation Y8 as defined in the Appendix A.
The events for this situation are as follows:
(1) Events impacting the concrete level concern the ranges to which at least one of the components of the AV is sensitive. Ranges A1_2, A2_2 and A4_2 are concerned. Their sensitivities are defined in Table 8. We denote by ej.i an event i of the logical situation j and by S(ej.i) its sensitivity. Δ(Ax_y) is an event concerning a change of value within the same range of values Ax_y.
(2) Events impacting the logical level consist of a change in the range of values of the logical situation considered and concern what we have called the level 1 neighborhood. By analyzing the eight logical situations, those that differ from Y8 by a single range of values are Y4, Y6 and Y7 (they represent the level 1 neighborhood of Y8). Since such an event triggers a transition from one logical situation to another, the gradient (i.e., the absolute value of the difference in sensitivity between the initial and final situation) is used to obtain the sensitivity. Table 9 gives the events and their sensitivity in the case of logical situation Y8. Δ(Ax_y, Ax_z) is a range change event.
(3) Events impacting the functional level trigger a transition of the AV out of the “Car following” situation. Considering the conditions of validity of the “Car following” situation and the constraints of the ODD mentioned in Section 4.2, the modifications of the value ranges (and thus the events) that make the AV leave the “Car following” situation include:
  • e8.7 = Δ(A1_1, A1_0) (switching from two lanes to one lane: end of Ego lane)
  • e8.8 = Δ(A1_2, A1_0) (switching from 3+ lanes to one lane: end of Ego lane)
  • e8.9 = Δ(A5_1, A5_2) (switching from Speed_OtherVehicle.1.Low to Speed_OtherVehicle.1.High)
The range A1_0 does not affect any technology (sensitivity contribution = 0) but its occurrence implies a change of lane for the Ego (addition of a factor of 1 to the sensitivity) and thus causes a change of functional situation; therefore, a sensitivity of 1 is obtained. The sensitivities of these events are shown in Table 10.
As for the range A5_2, its occurrence leads to a violation of one of the TJC conditions, which is that the speed must be less than a maximum value of 60 km/h (Table 1). This range does not have an impact on any technology but induces a new operational mode (i.e., “GiveBack”). Therefore, the difficulty of this change implies a sensitivity of 1.
The list E8 of events associated with the logical situation Y8 groups together the events identified in Table 8, Table 9 and Table 10 and contains a total of nine events ranging from e8.1 to e8.9.
As with the logical situations, a distribution function is introduced to generate the events according to their sensitivity.

6.2.2. Sensitivity Distribution Function on Events

To ensure that events with zero sensitivity may also be picked (e.g., this concerns e8.5), a minimum non-zero sensitivity value must be assigned to them. We have chosen this value to be the lowest of all the sensitivity values in the list. Thus, we will assign a sensitivity of 0.1 to the event concerned.
The sensitivity distribution function is defined as follows:
$q(e_{ji}) = \dfrac{S(e_{ji})}{W_{E_j}}$ and $\sum_{e_{ji} \in E_j} q(e_{ji}) = \sum_{e_{ji} \in E_j} \dfrac{S(e_{ji})}{W_{E_j}} = 1$, where $W_{E_j} = \sum_{e_{ji} \in E_j} S(e_{ji})$ is the normalization factor for the list $E_j$ of events $e_{ji}$ and $S(e_{ji})$ the sensitivity of the event $e_{ji}$.
Considering the list $E_8$ of the nine events associated with the logical situation Y8 and their sensitivity values, we find $W_{E_8} = \sum_{i=1}^{9} S(e_{8.i}) = 8.7$.
The sensitivity distribution of the events is given in Table 11. Furthermore, since the sum of all probabilities is 1, we verify that $\sum_{i=1}^{9} q(e_{8.i}) = 1$.

6.3. Comparing Our Results with Those of an Expert

In order to evaluate the relevance of the proposed sensitivity assessment method, we decided to compare our sensitivity results (Table 12) obtained in Section 6.1 with the opinion of an expert.
To do that, we asked an expert to classify the same logical situations (all described in Table 4) from the most critical to the least critical.
The classification is performed using pairwise comparisons, by analogy with the well-known multi-criteria decision-making approach called the Analytic Hierarchy Process (AHP). This approach aims to compare criteria and their importance. In our work, the aim is to compare situations according to their criticality. The question is: which criterion (resp. situation) is more important (resp. critical), and how much more, on a scale of 1–9? The scale is as follows: 1—Equal Importance, 3—Moderate importance, 5—Strong importance, 7—Very strong importance, 9—Extreme importance.
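For reference, the sketch below shows how an AHP priority vector and consistency ratio can be computed from a pairwise comparison matrix. The 3 × 3 judgment matrix is a hypothetical example, not the expert's actual matrix over the eight situations.

```python
# Minimal AHP sketch: priority vector via the principal eigenvector and
# consistency ratio CR = CI / RI (Saaty's random indices). Illustrative only.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp(pairwise: np.ndarray):
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                      # priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                  # consistency index
    cr = ci / RI[n]                               # consistency ratio (acceptable if < 0.1)
    return weights, cr

# Hypothetical judgments over three situations on the 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w, cr = ahp(A)
print(np.round(w, 3), round(cr, 3))               # e.g., weights ~ [0.637, 0.258, 0.105], CR ~ 0.033
```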
The first principle intuitively used by the expert to compare two situations was that “one situation is more critical than another if it has a higher number of value ranges impacting the automation function”. The results are shown in Figure 3, where the logical situations are denoted S1 to S8.
As we can observe, these results give a four-level classification (rank 1 for S8; rank 2 for S7, S6 and S4; rank 5 for S5, S3 and S2; rank 8 for S1), which is totally different from our classification results in Table 12. Although the expert’s consistency ratio is good (1.6%, i.e., 0.016 < 0.1), the problem with such a logic is that the differences between the impact degrees of the ranges are not taken into account: two different ranges affecting the same technology or component are considered as having the same risk level.
Thus, we asked the expert to make another classification, this time considering the fact that each range of values has its own degree of impact on the sensor technologies or on the decision and actuation components. The new principle used by the expert is therefore that “a situation is more critical than another if the number of sensor technologies, decision and actuation components that are sensitive to its value ranges is higher”. Figure 4 shows the resulting classification.
With this new logic, we obtain a six-level classification (rank 1 for S8; rank 2 for S7; rank 3 for S3 and S4; rank 5 for S6 and S5; rank 7 for S2; rank 8 for S1), and the consistency ratio is again good (0.032 < 0.1).
When comparing with our results (Table 12), which gave a five-level classification, we observe a strong similarity. The main difference is that, in our results, situations Y7 and Y8 (called S7 and S8 by the expert) belong to the same class, whereas the expert split them into two classes. However, the expert could have grouped them together, as their priority values are very close (26.9% and 27.6% in Figure 4). The other situations belong to the same classes and ranks in both classifications. The expert considers such a classification of sensitivity values for these logical situations as relevant and recognizes that the former intuitive logic is less rational than the one used subsequently.

7. Discussion

Originality. The two main contributions of this paper are a method for analyzing the sensitivity of AV to logical situations and associated events and, more specifically, the construction of a sensitivity distribution function over both logical situations and events. We also define and apply, on the TJC example, a method to characterize the context of usage of an automation function as well as the functional and logical abstraction levels. The reasoning is based on the three levels of abstraction (functional, logical and concrete scenarios) proposed by Menzel et al. [13]. Despite its relevance, this structuring does not by itself yield all the scenarios corresponding to each level of abstraction. We therefore propose an approach that identifies which elements to combine in order to specify each level. Moreover, the sensitivity analysis we introduce makes it possible to highlight the ranges of values that can have a highly critical impact on the behavior of the AV.
Practical application. With our approach, the characterization of the logical scenarios is performed so as to identify potentially critical scenarios before their submission to the simulator. In contrast to classical safety metrics, the sensitivity metric proposed here brings an additional tool to the scenario generation strategy. The application to the TJC case study shows that the proposed sensitivity metric is relevant. On the one hand, our metric offers a discriminant classification of logical situations according to the sensitivity of the AV; on the other hand, the comparison with an expert’s classification yields similar results. When the number of parameters increases, the expert is no longer able to compare situations two by two to rank the sensitivity of the logical situations; consequently, a metric such as the proposed sensitivity metric can be used.
Our focus in this paper is on validation and sensitivity analysis for each automation function, the chosen example here being the TJC. The features considered are the operational modes and maneuvers that define the functional situations (see Figure 1 and Table 2) and the limits of the sensor technologies, actuators and decision modules. Our approach does not require a detailed description of the characteristics of the components, nor of the operation and limits of the AV.
We are aware that the proposed method has some limits that require further development. Each step of the proposed method is based on modelling assumptions and may introduce errors.
The first hypothesis is related to the decomposition of the AV operation by automation function (Section 4.1). This approach is relevant for automation level 3 but may require simulation tests with scenarios integrating all the automation functions to validate levels 4–5.
The second hypothesis is related to an epistemic uncertainty concerning the model of situations and events at the functional level (Section 4.2). The relevant entities and parameters are identified by means of expert knowledge, accident scenario databases, etc. However, relevant entities and parameters describing the environment may be forgotten.
The third hypothesis concerns the division of the value ranges of each parameter, performed with expert knowledge (Section 4.3). These value ranges are influenced by the experts’ subjectivity and may vary according to the specific technology that equips the studied AV (e.g., the value range of a “low rain level” may vary depending on the vehicle manufacturer). Obtaining the logical situations may lead to a combinatorial problem, since the value ranges of the parameters are combined; nevertheless, we have chosen a systematic combination of ranges, as illustrated in the sketch below. The motivation behind this choice is that splitting the ranges according to their sensitivity, carried out with the support of the AV designers and safety experts, limits the number of ranges to those that are strictly necessary. In addition, although the splitting is done with the support of experts, the relevance of the division into value ranges for each parameter must be verified, especially as it guides the whole sensitivity analysis strategy and then the search for critical scenarios.
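As a simple illustration of this systematic combination (using the parameters and range identifiers of Table 3, with variable names chosen by us), the following sketch enumerates the logical situations:

```python
# Minimal sketch of the systematic combination of value ranges (Table 3)
# that yields the logical situations of Table 4.
from itertools import product

value_ranges = {
    "Number_lanes":         ["A1_1", "A1_2"],   # 2 lanes / 3 or more lanes
    "Curvature":            ["A2_1", "A2_2"],   # low / high
    "Speed_Ego":            ["A3_1"],           # low (0-60 km/h)
    "Rainfall_level":       ["A4_1", "A4_2"],   # light / heavy
    "Speed_OtherVehicle.1": ["A5_1"],           # low (0-60 km/h)
}

logical_situations = list(product(*value_ranges.values()))
print(len(logical_situations))   # 2 x 2 x 1 x 2 x 1 = 8 logical situations (Y1 to Y8)
```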
The fourth hypothesis is related to the choice of the scale used to determine the level of sensitivity of each AV technology to logical situations and events (Section 5.1 and Section 5.2). When calculating the sensitivity gradient for an event impacting the functional level, we chose to add a value of 1 to account for the decision difficulty that such an event induces (as it triggers a transition to another functional situation). The relevance of the value “1” may be subject to discussion and to a sensitivity analysis of the proposed metric.
The final limitation is that we only consider the neighborhood of level 1 during the identification of events. This choice is justified by the desire to limit the number of combinations by considering only the unitary events that lead from one logical situation to another.

8. Conclusions and Future Work

In this paper, we propose an approach for measuring the sensitivity of AV to the logical situations and events it may face, as well as a characterization of functional and logical scenarios. The aim is to guide the generation of the scenarios that are tested during the simulation-based validation of AV, by selecting those that are the most likely to induce a harmful event or accident. The sensitivity distribution functions make it possible to randomly select situations and events according to their importance in terms of sensitivity for the AV: during sampling, the more sensitive a logical situation or event is, the more often it will tend to be generated and then simulated.
Our work contributes to the demonstration of AV safety through scenario generation and the use of simulation. This contribution is a first step to effectively support the deployment of a safe AV.
However, additional work needs to be carried out. First, a key perspective is to verify that prioritizing scenarios by sensitivity reduces the number of scenarios to be tested while meeting the expected performance criteria. In addition, a heuristic for exploring and generating concrete scenarios should be set up and implemented in order to evaluate the use of the sensitivity analyses proposed here. Then, a sensitivity analysis of the proposed sensitivity metric itself should be carried out, either by introducing biases on the sensitivity values estimated by the expert or by modifying the amplitude of the value ranges defined for the parameters. A fuzzy approach for these ranges could be considered and compared to the proposed method, which is based on fixed ranges.
The notion of “neighborhood” introduced here can be extended to higher levels (neighborhood of level X, with X > 1). Furthermore, the coupling between parameters (e.g., constraints between their ranges, types of infrastructure and presence of intersections) should be analyzed and taken into account in order to further reduce the combinatorial explosion of the generation.

Author Contributions

T.F.K.: Conceptualization; data curation; formal analysis; investigation; methodology; validation; writing—original draft preparation. E.B.: Conceptualization; funding acquisition; methodology; project administration; supervision; visualization; writing—original draft preparation; writing—review & editing. E.L.: Conceptualization; funding acquisition; methodology; supervision; visualization; writing—review & editing. F.M.: Conceptualization; supervision; visualization; review & editing. S.G.: Conceptualization; funding acquisition; methodology; project administration; supervision; validation; review & editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by ANRT (the French National Association of Research and Technology, Convention CIFRE N° 2017/1246) and by STELLANTIS (ex GROUPE PSA).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

This work has been carried out with the financial support of ANRT (the French National Association of Research and Technology, Convention CIFRE N° 2017/1246) and of STELLANTIS (ex GROUPE PSA).

Conflicts of Interest

The authors declare that they have no conflict of interest or competing interests relevant to the content of this article. The authors fully respect the “Ethical Responsibilities of Authors” included in the Authors’ guidelines of the Journal.

Abbreviations

ADAS: Advanced Driver-Assistance System
ADS: Automated Driving System
AHP: Analytic Hierarchy Process
AV: Autonomous Vehicles
CS: Concrete Situation
FS: Functional Situation
LS: Logical Situation
MRM: Minimum Risk Maneuver
ODD: Operational Design Domain
SOTIF: Safety Of The Intended Functionality
TJC: Traffic Jam Chauffeur
Ax_y: specific value range y of parameter x
Yj: specific logical situation j
E_j: list of events in logical situation j
e_{j.i}: event N° i in logical situation j
S_j: sensitivity value of the AV to the logical situation Y_j
q(Y_j): sensitivity distribution function on logical situation Y_j
S(e_{j.i}): sensitivity value of an event e_{j.i}
q(e_{j.i}): sensitivity distribution function on event e_{j.i}
W_{E_j}: normalization factor for the list E_j
Δ(Ax_y): modification of the value of parameter x within the same range y of values
Δ(Ax_y, Ax_z): modification of the value of parameter x with a change of the value range

Appendix A

Table A1. Sensitivity of the logical situation Y1.
Components considered: Techno 1 (Radar), Techno 2 (Lidar), Techno 3 (Ultrasonic), Techno 4 (Stereo Camera), Techno 5 (Mono Camera), Decision, Actuation.
Y1 = {A1_1, A2_1, A3_1, A4_1, A5_1}; no range contributes any sensitivity.
Sensitivity of Y1 per component: 0 for all components. Sensitivity of Y1 = 0.
Table A2. Sensitivity of the logical situation Y2.
Components considered: Techno 1 (Radar), Techno 2 (Lidar), Techno 3 (Ultrasonic), Techno 4 (Stereo Camera), Techno 5 (Mono Camera), Decision, Actuation.
Y2 = {A1_1, A2_2, A3_1, A4_1, A5_1}; contributing range: A2_2 (contributions 0.2, 0.2).
Sensitivity of Y2 per component: 0.2, 0.2. Sensitivity of Y2 = 0.4.
Table A3. Sensitivity of the logical situation Y3.
Components considered: Techno 1 (Radar), Techno 2 (Lidar), Techno 3 (Ultrasonic), Techno 4 (Stereo Camera), Techno 5 (Mono Camera), Decision, Actuation.
Y3 = {A1_1, A2_1, A3_1, A4_2, A5_1}; contributing range: A4_2 (contributions 0.2, 0.2, 0.2, 1).
Sensitivity of Y3 per component: 0.2, 0.2, 0.2, 1. Sensitivity of Y3 = 1.6.
Table A4. Sensitivity of the logical situation Y4.
Components considered: Techno 1 (Radar), Techno 2 (Lidar), Techno 3 (Ultrasonic), Techno 4 (Stereo Camera), Techno 5 (Mono Camera), Decision, Actuation.
Y4 = {A1_1, A2_2, A3_1, A4_2, A5_1}; contributing ranges: A2_2 (0.2, 0.2) and A4_2 (0.2, 0.2, 0.2, 1).
Sensitivity of Y4 per component: 0.2, 0.2, 0.2, 1. Sensitivity of Y4 = 1.6.
Table A5. Sensitivity of the logical situation Y5.
Components considered: Techno 1 (Radar), Techno 2 (Lidar), Techno 3 (Ultrasonic), Techno 4 (Stereo Camera), Techno 5 (Mono Camera), Decision, Actuation.
Y5 = {A1_2, A2_1, A3_1, A4_1, A5_1}; contributing range: A1_2 (contributions 0.2, 0.2, 1).
Sensitivity of Y5 per component: 0.2, 0.2, 1. Sensitivity of Y5 = 1.4.
Table A6. Sensitivity of the logical situation Y6.
Components considered: Techno 1 (Radar), Techno 2 (Lidar), Techno 3 (Ultrasonic), Techno 4 (Stereo Camera), Techno 5 (Mono Camera), Decision, Actuation.
Y6 = {A1_2, A2_2, A3_1, A4_1, A5_1}; contributing ranges: A1_2 (0.2, 0.2, 1) and A2_2 (0.2, 0.2).
Sensitivity of Y6 per component: 0.2, 0.2, 1. Sensitivity of Y6 = 1.4.
Table A7. Sensitivity of the logical situation Y7.
Components considered: Techno 1 (Radar), Techno 2 (Lidar), Techno 3 (Ultrasonic), Techno 4 (Stereo Camera), Techno 5 (Mono Camera), Decision, Actuation.
Y7 = {A1_2, A2_1, A3_1, A4_2, A5_1}; contributing ranges: A1_2 (0.2, 0.2, 1) and A4_2 (0.2, 0.2, 0.2, 1).
Sensitivity of Y7 per component: 0.2, 0.2, 0.2, 1, 1. Sensitivity of Y7 = 2.6.
Table A8. Sensitivity of the logical situation Y8.
Components considered: Techno 1 (Radar), Techno 2 (Lidar), Techno 3 (Ultrasonic), Techno 4 (Stereo Camera), Techno 5 (Mono Camera), Decision, Actuation.
Y8 = {A1_2, A2_2, A3_1, A4_2, A5_1}; contributing ranges: A1_2 (0.2, 0.2, 1), A2_2 (0.2, 0.2) and A4_2 (0.2, 0.2, 0.2, 1).
Sensitivity of Y8 per component: 0.2, 0.2, 0.2, 1, 1. Sensitivity of Y8 = 2.6.

References

  1. ISO 26262; Road Vehicles—Functional Safety. ISO: Geneva, Switzerland, 2011.
  2. ISO 21448; Road Vehicles—Safety of the Intended Functionality. ISO: Geneva, Switzerland, 2022.
  3. Ponn, T.; Muller, F.; Diermeyer, F. Systematic analysis of the sensor coverage of automated vehicles using phenomenological sensor models. In IEEE Intelligent Vehicles Symposium, Proceedings; IEEE: Piscataway, NJ, USA, 2019; pp. 1000–1006. [Google Scholar] [CrossRef]
  4. Ignatious, H.A.; El-Sayed, H.; Khan, M.A. Sensor Technology for Autonomous Vehicles. Encycl. Sens. Biosens. 2023, 4, 35–51. [Google Scholar] [CrossRef]
  5. Li, C.; Sifakis, J.; Wang, Q.; Yan, R.; Zhang, J. Simulation-Based Validation for Autonomous Driving Systems; Association for Computing Machinery: New York, NY, USA, 2023; Volume 1. [Google Scholar] [CrossRef]
  6. Kalra, N.; Paddock, S.M. Driving to Safety; RAND Corporation: Santa Monica, CA, USA, 2014. [Google Scholar]
  7. Sun, D.; Elefteriadou, L. A Driver Behavior-Based Lane-Changing Model for Urban Arterial Streets. Transp. Sci. 2014, 48, 184–205. [Google Scholar] [CrossRef]
  8. Thorn, E.; Kimmel, S.; Chaka, M. A Framework for Automated Driving System Testable Cases and Scenarios. Report No. Dot Hs 812 623. 2018. p. 180. Available online: https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13882-automateddrivingsystems_092618_v1a_tag.pdf (accessed on 27 May 2023).
  9. Razzaq, S.; Dar, A.R.; Shah, M.A.; Khattak, H.A.; Ahmed, E.; El-Sherbeeny, A.M.; Lee, S.M.; Alkhaledi, K.; Rauf, H.T. Multi-Factor Rear-End Collision Avoidance in Connected Autonomous Vehicles. Appl. Sci. 2022, 12, 1049. [Google Scholar] [CrossRef]
  10. Riedmaier, S.; Schneider, D.; Watzenig, D.; Diermeyer, F.; Schick, B. Model validation and scenario selection for virtual-based homologation of automated vehicles. Appl. Sci. 2021, 11, 35. [Google Scholar] [CrossRef]
  11. Riedmaier, S.; Ponn, T.; Ludwig, D.; Schick, B.; Diermeyer, F. Survey on Scenario-Based Safety Assessment of Automated Vehicles. IEEE Access 2020, 8, 87456–87477. [Google Scholar] [CrossRef]
  12. Koné, T.F.; Bonjour, E.; Levrat, E.; Mayer, F.; Géronimi, S. Safety Demonstration of Autonomous Vehicles: A Review and Future Research Questions. In Complex Systems Design & Management; Springer International Publishing: Cham, Switzerland, 2020; pp. 176–188. [Google Scholar] [CrossRef]
  13. Menzel, T.; Bagschik, G.; Maurer, M. Scenarios for Development, Test and Validation of Automated Vehicles. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018. [Google Scholar] [CrossRef]
  14. Jesenski, S.; Stellet, J.E.; Branz, W.; Zöllner, J.M. Simulation-Based Methods for Validation of Automated Driving: A Model-Based Analysis and an Overview about Methods for Implementation. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019, Auckland, New Zealand, 27–30 October 2019; pp. 1914–1921. [Google Scholar] [CrossRef]
  15. Amersbach, C.; Winner, H. Defining Required and Feasible Test Coverage for Scenario-Based Validation of Highly Automated Vehicles. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 425–430. [Google Scholar] [CrossRef]
  16. Zhou, J.; Re, L. Reduced Complexity Safety Testing for ADAS & ADF. IFAC-Pap. 2017, 50, 5985–5990. [Google Scholar] [CrossRef]
  17. ISO/IEC/IEEE 29119-1; Software and Systems Engineering—Software Testing—Part 1: Concepts and Definitions. ISO: Geneva, Switzerland, 2013.
  18. Vangi, D.; Virga, A.; Gulino, M.-S. Adaptive intervention logic for automated driving systems based on injury risk minimization. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2020, 234, 2975–2987. [Google Scholar] [CrossRef]
  19. Alvarez, S.; Page, Y.; Sander, U.; Fahrenkrog, F.; Helmer, T.; Jung, O.; Hermitte, T.; Al, E. Prospective effectiveness assessment of adas and active safety systems via virtual simulation: A review of the current practices. In Proceedings of the 25th International Technical Conference on the Enhanced Safety of Vehicles (ESV), Detroit, MI, USA, 5–8 June 2017. [Google Scholar]
  20. Gulino, M.S.; Fiorentino, A.; Vangi, D. Prospective and retrospective performance assessment of Advanced Driver Assistance Systems in imminent collision scenarios: The CMI-Vr approach. Eur. Transp. Res. Rev. 2022, 14, 3. [Google Scholar] [CrossRef]
  21. Smit, S.; Tomasch, E.; Kolk, H.; Plank, M.A.; Gugler, J.; Glaser, H. Evaluation of a momentum based impact model in frontal car collisions for the prospective assessment of ADAS. Eur. Transp. Res. Rev. 2019, 11, 2. [Google Scholar] [CrossRef]
  22. Feng, S.; Feng, Y.; Sun, H.; Bao, S.; Misra, A.; Zhang, Y.; Liu, H.X. Testing Scenario Library Generation for Connected and Automated Vehicles, Part I: Methodology. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1573–1582. [Google Scholar] [CrossRef]
  23. Hallerbach, S.; Xia, Y.; Eberle, U.; Koester, F. Simulation-based Identification of Critical Scenarios for Cooperative and Automated Vehicles. In SAE International Journal of Connected and Automated Vehicles; SAE Technical Paper 2018-01-1066; SAE International: Warrendale, PA, USA, 2018. [Google Scholar] [CrossRef]
  24. Zhang, F.; Xu, X.; Cheng, L.; Tan, S.; Wang, W.; Wu, M. Mechanism reliability and sensitivity analysis method using truncated and correlated normal variables. Saf. Sci. 2020, 125, 104615. [Google Scholar] [CrossRef]
  25. García-Herrero, S.; Gutiérrez, J.M.; Herrera, S.; Azimian, A.; Mariscal, M.A. Sensitivity analysis of driver’s behavior and psychophysical conditions. Saf. Sci. 2020, 125, 104586. [Google Scholar] [CrossRef]
  26. Xiong, Z. Creating a Computing Environment in a Driving Vehicles; The University of Leeds Institute for Transport Studies & School of Computing: Leeds, UK, 2013. [Google Scholar]
  27. Geyer, S.; Baltzer, M.; Franz, B.; Hakuli, S.; Kauer, M.; Kienle, M.; Meier, S.; Weißgerber, T.; Bengler, K.; Bruder, R.; et al. Concept and development of a unified ontology for generating test and use-case catalogues for assisted and automated vehicle guidance. IET Intell. Transp. Syst. 2014, 8, 183–189. [Google Scholar] [CrossRef]
  28. Ulbrich, S.; Menzel, T.; Reschka, A.; Schuldt, F.; Maurer, M. Defining and Substantiating the Terms Scene, Situation, and Scenario for Automated Driving. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 15–18 September 2015. [Google Scholar] [CrossRef]
  29. Bagschik, G.; Menzel, T.; Maurer, M. Ontology based Scene Creation for the Development of Automated Vehicles. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018. [Google Scholar] [CrossRef]
  30. Koné, T.F.; Levrat, E.; Bonjour, E.; Mayer, F.; Géronimi, S. Safety assessment of scenarios for the simulation-based validation process of AV with regards to its functional insufficiencies. In Proceedings of the 30th European Safety and Reliability Conference, ESREL 2020 and 15th Probabilistic Safety Assessment and Management Conference, PSAM, Venice, Italy, 1–5 November 2020; pp. 100–107, ISBN 9789811485930. [Google Scholar]
  31. Jang, H.A.; Kwon, H.M.; Lee, M.K. A Study on Situation Analysis for ASIL Determination. J. Ind. Intell. Inf. 2015, 3, 152–157. [Google Scholar] [CrossRef]
  32. Mauborgne, P.; Deniaud, S.; Levrat, E.; Bonjour, E.; Micaëlli, J.P.; Loise, D. Operational and System Hazard Analysis in a Safe Systems Requirement Engineering Process-Application to automotive industry. Saf. Sci. 2016, 87, 256–268. [Google Scholar] [CrossRef]
  33. Rocklage, E. Teaching self-driving cars to dream: A deeply integrated, innovative approach for solving the autonomous vehicle validation problem. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Proceedings (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–7. [Google Scholar] [CrossRef]
Figure 1. The process used to obtain the functional situations.
Figure 2. The three impact levels of an event.
Figure 3. The expert’s classification of the logical situations using the first principle.
Figure 4. The expert’s classification of the logical situations using the second principle.
Table 1. TJC’s ODD description.
Physical infrastructure:
  • Type of infrastructure: divided lanes (highway, expressway)
  • Roadway type: separate carriageway
  • No intersection, no roundabout, no pedestrian crossing, no stop lights, no giving way, no priority entry, no U-turn, no working zone, no tunnel
Other road users:
  • No pedestrian, no obstacle, no wrong way driver
Weather conditions:
  • Type of precipitation: not specified
  • Type of brightness: not specified
  • Temperature: not specified
Ego vehicle:
  • Equipped with TJC
  • Speed range: ≤60 km/h
  • Presence of Driver = Yes
  • No lane changing
  • Vehicle dynamic: no strong deceleration, no strong acceleration, no strong action on the steering wheel
  • No trailer and no towing vehicle attached
Other domain constraints:
  • On type of lane: Ego must not be on a shoulder lane, merging lane or ending lane
  • No obstacle
  • Traffic Jam condition: 1 vehicle in front of Ego, speed range ≤ 60 km/h, speed difference (between road users) ≤ 10 km/h (meaning that traffic density = congested)
Table 2. Example of the obtained functional situations in the TJC case.
Type of traffic density associated with the use of the function: congested. Types of infrastructure associated with the function: Motorway, Expressway. Operational modes in each infrastructure: Activation, GiveBack, MRM. Operational maneuver associated with each mode: Car following.
List of functional situations:
  • Ego in the “Car following” situation in the “Activation” mode on a Motorway and in congested traffic.
  • Ego in the “Car following” situation in the “GiveBack” mode on a Motorway and in congested traffic.
  • Ego in the “Car following” situation in the “MRM” mode on a Motorway and in congested traffic.
  • Ego in the “Car following” situation in the “Activation” mode on an Expressway and in congested traffic.
  • Ego in the “Car following” situation in the “GiveBack” mode on an Expressway and in congested traffic.
  • Ego in the “Car following” situation in the “MRM” mode on an Expressway and in congested traffic.
Table 3. Example of parameters and value ranges for obtaining logical situations.
Parameter | Value range | Identification
Number_lanes | 2 | A1_1
Number_lanes | 3 or more | A1_2
Curvature | Low | A2_1
Curvature | High | A2_2
Speed_Ego | Low (0, 60 km/h) | A3_1
Rainfall level | Light | A4_1
Rainfall level | Heavy | A4_2
Speed_OtherVehicle.1 (km/h) | Low (0, 60 km/h) | A5_1
Table 4. Examples of logical situations obtained by combining the variation ranges.
Logical situation 1 (Y1): A1_1, A2_1, A3_1, A4_1, A5_1 = 2 lanes * Curvature.Low * Speed_Ego.Low * Rain.Light * Speed_OtherVehicle.1.Low
Logical situation 2 (Y2): A1_1, A2_2, A3_1, A4_1, A5_1 = 2 lanes * Curvature.High * Speed_Ego.Low * Rain.Light * Speed_OtherVehicle.1.Low
Logical situation 3 (Y3): A1_1, A2_1, A3_1, A4_2, A5_1 = 2 lanes * Curvature.Low * Speed_Ego.Low * Rain.Heavy * Speed_OtherVehicle.1.Low
Logical situation 4 (Y4): A1_1, A2_2, A3_1, A4_2, A5_1 = 2 lanes * Curvature.High * Speed_Ego.Low * Rain.Heavy * Speed_OtherVehicle.1.Low
Logical situation 5 (Y5): A1_2, A2_1, A3_1, A4_1, A5_1 = (3 or more) lanes * Curvature.Low * Speed_Ego.Low * Rain.Light * Speed_OtherVehicle.1.Low
Logical situation 6 (Y6): A1_2, A2_2, A3_1, A4_1, A5_1 = (3 or more) lanes * Curvature.High * Speed_Ego.Low * Rain.Light * Speed_OtherVehicle.1.Low
Logical situation 7 (Y7): A1_2, A2_1, A3_1, A4_2, A5_1 = (3 or more) lanes * Curvature.Low * Speed_Ego.Low * Rain.Heavy * Speed_OtherVehicle.1.Low
Logical situation 8 (Y8): A1_2, A2_2, A3_1, A4_2, A5_1 = (3 or more) lanes * Curvature.High * Speed_Ego.Low * Rain.Heavy * Speed_OtherVehicle.1.Low
Table 5. Proposal for a scale to measure a priori sensitivity of AV to a range of values.
Description | Corresponding sensitivity value
The range of values misleads the considered sensor technology | 0.2
The range of values misleads the five sensor technologies | 1
The range of values misleads the decision made by the AV (in the absence of any lure to the sensors) | 1
The range of values misleads the actuation | 1
Table 6. Contribution of the sensitivity of an AV component in obtaining the sensitivity to a logical situation.
Description | Corresponding sensitivity value
The sensor technology under consideration is sensitive to at least one of the value ranges making up the logical situation | 0.2
The decision made by the AV (in the absence of any lure to the sensors) is sensitive to at least one of the value ranges making up the logical situation | 1
The actuation is sensitive to at least one of the value ranges making up the logical situation | 1
Table 7. Sensitivity of the logical situation Y8.
Components considered: Techno 1 (Radar), Techno 2 (Lidar), Techno 3 (Ultrasonic), Techno 4 (Stereo Camera), Techno 5 (Mono Camera), Decision, Actuation.
Y8 = {A1_2, A2_2, A3_1, A4_2, A5_1}; contributing ranges: A1_2 (0.2, 0.2, 1), A2_2 (0.2, 0.2) and A4_2 (0.2, 0.2, 0.2, 1).
Sensitivity of Y8 per component: 0.2, 0.2, 0.2, 1, 1. Sensitivity of Y8 = 2.6.
Table 8. Sensitivity of events impacting the concrete level for Y8.
e8.1 = Δ(A1_2): S(e8.1) = 1.4 | e8.2 = Δ(A2_2): S(e8.2) = 0.4 | e8.3 = Δ(A4_2): S(e8.3) = 1.6
Table 9. Sensitivities of events impacting the logical level for Y8.
Transition Y8 → Y4: e8.4 = Δ(A1_2, A1_1), S(e8.4) = |2.6 − 1.6| = 1
Transition Y8 → Y6: e8.5 = Δ(A4_2, A4_1), S(e8.5) = |2.6 − 1.4| = 1.2
Transition Y8 → Y7: e8.6 = Δ(A2_2, A2_1), S(e8.6) = |2.6 − 2.6| = 0
Table 10. Sensitivities of events impacting the functional level for Y8.
From the “Car following” functional situation to “Lane changing left”: e8.7 = Δ(A1_1, A1_0), S(e8.7) = 1; e8.8 = Δ(A1_2, A1_0), S(e8.8) = 1
From the “Car following” functional situation to “GiveBack”: e8.9 = Δ(A5_1, A5_2), S(e8.9) = 1
Table 11. Distribution of sensitivities of the events in the E8 list.
e8.1 = Δ(A1_2): S(e8.1) = 1.4, q(e8.1) = 0.161
e8.2 = Δ(A2_2): S(e8.2) = 0.4, q(e8.2) = 0.046
e8.3 = Δ(A4_2): S(e8.3) = 1.6, q(e8.3) = 0.184
e8.4 = Δ(A1_2, A1_1): S(e8.4) = 1, q(e8.4) = 0.115
e8.5 = Δ(A4_2, A4_1): S(e8.5) = 1.2, q(e8.5) = 0.138
e8.6 = Δ(A2_2, A2_1): S(e8.6) = 0.1, q(e8.6) = 0.011
e8.7 = Δ(A1_1, A1_0): S(e8.7) = 1, q(e8.7) = 0.115
e8.8 = Δ(A1_2, A1_0): S(e8.8) = 1, q(e8.8) = 0.115
e8.9 = Δ(A5_1, A5_2): S(e8.9) = 1, q(e8.9) = 0.115
Table 12. Summary of the sensitivity results obtained with the eight logical situations described in Section 6.1.
Sensitivity distribution | Priority/Rank | Logical situations
0 | 5 | Y1
3.45 × 10^−2 | 4 | Y2
12.07 × 10^−2 | 3 | Y6, Y5
13.79 × 10^−2 | 2 | Y4, Y3
22.41 × 10^−2 | 1 | Y8, Y7