Article

How to Design Human-Vehicle Cooperation for Automated Driving: A Review of Use Cases, Concepts, and Interfaces

1 Technische Hochschule Ingolstadt, 85049 Ingolstadt, Germany
2 Johannes Kepler Universität, 4040 Linz, Austria
3 University of Duisburg-Essen, 45141 Essen, Germany
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2024, 8(3), 16; https://doi.org/10.3390/mti8030016
Submission received: 20 December 2023 / Revised: 9 February 2024 / Accepted: 19 February 2024 / Published: 26 February 2024
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving- 2nd Edition)

Abstract

Currently, a significant gap exists between academic and industrial research in automated driving development. Despite this, there is broad agreement that cooperative control approaches in automated vehicles will surpass the previously favored takeover paradigm in most driving situations due to enhanced driving performance and user experience. Yet, the application of these concepts in real driving situations remains unclear, and a holistic approach to driving cooperation is missing. Existing research has primarily focused on testing specific interaction scenarios and implementations. To address this gap and offer a contemporary perspective on designing human–vehicle cooperation in automated driving, we developed a three-part taxonomy with the help of an extensive literature review. The taxonomy broadens the notion of driving cooperation towards a holistic and application-oriented view by encompassing (1) the “Cooperation Use Case”, (2) the “Cooperation Frame”, and (3) the “Human–Machine Interface”. We validate the taxonomy by categorizing related literature and providing a detailed analysis of an exemplar paper. The proposed taxonomy offers designers and researchers a concise overview of the current state of driving cooperation and insights for future work. Further, the taxonomy can guide automotive HMI designers in the ideation, communication, comparison, and reflection of cooperative driving interfaces.

1. Introduction

Vehicle automation is an essential aspect of a safe and pleasant experience within modern and future cars. Yet automation is framed differently among users, researchers, and vehicle designers. A commonly used taxonomy to describe vehicle automation is the SAE (Society of Automotive Engineers) J3016 Standard [1], which defines control allocation in automated driving as binary, with either the human driver or the automation being in control. However, with increasing research on the consequences of higher automation levels, it has become clear that this kind of binary view on vehicle automation can lead to a multitude of human factors issues. One promising solution that has emerged in response to these issues is cooperative intelligence in driving automation. This concept no longer defines the driving task as a binary control allocation but allows both the human and the automation to be in control at the same time, effectively creating a human–automation team [2].
Providing control, e.g., in the form of intervention user interfaces, helps satisfy users’ need for autonomy and thus fosters the motivation to use driving automation [3,4]. Still, in its current form, vehicle automation is associated with a loss of autonomy [5]. Therefore, new Human Machine Interface (HMI) designs have to be developed that fit user expectations and provide a safe and comfortable way to exert vehicle control across a variety of driving situations, e.g., control for a holiday trip or a city ride, and across different roles, e.g., in a teleoperation scenario where remote operators join the team [6,7].
In related research, a multitude of different interaction concepts and HMI designs have been developed for cooperation in automated vehicles. By testing these designs in empirical studies, researchers have demonstrated that cooperation with driving automation might be a promising approach towards fostering the acceptance of driving automation while increasing driving efficiency [8]. However, by analyzing existing research, we found that most investigations of these cooperation approaches narrowly focus on the interaction protocols, and that existing frameworks and taxonomies for cooperative, automated driving are similarly focused on defining the task distribution and interaction space between humans and automation (e.g., [9,10,11,12]). We argue that existing research on cooperative driving systems often overlooks factors that are crucial for defining tasks and cooperation methods at (1) the individual psychological level, (2) the relationship level between cooperative agents, and (3) the environmental level. In particular, the use case, which defines the motivations of the agents and sets the stage for each agent’s individual goals, is essential for defining the resulting tasks and deriving the cooperation modalities from them. To make these essential factors more prominent in the design and development of cooperative driving mechanisms, we extend prior work through a Human–Computer Interaction (HCI) lens and present a refined taxonomy for cooperative driving.
To create this taxonomy for cooperative driving, we first employed a literature survey in which we summarized related research on cooperation in automated vehicles. Based on the analysis and categorization of the related work, we derived the dimensions of the taxonomy.
As a result, we propose a three-step taxonomy (cf. Figure 1) intended to aid in structuring cooperative control concepts and assisting in their design. First, to identify the involved agents, the task parameters, and the relation between them, we start by characterizing the “Cooperation Use Case”. Second, building on the use case, we provide a sub-taxonomy for the “Cooperation Frame”, which describes the cooperation strategy, interaction procedure, and driving task distribution. Third, based on the cooperation frame, we describe the properties of the implementation in the resulting “Human–Machine Interface”. Our three-step taxonomy allows for a systematic discussion, ideation, and reflection of existing concepts and HMIs in the context of in-vehicle cooperation. In the following, we introduce related work and taxonomies before going through the three steps of the framework and describing its components in detail.

2. Background and Related Work

To obtain an impression of the current landscape of cooperative driving, in this section, we present selected cooperation strategies corresponding to HMIs for sharing the driving task. We then examine related frameworks and taxonomies that help inform the cooperation design. Finally, we summarize cooperation requirements that help shift designing cooperative driving towards a more human-centered perspective.

2.1. Strategies and HMIs for Driver–Vehicle Cooperation

Cooperative driving presents a distinctive and less binary approach compared to Take-Over Request (TOR), emphasizing a collaborative partnership between the automated system and the human operator. Unlike TOR scenarios with distinct takeover requests, cooperative driving integrates the operator as an integral part of the automated system, akin to any other sensor, involving them in executive decision-making [8]. Following [13], two agents engage in a cooperative interaction if: (1) Each one strives towards their own goals and can interfere with the other one’s goals, resources, procedures, etc. (2) Each one tries to manage the interference to facilitate the individual activities and/or the common task when it exists [13]. The human operator functions as an autonomous agent throughout the interaction, actively participating rather than remaining a passive observer until intervention is required. This approach offers several advantages over dichotomous systems, consistently achieving higher scores in enjoyment, ease of use, and trust than binary takeover scenarios [14,15]. Additionally, cooperative driving has the potential to reduce mental load and enhance the operator’s sense of autonomy by involving them in and sharing responsibility for the actions of the automated system [15,16].
A prevalent framework for understanding cooperative driving is the Horse-Metaphor (H-Metaphor) introduced by Flemisch [17]. In this mental model, the automated vehicle takes on the role of a horse while the operator serves as the rider. Flemisch suggests that a cooperative automated vehicle should exhibit horse-like characteristics, employ intuitive feedback and communication, execute commands and react to the environment in a manner analogous to a horse, and provide support to the user. According to this metaphor, users can relax and enjoy other aspects of travel when automation controls movement and guidance while remaining connected and aware of the system, able to ‘tighten the reins’ when necessary. The transitions of control envisioned in this metaphor are particularly relevant, representing a continuum between human-controlled (tight rein) and “horse”-controlled (loose rein) movement. Building upon the H-Metaphor, Flemisch et al. [9] developed H-Mode, a vehicle guidance concept designed to emulate desirable horse-like characteristics. This concept incorporates a haptic interface allowing the operator to tighten or loosen the “reins”, with both the operator and the system capable of initiating control and seamlessly transitioning between them.
A different approach is the Conduct-by-Wire concept, where the operator inputs commands from a pre-selected list, subsequently planned and executed by the automated system [12]. Similar to H-Mode, operators in Conduct-by-Wire must remain alert, ensuring they are never completely out of the loop. Both H-Mode and Conduct by Wire employ visual and haptic interfaces to keep the operator informed.
Interfaces such as a car’s steering wheel combined with haptic feedback enable users to initiate maneuvers that are then executed by the automated system [14]. In driving simulator studies, researchers such as Woide et al. [18] have implemented touch screens resembling Conduct-by-Wire that allow users to select maneuvers from a catalog.
While these concepts support cooperation on an operational level, as defined by Michon [19], there are also approaches to driver–vehicle cooperation on other levels. Wang et al. [20,21] implemented and tested cooperation concepts that allow the human driver in the AV to support the automation on a perception and prediction level. Instead of supporting the automation by steering or selecting specific maneuvers, the human can help the automation to correctly understand and judge traffic situations that it cannot resolve on its own [22].

2.2. Existing Frameworks and Taxonomies for Driver–Vehicle Cooperation

These different concepts for human–automation cooperation or collaboration can essentially be differentiated by the level of the hierarchical view of the driving task [19] on which the cooperation is executed. Accordingly, existing frameworks and taxonomies for cooperative driving also use this differentiation to define cooperation. Guo et al. base their hierarchical view of the cooperative driving task on the structure originating from Michon [8,19]. In this view, the driving task is separated into an operational, a tactical, and a strategic level based on the time frame in which the action contributing to the dynamic driving task (DDT) is executed. Similarly, Walch et al. [11] base their proposed framework for handling shared control in automated vehicles on the same essential structure but supplement it with an additional layer that defines to which degree control is shared between the human driver and the automation.
Zimmermann et al. [23] defined a taxonomy of five levels of cooperation that can be applied to different scenarios to analyze the cooperation.
Tinga et al. proposed a framework defining the key elements required for assessing the quality of cooperation, or shared control in general, based on previous work by Petermeijer et al. [24]. Petermeijer et al. [24] identified seven relevant dimensions: (1) compatible goals, (2) shared situational awareness, (3) consistent and compatible mental models, (4) distribution of responsibility, capability, and authority, (5) adaptability, (6) conflicts, and (7) communication.
A similar distinction between interaction levels was made by Wang et al. [20], who proposed a framework that distinguishes between four levels of interaction with the automation and introduced a concept that allows drivers to guide an automated driving system through gaze–speech interaction. In another approach, Wang et al. [25] presented a framework that facilitates a systematic definition of scenarios in which drivers want to intervene in the behavior of the AV system in spontaneous situations, for example, when picking up a friend. Mirnig et al. [26] defined a categorization framework for control transitions between the human and the automation and provide an overview of interaction solutions for transitions between manual and autonomous driving modes.

2.3. Human-Centered Cooperation Perspective

Related literature has already shown that human drivers in automated vehicles still want to be involved in the driving task and the decisions of the automation, even though, given the capabilities of the automation, they are no longer required to intervene in the driving task [27,28,29]. This highlights that an inherent need for cooperation exists, detached from the question of whether cooperation improves the overall efficiency of the driving task.
Since the principle of cooperation, by definition, requires two or more agents to be in control at the same time, it is essential for successful cooperation that all agents in control are prepared and ready for the task they will be taking over. Walch et al. [30] analyzed these requirements and named four key ingredients that have to be fulfilled from a human-factors standpoint for successful cooperation to be possible. Based on a literature review, they identified (1) “Mutual Predictability”, (2) “Directability”, (3) “Shared Situation Representation”, and (4) “Trust and Calibrated Reliance on the System” as these key factors.
In addition, to judge whether cooperation was successful not only in terms of efficiency and effectiveness but also from a human-factors perspective, measures have been and are still being developed that capture the cooperation quality from the users’ standpoint. Woide et al. [31] developed the “Human-Automation-Interaction-Interdependence-Questionnaire” via an exploratory factor analysis, resulting in a seven-dimensional model meant to help understand the interdependence in driver–vehicle cooperation.
The described requirements and measures already highlight ways to design driving cooperation that is more human-centered. To better structure these factors, in the next section, we demonstrate our approach toward the construction of a human-centered driving cooperation taxonomy with the help of a literature survey.

3. Research Questions and Aim of the Paper

In the related literature, a variety of different approaches to cooperative driving have been developed and tested. However, by analyzing related work, we found that there is currently no overarching taxonomy or framework that considers influencing factors on cooperation outside of the concrete interaction itself. In our opinion, the question “What are the motivations for cooperation?” is essential when designing cooperation between humans and automation while ensuring that users actually feel inclined to use these systems. Walch et al. [30] defined prerequisites for driver–vehicle cooperation, and Tinga et al. [24] proposed dimensions to measure the quality of cooperation. However, these only apply if the humans inside the automated vehicle feel an intrinsic need to cooperate with the automation and perceive this cooperation as beneficial in some way.
To close this gap in the related literature, we aim to create a taxonomy that allows for a more comprehensive view of the subject of in-car cooperation.
To find and define its dimensions, we use a structured literature survey following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method [32]. With our survey, we focused on answering the following research questions:
RQ 1 
Which elements define the cooperation use case in a cooperative interaction?
RQ 2 
Which elements define the cooperation frame in a cooperative interaction?
RQ 3 
Which elements describe the characteristics of the HMI in a cooperative interaction?
Using the structured literature survey, we aim to answer these research questions and contribute to a better holistic understanding of cooperative, automated driving.

4. Materials and Methods

4.1. Literature Survey

With our survey, we provide insights into emerging research themes in the HCI community. This will give researchers and practitioners a concise picture of the current state of cooperative driving research. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method [32] to report our review procedure. Figure 2 shows our literature selection process step by step. We go through these steps in the following. For generating the flowchart, we used the R package PRISMA2020 [33].

4.1.1. Identification

To answer these questions, we started with initial searches using the terms human-vehicle collaboration, human-vehicle cooperation, cooperative driving, and shared driving. Starting with these terms, we narrowed down the search iteratively with variations of the search terms above and by adding further terms. We chose the search terms broadly enough to include papers relating to both HCI and AVs but made sure that the search minimized the inclusion of papers about AVs that are unrelated to HCI, e.g., cooperation in the context of AV microelectronics or politics. We searched within the multidisciplinary digital libraries of IEEE Xplore, ACM Library, Scopus, and Google Scholar, which contain papers from various HCI conferences and journals related to cooperative driving, such as Intelligent Vehicles, Automotive User Interfaces, and Transportation Research. Finally, we used the following search terms, with variations due to technical differences in the implementation of the filtering in the digital libraries (a sketch of the shared query pattern follows the queries):
ACM Digital Library: 
((“cooperat*” OR “collaborat*” OR “shared control”) AND (“human” OR “user” OR “driver”) AND (“vehicle” OR “driving”))
Scopus: 
TITLE-ABS (“cooperat*” OR “collaborat*” OR “shared control”) AND TITLE-ABS (“human” OR “user” OR “driver”) AND TITLE-ABS (“vehicle” OR “driving”) AND KEY (“human–computer interaction” OR “human–machine interaction” OR “human automation interaction”)
IEEE Xplore: 
((“cooperat*” OR “collaborat*” OR “shared control”) AND (“human” OR “user” OR “driver”) AND (“vehicle” OR “driving”) AND (“human–computer interaction” OR “human–machine interaction” OR “human automation interaction”))
Google Scholar: 
(human | driver | user | driving | agent) intitle:(car | vehicle | automation) intitle:(cooperation | collaboration | cooperative | collaborative) AND (“human–computer interaction” | “human–machine interaction” | “human automation interaction”)
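The four query variants share the same concept blocks (cooperation, actor, and driving domain, plus an interaction-related field filter where supported). As a small illustration, and not the exact strings submitted to each library, the shared boolean pattern could be composed as follows; the helper function and variable names are our own:

```python
# Shared concept blocks of the search strategy; exact syntax differs per library.
cooperation = ["cooperat*", "collaborat*", "shared control"]
actor = ["human", "user", "driver"]
domain = ["vehicle", "driving"]
field_filter = ["human-computer interaction", "human-machine interaction",
                "human automation interaction"]

def or_block(terms):
    """Join terms into a quoted, OR-connected block, e.g., ("a" OR "b")."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Generic boolean form, roughly matching the IEEE Xplore style query above.
query = " AND ".join(or_block(block) for block in (cooperation, actor, domain, field_filter))
print(query)
```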
The search terms resulted in n = 776 search results over all platforms (Figure 2).

4.1.2. Screening

Two researchers with a background in HCI and expertise in automated driving further reduced the list of papers and excluded unrelated or unfitting papers by screening titles and abstracts. Thereby, we applied the following criteria for inclusion in subsequent order:
  • The research must be about human–automation cooperation/collaboration
  • The research must be about human–vehicle interaction
  • The research must be about cooperative driving use cases, concepts, or interfaces
  • The research must be peer-reviewed
This initial screening led to the exclusion of n = 505 papers not fulfilling these criteria. This large number of exclusions was partially due to the term “Cooperative Driving” being ambiguous between human–automation cooperation and cooperative driving in the Car2X field, which is not relevant to this survey. In this step, only reports deemed irrelevant to the research topic were excluded from the list of references.
Overall, this initial screening procedure led to the inclusion of a total of n = 271 reports for full-text analysis. Again, two HCI researchers analyzed the full-text reports of these n = 271 research items to identify the ones applicable to the cooperative driving taxonomy. The same criteria were applied as described above. This final step in the screening process resulted in n = 50 papers, which were included as literature on which the development of the taxonomy is based. The included reports are distributed over the databases as follows:
  • ACM Digital Library
  • Scopus
  • IEEE Xplore
  • Google Scholar

4.1.3. Clustering and Categorization

Following the screening process, the included papers were labeled using common themes that were applicable to a larger portion of the literature corpus. The labels were structured by:
  • What type of paper is it?
  • What was the main research question of the paper?
  • Which method was used in the paper?
  • What was the main metric taken in the experiment (if applicable)?
  • What is the main contribution of the paper?
These labels were used to better understand relevant research topics in cooperative driving and to gain an overview of the different cooperation interaction concepts developed in related literature. This clustering of the research articles was then used to extract the dimensions of the taxonomy.

4.2. Taxonomy Development and Validation

4.2.1. Development

To derive the taxonomy from the resulting list of literature, we used the basic categorization described in Section 4.1. By clustering the reports based on the labels they were given, overarching themes in the research could be found. These themes were turned into the dimensions of the taxonomy. The decisions about which dimensions to include and which to omit were additionally based on the judgment of the authors, with agreement reached among all contributors. The resulting dimensions are described in Section 5.

4.2.2. Initial Validation

To ensure that the dimensions defined in the previous steps are founded in the relevant literature, the papers identified in the literature review were compared with these dimensions. This matching is presented in Section 6.

4.2.3. Exemplary Application

After presenting the developed taxonomy, we also present an exemplary application for it in Section 6. To test the applicability of the presented work, we use one research paper and categorize it by the taxonomy dimensions. As part of this exemplary application of the taxonomy, we discuss in detail how each of the dimensions fits the research presented in the paper.

5. Results

We analyzed the resulting literature list, which was clustered as described in Section 4.1. From this basic categorization of the papers, we extracted the basic dimensions for the taxonomy. On the highest level, we found that, in general, the research topics can be divided into three categories: “Use Case”, “Cooperation Frame”, and “HMI”. As a result, on the highest level of the taxonomy, we also use this basic three-part structure.
When designing cooperation between humans and automated vehicles, we consequently distinguish three subsequent steps:
  • Thinking about the concrete cooperation use case(s), from that
  • designing the cooperation task distribution concept for the use case(s), and then
  • designing a feasible HMI that allows humans involved in the cooperation to fulfill their task share in a usable manner (a minimal data-structure sketch of this three-part structure follows the list).
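To make this three-part structure concrete, the following minimal Python sketch models the top-level parts of the taxonomy as plain data classes. All class names, field names, and enumeration values are our own illustrative choices for selected dimensions and are not part of the published taxonomy.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class AgentType(Enum):          # co-agents defined in the "Cooperation Use Case"
    PASSENGER = "passenger"
    TELEOPERATOR = "teleoperator"
    AUTOMATION = "automation"

class Motivation(Enum):         # why cooperation is requested
    INTERNAL = "agent-initiated"
    EXTERNAL = "situational requirement"

@dataclass
class CooperationUseCase:
    agents: List[AgentType]     # agent constellation
    motivation: Motivation
    criticality: str            # e.g., "low" / "high"
    urgency: str                # e.g., "planned" / "spontaneous"

@dataclass
class CooperationFrame:
    deterministic: bool         # single "master" agent vs. negotiation
    continuous: bool            # continuous cooperation vs. single occasion
    task_level: str             # operational / tactical / strategical
    task_coverage: List[str]    # covered steps of the task cycle

@dataclass
class HumanMachineInterface:
    input_modalities: List[str]   # e.g., haptic, speech, touch
    output_modalities: List[str]  # e.g., visual, auditory
    metaphor: str                 # e.g., "H-Mode (horse and rider)"

@dataclass
class CooperativeConcept:       # one classified cooperative driving concept
    use_case: CooperationUseCase
    frame: CooperationFrame
    hmi: HumanMachineInterface
```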
In the following, we report the results of the literature survey and the resulting structure and dimensions of the taxonomy following this three-part structure. For each of the dimensions we present in the taxonomy, we also present the related papers to which this dimension is applicable as reasoning for the inclusion of the dimension.

5.1. Cooperation Use Case

The first element of the taxonomy is the “Cooperation Use Case” (Figure 3). This part describes the reason for cooperative control as well as the exact scenario the human–automation team is facing. It thereby indicates, for the subsequent steps, which task distributions make sense for the specific use case, based on the agents involved and their motivations as well as the criticality and urgency of the situation.
When defining a use case, simple user stories often ask: “Who does what, and why?” (e.g., [79]). There is an actor with a motivation and a task. That is our starting point. In cooperative driving scenarios, criticality and urgency typically characterize the (driving) task. Given that we have more than one actor, we need to extend the actor notion with one or multiple co-actors or co-agents, which leads to a case-specific agent constellation.
Overall, these use case properties (the agents, the task, and the agent–task relation) influence the pros and cons of choosing a feasible cooperation strategy.

5.1.1. Agents

Co-Agents

The first type of agent is the “Passenger”, representing humans inside the automated vehicle. This can be either a driver, who is primarily in charge, or a passenger, who is not positioned in the traditional driver’s seat. Another human co-agent can be a “Teleoperator”, i.e., an agent who has control access to the vehicle from outside the vehicle and is able to control it remotely. The last possible co-agent is the “Automation” or AI.
Human–automation cooperation, in the context of automated mobility, offers multiple possible constellations for the three types of co-agents. This means that the number of agents of different types might vary. Similarly, the definition and allocation of authority for each of the agents might vary between different cooperation concepts. Based on Yanco and Drury’s taxonomy for robot–operator interaction [80], the constellation describes the relationship between agents and their cooperation intent. Generally, a cooperative scenario can include single or multiple agents and single or multiple vehicles, such as a platoon of vehicles (vehicle team). Multiple agents can issue the same (team) or conflicting control intents (individuals). The constellation of an agent team is similar to a single agent constellation. However, it requires some form of consent (due to rules or negotiation) before a command is issued. When two agents issue conflicting task commands, there has to be a conflict-resolving mechanism.
Thus, in the taxonomy, it is possible to configure one passenger cooperating with one automation but also multiple passengers cooperating with the automation. Similarly, one teleoperator may cooperate with one or more automated vehicles or one or more passengers inside the AV and vice versa.
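As a small illustration of such consent and conflict-resolving mechanisms, the sketch below collects the control intents of several co-agents and either issues the agreed command or falls back to a resolution rule. The priority-based fallback is a hypothetical choice for illustration and is not a mechanism prescribed by the taxonomy.

```python
from typing import Dict, List, Optional

def resolve_intents(intents: Dict[str, str],
                    priority: Optional[List[str]] = None) -> str:
    """Return the command to issue for a team of co-agents.

    intents maps an agent name to its intended command (e.g., "stop", "drive on").
    If all agents agree, the shared command is issued directly (consent).
    Otherwise, a conflict-resolving rule is needed; here, a hypothetical
    priority order over the agents decides.
    """
    commands = set(intents.values())
    if len(commands) == 1:
        return commands.pop()
    if priority is None:
        raise ValueError("conflicting intents require a resolution rule")
    for agent in priority:
        if agent in intents:
            return intents[agent]
    raise ValueError("no prioritized agent issued an intent")

# Example: driver and teleoperator disagree; a (hypothetical) priority order
# gives the teleoperator the final say in this constellation.
print(resolve_intents({"driver": "drive on", "teleoperator": "stop"},
                      priority=["teleoperator", "driver"]))
```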

Motivation

Since this taxonomy is developed from a human factors perspective, motivation is defined from the standpoint of the human in the cooperative system. The initial request for cooperation is either agent-initiated (internal) or a situational requirement (external). From a psychological perspective, this corresponds to the internal and external locus of control [81], which predicts whether a person attributes an outcome to their own actions or not, also known as the sense of agency in an HCI context [82,83]. The internally motivated exercise of control is associated with self-efficacy [84], the confidence in one’s ability to control one’s activities.
An internally motivated request for cooperation might originate from the passengers’ need to control the vehicle, e.g., to override the automation. This might be due to a disagreement with the driving behavior of the automation or simply a desire for control over the driving task. From the automation’s standpoint, an internal motivation for cooperation could, for example, be the human driver’s inability to execute the driving task due to intoxication, drowsiness, distraction, etc. The relevance of the user-initiated exercise of a need for control has been the topic of recent work in the automotive HCI domain [3,4,5,28]. Constant deprivation of basic needs can adversely affect health and well-being [85]. Consequently, a perspective that emphasizes human needs in cooperative driving HMI design promotes UX and well-being (cf. [3,86]).
External cooperation motivation describes a motivation that lies outside the control of the human or the automation that requires them to act together. This might be due to environmental factors like a sudden change in weather conditions, a sensor failure, or an ambiguous driving situation.

5.1.2. Scenario

The scenario describes, through its main influencing factors, the current dynamic environment in which the cooperation is taking place. On the one hand, the question “What is the impact of the task at hand?” describes how critical the issue is that is supposed to be solved by the cooperative interaction. On the other hand, to describe a scenario, it is also important to know the time frame of the interaction, and hence the urgency of the situation.
The two main influencing factors defining how the cooperation is executed are therefore the “Criticality” and the “Urgency” of the scenario the agents are facing. The scenario can also be seen as the “Goal” that is supposed to be achieved by the cooperation.

Criticality and Urgency

The more critical a task is, the higher the outcome’s impact on its agents, e.g., for health or comfort. The more urgent a task is, the less time agents can spend on negotiating or preparing the task.
In our classification, the combination of criticality and urgency leads to four possible task types (a minimal classification sketch follows the list):
  • Non-critical planned cooperation: tasks that are not urgent and non-critical. An example would be a passenger request for a change in the automation’s driving style.
  • Non-critical spontaneous cooperation: tasks that are non-critical but require sudden action. An example would be a passenger who must choose between route alternatives with limited time.
  • Critical planned cooperation: tasks that are not urgent but have high criticality. An example would be the passenger deciding on long-term strategies.
  • Critical spontaneous cooperation: tasks that are urgent and critical. An example would be the passenger choosing maneuvers for the automation to execute.
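The following sketch makes this two-by-two classification explicit by mapping a criticality and an urgency flag to the four task types named above; the function name and the boolean interface are hypothetical choices for illustration.

```python
def classify_cooperation(critical: bool, urgent: bool) -> str:
    """Map the criticality/urgency combination to one of the four task types."""
    if not critical and not urgent:
        return "non-critical planned cooperation"      # e.g., changing the driving style
    if not critical and urgent:
        return "non-critical spontaneous cooperation"  # e.g., a quick route choice
    if critical and not urgent:
        return "critical planned cooperation"          # e.g., deciding on long-term strategies
    return "critical spontaneous cooperation"          # e.g., choosing an evasive maneuver

# Example: the confidence-HMI use case discussed in Section 6 is non-critical but urgent.
print(classify_cooperation(critical=False, urgent=True))
```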
In the context of the cooperation frame (Section 5.2), this classification has an impact on the level on which the cooperation can be executed. The more urgent a task is, the lower the level (towards operational control) on which the task has to be executed for the cooperation to succeed. In the dimension of “Task Coverage”, this might also mean that cooperation is only possible in the steps closer to the “Execution” or “Evaluation” phase of the task cycle (Figure 4), since the time limit prohibits cooperation at earlier stages of the cycle.
In the design of cooperative interactions, this relation between the scenario in the use case and the timing and impact of the cooperation has to be kept in mind to develop sensible cooperation concepts.
To illustrate this further, we can look at related concepts by Wang et al. [87] or Walch et al. [48], which depict cooperative interactions at the perception stage of the task: the human driver is tasked with supplying the automation with additional information about the current driving scene and thus supports its decision by increasing the accuracy of the information on which the automation bases its decisions. This is only possible because the task at hand has low criticality and low urgency. Thus, the agents have enough time to interact at this early stage of the decision process and still have enough time to plan and execute an action safely.
On the other hand, looking at concepts in which the human driver is tasked with the execution of actions on the operational level, as described by the haptic shared control concepts of Abbink, Mulder, and Boer [88] or Wang et al. [62], the time frame of the task at hand is much more restricted. Cooperation on the perception level would be impossible if the goal is to avoid an upcoming obstacle with a time budget of only a few seconds. Thus, the cooperation happens at the “Execution” step of the cycle on the “Operational Level”.

5.2. Cooperation Frame

The “Cooperation Frame” (Figure 4) describes the process of cooperation itself. The cooperation frame comprises the cooperation dynamics and the agent task residuals. Cooperation can take different shapes based on the agents and the scenario defined in the use case dimensions. The main influencing factors in the cooperation frame are whether the cooperation is split into independent sub-tasks for each agent (“Determinism”) and how the task execution is distributed over time between the agents (“Time Allocation/Window”). Together, these define the cooperation dynamics.
The agent task residuals describe the tasks that each agent has to execute itself. Each task is described by the level on which it is executed, based on the hierarchical description of the driving task by Michon [19], as well as by the step of the task-execution cycle in which the sub-task is situated.

5.2.1. Cooperation Dynamics

The cooperation dynamics describe the relationship between the co-agents. They define the authorities in the cooperative system and the timing of the task execution.

Determinism

In general, there are two possible options for how a task can be executed in a cooperative setting: either the agents agree on one agent who is in charge, or two or more agents work on the task together. The determinism dimension of the cooperation dynamics depicts the authority in the context of the system. In a cooperative interaction, it has to be clear whether one or multiple agents are in charge. If multiple agents take over a task at the same time, negotiation between these agents is necessary, and a negotiation strategy has to be defined to prevent conflicts between the co-agents. If the cooperation is deterministic, this does not mean that one agent takes over the entirety of the DDT; rather, that agent takes over the “master” role for one or multiple sub-tasks on one or multiple levels, while the other agent(s) take over other tasks simultaneously.

Time Allocation/Window

A second aspect of the cooperation dynamics is whether the cooperation happens as a single intervention in the driving task or if the cooperation is continuously executed over a longer time period. Similarly to Walch et al. [11], we distinguish between temporarily confined cooperative actions or teamwork between humans and automation over a longer time, e.g., shared control.
Combining the two dimensions “Determinism” and “Time Allocation/Window” results in four possible constellations in the cooperation dynamics:
  • Single occasion deterministic cooperation: An agent takes over a “master” role in one discrete interaction. For the execution of the sub-task they take over in this instance, they are the decision authority. The other co-agents revert to subordinate roles for the duration of this single interaction.
  • Single occasion negotiated cooperation: Multiple Agents negotiate a cooperative action on one occasion. In this instance, there is not necessarily one co-agent taking over the single “master” role for the cooperation. The co-agents need a defined negotiation strategy to come to a conclusion. This strategy has to be defined beforehand.
  • Continuous deterministic cooperation: An example of this is continuous maneuver-based cooperation. The co-agents work together over a longer period of time. However, in this case, the interactions are discrete (e.g., choosing maneuvers which the automation should execute). In these repeated single interactions, one co-agent takes over a “master” role. In the case of Conduct-by-Wire, the driver can override the automation’s intentions by choosing maneuver commands. In this scenario, there is no negotiation between the co-agents.
  • Continuous negotiated cooperation: In this case, multiple co-agents continuously negotiate to reach a cooperative decision. None of the agents takes over the primary “master” role to override the decisions of the other agents. One example of this is Haptic Shared Control (HSC). In HSC, the driver and the automation engage in continuous cooperation on the operational level (multiple agents partially in control). The trajectory of the vehicle is thereby negotiated by the force the human applies on the steering wheel and the force the automation applies to the steering wheel at the same time. Using this method, both agents communicate their intentions. The negotiation result is the resulting combined trajectory.
The combination of “Determinism” and the “Time Allocation/Window” of the cooperative action already defines the basic interaction principles in cooperation. Figure 4 on the left side displays how this combination relates to concrete concepts or established metaphors for cooperative driving (e.g., HSC, Conduct-by-Wire).
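For the continuous negotiated quadrant, a common implementation idea behind HSC is to superimpose the human’s and the automation’s steering inputs into one command. The following sketch shows such a blending rule in strongly simplified form; the function, its weighting scheme, and the parameter values are illustrative assumptions rather than the specification of any cited HSC system.

```python
def blended_steering_torque(human_torque: float,
                            automation_torque: float,
                            automation_authority: float = 0.5) -> float:
    """Negotiate a steering command by superimposing both agents' torques.

    automation_authority in [0, 1] scales how strongly the automation's
    guidance torque contributes relative to the human's input.
    """
    if not 0.0 <= automation_authority <= 1.0:
        raise ValueError("automation_authority must lie in [0, 1]")
    return human_torque + automation_authority * automation_torque

# Example: the human steers slightly left while the automation suggests right;
# the returned value is the negotiated torque applied to the steering column.
print(blended_steering_torque(human_torque=-1.0, automation_torque=2.0,
                              automation_authority=0.5))
```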

5.2.2. Agents Task Residuals

We defined two sub-dimensions for the tasks inside a cooperative setting: (1) “Task Level” and (2) “Task Coverage”. Dimension (1) defines the level of the task in a hierarchy based on the structure defined by Michon [19], while (2) defines the timing of sub-steps in that task procedurally and cyclically. Combining these two dimensions gives a hierarchical and procedural architecture for the overarching driving task, similar to the architecture proposed by Guo et al. [89].

Task Level

The driving task is usually hierarchically separated into three different levels, Operational, Tactical, and Strategical, based on the structure proposed by Michon [19]. This hierarchical structure divides the driving task into:
  • Long-term tasks: navigation and route planning (strategical).
  • Mid-term tasks: maneuver planning and execution (tactical).
  • Short-term tasks: trajectory planning and following (operational).
Sub-tasks of the overarching driving tasks can be categorized according to this structure, which indicates how time-critical, or in other words, granular, specific activities of agents in a cooperative setting are.

Task Coverage

All tasks on each of the levels defined in the “Task Level” dimension can also be split up into procedural steps, which have to be taken to fulfill the overall task. Based on Taş et al. [90], an automated driving system can be divided into the main procedural steps: Perception, Planning, and Action. In order to give a more detailed description of the process, we added additional steps for Understanding the current situation, as well as Evaluating the outcome of an executed task and thus closing the circle. Based on the evaluation of the cooperation result, the procedure from Perception to Evaluation can be started again.
At each task level of the driving task, a sub-task in one of the steps in the task coverage dimension can be handled by either the automation, a human, or both (as defined in the Cooperation Dynamics). However, if one of the agents only takes over one or a few steps in the procedural sub-task, a hand-over between the co-agent managing the current step and the agent taking over the next step is necessary. This exchange has to be reflected in the HMI.

Example

As an example to illustrate this concept, cooperation could take place on the tactical level where the automation and the human driver work together to plan and execute driving maneuvers (Task Level). In the process of choosing the appropriate maneuvers, humans and automation can take over different sub-tasks. For example, the human could support the automation in the Interpretation/Understanding step by giving the automation additional information about the current driving situation. This is illustrated by the concept proposed by Wang et al. [20], where the driver provides additional information about the future behavior of other traffic participants by interpreting their current behavior.
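The example above can be written down as an explicit assignment of procedural steps to agents on one task level. The sketch below uses a simple mapping for this purpose; the step names follow the task cycle described in this section, while the assignment itself only loosely mirrors the concept by Wang et al. [20] and is meant purely as an illustration.

```python
from enum import Enum

class Step(Enum):               # procedural steps of the task cycle
    PERCEPTION = 1
    UNDERSTANDING = 2
    PLANNING = 3
    ACTION = 4
    EVALUATION = 5

# Hypothetical assignment on the tactical level: the human contributes to
# understanding the scene, while the automation handles the remaining steps.
tactical_assignment = {
    Step.PERCEPTION: "automation",
    Step.UNDERSTANDING: "human",     # driver interprets other road users' intentions
    Step.PLANNING: "automation",
    Step.ACTION: "automation",
    Step.EVALUATION: "automation",
}

def handover_points(assignment):
    """Return the step transitions at which responsibility changes between agents,
    i.e., where the HMI must support an explicit information exchange."""
    steps = sorted(assignment, key=lambda s: s.value)
    return [(a, b) for a, b in zip(steps, steps[1:]) if assignment[a] != assignment[b]]

print(handover_points(tactical_assignment))
```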

5.3. Human Machine Interface

The definitions in the “Cooperation Use Case” and the “Cooperation Frame” already set the main conditions for the “HMI” (Figure 5). Once the specific sub-tasks of a human agent in the cooperation system are defined, the requirements the HMI has to fulfill become clear. The HMI has two main components: the “Input” and “Output” channels. Through these two channels, the interaction between the automation and the other co-agents takes place.

5.3.1. Input and Output

The HMI part of the taxonomy (Figure 5) consists of two main elements: (1) Input and (2) Output. These two main characteristics define the interaction. As described in Section 5.2, if one agent takes over one of the sub-tasks in the procedural driving loop, information has to be exchanged between them and the other agent(s) taking over the other sub-tasks on different levels or at different steps in the driving loop. In order for the agent taking over the current sub-task, as well as the other agents, to act, there needs to be a common information base on which all involved agents can base their decisions.
This information exchange takes place through the HMI. The two channels, Input and Output, have the elementary property of “Modality”. This defines the sensory channel over which information is exchanged between humans and automation. The modality chosen for the HMI has to match the information that has to be exchanged, keeping in mind the timing of the exchange. Often, the modalities in explicit interactions are of a visual or auditory kind, but from a theoretical perspective, kinesthetic (e.g., head pose) or physiological (e.g., heart rate) modalities can be used in more implicit interactions as well [91].

5.3.2. Metaphors/Mental Models

The concept of mental models originated in cognitive psychology and plays an important role in the interaction with intelligent systems in general [92]. According to this theory, knowledge and learning experiences are stored in so-called mental models that can be described as the user’s representation of a system or a task, which provides their system understanding and impacts the level of task performance [93].
Based on the mental model and the interaction principles defined by the characteristics of the HMI, metaphors can be deduced. These metaphors typically use allegories from other domains to better explain the interaction principles. For example, the H-Mode by Flemisch et al. [9] uses the image of the relationship between horse and rider and the dynamic control distribution (loose rein/tight rein) between them to describe a similar shared control concept between the automated vehicle and the human driver.

6. Application

As an initial validation, in the following section, we present an exemplary application of the taxonomy on one publication, as well as a categorization of the entire literature corpus gathered in the review, based on the structure of the taxonomy.

6.1. Exemplary Application

For an exemplary utilization of the presented taxonomy, we apply it to a cooperative HMI concept presented in the paper “Can you rely on me? Evaluating a Confidence HMI for Cooperative, Automated Driving” by Peintner et al. [34]. The paper evaluates an HMI concept for cooperative driving that includes information about the automation’s current confidence level and compares different visual representations of this information. The experimental study was based on a driving simulation scenario in which a level 4 automated vehicle drove in an urban setting and approached a pedestrian on the sidewalk whose behavior did not indicate whether they intended to cross the street or not. Because of the ambiguous situation, the automation initiated a cooperation request via the head-up display by communicating the automation’s perception together with the confidence level and giving the driver the option to enter their own decision to either stop or drive through by pressing buttons on the steering wheel. If no decision was entered, the vehicle performed an emergency brake to avoid a possible collision with the pedestrian.

6.1.1. Step 1: Use Case Classification

In this use case, there are two agents: the driver and the AV’s automation. Due to the ambiguity of the situation, caused by the automation’s inability to judge the pedestrian’s intentions, the motivation for the resulting cooperation request is external, as the root cause lies outside both agents’ control. The cooperation can be classified as non-critical spontaneous cooperation, where the task is not critical but requires sudden action. The criticality of the situation is low because the automation would choose the safest option for the pedestrian and stop the vehicle in any case. At the same time, the urgency is high, as a decision about the driving maneuver has to be made within a few seconds due to the dynamic driving situation.

6.1.2. Step 2: Cooperation Frame Classification

The cooperation dynamics can be described as deterministic and discrete (the driver takes over the decision on a single occasion). The task level is tactical (maneuver planning), and this part of the driving task covers perception, interpretation, prediction, and execution.

6.1.3. Step 3: HMI Classification

The HMI concept’s output is provided through the visual modality by displaying the information on a head-up display in front of the driver. The input is provided through a physical button press on the steering wheel, by which the driver enters their decision.
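Taken together, the three classification steps could be recorded in a machine-readable form, for example as the nested structure below. The keys and values loosely mirror the illustrative data classes sketched in Section 5 and are our own naming, not terminology from the cited paper [34].

```python
# Hypothetical record of the classification above.
confidence_hmi_classification = {
    "use_case": {
        "agents": ["passenger (driver)", "automation"],
        "motivation": "external (ambiguous pedestrian behavior)",
        "criticality": "low",            # automation would stop safely in any case
        "urgency": "high",               # decision needed within a few seconds
        "task_type": "non-critical spontaneous cooperation",
    },
    "cooperation_frame": {
        "determinism": "deterministic",  # driver decides on this single occasion
        "time_allocation": "single occasion",
        "task_level": "tactical",
        "task_coverage": ["perception", "interpretation", "prediction", "execution"],
    },
    "hmi": {
        "output": {"modality": "visual", "device": "head-up display"},
        "input": {"modality": "physical button press", "device": "steering wheel"},
    },
}
```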

6.2. Categorization of Related Literature

Following the shown protocol, we classified the existing body of literature. Table 1 shows the matching of the literature base from the PRISMA survey onto the dimensions of the framework. The references in the table indicate the papers to which each dimension is applicable. Papers to which multiple levels of a dimension are applicable are put into separate categories marked with “multiple”.
It has to be noted that for some levels of sub-dimensions, no match in the related literature could be found, pointing to new research opportunities in these specific cases. This is the case for the level “Strategical” in the sub-dimension “Task Level” of the “Agent Task Residuals” in the “Cooperation Frame”. Another case is the involvement of teleoperators in cooperation use cases. In our literature survey, we did not come across research work on this topic, even though teleoperation of automated vehicles is becoming an increasingly relevant research area.

7. Discussion

In this paper, we present a newly developed taxonomy based on an extensive literature survey, aimed at filling gaps in existing frameworks and taxonomies to create a more complete picture of the elements at play in cooperative, automated driving. The dimensions presented in the taxonomy are based on the related literature and reflect the relevant properties of cooperative interactions tested in related research. However, it is important to keep in mind that there are also parts of the dimensions that (to the best of our knowledge) have not yet been researched, which suggests new research opportunities yet to be explored. In the following, we compare this taxonomy to existing ones and discuss its validity with respect to the research questions posed at the beginning of this paper.

7.1. Comparison to Existing Frameworks and Taxonomies

The prevalent standard to characterize automated driving concepts is the SAE J3016 [1] taxonomy, describing the levels of automation. While this standard is very well suited to define the capabilities and operational domains of automated vehicles designed with conventional approaches to automated driving in mind, it does not consider concepts for shared control. Who is in control is a binary question within this framework. We follow Guo et al. [89] in their argumentation that this is designed from an engineering standpoint and does not consider the human factors well enough. We try to address this issue by highlighting the role of the human in this taxonomy by giving more attention to the use case and the HMI.
In their framework, Walch, Colley, and Weber [11] divide cooperation concepts into categories based on whether a mode change is necessary for the interaction or not. This distinction is based on which agent is in control and how the control allocation shifts through the interaction, separating cooperation, shared control, and full takeovers/handovers into different categories. This separation makes sense for the specific interaction but only operates within the “Cooperation Frame” as we define it. It would also be important to consider the application context of what the paper calls horizontal and vertical extensions of shared control. Thus, in our opinion, it is essential to keep the “Cooperation Use Case” in mind when applying this principle in order to successfully design interactions for relevant, real-world scenarios.
Another framework was developed by Mirnig et al. [26]. This framework gives a structured overview of the different types of control transitions possible in automated vehicles. However, it is mostly concerned with the process of handing over control, e.g., what is the direction (human to automation or automation to human), who is the initiator (user or system), and is the transition of control complete or only partial? It is aimed more toward structuring research on conventional Take-Over Requests and is less suitable for structuring cooperative approaches to automated driving.

7.2. The Cooperation Use Case

The biggest novelty of the presented taxonomy is the inclusion of the use case dimension in the structure. By analyzing the body of related work, we found that a large portion of cooperation concepts was developed from the standpoint of “what is technically possible?” rather than “what are real-world scenarios and problems which need solving?”. Because of this, we found it necessary to include this additional level on top of the “Cooperation Frame” and the “HMI” dimensions to allow for cross-referencing which situations and scenarios specific interaction concepts are actually suitable for.
However, since little research has been published on the use cases for cooperative driving, the dimensions defined might not be sufficient, and further research in the area is needed to understand the internal and external motivation factors relevant to cooperation between humans and automated vehicles.
As ever more new concepts for cooperation with automated vehicles emerge, for example, involving teleoperators in the driving task, an extension of the presented dimensions might become necessary to accommodate these novel use cases and applications.

7.3. Applicability

The presented taxonomy offers a versatile approach for researchers and practitioners to systematically categorize cooperation in automated driving. It allows the exploration of the design space of human–vehicle cooperation and thereby enables discussions, ideation, and critical reflection on both established and emerging cooperative concepts and HMIs.
The taxonomy supports:
  • In-depth analysis of existing cooperative HMI concepts
  • Classification and comparative analysis of existing literature on cooperative driving
  • The iterative development of use cases, cooperation frameworks, and HMIs through a bottom-up approach for experimental studies
Users can leverage the taxonomy as a comprehensive whole or selectively focus on specific dimensions, ensuring adaptability and relevance across diverse research and application contexts.
With the application presented in Section 6, we provide a first example of how the presented taxonomy can be applied to existing research. However, further avenues for applying the taxonomy in the research and development of new cooperative interaction concepts remain to be investigated. A design guide can be found in the appendix (cf. Figure A1). This guide can serve as a starting point to facilitate the ideation process of cooperative driving HMIs.

8. Conclusions

This paper presents a taxonomy for human–vehicle cooperation in the field of automated driving that is based on an extensive systematic literature review. The initial screening of four databases identified n = 776 research items. Several screening iterations led to n = 50 research items that were finally included. By analyzing this base of related work, especially existing frameworks and taxonomies, we found potential for adding a more human-centered perspective on cooperation. This requires the integration of the specific use case, a cooperation frame, and the specifications of the HMI as central aspects of the taxonomy. Therefore, three research questions were formulated.

Research Question 1 asked which elements can define the cooperation use case. We identified the involved agents (e.g., passenger, automation, teleoperator) and the motivation for the cooperation (internal vs. external) as main elements, together with the classification of the scenario regarding its criticality and urgency. Research Question 2 asked which elements are needed to define the cooperation frame. Here, our analysis identified the cooperation dynamics and the agent task residuals as the main contributing elements. Cooperation dynamics include the aspects of time allocation (discrete vs. continuous) and whether the task an agent takes over is handled deterministically or requires negotiation. Finally, Research Question 3 asked which characteristics of the HMI are needed when describing the cooperative interaction. Here, our taxonomy defines the modality of the input and output channels as important characteristics, together with the metaphors and mental models that users build based on the HMI’s interaction principle.

The presented taxonomy is an attempt at structuring human–vehicle cooperation that aims to be a supportive tool for researchers and designers in developing and evaluating HMI concepts based on a holistic consideration of the essential elements of cooperation. Using this taxonomy, cooperative concepts can be structured not only based on the concrete interaction between humans and automation but also considering the overarching use case, which allows for a better understanding of how these concepts can be applied in real traffic situations and whether they make sense from a human factors perspective. Similarly, the taxonomy can help researchers better understand these aspects during the design process of new cooperative concepts. In future work, we plan to evaluate the taxonomy by creating a survey of cooperative driving concepts, analyzing how the structure matches these concepts, and investigating whether an extension or a more fine-grained specification of the taxonomy’s elements might be necessary.

Author Contributions

Conceptualization, J.P. and H.D.; methodology, J.P. and H.D.; formal analysis, J.P. and B.E.; investigation, J.P. and B.E.; writing—original draft preparation, J.P. and H.D.; writing—review and editing, J.P., H.D., C.M. and A.R.; visualization, J.P.; supervision, A.R.; project administration, A.R.; funding acquisition, A.R. All authors have read and agreed to the published version of the manuscript.

Funding

We applied the SDC approach for the sequence of authors. This work is supported under the FH-Impuls program of the German Federal Ministry of Education and Research (BMBF) under Grant No. 13FH7I06IA (SAFIR IP6) and by the Bavarian Ministry of Economic Affairs, Regional Development and Energy (StMWi) under Grant No. DIK 2106-0058//DIK0351/04 (BARCS).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Table A1. Results of the Critical Appraisal Skills Programme (CASP) analysis of the final literature list of references included in the survey (y = yes, ? = cannot tell, n.a. = not applicable to the paper). The columns correspond to the ten CASP questions:
Q1: Was there a clear statement of the aims of the research?
Q2: Is the methodology appropriate?
Q3: Was the research design appropriate to address the aims of the research?
Q4: Was the recruitment strategy appropriate to the aims of the research?
Q5: Were the data collected in a way that addressed the research issue?
Q6: Has the relationship between researcher and participants been adequately considered?
Q7: Have ethical issues been taken into consideration?
Q8: Was the data analysis sufficiently rigorous?
Q9: Is there a clear statement of findings?
Q10: Is the research valuable?

Source | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10
[26] | y | y | y | ? | y | n.a. | n.a. | y | y | y
[21] | y | y | y | y | y | ? | ? | y | y | y
[40] | y | y | y | y | y | y | ? | y | y | y
[37] | y | y | y | y | y | y | ? | y | y | y
[41] | y | y | y | y | y | y | y | y | y | y
[34] | y | y | y | y | y | y | ? | y | y | y
[20] | y | y | y | ? | y | ? | ? | y | y | y
[36] | y | y | y | ? | y | ? | y | y | y | y
[38] | y | y | y | y | y | ? | ? | y | y | y
[42] | y | y | y | ? | y | ? | ? | y | y | y
[35] | y | y | y | ? | y | ? | y | y | y | y
[39] | y | y | y | n.a. | n.a. | n.a. | n.a. | n.a. | y | y
[41] | y | y | y | n.a. | n.a. | n.a. | n.a. | n.a. | y | y
[61] | y | y | y | n.a. | n.a. | n.a. | n.a. | y | y | y
[62] | y | y | y | y | y | ? | y | y | y | y
[65] | y | y | y | y | y | ? | ? | y | y | y
[67] | y | y | y | ? | y | ? | ? | y | y | y
[69] | y | y | y | ? | y | ? | ? | y | y | y
[63] | y | y | y | n.a. | y | n.a. | ? | y | y | y
[64] | y | y | y | y | y | ? | y | y | y | y
[66] | y | y | y | n.a. | n.a. | n.a. | n.a. | y | y | y
[55] | y | y | y | y | y | ? | y | y | y | y
[70] | y | y | y | n.a. | n.a. | n.a. | n.a. | n.a. | y | y
[73] | y | y | y | ? | y | ? | ? | y | y | y
[77] | y | y | y | ? | y | ? | ? | y | y | y
[43] | y | y | y | ? | y | ? | ? | y | y | y
[73] | y | y | y | y | y | ? | ? | y | y | y
[71] | y | y | y | ? | y | ? | ? | y | y | y
[72] | y | y | y | ? | y | ? | ? | y | y | y
[75] | y | y | y | y | y | ? | ? | y | y | y
[76] | y | y | y | ? | y | ? | ? | y | y | y
[48] | y | y | y | ? | y | ? | ? | y | y | y
Table A2. Continuation of Table A1: results of the Critical Appraisal Skills Programme (CASP) analysis of the final literature list of references included in the survey (y = yes, ? = cannot tell, n.a. = not applicable to the paper). Columns Q1 to Q10 as in Table A1.

Source | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10
[45] | y | y | y | ? | y | ? | ? | y | y | y
[47] | y | y | y | y | y | y | y | y | y | y
[49] | y | y | y | y | y | y | y | y | y | y
[22] | y | y | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | y | y
[50] | y | y | y | ? | y | ? | ? | y | y | y
[52] | y | y | y | ? | y | ? | ? | y | y | y
[56] | y | y | y | y | y | ? | ? | y | y | y
[59] | y | y | y | y | y | ? | ? | y | y | y
[60] | y | y | y | ? | y | ? | ? | y | y | y
[46] | y | y | y | n.a. | y | n.a. | n.a. | y | y | y
[44] | y | y | y | n.a. | n.a. | n.a. | n.a. | n.a. | y | y
[51] | y | y | y | y | y | ? | ? | y | y | y
[53] | y | y | y | ? | y | ? | ? | y | y | y
[54] | y | y | y | n.a. | n.a. | n.a. | n.a. | n.a. | y | y
[55] | y | y | y | ? | y | ? | ? | y | y | y
[57] | y | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | y | y
[58] | y | y | y | ? | y | ? | ? | y | y | y

Appendix B

Figure A1. Cooperative Driving Design Guide—The guide can be used in design sessions to facilitate ideation and communication about cooperative driving.

References

  1. Ground Vehicle Standard J3016_202104; Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International: Pittsburgh, PA, USA, 2021. [CrossRef]
  2. Walch, M.; Sieber, T.; Hock, P.; Baumann, M.; Weber, M. Towards Cooperative Driving: Involving the Driver in an Autonomous Vehicle’s Decision Making. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 24–26 October 2016; pp. 261–268. [Google Scholar] [CrossRef]
  3. Frison, A.K.; Wintersberger, P.; Riener, A. Resurrecting the ghost in the shell: A need-centered development approach for optimizing user experience in highly automated vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 439–456. [Google Scholar] [CrossRef]
  4. Detjen, H.; Faltaous, S.; Pfleging, B.; Geisler, S.; Schneegass, S. How to Increase Automated Vehicles’ Acceptance through In-Vehicle Interaction Design: A Review. Int. J. Hum. Comput. Interact. 2021, 37, 308–330. [Google Scholar] [CrossRef]
  5. Stiegemeier, D.; Kraus, J.; Baumann, M. Why drivers use in-vehicle technology: The role of basic psychological needs and motivation. Transp. Res. Part F Traffic Psychol. Behav. 2024, 100, 133–153. [Google Scholar] [CrossRef]
  6. Kettwich, C.; Schrank, A.; Oehl, M. Teleoperation of highly automated vehicles in public transport: User-centered design of a human–machine interface for remote-operation and its expert usability evaluation. Multimodal Technol. Interact. 2021, 5, 26. [Google Scholar] [CrossRef]
  7. Neumeier, S.; Wintersberger, P.; Frison, A.K.; Becher, A.; Facchi, C.; Riener, A. Teleoperation: The Holy Grail to Solve Problems of Automated Driving? Sure, but Latency Matters. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 21–25 September 2019; pp. 186–197. [Google Scholar] [CrossRef]
  8. Guo, C.; Sentouh, C.; Haué, J.B.; Popieul, J.C. Driver–vehicle cooperation: A hierarchical cooperative control architecture for automated driving systems. Cogn. Technol. Work 2019, 21, 657–670. [Google Scholar] [CrossRef]
  9. Flemisch, F.; Bengler, K.; Bubb, H.; Winner, H.; Bruder, R. Towards cooperative guidance and control of highly automated vehicles: H-Mode and Conduct-by-Wire. Ergonomics 2014, 57, 343–360. [Google Scholar] [CrossRef] [PubMed]
  10. Ghasemi, A.H.; Johns, M.; Garber, B.; Boehm, P.; Jayakumar, P.; Ju, W.; Gillespie, R.B. Role Negotiation in a Haptic Shared Control Framework. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 179–184. [Google Scholar] [CrossRef]
  11. Walch, M.; Colley, M.; Weber, M. Driving-Task-Related Human–Machine Interaction in Automated Driving: Towards a Bigger Picture. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, New York, NY, USA, 21–25 September 2019; pp. 427–433. [Google Scholar] [CrossRef]
  12. Winner, H.; Hakuli, S. Conduct-by-Wire: Following a new paradigm for driving into the future. In Proceedings of the FISITA World Automotive Congress, Yokohama, Japan, 22–27 October 2006. [Google Scholar]
  13. Hoc, J.M. Towards a cognitive approach to human–machine cooperation in dynamic situations. Int. J. Hum. Comput. Stud. 2001, 54, 509–540. [Google Scholar] [CrossRef]
  14. Pichen, J.; Stoll, T.; Baumann, M. From SAE-Levels to Cooperative Task Distribution: An Efficient and Usable Way to Deal with System Limitations? In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 9–14 September 2021; pp. 109–115. [Google Scholar] [CrossRef]
  15. Wiegand, G.; Holländer, K.; Rupp, K.; Hussmann, H. The Joy of Collaborating with Highly Automated Vehicles. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 21–22 September 2020; pp. 223–232. [Google Scholar] [CrossRef]
  16. Xing, Y.; Lv, C.; Cao, D.; Hang, P. Toward human-vehicle collaboration: Review and perspectives on human-centered collaborative automated driving. Transp. Res. Part C Emerg. Technol. 2021, 128, 103199. [Google Scholar] [CrossRef]
  17. Flemisch, F.; Adams, C.; Conway, S.; Goodrich, K.; Palmer, M.; Schutte, P. The H-Metaphor as a Guideline for Vehicle Automation and Interaction; NASA/TM-2003-212672; NASA: Washington, DC, USA, 2003. [Google Scholar]
  18. Woide, M.; Miller, L.; Colley, M.; Damm, N.; Baumann, M. I have Got the Power: Exploring the Impact of Cooperative Systems on Driver-Initiated Takeovers and Trust in Automated Vehicles. In Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ingolstadt, Germany, 18–22 September 2023; pp. 123–135. [Google Scholar] [CrossRef]
  19. Michon, J.A. A Critical View of Driver Behavior Models: What Do We Know, What Should We Do? In Human Behavior and Traffic Safety; Evans, L., Schwing, R.C., Eds.; Springer: Boston, MA, USA, 1985; pp. 485–524. [Google Scholar] [CrossRef]
  20. Wang, C.; Krüger, M.; Wiebel-Herboth, C.B. “Watch Out!”: Prediction-Level Intervention for Automated Driving. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 21–22 September 2020; pp. 169–180. [Google Scholar] [CrossRef]
  21. Wang, C.; Chu, D.; Martens, M.; Krüger, M.; Weisswange, T.H. Hybrid Eyes: Design and Evaluation of the Prediction-Level Cooperative Driving with a Real-World Automated Driving System. In Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 17–20 September 2022; pp. 274–284. [Google Scholar] [CrossRef]
  22. Wang, C.; Weisswange, T.H.; Krüger, M. Designing for Prediction-Level Collaboration Between a Human Driver and an Automated Driving System. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 9–14 September 2021; pp. 213–216. [Google Scholar] [CrossRef]
  23. Zimmermann, M.; Bengler, K. A multimodal interaction concept for cooperative driving. In Proceedings of the 2013 IEEE Intelligent Vehicles Symposium (IV), Gold Coast, QLD, Australia, 23–26 June 2013; pp. 1285–1290. [Google Scholar] [CrossRef]
  24. Petermeijer, S.M.; Tinga, A.; Jansen, R.; de Reus, A.; van Waterschoot, B. What Makes a Good Team?—Towards the Assessment of Driver-Vehicle Cooperation. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 9–14 September 2021; pp. 99–108. [Google Scholar] [CrossRef]
  25. Wang, C. A Framework of the Non-Critical Spontaneous Intervention in Highly Automated Driving Scenarios. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, New York, NY, USA, 21–25 September 2019; pp. 421–426. [Google Scholar] [CrossRef]
  26. Mirnig, A.G.; Gärtner, M.; Laminger, A.; Meschtscherjakov, A.; Trösterer, S.; Tscheligi, M.; McCall, R.; McGee, F. Control Transition Interfaces in Semiautonomous Vehicles: A Categorization Framework and Literature Analysis. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 24–27 September 2017; pp. 209–220. [Google Scholar] [CrossRef]
  27. Wolf, I. The Interaction Between Humans and Autonomous Agents. In Autonomous Driving: Technical, Legal and Social Aspects; Springer: Berlin/Heidelberg, Germany, 2016; pp. 103–124. [Google Scholar] [CrossRef]
  28. Malve, B.; Peintner, J.; Sadeghian, S.; Riener, A. “Do You Want to Drive Together?”—A Use Case Analysis on Cooperative, Automated Driving. In Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ingolstadt, Germany, 18–22 September 2023; pp. 209–214. [Google Scholar] [CrossRef]
  29. Peintner, J.; Manger, C.; Riener, A. Communication of Uncertainty Information in Cooperative, Automated Driving: A Comparative Study of Different Modalities. In Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ingolstadt, Germany, 18–22 September 2023; pp. 322–332. [Google Scholar] [CrossRef]
  30. Walch, M.; Mühl, K.; Kraus, J.; Stoll, T.; Baumann, M.; Weber, M. From Car-Driver-Handovers to Cooperative Interfaces: Visions for Driver–Vehicle Interaction in Automated Driving. In Automotive User Interfaces: Creating Interactive Experiences in the Car; Meixner, G., Müller, C., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 273–294. [Google Scholar] [CrossRef]
  31. Woide, M.; Stiegemeier, D.; Pfattheicher, S.; Baumann, M. Measuring driver-vehicle cooperation: Development and validation of the Human–Machine-Interaction-Interdependence Questionnaire (HMII). Transp. Res. Part F Traffic Psychol. Behav. 2021, 83, 424–439. [Google Scholar] [CrossRef]
  32. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef]
  33. Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230. [Google Scholar] [CrossRef]
  34. Peintner, J.B.; Manger, C.; Riener, A. “Can You Rely on Me?” Evaluating a Confidence HMI for Cooperative, Automated Driving. In Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 17–20 September 2022; pp. 340–348. [Google Scholar] [CrossRef]
  35. Saito, T.; Wada, T.; Sonoda, K. Control Transferring between Automated and Manual Driving Using Shared Control. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, New York, NY, USA, 24–27 September 2017; pp. 115–119. [Google Scholar] [CrossRef]
  36. Walch, M.; Woide, M.; Mühl, K.; Baumann, M.; Weber, M. Cooperative Overtaking: Overcoming Automated Vehicles’ Obstructed Sensor Range via Driver Help. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 21–25 September 2019; pp. 144–155. [Google Scholar] [CrossRef]
  37. Kuramochi, H.; Utsumi, A.; Ikeda, T.; Kato, Y.O.; Nagasawa, I.; Takahashi, K. Effect of Human–Machine Cooperation on Driving Comfort in Highly Automated Steering Maneuvers. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, New York, NY, USA, 21–25 September 2019; pp. 151–155. [Google Scholar] [CrossRef]
  38. Tatsumi, K.; Utsumi, A.; Ikeda, T.; Kato, Y.O.; Nagasawa, I.; Takahashi, K. Evaluation of Driver’s Sense of Control in Lane Change Maneuvers with a Cooperative Steering Control System. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 9–14 September 2021; pp. 107–111. [Google Scholar] [CrossRef]
  39. Sarabia, J.; Diaz, S.; Marcano, M.; Zubizarreta, A.; Pérez Rastelli, J. Haptic Steering Wheel for Enhanced Driving: An Assessment in Terms of Safety and User Experience. In Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 17–20 September 2022; pp. 219–221. [Google Scholar] [CrossRef]
  40. Colley, M.; Askari, A.; Walch, M.; Woide, M.; Rukzio, E. ORIAS: On-The-Fly Object Identification and Action Selection for Highly Automated Vehicles. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 9–14 September 2021; pp. 79–89. [Google Scholar] [CrossRef]
  41. Ros, F.; Terken, J.; van Valkenhoef, F.; Amiralis, Z.; Beckmann, S. Scribble Your Way Through Traffic. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 23–25 September 2018; pp. 230–234. [Google Scholar] [CrossRef]
  42. Detjen, H.; Faltaous, S.; Geisler, S.; Schneegass, S. User-Defined Voice and Mid-Air Gesture Commands for Maneuver-Based Interventions in Automated Vehicles. In Proceedings of the Mensch Und Computer 2019, New York, NY, USA, 8–11 September 2019; pp. 341–348. [Google Scholar] [CrossRef]
  43. Damböck, D.; Kienle, M.; Bengler, K.; Bubb, H. The H-Metaphor as an Example for Cooperative Vehicle Driving. In Human-Computer Interaction: Towards Mobile and Intelligent Interaction Environments, Proceedings of the 14th International Conference on Human–Computer Interaction, Orlando, FL, USA, 9–14 July 2011; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6763, pp. 376–385. [Google Scholar] [CrossRef]
  44. Sefati, M.; Gert, D.; Kreiskoether, K.D.; Kampker, A. A Maneuver Based Interaction Framework for External Users of an Automated Assistance Vehicle. In SMARTGREENS 2017, VEHITS 2017: Smart Cities, Green Technologies, and Intelligent Transport Systems; Donnellan, B., Klein, C., Helfert, M., Gusikhin, O., Pascoal, A., Eds.; Springer: Cham, Switzerland, 2019; pp. 274–295. [Google Scholar] [CrossRef]
  45. Ercan, Z.; Carvalho, A.; Tseng, H.E.; Gökaşan, M.; Borrelli, F. A predictive control framework for torque-based steering assistance to improve safety in highway driving. Veh. Syst. Dyn. 2018, 56, 810–831. [Google Scholar] [CrossRef]
  46. Rothfuß, S.; Ayllon, C.; Flad, M.; Hohmann, S. Adaptive Negotiation Model for Human–Machine Interaction on Decision Level. IFAC-PapersOnLine 2020, 53, 10174–10181. [Google Scholar] [CrossRef]
  47. Li, Q.; Wang, Z.; Wang, W.; Zeng, C.; Li, G.; Yuan, Q.; Cheng, B. An Adaptive Time Budget Adjustment Strategy Based on a Take-Over Performance Model for Passive Fatigue. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 1025–1035. [Google Scholar] [CrossRef]
  48. Walch, M.; Colley, M.; Weber, M. CooperationCaptcha: On-The-Fly Object Labeling for Highly Automated Vehicles. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 4–9 May 2019; pp. 1–6. [Google Scholar] [CrossRef]
  49. Muslim, H.; Kiu Leung, C.; Itoh, M. Design and evaluation of cooperative human–machine interface for changing lanes in conditional driving automation. Accid. Anal. Prev. 2022, 174, 106719. [Google Scholar] [CrossRef]
  50. Kridalukmana, R.; Eridani, D.; Septiana, R.; Rochim, A.F.; Setyobudhi, C.T. Developing Autopilot Agent Transparency for Collaborative Driving. In Proceedings of the 2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE), Bangkok, Thailand, 22–25 June 2022; pp. 1–6. [Google Scholar] [CrossRef]
  51. Wang, Z.; Zheng, R.; Kaizuka, T.; Nakano, K. Driver-Automation Shared Control: Modeling Driver Behavior by Taking Account of Reliance on Haptic Guidance Steering. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 144–149. [Google Scholar] [CrossRef]
  52. Marcano, M.; Tango, F.; Sarabia, J.; Castellano, A.; Pérez, J.; Irigoyen, E.; Diaz, S. From the Concept of Being “the Boss” to the Idea of Being “a Team”: The Adaptive Co-Pilot as the Enabler for a New Cooperative Framework. Appl. Sci. 2021, 11, 6950. [Google Scholar] [CrossRef]
  53. Zwaan, H.; Petermeijer, S.; Abbink, D. Haptic shared steering control with an adaptive level of authority based on time-to-line crossing. IFAC-PapersOnLine 2019, 52, 49–54. [Google Scholar] [CrossRef]
  54. Sentouh, C.; Popieul, J.C.; Debernard, S.; Boverie, S. Human–Machine Interaction in Automated Vehicle: The ABV Project. IFAC Proc. Vol. 2014, 47, 6344–6349. [Google Scholar] [CrossRef]
  55. Li, R.; Li, Y.; Li, S.E.; Zhang, C.; Burdet, E.; Cheng, B. Indirect Shared Control for Cooperative Driving Between Driver and Automation in Steer-by-Wire Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 22, 7826–7836. [Google Scholar] [CrossRef]
  56. Izadi, V.; Ghasemi, A.H. Quantifying the performance of an adaptive haptic shared control paradigm for steering a ground-vehicle. Transp. Eng. 2022, 10, 100141. [Google Scholar] [CrossRef]
  57. van Zoelen, E.M.; Peeters, L.; Bos, S.J.; Ye, F. Shared Control and the Democratization of Driving in Autonomous Vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, New York, NY, USA, 21–25 September 2019; pp. 120–124. [Google Scholar] [CrossRef]
  58. Da Lio, M.; Donà, R.; Papini, G.P.R.; Plebe, A. The Biasing of Action Selection Produces Emergent Human-Robot Interactions in Autonomous Driving. IEEE Robot. Autom. Lett. 2022, 7, 1254–1261. [Google Scholar] [CrossRef]
  59. Muslim, H.; Itoh, M. Trust and Acceptance of Adaptive and Conventional Collision Avoidance Systems. IFAC-PapersOnLine 2019, 52, 55–60. [Google Scholar] [CrossRef]
  60. Bhardwaj, A.; Ghasemi, A.H.; Zheng, Y.; Febbo, H.; Jayakumar, P.; Ersal, T.; Stein, J.L.; Gillespie, R.B. Who’s the boss? Arbitrating control authority between a human driver and automation system. Transp. Res. Part F Traffic Psychol. Behav. 2020, 68, 144–160. [Google Scholar] [CrossRef]
  61. Rothfuß, S.; Schmidt, R.; Flad, M.; Hohmann, S. A Concept for Human–Machine Negotiation in Advanced Driving Assistance Systems. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 3116–3123. [Google Scholar] [CrossRef]
  62. Wang, Z.; Zheng, R.; Kaizuka, T.; Nakano, K. A Driver-Automation Shared Control for Forward Collision Avoidance While Automation Failure. In Proceedings of the 2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Shenyang, China, 24–27 August 2018; pp. 93–98. [Google Scholar] [CrossRef]
  63. Tao, W.; Chen, Y.; Yan, X.; Li, W.; Shi, D. Assessment of Drivers’ Comprehensive Driving Capability Under Man-Computer Cooperative Driving Conditions. IEEE Access 2020, 8, 152909–152923. [Google Scholar] [CrossRef]
  64. Bhardwaj, A.; Lu, Y.; Pan, S.; Sarter, N.; Gillespie, R.B. Comparing Coupled and Decoupled Steering Interface Designs for Emergency Obstacle Evasion. IEEE Access 2021, 9, 116857–116868. [Google Scholar] [CrossRef]
  65. Pichen, J.; Miller, L.; Baumann, M. Cooperative Speed Regulation in Automated Vehicles: A Comparison Between a Touch, Pedal, and Button Interface as the Input Modality. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 11–17 July 2021; pp. 245–250. [Google Scholar] [CrossRef]
  66. Nguyen, A.T.; Sentouh, C.; Popieul, J.C. Driver-Automation Cooperative Approach for Shared Steering Control Under Multiple System Constraints: Design and Experiments. IEEE Trans. Ind. Electron. 2017, 64, 3819–3830. [Google Scholar] [CrossRef]
  67. Johns, M.; Mok, B.; Sirkin, D.M.; Gowda, N.M.; Smith, C.A.; Talamonti, W.J., Jr.; Ju, W. Exploring Shared Control in Automated Driving. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 91–98. [Google Scholar] [CrossRef]
  68. Yan, Z.; Yang, K.; Wang, Z.; Yang, B.; Kaizuka, T.; Nakano, K. Intention-Based Lane Changing and Lane Keeping Haptic Guidance Steering System. IEEE Trans. Intell. Veh. 2021, 6, 622–633. [Google Scholar] [CrossRef]
  69. Muslim, H.; Itoh, M. The Effects of System Functional Limitations on Driver Performance and Safety When Sharing the Steering Control during Lane-Change. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 135–140. [Google Scholar] [CrossRef]
  70. Karakaya, B.; Kalb, L.; Bengler, K. Cooperative Approach to Overcome Automation Effects During the Transition Phase of Conditional Automated Vehicles. 2018. Available online: https://www.semanticscholar.org/paper/Cooperative-Approach-to-Overcome-Automation-Effects-Karakaya-Kalb/7684c52cd6c16e52380ee25b9b23409277f264ce (accessed on 18 December 2023).
  71. Guo, C. Designing Driver-Vehicle Cooperation Principles for Automated Driving Systems. Ph.D. Thesis, Université de Valenciennes et du Hainaut-Cambresis, Valenciennes, France, 2017. [Google Scholar]
  72. Walch, M.; Lehr, D.; Colley, M.; Weber, M. Don’t You See Them? Towards Gaze-Based Interaction Adaptation for Driver-Vehicle Cooperation. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, New York, NY, USA, 21–25 September 2019; pp. 232–237. [Google Scholar] [CrossRef]
  73. Li, X.; Zhao, X.; Li, Z.; Rong, J. Effects of Cooperative Vehicle Infrastructure System on driver’s attention with different personal attribute. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 3990–3995. [Google Scholar] [CrossRef]
  74. Li, X.; Li, Z.; Zhao, X.; Rong, J.; Zhang, Y. Effects of Cooperative Vehicle Infrastructure System on Driver’s Visual and Driving Performance Based on Cognition Process. Int. J. Automot. Technol. 2022, 23, 1213–1227. [Google Scholar] [CrossRef]
  75. Liu, Y.; Zhang, J.; Li, Y.; Hansen, P.; Wang, J. Human-Computer Collaborative Interaction Design of Intelligent Vehicle—A Case Study of HMI of Adaptive Cruise Control. In HCII 2021: HCI in Mobility, Transport, and Automotive Systems; Krömker, H., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 296–314. [Google Scholar] [CrossRef]
  76. Hoc, J.M.; Mars, F.; Milleville-Pennel, I.; Jolly, É.; Netto, M.; Blosseville, J.M. Human–machine cooperation in car driving for lateral safety: Delegation and mutual control. Trav. Hum. 2006, 69, 153–182. [Google Scholar] [CrossRef]
  77. Navarro, J.; Mars, F.; Hoc, J.M. Lateral Control Support for Car Drivers: A Human–Machine Cooperation Approach. In Proceedings of the 14th European Conference on Cognitive Ergonomics: Invent! Explore! London, UK, 28–31 August 2007; pp. 249–252. [Google Scholar] [CrossRef]
  78. Baltzer, M.; López, D.; Flemisch, F. Interaction Patterns for Cooperative Guidance and Control of Vehicles. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; pp. 929–934. [Google Scholar] [CrossRef]
  79. Cohn, M. User Stories Applied: For Agile Software Development; Addison-Wesley Professional: Boston, MA, USA, 2004. [Google Scholar]
  80. Yanco, H.A.; Drury, J.L. A taxonomy for human–robot interaction. In Proceedings of the AAAI Fall Symposium on Human-Robot Interaction, Arlington, VA, USA, 17–19 November 2002; pp. 111–119. [Google Scholar]
  81. Rotter, J.B. Generalized expectancies for internal versus external control of reinforcement. Psychol. Monogr. Gen. Appl. 1966, 80, 1–28. [Google Scholar] [CrossRef]
  82. Caspar, E.A.; Cleeremans, A.; Haggard, P. The relationship between human agency and embodiment. Conscious. Cogn. 2015, 33, 226–236. [Google Scholar] [CrossRef]
  83. Jeunet, C.; Albert, L.; Argelaguet, F.; Lécuyer, A. “Do you feel in control?”: Towards novel approaches to characterise, manipulate and measure the sense of agency in virtual environments. IEEE Trans. Vis. Comput. Graph. 2018, 24, 1486–1495. [Google Scholar] [CrossRef] [PubMed]
  84. Bandura, A. Social cognitive theory of personality. In Handbook of Personality: Theory and Research, 2nd ed.; Guilford Press: New York, NY, USA, 1999; pp. 154–196. [Google Scholar]
  85. Deci, E.L.; Ryan, R.M. The “What” and “Why” of Goal Pursuits: Human Needs and the Self-Determination of Behavior. Psychol. Inq. 2000, 11, 227–268. [Google Scholar] [CrossRef]
  86. Peters, D.; Calvo, R.A.; Ryan, R.M. Designing for Motivation, Engagement and Wellbeing in Digital Experience. Front. Psychol. 2018, 9, 797. [Google Scholar] [CrossRef]
  87. Wang, C.; Weisswange, T.H.; Krüger, M.; Wiebel-Herboth, C.B. Human-Vehicle Cooperation on Prediction-Level: Enhancing Automated Driving with Human Foresight. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops), Nagoya, Japan, 11–17 July 2021; pp. 25–30. [Google Scholar] [CrossRef]
  88. Abbink, D.; Mulder, M.; Boer, E. Haptic shared control: Smoothly shifting control authority? Cogn. Technol. Work 2012, 14, 19–28. [Google Scholar] [CrossRef]
  89. Guo, C.; Sentouh, C.; Popieul, J.C.; Haué, J.B.; Langlois, S.; Loeillet, J.J.; Soualmi, B.; Nguyen That, T. Cooperation between driver and automated driving system: Implementation and evaluation. Transp. Res. Part F Traffic Psychol. Behav. 2019, 61, 314–325. [Google Scholar] [CrossRef]
  90. Tas, O.S.; Kuhnt, F.; Zöllner, J.M.; Stiller, C. Functional system architectures towards fully automated driving. In Proceedings of the 2016 IEEE Intelligent Vehicles Symposium (IV), Gothenburg, Sweden, 19–22 June 2016; pp. 304–309. [Google Scholar] [CrossRef]
  91. Stampf, A.; Colley, M.; Rukzio, E. Towards Implicit Interaction in Highly Automated Vehicles—A Systematic Literature Review. Proc. ACM Hum.-Comput. Interact. 2022, 6, 191. [Google Scholar] [CrossRef]
  92. Johnson-Laird, P. The history of mental models. In Psychology of Reasoning; Psychology Press: London, UK, 2004. [Google Scholar]
  93. Wilson, J.R.; Rutherford, A. Mental Models: Theory and Application in Human Factors. Hum. Factors 1989, 31, 617–634. [Google Scholar] [CrossRef]
  94. Johns, M.; Mok, B.; Talamonti, W.; Sibi, S.; Ju, W. Looking Ahead: Anticipatory Interfaces for Driver-Automation Collaboration. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–7. [Google Scholar] [CrossRef]
  95. Meyer, R.; Graf von Spee, R.; Altendorf, E.; Flemisch, F.O. Gesture-Based Vehicle Control in Partially and Highly Automated Driving for Impaired and Non-impaired Vehicle Operators: A Pilot Study. In UAHCI 2018: Universal Access in Human-Computer Interaction. Methods, Technologies, and Users; Antona, M., Stephanidis, C., Eds.; Springer: Cham, Switzerland, 2018; pp. 216–227. [Google Scholar] [CrossRef]
  96. Biondi, F.; Alvarez, I.; Jeong, K.A. Human-Vehicle Cooperation in Automated Driving: A Multidisciplinary Review and Appraisal. Int. J. Hum.-Comput. Interact. 2019, 35, 932–946. [Google Scholar] [CrossRef]
  97. Cimolino, G.; Graham, T.N. Two Heads Are Better Than One: A Dimension Space for Unifying Human and Artificial Intelligence in Shared Control. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 29 April–5 May 2022. [Google Scholar] [CrossRef]
  98. Pano, B.; Chevrel, P.; Claveau, F.; Sentouh, C.; Mars, F. Obstacle Avoidance in Highly Automated Cars: Can Progressive Haptic Shared Control Make it Safer and Smoother? IEEE Trans. Hum.-Mach. Syst. 2022, 52, 547–556. [Google Scholar] [CrossRef]
  99. Benderius, O.; Berger, C.; Malmsten Lundgren, V. The Best Rated Human–Machine Interface Design for Autonomous Vehicles in the 2016 Grand Cooperative Driving Challenge. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1302–1307. [Google Scholar] [CrossRef]
Figure 1. Top level view of our proposed three-step taxonomy: for designing cooperation, we start by (1) identifying the cooperation use case, (2) deriving a suitable cooperation strategy in the given frame, and (3) designing the HMI based on that frame.
Figure 2. Overview of the PRISMA process with the numbers of reports included and excluded at each step.
Figure 3. Detailed view of the cooperation use case, mainly defined by the agents, the motivation for the cooperation, and the scenario categorized by its criticality and urgency.
Figure 4. Detailed view of the Cooperation Frame, defined by the Cooperation Dynamics (determinism) and the Agent Task Residuals, which are split up into the task level and the exact step of the driving loop at which the cooperative task is located.
Figure 5. Detailed view of the HMI taxonomy, which represents the HMI part of the cooperation. The input and output channels define the interaction principles of the HMI, which in turn inspire the metaphors used for the cooperation concept.
Table 1. Matching of the reports in the literature review on the sub-dimensions of the taxonomy.

Dimension | Sub-Dimension | Category | References
Cooperation Use Case | Agents | Single Passenger and Single Automation | [10,20,21,22,26,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,94,95,96,97,98,99]
Cooperation Use Case | Agents | Multiple Passengers and Single Automation | [57]
Cooperation Use Case | Agents | Teleoperator and Automation | -
Cooperation Use Case | Agents | Teleoperator and Passenger | -
Cooperation Use Case | Motivation | Internal | [35,57]
Cooperation Use Case | Motivation | External | [10,20,21,22,26,34,36,37,38,39,40,41,44,45,46,47,48,49,50,51,52,53,54,55,56,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78]
Cooperation Use Case | Motivation | Internal & External | [42,43]
Cooperation Use Case | Scenario/Goal: Criticality | Low | [20,34,35,36,38,39,41,42,44,46,51,53,54,55,57,58,68,75,76]
Cooperation Use Case | Scenario/Goal: Criticality | High | [10,21,22,26,37,40,43,45,47,48,49,50,52,56,59,60,61,62,63,64,65,66,67,69,70,71,72,73,74,77,78]
Cooperation Use Case | Scenario/Goal: Urgency | Low | [36,38,41,51,55,57,58,65,75,76]
Cooperation Use Case | Scenario/Goal: Urgency | High | [10,20,21,22,26,34,35,37,39,40,42,43,44,45,46,47,48,49,50,52,53,54,56,59,60,61,62,63,64,66,67,68,69,70,71,72,73,74,77,78]
Cooperation Frame | Cooperation Dynamics: Time Allocation/Window and Determinism | Continuous and Deterministic | [42,57,72,73,74]
Cooperation Frame | Cooperation Dynamics: Time Allocation/Window and Determinism | Discrete and Deterministic | [26,34,35,36,47,48,49,50,54,59,63,65,71,75,76]
Cooperation Frame | Cooperation Dynamics: Time Allocation/Window and Determinism | Continuous and Negotiated | [10,20,21,22,37,39,40,41,43,45,51,52,53,55,56,58,60,61,62,64,66,67,68,70,77,78]
Cooperation Frame | Cooperation Dynamics: Time Allocation/Window and Determinism | Discrete and Negotiated | [38,44,46,69]
Cooperation Frame | Agents Task Residuals: Task Level | Strategical | -
Cooperation Frame | Agents Task Residuals: Task Level | Tactical | [20,21,22,26,34,36,40,41,42,44,46,47,48,50,54,57,58,63,65,71,72,73,74,75,76]
Cooperation Frame | Agents Task Residuals: Task Level | Operational | [10,35,37,38,39,43,45,49,51,52,53,55,56,59,60,61,62,64,66,67,68,69,70,77,78]
Cooperation Frame | Agents Task Residuals: Task Coverage | Perception | [40,57]
Cooperation Frame | Agents Task Residuals: Task Coverage | Projection/Prediction | [21,22,26,47,50,54,63,71]
Cooperation Frame | Agents Task Residuals: Task Coverage | Multiple Steps | [10,20,34,35,36,37,38,39,41,42,43,44,45,46,48,49,51,52,53,55,56,58,59,60,61,62,64,65,66,67,68,69,70,72,73,74,75,76,77,78]
Human–Machine Interface | Input: Modality | Haptic | [10,26,34,35,36,37,38,39,41,44,45,46,47,49,50,51,52,53,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,71,72,75,76,77,78]
Human–Machine Interface | Input: Modality | Auditory | [40]
Human–Machine Interface | Input: Modality | Gaze | [21]
Human–Machine Interface | Input: Modality | Multiple Modalities | [20,22,43,48,54,70]
Human–Machine Interface | Output: Modality | Visual | [21,22,26,34,40,41,42,44,47,49,50,54,57,63,65,71,72,73,74]
Human–Machine Interface | Output: Modality | Tactile | [10,35,37,38,51,53,55,56,59,60,66,67,68,77]
Human–Machine Interface | Output: Modality | Multiple Modalities | [20,36,39,43,52,64,69,70,78]
Human–Machine Interface | Output: Modality | - | [46,48,58,75,76]
Human–Machine Interface | Metaphors/Mental Models | H-Mode | [43,70]
Human–Machine Interface | Metaphors/Mental Models | Maneuver Based Interaction | [44]
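
For readers who want to work with this categorization programmatically, the following sketch shows one possible way to query it: a dictionary maps (dimension, category) cells to cited reference numbers, and set intersection returns the works that fall into several cells at once. The entries below are a hand-picked subset of Table 1 used only for illustration, and the dictionary keys and the function works_matching are our own naming, not part of the published material.

# Illustrative subset of Table 1, keyed by (dimension, category) cells.
categorization = {
    ("input_modality", "haptic"): {10, 26, 34, 35, 36, 37, 51, 53, 55, 60, 66, 77},
    ("input_modality", "gaze"): {21},
    ("cooperation_dynamics", "continuous_negotiated"): {10, 20, 21, 37, 51, 53, 55, 60, 66, 77},
    ("task_level", "operational"): {10, 35, 37, 51, 53, 55, 60, 66, 77},
}


def works_matching(*criteria):
    """Return the reference numbers that fall into every given (dimension, category) cell."""
    sets = [categorization[c] for c in criteria]
    return set.intersection(*sets) if sets else set()


# Example query: haptic-input concepts whose cooperation dynamics are continuous and negotiated.
print(sorted(works_matching(("input_modality", "haptic"),
                            ("cooperation_dynamics", "continuous_negotiated"))))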