**1. Introduction**

Advanced driver assistance systems (ADAS) and autonomous driving (AD) are gradually becoming reality. According to the Society of Automotive Engineers (SAE), this process is defined in six levels. Figure 1 depicts the levels from conventional manual driving with no driving automation (SAE Level 0), through conditional automated driving (SAE Level 3) with the fallback-ready user, to full driving automation (SAE Level 5), which covers all situations everywhere [1].

**Citation:** Clement, P.; Veledar, O.; Könczöl, C.; Danzinger, H.; Posch, M.; Eichberger, A.; Macher, G. Enhancing Acceptance and Trust in Automated Driving through Virtual Experience on a Driving Simulator. *Energies* **2022**, *15*, 781. https://doi.org/10.3390/en15030781

Academic Editor: Wiseman Yair

Received: 1 December 2021; Accepted: 13 January 2022; Published: 21 January 2022

**Figure 1.** Society of Automotive Engineers (SAE) levels for driving automation, following the On-Road Automated Driving (ORAD) Committee, SAE International [1], with the Dynamic Driving Task (DDT).

In the intermediate steps, human–machine interaction becomes an indispensable factor for technological progress in the development [2,3], as it acts as a technological enabler that improves performance, safety, trustworthiness and comfort [4]. This is reflected by many studies concerning the interaction of humans with vehicle controls [5–7]. As there is a significant probability of harming the study participants, many studies are shifted from the real driving experience on the road into a virtual driving simulator environment, with particular attention to the selection of the appropriate simulator fidelity and validity [8]. This also holds for future driver training programs, which are expected to be effective even in countries with an already low accident rate [9,10]. Moreover, simulators provide the opportunity to enhance training for safety-critical systems and situations [11–13]. The ongoing discussion of differences between real driving and simulation has been outweighed by the need for repeatability of the tests and the safety of the participants.

The future challenge of automated driving is that, most of the time, the driver's task is to observe the environment and the vehicle performance without being actively involved. Occasionally, however, in already near-critical situations, the vehicle requests the driver to take over immediately and handle those critical situations. These two contrasting tasks challenge a society that still has little knowledge about the technology and its interaction with human beings. Despite the potential benefits and influence of virtual training on the assessment of learning effects and trust in automation, there is a lack of sound research data on the effects of virtual training on these new systems. Besides study questionnaires, current research also aims to measure trust in automation in real-time scenarios, as in Azevedo Sá et al. [14].

#### *1.1. The Technology Challenges*

Multiple industrial sectors, including transportation and mobility, are experiencing a significant technological transformation. Considering the safety-critical nature of the technologies that drive this transformation, the automotive safety community has considerable interest in this development. However, the established safety engineering methods do not yet fully support these changes. Thus, extensive effort is invested in developing safety engineering methods, safety design approaches and safety development processes that support this technological transformation [15].

AD is generally seen as one of the critical trends in the mobility sector and also one of the key components of the mentioned transformation [16]. To fulfil the anticipated AD evolution, there is a demand for progress in supporting technologies (e.g., sensors and actuators), particularly in artificial intelligence (AI) (at the edge) based approaches [17], and further towards a combined driver evaluation based on non-obtrusive monitoring sensors and AI technology [2]. AI technologies are increasingly used for critical applications because these approaches exceed the current state of the art (e.g., in recognising patterns and inferring relationships), creating a high demand for AI in terms of realising automated and autonomous driving functions [18]. The outcomes are expected to be technically viable and compelling for humans, hence enticing user acceptance of AD. Considerable effort is put into addressing the challenges related to trustworthy AI solutions that cater for human stakeholders [19]. Trust and safety are of utmost importance in order to gain acceptance of AD functions.

The key challenge in this context is that the established safety engineering processes and practices (e.g., those described by ISO 26262 [20] and ISO/PAS 21448 [21] for the automotive domain) have only been successfully applied in conventional model-based system development. In many industry contexts, none of the available safety standards define processes that explicitly consider the specifics of AI-based approaches. These specifics include the requirements on dataset collection, the definition of performance evaluation metrics, the AI architectures, and the handling of uncertainty [18].

The ultimate goal of safety-critical systems is to maximise the evidence of a positive risk balance. Conventional safety engineering processes apply model-based system development to establish structured and human-understandable arguments about the risks inherent to the system under development. The risk-driven safety engineering processes implement a system function with the required service quality (i.e., safety integrity level). For AI-based systems, it is challenging to provide such service quality metrics. For example, AI-based concepts build upon probabilistic modelling, including random variables and probability distributions, to model situations and events. As such, an AI function model returns a probability distribution as output for a specific input.

In general, AI-based approaches depend on the data used to parameterise their functions and on the process of parameterising them (called training or learning). The quality of the dataset used and the choice of the AI architecture directly influence the quality of the function. In contrast, human-programmed model-based functions typically return a specific result for a particular input, not a probability distribution. In the context of this work, this AI-related topic also reappears in relation to the establishment of trust and acceptance of automated driving in general, and the selection of virtual experiences in particular. It contributes to the establishment of trust and acceptance of such systems, with the expectation of opening the gate for the implementation of new AI functionality in the future.
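The contrast between a deterministic, human-programmed function and a probabilistic AI output can be illustrated with a minimal Python sketch; the braking rule, class names and logit values below are hypothetical illustrations, not part of the study:

```python
import math

def classify_rule_based(speed_kmh):
    """Human-programmed, model-based function: one specific result per input."""
    return "brake" if speed_kmh > 50.0 else "coast"

def softmax(logits):
    """Map raw model scores to a probability distribution that sums to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores an AI perception model might emit for the classes
# ("pedestrian", "cyclist", "vehicle") on a single camera frame.
probs = softmax([2.0, 0.5, 0.1])

print(classify_rule_based(60.0))  # a single, specific result: "brake"
print(probs)                      # a distribution over classes, not one answer
```

The safety-relevant consequence is visible in the last two lines: the rule-based function can be argued about deterministically, whereas the AI output must be assessed in terms of probabilities and their calibration.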

#### *1.2. Trust and Acceptance*

Trust in automation is a key determinant for the adoption of automated systems and their appropriate use [22] and, along with other factors, has a significant impact on interest in using an automated vehicle [23]. Studies suggest a strong connection between trust in technical solutions and user acceptance on the same level [24–26]. The key challenges in terms of acceptance of fully or partially autonomous vehicles lie in the balance between the required interaction with the technology, the associated benefits and the hidden risks. In particular, it is the level of required driver awareness, in combination with human trust in automated decisions under all conditions, that is associated with acceptance [24]. For example, the need for continuous monitoring limits user acceptance, as it diminishes the benefit of giving the driver more freedom for side tasks. Further open questions that could limit user acceptance arise from the need for drivers to take over vehicle control in emergency scenarios, in extreme conditions, or when the driving automation fails [27], unless there is a dependable fail-operational strategy in place to take over vehicle control [28]. Most user acceptance studies analyse the effects of cognitive factors, but with limited consideration of social aspects [25], even though the decision-making process is heavily influenced by the opinions that matter most to the people making these decisions [29] and by their hedonistic motivations [30]. While this is empirically demonstrated [31], it is also expected that acceptance of driving automation is closely related to social influence, possibly with different strength across cultures; e.g., some cultures are likely to exert stronger social influence than others [24]. Hence, there is a need for wider conclusive studies based on empirical data across demographic groups.

While acceptance is an important condition for the successful implementation of automated driving vehicles [32], recent research considers the relation between attitudes towards automated systems and their actual use [33]. A reduced acceptance level is also correlated with a lack of safety [34]. Furthermore, it is important to consider that AD has not yet been experienced by the majority of drivers [35]. Driving simulators are used to overcome the acceptance obstacle by determining the trust in AD functions and the improvement of trust through experience, while eliminating the need for complete vehicle prototypes and securing full scenario repeatability [36]. It is the reproducible validation and demonstration of mature automated functions, their reliability, and safety that work towards securing societal acceptance [37]. Besides ensuring the durability of the technical solutions, increased user acceptance of new automotive technologies crucially determines the sustainability of the consequent business implementation [38]. Sustainability is further supported by the trustworthiness of the solutions, which acts as an enabler for the development and implementation of an appropriate value creation strategy to maximise benefits amongst the engaging stakeholders [38].

In addition to focusing on the AD functions, driving simulation can potentially educate newly qualified drivers about unexpected critical situations on the road, which are generally unpredictable and not part of the driving license education. The potential of such training is evaluated by pre- and post-test questionnaires for a broad spectrum of users, with attention to the group with low yearly driving experience. There is a potential to increase acceptance and trust levels as well as situational experience through virtual training.

The consequent research questions are: can an advanced driving simulator experience (Section 2.4) improve trust in and acceptance of automated driving, and are there any significant differences resulting from the driving simulations across demographic groups (i.e., gender, age, experienced vs. inexperienced drivers, ADAS/AD experience)?

To answer these research questions, we employ a unique combination of a state-of-the-art dynamic simulator in its characteristic measurement setup, relevant scenarios targeting appropriate psychological assessment, and a sizable study sample.

#### **2. Materials and Methods**

Simulators have the potential to raise the AD profile amongst the general population. To examine the impact of such exposure on a safe version of AD, a study comprising ten driving scenarios was designed to gather feedback on tailored questions from a defined sample group.

The expression of trust (EOT) questionnaire is the primary measurement tool in the present study to evaluate the participants' subjective trust. The EOT is a modified version of the Trust in Automation questionnaire [39]. Each participant faces the questionnaire twice within the study to enable an initial estimate of trust and to explore the change of trust in an AD system throughout the designed experiment. The questionnaire is used before the test as a baseline for each participant. After the test procedure, the participants face the same questionnaire again. This allows measuring the impact of such a simulator experience on trust in the AD system.
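The pre/post design boils down to comparing paired questionnaire scores. The scoring details of the modified EOT questionnaire are not reproduced here, so the following is only a generic sketch with made-up 7-point trust scores:

```python
from statistics import mean

def trust_change(pre, post):
    """Return the mean trust score before the drive, after the drive,
    and the mean per-participant change (post - pre).
    `pre` and `post` are parallel lists, one entry per participant."""
    if len(pre) != len(post):
        raise ValueError("pre and post samples must be paired")
    deltas = [b - a for a, b in zip(pre, post)]
    return mean(pre), mean(post), mean(deltas)

# Hypothetical 7-point trust scores for four participants.
pre  = [4.0, 3.5, 5.0, 4.5]
post = [5.0, 4.0, 5.5, 5.0]
print(trust_change(pre, post))  # → (4.25, 4.875, 0.625)
```

In the actual study, the significance of such a paired difference would be tested statistically rather than read off the raw means.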

The system usability scale (SUS) [40] is used to verify the suitability and usability of the simulator and the simulation within the testing environment, as well as the behaviour of the system itself. Furthermore, it indicates whether such a testing method is suitable for a study of this kind.
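For reference, the standard SUS scoring scheme maps the ten item responses (1–5 Likert scale) to a 0–100 score; this sketch follows Brooke's published scheme [40], not any study-specific variant:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten item
    responses on a 1-5 Likert scale: odd-numbered items contribute
    (response - 1), even-numbered items (5 - response), and the sum
    is scaled by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expects ten responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: even index = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Example: a fairly positive usability rating.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # → 80.0
```

Scores above roughly 68 are conventionally read as above-average usability, which is how an SUS result for the simulator setup would be interpreted.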

A raw NASA TLX questionnaire [41,42] is used to assess the workload of the participants. As the study does not request the participants to drive, react or control the vehicle on their own, the NASA TLX reflects the workload of the participants while monitoring the vehicle behaviour and the environment during the automated drive. This effort is expected to be low, since the participants do not need to perform active driving tasks.
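In the raw ("unweighted") TLX variant, the overall workload is simply the mean of the six subscale ratings, skipping the pairwise-weighting step of the full NASA TLX. The 0–100 rating scale and the example values below are illustrative assumptions:

```python
# The six NASA TLX subscales; the raw TLX averages them without weights.
TLX_SUBSCALES = ("mental", "physical", "temporal",
                 "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Overall raw TLX workload: unweighted mean of the six subscale
    ratings (assumed here to be on a 0-100 scale)."""
    missing = [s for s in TLX_SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscale ratings: {missing}")
    values = [ratings[s] for s in TLX_SUBSCALES]
    return sum(values) / len(values)

# Hypothetical ratings for a participant who only monitored the automated drive.
ratings = {"mental": 30, "physical": 5, "temporal": 10,
           "performance": 15, "effort": 10, "frustration": 20}
print(raw_tlx(ratings))  # → 15.0
```

A low overall value like this would be consistent with the expectation stated above, since monitoring imposes far less demand than active driving.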
