Article

Research on Transparency Design Based on Shared Situation Awareness in Semi-Automatic Driving

Fang You, Huijun Deng, Preben Hansen and Jun Zhang

1 Car Interaction Design Lab, College of Arts and Media, Tongji University, Shanghai 201804, China
2 Shenzhen Research Institute, Sun Yat-Sen University, Shenzhen 518057, China
3 Department of Computer and Systems Sciences, Stockholm University, 114 19 Stockholm, Sweden
4 College of Design and Innovation, Tongji University, Shanghai 200092, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(14), 7177; https://doi.org/10.3390/app12147177
Submission received: 15 June 2022 / Revised: 14 July 2022 / Accepted: 14 July 2022 / Published: 16 July 2022

Abstract

While automated driving has attracted more and more attention, human drivers still play an important role. Drivers must be aware of their surroundings and of potential danger so that they can immediately take over control of the vehicle when the automation system cannot function; otherwise, road safety is at risk. The transparent design of the human–machine interface can accurately convey the information perceived by the machine and help drivers understand the dynamic environment. We designed a scenario in which a rear vehicle is about to overtake the ego vehicle and, by driving on the lane line, shows a dangerous tendency to collide with it. We analyzed the information shared by the intelligent system in this scenario in order to make the intelligent system “transparent”. According to the requirements of the task model of transparency and the three stages of situation awareness, we created three designs. We then analyzed the usability, situation awareness, and workload associated with the different designs. Results showed that transparency design based on the shared situation awareness stages has a positive impact on the usability of the intelligent system. Therefore, effective transparency-based design in automated driving can enhance the user’s understanding of the system and improve their experience.

1. Introduction

The taxonomy of automation is essential when conducting automation research. The degree of automation can be defined as the fraction of automated functions out of the overall functions of an installation or system; in an automated or semi-automatic system, occasional or frequent human intervention is still required [1]. Robotics has its own taxonomy and standards, as surveyed by Tamás Haidegger [2]. In the field of autonomous driving, the Society of Automotive Engineers (SAE) defines six levels of driving automation according to the degree of vehicle intelligence [3]: L0—no driving automation, L1—driver assistance, L2—partial driving automation, L3—conditional driving automation, L4—high driving automation, and L5—full driving automation. In L2 automated driving, the intelligent driving system provides assistance with steering, acceleration, and deceleration, while the other driving actions are performed by the human driver. In this process, drivers need to constantly supervise the information on the interface even when their feet are off the pedals and they are not steering; they must steer, brake, or accelerate as needed to maintain safety.
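As a minimal illustration of this taxonomy for readers who implement mode logic in driving software, the sketch below encodes the six SAE levels and the L2 supervision requirement described above; the enum names and the helper function are illustrative assumptions, not part of the SAE standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels."""
    L0_NO_AUTOMATION = 0           # driver performs the entire driving task
    L1_DRIVER_ASSISTANCE = 1       # support for steering OR speed, not both
    L2_PARTIAL_AUTOMATION = 2      # support for steering AND speed; driver supervises
    L3_CONDITIONAL_AUTOMATION = 3  # system drives; driver must take over on request
    L4_HIGH_AUTOMATION = 4         # no takeover needed within the design domain
    L5_FULL_AUTOMATION = 5         # no driver needed under any conditions

def driver_must_supervise(level: SAELevel) -> bool:
    # At L2 and below, the human driver must continuously monitor the
    # driving environment and be ready to steer, brake, or accelerate.
    return level <= SAELevel.L2_PARTIAL_AUTOMATION

assert driver_must_supervise(SAELevel.L2_PARTIAL_AUTOMATION)
assert not driver_must_supervise(SAELevel.L3_CONDITIONAL_AUTOMATION)
```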
At present, various smart car brands such as Tesla, NIO, and XPeng have launched L2 assisted driving functions, such as Tesla’s NOA (navigation on autopilot), NIO’s NOP (navigation on pilot), and XPeng’s NGP (navigate guided pilot). With these functions, the driver remains the main controller of the car, and the assisted driving function provides certain driving assistance in specific scenarios.
Drivers will then no longer be required to control the trajectory, maneuver the vehicle, or plan the route as they currently do [4]. So, when the driving assistance function is turned on, drivers can easily relax their vigilance and pay attention to things other than the driving environment.
When there is danger in the driving environment, a driver who does not receive the environmental information in time cannot take measures quickly, which frequently results in traffic accidents. According to the National Highway Traffic Safety Administration (NHTSA), 11 accidents related to Tesla Autopilot or traffic-aware cruise control (TACC) have been confirmed since 2018. Therefore, in L2 automated driving, it is very important for drivers to understand the environmental state in real time and maintain high situation awareness.
In L2 automated driving, the human driver cooperates with the “virtual driver” (intelligent driving system) to execute driving tasks together. Through the human–machine interface (HMI), the system can provide the human driver with environmental information. One way to provide drivers with information about system status is system transparency [5].
Lyons indicated that transparency between the intelligent system and the human is a mechanism for promoting effective interaction between them: the intelligent system conveys its understanding of the current task to the driver to promote shared situation awareness [6]. Moreover, the HMI must also provide a transparent representation of the different stages of the machine’s decision-making process, so that the driver can understand the behavioral intention of the intelligent system [7]. To improve the driver’s understanding of the intelligent system and its perception of the surroundings in assisted driving, this paper proposes a transparency design method in which the display changes continuously with the task stages of the scenario and compares it with a static transparency design method.
Based on the task model of transparency and situation awareness theory, we investigated which stage of situation awareness (SA) benefits most from transparency design and how adding transparency design for further SA stages affects usability, driver workload, and situation awareness. On this basis, we propose an effective transparency design method that accurately displays the intelligent system’s current situation awareness information.

2. Related Work

Shared situation awareness and transparency are active research topics. Related work covers shared situation awareness in human–machine interaction, the three levels of situation awareness, the three principles of the task model of transparency, the relationship between transparency level and work performance, the HMI requirements of system transparency, and handovers in autonomous driving.

2.1. Shared Situation Representation

Christoffersen et al. (2001) [8] proposed that in order to promote team cooperation, the collaboration interface needs to provide a shared situation representation. This human–machine shared situation representation includes information about the machine’s status, plan, objectives, and activities, as well as information about the current task status, environment, and situation. Therefore, in human–machine cooperation, the automation system informs the driver of environmental information, decision-making and subsequent behavior through the human–machine interface, which can promote team cooperation.
Walch et al. [9] argued that the human–machine interface needs to meet the requirements of mutual predictability, detectability, shared situation representation, and calibrated trust. The shared situation representation should express both parties’ comprehension of the current situation in order to establish common ground.

2.2. Situation Awareness

Situation awareness (SA) describes the level of an operator’s perception and understanding of the system environment. In 1988, Endsley [10] defined situation awareness as the ability to “perceive and comprehensively understand some elements of the environment in a range of time and space, and predict their subsequent state changes.” On this basis, a three-level theoretical model of situation awareness was constructed, dividing situation awareness into three stages: SA1: perception, SA2: comprehension, and SA3: projection. The first stage (SA1: perception) is to perceive one’s own state, the external environment, and information changes. The second stage (SA2: comprehension) is to identify, interpret, and evaluate the perceived information. The third stage (SA3: projection) is to comprehensively judge the scene’s attributes, states, and changes, and to predict upcoming actions or possible changes and situations.
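To make the three levels concrete for the designs discussed later, the sketch below represents one snapshot of situation awareness as a small data structure; the field names and example values are illustrative assumptions, not part of Endsley’s model.

```python
from dataclasses import dataclass
from enum import Enum

class SAStage(Enum):
    PERCEPTION = 1     # SA1: perceive elements of the environment
    COMPREHENSION = 2  # SA2: interpret and evaluate what was perceived
    PROJECTION = 3     # SA3: predict the near-future state

@dataclass
class SASnapshot:
    """One moment of situation awareness, split into Endsley's three levels."""
    perception: dict    # SA1: raw elements, e.g. distances and speeds
    comprehension: str  # SA2: interpreted meaning of those elements
    projection: str     # SA3: predicted change in the situation

snapshot = SASnapshot(
    perception={"rear_gap_m": 18.0, "rear_speed_kmh": 55},
    comprehension="rear vehicle is closing and riding the lane line",
    projection="possible conflict with the ego vehicle within a few seconds",
)
print(snapshot.comprehension)
```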

2.3. Lyons’s Task Model of Transparency

Lyons summarized the transparency elements that an intelligent system should present to people in four models: the intention model, the task model, the analysis model, and the environment model [6]. He described the information that the intelligent system needs to present to users before, during, and after the interaction.
The task model provides detailed information on the cognitive framework of the human–computer interaction process, including the system’s understanding of a specific task, its progress toward the task goals, its awareness of its own capabilities, and its awareness of errors.
On this basis, Debernard [7] proposed that the task model under automated driving conditions has three principles:
  • In the autonomous mode, the driver must be informed that the system will control the vehicle by following accepted driving practices and traffic laws (predictability of the behavior of the vehicle). Furthermore, the driver must be able to detect the actions (e.g., lane change) being performed by the vehicle and understand them.
  • In the autonomous mode, the driver must be able to perceive the intention of the system (the maneuver it intends to carry out), and why, how, and when this maneuver will be carried out. In the case of a lane change, the decision should be displayed as well as its cause (by pointing out, for example, a slow vehicle in front of the ego vehicle).
  • In the autonomous mode, the driver should know each maneuver that could possibly interrupt the current one. This information will help him/her avoid being surprised or frightened by what is happening.

2.4. Relationship between Transparency, Performance, and Trust

Mercado et al. [11] found that, when interacting with an intelligent planning agent, the operator’s performance and trust in the agent increase as the agent’s transparency level increases. Parasuraman [12] proposed that system transparency plays an important auxiliary role in explaining the behavior and decision making of the intelligent system and can improve the driver’s trust in the system and team performance. Trust may decrease if transparency is inadequate and the driver’s own observation of the intelligent system’s behavior is insufficient to correctly understand what the vehicle is doing.

2.5. HMI Requirements of System Transparency

Beggiato et al. [13] showed that users prefer to have information about the surrounding traffic for lane change maneuvers, the current target speed with an explanation for free driving and speed limit scenarios, and, for congestion-related scenarios, information on route options, delays, and the reasons for the congestion.
Diels and Thompson [14] indicated that users expect interfaces to provide two categories of information: situation awareness (what the vehicle sees) and behavioral awareness (what the vehicle is going to do).
Pokam et al. [15] defined the HMI according to four information processing functions: information acquisition, information analysis, decision making, and action execution. They found that the “information acquisition” and “action execution” functions appear to be essential, and that a level of transparency should consist of showing the human agent what the technical agent sees and what it is doing.

2.6. Handovers in Automated Driving

In L2 automated driving, the driver needs to constantly supervise the information of the interface even if their feet are off the pedals and they are not steering [3]. When the automated system faces a situation that is beyond its functional capabilities, the driver must take back control of the automated system immediately, and this process is called “handover” [16].
During the handover process, the automated system has to transmit environmental information to the driver through the human–machine interface (HMI), so that the driver can quickly acquire SA, such as spatial awareness, identity awareness, temporal awareness, goal awareness and system awareness [17].
The safety of vehicles with L2–3 automated driving must be taken seriously. Therefore, it is crucial to effectively design the human–machine interface so that the driver can immediately obtain the SA during the handover process and understand the system intent and environmental conditions.
Based on these studies on transparency and situation awareness, we selected a driving safety-related scenario. According to Lyons’s task model, we created transparency designs that convey the intelligent system’s comprehension of environmental information at each stage of the event, so that the driver can promptly understand the current environmental conditions and the system’s analysis. We then determined which stage of situation awareness (SA) benefits most from transparency design. Through experimental comparative analysis, we derived a more effective transparency design method, with the aim of enabling better human–machine cooperation and improving user experience and driving safety.

3. Study

3.1. Scenario

According to Lyons’s task model of transparency, we studied the mode of human–machine cooperation in a safety-related scenario. The ego vehicle was in the L2 automated driving state and traveled on a one-way three-lane road (as shown in Figure 1, the ego vehicle is white and the other vehicles are yellow). A vehicle (the yellow one) was approaching in the left lane with a dangerous intention, driving on the lane line. The driver of the ego vehicle needed to constantly supervise the driver support features and the environment information through the interface, and to respond accurately and promptly to any driving control switching request issued by the ego vehicle.
In L2 automated driving state, the driver is primarily responsible for the safety. Therefore, the driver needs to take over for risk avoidance in dangerous situations. In this scenario, when the intelligent system of the ego vehicle (the white one) detected that the yellow vehicle crossed the line, the system judged that there might be a risk of collision with the ego car. It then informed the driver of the environmental monitoring information in real time so that the driver could take over the vehicle if the yellow vehicle was about to collide with the ego vehicle.
The scenario of this task (Figure 1) was divided into two stages: stage I, in which the vehicle with dangerous intentions approaches while still out of sight, and stage II, in which it comes into view, overtakes, and moves away. In this scenario, the dangerous vehicle behind was driving on the lane line at high speed, which presented a potential risk of colliding with the white vehicle. Therefore, the intelligent system of the ego vehicle needed to pay attention to the vehicle with dangerous intentions and inform the ego vehicle driver of the situation.

3.2. Framework

Combined with the theoretical background of shared situation representation, the task model of transparency and situation awareness, we analyzed the information in this interactive scenario according to the three stages of input, processing, and output. Then we made a transparency design of the environmental information in the scenario according to the SA stage, so as to compare the effects of the transparency design. We proposed a task process transparency design model based on shared situation awareness.
We hypothesized that transparency design can enhance users’ perceptions of the environment, improve driving safety, and lessen workload. We further assumed that the dynamic transparency design for information can achieve greater usability and perception than the static transparency design by not only displaying the information state but also communicating the system’s prediction of the environment state.
As shown in Figure 2, during the input stage the intelligent system collects sensor data about the environment, such as vehicle distance, vehicle speed, vehicle state, and lane offset, and then analyzes the data; this corresponds to the first stage of SA, the SA1 stage (perception). Then, through the transparency design, the intelligent machine provides the driver with fundamental information about its present condition, goals, intentions, and plans, helping the driver understand the machine’s current activities and plans.
In the process phase, the intelligent system understands and analyzes the information, which is the second stage in SA, the SA2 stage (comprehension), and displays the understanding process through transparency to inform the driver of its understanding of the environment information and the task. In this phase, the information is status information (whether the approaching vehicle is dangerous, the degree of danger of the approaching vehicle), so that the driver can understand the actions performed by the vehicle and the intention of the system.
In the output phase, the intelligent system makes predictions based on the continuously collected information, that is, the third stage of SA, the SA3 stage (projection), and then displays the prediction results through transparency so that the driver is not surprised or frightened by unexpected situations (such as a vehicle automatically changing lanes). In this phase, the information is changing information (changes in the dangerous state of the rear vehicle, changes in the relative distance between the rear vehicle and the ego vehicle). Thus, the driver knows which environmental factors may interrupt the current driving behavior and when to take over.
In the process phase of this model, the intelligent system displays the environmental state information through discontinuous changing interface elements, such as whether the rear vehicle is dangerous and the degree of danger of the rear vehicle. In the output phase, the intelligent system displays the dynamic information of the environment through continuously changing interface elements, such as the change in the dangerous state of the rear vehicle and the change in the relative distance between the rear vehicle and the ego vehicle, which causes the driver to pay attention to the change in environmental information at all times to deal with the occurrence of accidents.
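As a concrete reading of this model (Figure 2), the sketch below walks through one update cycle: raw sensor readings are gathered (input, SA1), interpreted into a discrete danger level (process, SA2), and projected forward to anticipate how the situation will evolve (output, SA3). The field names, thresholds, and constant-speed projection are illustrative assumptions, not values used in the study.

```python
from dataclasses import dataclass

@dataclass
class RearVehicleReading:
    # Input stage (SA1): raw elements perceived by the sensors.
    gap_m: float             # distance behind the ego vehicle
    closing_speed_ms: float  # positive = rear vehicle is catching up (m/s)
    lane_offset_m: float     # lateral offset of the rear vehicle from its lane centre

def comprehend(reading: RearVehicleReading) -> str:
    # Process stage (SA2): interpret the raw data as a discrete danger level.
    # Thresholds are illustrative only.
    on_lane_line = abs(reading.lane_offset_m) > 1.2
    if on_lane_line and reading.closing_speed_ms > 0 and reading.gap_m < 15:
        return "danger"   # rendered in red in the prototypes
    if on_lane_line and reading.closing_speed_ms > 0:
        return "caution"  # rendered in yellow
    return "safe"         # rendered in gray

def project(reading: RearVehicleReading, horizon_s: float = 3.0) -> float:
    # Output stage (SA3): simple constant-speed projection of the remaining gap.
    return reading.gap_m - reading.closing_speed_ms * horizon_s

def update_cycle(reading: RearVehicleReading) -> dict:
    level = comprehend(reading)
    predicted_gap = project(reading)
    # The HMI shows both the current level and its predicted trend, so the
    # driver sees not only the state of the environment but where it is heading.
    return {"danger_level": level, "predicted_gap_m": round(predicted_gap, 1)}

print(update_cycle(RearVehicleReading(gap_m=12.0, closing_speed_ms=4.0, lane_offset_m=1.5)))
# -> {'danger_level': 'danger', 'predicted_gap_m': 0.0}
```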

3.3. Transparency Design

According to the model, we designed three interface schemes for the scenario to verify which stage of SA was better for transparency design, and to explore how increasing transparency design will affect the scheme’s usability, driver’s workload, and situation awareness.
As shown in Figure 3, in the original state interface, the black vehicle in the middle lane is the ego vehicle, and the gray vehicle in the left lane is the vehicle with dangerous intentions. The gray vehicle gradually approached the ego vehicle, overtook the vehicle, and then drove out of sight.
Through this original interface, the driver could know that a vehicle passed by the ego vehicle in the left lane, but they could not know the safety information factors and potential risks identified by the intelligent system.
As shown in Figure 4, in Design1 the content of the intelligent system’s SA2 stage (comprehension) is displayed through the transparency design; that is, the danger of the vehicle coming from behind is expressed with a red color. In Design2, in addition to showing the SA2 danger information through color, the color changes dynamically to express the content of SA3; that is, the vehicle informs the driver of its judgment of the environmental safety state as it evolves. Compared with Design2, Design3 adds a transparency design for the information of the input stage (Figure 4), which informs the driver of the surrounding environment, vehicle information, and other basic information, so that the driver can understand the driving environment.

4. Experiment

We experimented with these three design schemes to assess the influence of transparency design at different SA stages, as a preliminary verification of the task process transparency design model based on shared situation awareness. We recruited participants to drive in the driving simulator. After the participants completed the specified tasks as required, we evaluated their workload, their situation awareness, and the usability of the schemes through questionnaires.

4.1. Participants

We recruited 16 participants, 8 men and 8 women, with an average age of 32.1 years (SD = 4.61) and an average driving experience of 7.3 years (SD = 3.11). All participants were skilled in driving, had not participated in any automated driving experiment, had not used this automated system before, had no cognitive or visual impairment, and were in good mental condition during the test. The detailed information is shown in Table 1.

4.2. Prototype

The experimental material used Tesla’s interface as the design prototype, as shown in Figure 5.
In Design1, the car on the interface was displayed in a constant red to signal the danger of the approaching vehicle once the rear vehicle with dangerous intentions entered the field of view of the ego vehicle’s rearview mirror.
In Design2, the color of the car on the interface changed with the danger level of the approaching car once the rear vehicle with dangerous intentions entered the field of view of the rearview mirror; the indicated danger level dropped from red to yellow and then to gray as the rear vehicle sped away. Design3 added an advance indication of the approach of the rear vehicle on top of Design2: when the intelligent system detected an approaching vehicle in the left lane, it alerted the driver on the interface with yellow gradient color blocks.
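A minimal sketch of how the three prototypes could map the rear vehicle’s state to a display color is given below; the state names and the mapping are assumptions inferred from the descriptions above, not the study’s implementation.

```python
# How each design might translate the rear vehicle's state into an interface color.
# Design1: a static red indication while the vehicle is in view.
# Design2: the color follows the danger level (red -> yellow -> gray as it recedes).
# Design3: like Design2, plus a yellow gradient block before the vehicle is visible.

def display_color(design: int, state: str) -> str:
    in_view_states = ("approaching", "overtaking", "receding")
    if design == 1:
        return "red" if state in in_view_states else "none"
    if design in (2, 3):
        if design == 3 and state == "out_of_sight":
            return "yellow_gradient"  # advance warning unique to Design3
        mapping = {"approaching": "red", "overtaking": "yellow", "receding": "gray"}
        return mapping.get(state, "none")
    return "none"

for state in ("out_of_sight", "approaching", "overtaking", "receding"):
    print(state, [display_color(d, state) for d in (1, 2, 3)])
```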

4.3. Procedure

Before the experiment, the participants filled in a basic personal information form and the informed consent form. The testers described the purpose and task of the experiment to the participants. After the participants understood the task and became familiar with the simulator, the testers fitted them with the eye tracker and calibrated it. The experiment then officially began.
In the simulator, the vehicle was in an automated driving state, maintaining a constant speed of around 30 km/h while moving steadily in the middle lane. This ensured that the vehicle was in a stable and safe driving state. The task started when the prompt sound (“di”) started. At this time, the participant needed to observe the environmental information prompting method on the screen about the approaching vehicle with dangerous intentions. When the dangerous intent vehicle disappeared from view, the testers announced the end of the task and instructed the participant to complete the questionnaires and then interviewed the participant about the three designs.

4.4. Apparatus and Simulated Environment

The study was conducted in a driving simulator (Figure 6) created by our laboratory [18]. The software system of this experimental platform was built on the Unity engine and could simulate a variety of driving scenarios, such as inner-city, highway, and suburban roads. Three high-definition screens were placed in front of the driver and displayed simulated driving scenes covering a 120° field of view. Participants drove in a vehicle mockup that implemented a fully equipped vehicle interface. The simulated environment of this experiment was a one-way three-lane urban road during the day. The participants drove in the middle lane, and the experimental vehicle passed in the left lane.

4.5. Materials

After each trial, all participants filled in three questionnaires giving subjective ratings of the design they had just experienced. We selected the following questionnaires to measure the designs along the dimensions of usability, workload, and situation awareness.

4.5.1. After-Scenario Questionnaire

The After-Scenario Questionnaire (ASQ) was published by Lewis (1991) [19]. It contains three items that measure user satisfaction in three aspects: task difficulty, completion efficiency, and help information. ASQ items are scored from 1 (strongly agree) to 7 (strongly disagree), and the ASQ score is the average of the three items.
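A minimal sketch of the ASQ scoring described above; the argument names are paraphrases of the three aspects, and the example ratings are hypothetical.

```python
def asq_score(task_ease: float, completion_efficiency: float, help_information: float) -> float:
    """ASQ overall score: the mean of the three item ratings, each on a 1-7 scale."""
    items = (task_ease, completion_efficiency, help_information)
    if not all(1 <= x <= 7 for x in items):
        raise ValueError("ASQ items must be rated on a 1-7 scale")
    return sum(items) / len(items)

print(round(asq_score(6, 6, 7), 2))  # -> 6.33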

4.5.2. NASA-TLX

The driver workload after each scenario was measured with the NASA-TLX. This questionnaire consists of six dimensions (mental demand, physical demand, temporal demand, performance, effort, and frustration linked to completing a specific task). Participants were required to rate their perceived workload on these six Likert scales ranging from 0 to 100 [20].
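The NASA-TLX can be aggregated with or without pairwise subscale weighting; the paper does not state which variant was used, so the sketch below shows the unweighted (“raw TLX”) mean as an assumption, with hypothetical ratings.

```python
def raw_tlx(mental: float, physical: float, temporal: float,
            performance: float, effort: float, frustration: float) -> float:
    """Raw (unweighted) NASA-TLX: the mean of the six subscale ratings (0-100)."""
    subscales = (mental, physical, temporal, performance, effort, frustration)
    return sum(subscales) / len(subscales)

print(raw_tlx(55, 20, 40, 35, 50, 25))  # -> 37.5
```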

4.5.3. SAGAT

The Situation Awareness Global Assessment Technique (SAGAT) is based on a comprehensive assessment of the operator’s situation awareness needs and is used to assess all elements of situation awareness. Participants are required to answer questions about their understanding and evaluation of the environment after a single test. One point is given for each correct answer [21].
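A minimal sketch of SAGAT scoring as described above, with one point per correct probe answer normalized by the number of probes; the probe questions shown are hypothetical examples spanning the three SA levels, not the study’s actual probes.

```python
def sagat_score(answers: dict, answer_key: dict) -> float:
    """SAGAT score: the fraction of probe questions answered correctly."""
    correct = sum(1 for question, truth in answer_key.items() if answers.get(question) == truth)
    return correct / len(answer_key)

# Hypothetical probes covering perception (SA1), comprehension (SA2), and projection (SA3).
key = {"rear_vehicle_present": True,
       "rear_vehicle_dangerous": True,
       "rear_vehicle_will_overtake": True}
print(round(sagat_score({"rear_vehicle_present": True,
                         "rear_vehicle_dangerous": True,
                         "rear_vehicle_will_overtake": False}, key), 2))  # -> 0.67
```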

5. Results

Table 2 contains the experimental data:
  1. Transparency
Comparing Design3 with Design2, Design3 had higher usability. From the perspective of transparency, Design3 provided more SA1 (perception) transparency than Design2, which allowed the driver to know the status of the car coming from behind.
Comparing Design2 with Design1, Design2 had higher usability. From the perspective of transparency, Design2 provided more SA3 (projection) transparency than Design1: through the dynamic change of color, the driver knew the dynamic change in the risk level of the incoming vehicle.
  2. Situation awareness
As shown in Figure 7, Design1 enhanced participants’ information perception, but their comprehension and projection were weaker. Design1 performed relatively well in overall situation awareness and scored 0.74, while the other two designs scored 0.71. Design3 had the highest projection score since it performed better at forecasting the rear car. Comparing the comprehension scores of Design1 and Design2 shows that the color change effectively assisted drivers in comprehending the situation.
  3. Usability
Overall, the usability of Design3 was better (Figure 8), which means that the driver was more effective in completing the task under this scheme. At the same time, Design2 scored higher in efficiency and satisfaction, indicating that users need direct and accurate information supply.
  4. Workload
The results of the NASA-TLX scale (Figure 9) showed that although Design1 could improve participants’ information perception, it also increased the attention workload.
According to the correlation analysis of the scale data for the three designs in Table 3, there was a significant negative correlation between the NASA-TLX and ASQ scores (r = −0.364 *, p = 0.011). The NASA-TLX score had the greatest impact on the efficiency item of the ASQ (r = −0.487 **, p = 0.000).
According to the scale data (Figure 8), adding transparency design stages greatly improved the interface’s usability, especially in terms of helpfulness. The three designs showed a significant difference in helpfulness (F = 4.442, p = 0.017 *). The helpfulness score of Design3 was higher than that of Design1, which means that adding transparency design according to the situation awareness stages can significantly improve the interface’s usability and user satisfaction. Moreover, the correlation analysis (Table 4) showed that helpfulness had a strong correlation with SA3 (projection) (r = 0.488 **, p = 0.000). This implies that adding transparency design for the SA3 stage (projection) can greatly benefit users by improving their satisfaction with the interface and the interface’s usability.
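For readers who wish to reproduce this kind of analysis, the sketch below computes a Pearson correlation and a one-way ANOVA with SciPy; the arrays are random placeholders standing in for the per-participant ratings, not the study’s data.

```python
import numpy as np
from scipy import stats

# Placeholder ratings: 48 observations (16 participants x 3 designs) in the real study.
rng = np.random.default_rng(0)
nasa_tlx = rng.uniform(1, 7, 48)
asq = rng.uniform(1, 7, 48)

# Pearson correlation between workload (NASA-TLX) and usability (ASQ), as in Table 3.
r, p = stats.pearsonr(nasa_tlx, asq)
print(f"NASA-TLX vs ASQ: r = {r:.3f}, p = {p:.3f}")

# One-way ANOVA comparing the helpfulness item across the three designs, as in Figure 8.
help_d1, help_d2, help_d3 = rng.uniform(1, 7, (3, 16))
f_stat, p_anova = stats.f_oneway(help_d1, help_d2, help_d3)
print(f"Helpfulness across designs: F = {f_stat:.3f}, p = {p_anova:.3f}")
```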
According to the interview, 12 of the 16 participants preferred Design3 (75%). In the comparison between Design1, Design2, and Design3, the participants indicated that the change of color conveyed a change in safety level (yellow indicated vigilance, red indicated danger) and attracted their attention. If the color did not change, they would have to pay long-term attention, which could cause stress, exhaustion, or neglect of the change. Comparing Design2 and Design3, the participants said that the color block reminded them in advance, attracted their attention, and provided additional environmental information. In addition to this, based on user feedback, increasing the transparency design resulted in higher workload, possibly owing to the increased amount of information.

6. Discussion

The main purpose of this paper was to study the effect of interface transparency design according to the SA stage and determine which SA stage had a better transparency design effect.
We studied the impacts of transparency design of the SA stages on usability, driver workload, and situation awareness in the scenario of dangerous vehicles approaching. According to the three stages of situation awareness and the task model of transparency, we proposed three transparency design schemes. We used the dynamic change of color to show the current environmental information, that is, to increase the driver’s SA in the comprehension and projection stages. The results showed that increasing the transparency design of SA significantly improved the interface’s usability and improved user satisfaction. Therefore, Design3 outperformed Design2 and Design2 outperformed Design1. Moreover, the transparency design of the SA3 stage (projection) significantly improved the helpfulness of the prompt information on the interface, thus improving the interface’s usability.
Transparency between intelligent systems and humans is a mechanism to promote effective interaction between humans and intelligent systems. Against the background of automated driving in the future, the transparency design of automobile HMI can effectively improve drivers’ understanding of environmental information, thus strengthening cooperative driving. The system obtains the environmental information through the sensor, then analyzes and makes predictions, and prompts the driver through the HMI, which can broaden the driver’s information perception and foresee potentially dangerous situations.
Although increasing the SA stage of transparency design can significantly improve the usability, the workload increases accordingly, especially in the attention aspect. Combined with the analysis of the interviews, this may be because the transparency design provides more information and requires the driver’s attention to the environment, thus increasing the workload. In more complicated situations, this may be an advantage. For instance, proper attention can prevent drivers from over-trusting the automated driving system and ignoring environmental prompt information, which may lead to dangerous driving.
There was little difference in the total SAGAT scores among the three design schemes. Design3 had a lower score in the comprehension dimension and a higher score in the projection dimension. Combined with the interviews, this may be because Design3 added a prompt during the stage when the rear vehicle was still outside the driver’s field of view, which the first two designs did not have. Although this increased the comprehension difficulty for the driver, it gave the driver more advance warning of danger.

7. Conclusions and Future Work

Based on a typical safety-related scenario and the transparency requirements of the task model, we drew on the three stages of situation awareness and used three design schemes as examples to study the transparency design of the human–machine team cooperation interface in semi-automated driving.
Overall, the research showed that adding transparency design according to the SA stages can significantly improve the usability of the interface and user satisfaction. The transparency design of the SA3 stage (projection) can significantly improve the helpfulness of the prompt information on the interface, thus improving the interface’s usability. The transparency design of shared situation awareness can accurately convey the information perceived by the intelligent system and help the driver understand the dynamic environment, which has a very important impact on human–machine cooperation. Increasing transparency also increased the workload, which may be because of the larger amount of information presented.
This experiment was carried out in a driving simulator, which was safer than a real-vehicle experiment; compared with an animation demonstration, it was more immersive and more closely resembled a real driving scene, making the results more convincing. In the future, we plan to recruit more participants and obtain more data, thus showing the statistical differences between these design schemes more accurately. We also plan to add a car frame to the simulator for a better driving experience and to upgrade the front screen to a curved screen for better driving vision, so as to better simulate real driving scenarios. Moreover, future research will focus on adding environmental information in more complex situations and studying the impact of task process transparency design on driver satisfaction, work efficiency, and trust.

Author Contributions

Conceptualization, F.Y.; Formal analysis, H.D.; Methodology, H.D. and J.Z.; Supervision, F.Y. and J.Z.; Writing—original draft, H.D.; Writing—review & editing, F.Y., P.H. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Social Science Fund of China (No. 19FYSB040); CES-Kingfar Excellent Young Scholar Joint Research Funding (No. 202002JG26); China Scholarship Council Foundation (2020-1509); Shenzhen Collaborative Innovation Project: International Science and Technology Cooperation (No. GJHZ20190823164803756); Shenzhen Basic Research Program for Shenzhen Virtual University Park (2021Szvup175).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nof, S.Y. Automation: What It Means to Us Around the World. In Springer Handbook of Automation; Nof, S., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 13–52.
  2. Haidegger, T. Taxonomy and Standards in Robotics. In Encyclopedia of Robotics; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–10.
  3. J3016_202104; Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International: Warrendale, PA, USA, 2021.
  4. Michon, J.A. A critical view of driver behavior models: What do we know, what should we do? In Human Behavior and Traffic Safety; Evans, L., Schwing, R.C., Eds.; Springer Science & Business Media: Boston, MA, USA, 1985; pp. 485–520.
  5. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80.
  6. Lyons, J.B. Being transparent about transparency: A model for human–robot interaction. In Proceedings of the AAAI Spring Symposium: Trust and Autonomous Systems, Palo Alto, CA, USA, 25–27 March 2013; pp. 189–204.
  7. Debernard, S.; Chauvin, C.; Pokam, R.; Langlois, S. Designing Human–Machine Interface for Autonomous Vehicles. IFAC-PapersOnLine 2016, 49, 609–614.
  8. Christoffersen, K.; Woods, D.D.; Salas, E. How to make automated systems team players. In Advances in Human Performance and Cognitive Engineering Research; Emerald Group Publishing: Bradford, UK, 2001; Volume 2, pp. 1–12.
  9. Walch, M.; Muehl, K.; Kraus, J.; Stoll, T.; Baumann, M.; Weber, M. From Car-Driver-Handovers to Cooperative Interfaces: Visions for Driver–Vehicle Interaction in Automated Driving. In Automotive User Interfaces; Meixner, G., Müller, C., Eds.; Human–Computer Interaction Series; Springer: Cham, Switzerland, 2017; pp. 273–294.
  10. Endsley, M.R. Design and evaluation for situation awareness enhancement. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Anaheim, CA, USA, 24–28 October 1988; Volume 32, pp. 97–101.
  11. Mercado, J.E.; Rupp, M.A.; Chen, J.Y.; Barnes, M.J.; Barber, D.J.; Procci, K. Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management. Hum. Factors 2016, 58, 401–415.
  12. Parasuraman, R.; Sheridan, T.B.; Wickens, C.D. A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2000, 30, 286–297.
  13. Beggiato, M.; Schleinitz, K.; Krems, J. What would drivers like to know during automated driving? Information needs at different levels of automation. In Proceedings of the 7th Conference on Driver Assistance, Munich, Germany, 25–26 November 2015.
  14. Diels, C.; Thompson, S. Information expectations in highly and fully automated vehicles. In Advances in Human Aspects of Transportation, Proceedings of the International Conference on Applied Human Factors and Ergonomics, Los Angeles, CA, USA, 17–21 July 2017; Springer: Berlin/Heidelberg, Germany, 2018; pp. 742–748.
  15. Pokam, R.; Debernard, S.; Chauvin, C.; Langlois, S. Principles of transparency for autonomous vehicles: First results of an experiment with an augmented reality human–machine interface. Cogn. Technol. Work 2019, 21, 643–656.
  16. Banks, V.A.; Plant, K.L.; Stanton, N.A. Driver error or designer error: Using the Perceptual Cycle Model to explore the circumstances surrounding the fatal Tesla crash on 7th May 2016. Saf. Sci. 2018, 108, 278–285.
  17. Drexler, D.A.; Takács, Á.; Nagy, T.D.; Haidegger, T. Handover Process of Autonomous Vehicles–Technology and Application Challenges. Acta Polytech. Hung. 2019, 16, 235–255.
  18. Liu, Y.; Wang, J.; Wang, W.; Zhang, X. Experiment Research on HMI Usability Test Environment Based on Driving Simulator. Trans. Beijing Inst. Technol. 2020, 40, 949–955.
  19. Lewis, J.R. Psychometric evaluation of an after-scenario questionnaire for computer usability studies: The ASQ. ACM SIGCHI Bull. 1991, 23, 78–81.
  20. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Adv. Psychol. 1988, 52, 139–183.
  21. Endsley, M.R. Direct Measurement of Situation Awareness: Validity and Use of SAGAT. In Situation Awareness Analysis and Measurement; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2000.
Figure 1. Driving scenario where a dangerous intent vehicle is approaching and overtaking the ego vehicle.
Figure 2. Task process transparency design model based on shared situation awareness.
Figure 3. The original state of the interface and the three design schemes. The original interface can only show the change in distance between the rear vehicle and the ego vehicle; Design1 shows the change in distance and a red danger indication; Design2 shows the change in distance and the change of color indicating the danger level; Design3 adds a forewarning of the rear vehicle to Design2.
Figure 4. The analysis process of the three designs. Design1 enhances the original interface with the SA2 stage’s transparency design, Design2 enhances Design1 with the SA3 stage’s transparency design, and Design3 enhances Design2 with the SA1 stage’s transparency design.
Figure 5. Experimental prototype of the three designs. (a) Design1; (b) Design2; (c) Design3.
Figure 6. Driving simulator used in the study.
Figure 7. Scores from the SAGAT questionnaire. The first column of each design is the overall SAGAT score, and the next three columns are its three dimension scores.
Figure 8. Scores from the ASQ questionnaire. The first column of each design is the overall ASQ score, and the next three columns are its three item scores.
Figure 9. Scores from the NASA-TLX questionnaire. The first column of each design is the overall NASA-TLX score, and the next six columns are its six dimension scores.
Table 1. Information of participants.

Category                     Option      Proportion of Total Population (%)
Gender                       Male        50
                             Female      50
Age                          25~30       37.5
                             30~35       31.3
                             35~40       31.3
Driving experience (years)   <5          25
                             5~10        56.3
                             >10         18.8
Driving frequency            1~3/month   12.5
                             1~3/week    6.3
                             4~6/week    12.5
                             Every day   68.8
Table 2. Questionnaire results for the three designs (n = 16).

              NASA-TLX   ASQ    SAGAT
Design1   M   3.32       5.73   0.74
          SD  1.15       0.54   0.14
Design2   M   3.35       6.02   0.71
          SD  1.29       0.56   0.13
Design3   M   3.54       6.15   0.71
          SD  1.35       0.63   0.09
Table 3. Results of Pearson correlation analysis.

               NASA-TLX    ASQ       SAGAT
NASA-TLX   r   1
           p
ASQ        r   −0.364 *    1
           p   0.011
SAGAT      r   −0.020      −0.056    1
           p   0.890       0.707
* p < 0.05.
Table 4. Results of the correlation analysis between projection (SA3) and helpfulness.

               Helpfulness
Projection  r  0.488 **
            p  0.000
** p < 0.01.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
