Article

External Human–Machine Interfaces of Autonomous Vehicles: Insights from Observations on the Behavior of Game Players Driving Conventional Cars in Mixed Traffic

Dokshin Lim, Yongjun Kim, YeongHwan Shin and Min Seo Yu
1 Department of Mechanical and System Design Engineering, Hongik University, Seoul 04066, Republic of Korea
2 Department of Industrial Design, Hongik University, Seoul 04066, Republic of Korea
3 Department of Visual Design, Hongik University, Seoul 04066, Republic of Korea
* Author to whom correspondence should be addressed.
Vehicles 2024, 6(3), 1284-1299; https://doi.org/10.3390/vehicles6030061
Submission received: 17 June 2024 / Revised: 18 July 2024 / Accepted: 22 July 2024 / Published: 28 July 2024
(This article belongs to the Special Issue Design and Control of Autonomous Driving Systems)

Abstract

External human–machine interfaces (eHMIs) may be useful for communicating the intention of an autonomous vehicle (AV) to road users, but it is questionable whether an eHMI is effective in guiding the actual behavior of road users as intended. To address this question, we developed a Unity game in which the player drove a conventional car while AVs equipped with eHMIs operated on the same roads. We examined the effects of different eHMI designs (textual, graphical, and anthropomorphic) on the driving behavior of a player in the gaming environment and compared them with a no-eHMI condition. Participants (N = 18) had to follow a specified route using the typical keys for PC games. They encountered AVs with an eHMI placed on the rear window. Five scenarios were simulated along the specified routes: school safety zone; traffic island; yellow traffic light; waiting for passengers; and an approaching e-scooter. Each scenario was repeated three times (a total of 15 sessions per participant), and the eHMI type was randomly assigned from the four options. Behavior was assessed by counting violations in combination with keystrokes, fixations, and saccades. Subjective evaluations of the helpfulness of the eHMI and feelings about future AVs revealed the participants’ attitudes. A total of 45 violations occurred. The most frequent was exceeding the speed limit in the school safety zones (37.8%), which occurred, in decreasing order, with the textual, anthropomorphic, and graphical eHMIs and with no eHMI; the next was collisions (33.3%), which occurred with the anthropomorphic eHMI, no eHMI, or the graphical eHMI. The rest were ignoring the red light (13.3%), crossing the stop line (13.3%), and crossing the central line (2.2%). Overall, the most violations occurred with the anthropomorphic eHMI, followed by no eHMI, the graphical eHMI, and the textual eHMI. The helpfulness of the eHMI in the five scenarios scored high (5.611 to 6.389) on a seven-point Likert scale, with no significant difference between scenarios. Participants felt more positive about the future of AVs after their gaming experience (p = 0.049). We conclude that gazing at unfamiliar and ambiguous information on eHMIs may cause a loss of driver attention and control. We propose an adaptive approach in which the timing and distance of eHMI activation depend on the behavior of other road users.

1. Introduction

Autonomous vehicles (AVs) are vehicles capable of sensing their environment and moving safely without human input. They are also known as driverless vehicles, robotic vehicles, or vehicles that exhibit SAE Level 5 automation [1,2,3]. To achieve traffic efficiency, AVs must at least communicate among themselves, with the infrastructure, with the cloud, with pedestrians, and with mobile phones and other personal devices, becoming Connected Autonomous Vehicles. All these information exchanges are collectively known as V2X communications [4]. With the advent of AVs with higher levels of driving automation that do not require the presence of a driver, the importance of external human–machine interfaces (eHMIs) has grown significantly [5]. An eHMI is defined as any interface perceivable from the exterior of a vehicle that communicates and/or interacts with a human road user [6]. AV–road user interaction includes not only conventional methods, such as honks or turn indicators, but also novel concepts, such as projecting images onto the ground. Early examples mimicked the eye contact or gestures of human drivers [7,8,9] to communicate the intent of AVs. The well-established taxonomies in this field [10,11,12] have identified a broad range of novel eHMI concepts.

1.1. Main Effects of eHMIs of AVs

The main effects of eHMIs are twofold. The first is the increased efficiency and effectiveness of street-crossing decisions. The second comprises psychological effects, such as higher perceived safety and subjective satisfaction (attractiveness, novelty, preference, nice-to-have, and willingness to purchase).
Pedestrians can make the correct street-crossing decision more quickly if the approaching car has a novel interface such as “eyes” [13], a “smiling mouth” [9], a “robot driver”, or the “green man and yellow hand (GMYH)” symbols of traffic lights [14,15]. Although there are concerns about language barriers [3,15], textual eHMIs are much less ambiguous than conventional front-brake lights, abstract light animations, and smiley expressions [16,17,18]. Since the introduction of anthropomorphic eHMIs, emotional facial expressions such as smiling and angry faces have been found to be more efficient than conversational facial expressions such as nodding or head shaking [19].
Additionally, pedestrians feel safer when crossing a street if the approaching car has eyes and if the eyes are looking at them [13]. Pedestrians can even associate friendliness/kindness with being offered a safe crossing and thus proceed to cross first [20].
The eHMI also provides an advantage in terms of the public acceptance of AVs. More than half the participants (56.3%) expressed that they would pay extra for an eHMI on an AV [14]. Even when pedestrians rely on legacy behaviors rather than leveraging information from an eHMI, many participants believe that AVs require additional displays [21].

1.2. AV–Road User Interaction on Public Roads

Applications of eHMIs tend to extend to a broad range of scenarios other than street crossings. Some studies have demonstrated that eHMIs can be used in more critical scenarios for the safety of vulnerable road users, such as by providing directional information to a pedestrian who is almost hit by a car [22] or by providing urgent warnings to drivers or jaywalkers with limited visibility [23]. Scenarios other than pedestrian crossings should attract more attention for the safe operation of AVs under real traffic conditions [24].
Ever since AVs began operating on public roads, incidents such as public mockery [25,26], attacks [27], and traffic flow obstructions [28,29,30] have been reported. More recently, there have been social protests to stop self-driving cars on public roads [31,32].
Our research aimed to extend the use of eHMIs to emotional and social interaction, because such emotional and social protests against AVs are part of the root cause of the conflict between road users and AVs.

1.3. Causes of Traffic Accidents with AVs

According to a study by the National Highway Traffic Safety Administration (NHTSA), an estimated 94% of motor vehicle accidents are caused by human driver error [33]. It is also notable that almost one-third (28%) of traffic fatalities are speed-related. In terms of demographic characteristics, the youngest age group was recorded as being the most involved in fatal crashes in relation to speeding [34]. Thirty-five percent of male drivers in the 15- to 20-year-old age group and 18 percent of female drivers in both 15- to 20 and 21- to 24-year-old age groups involved in fatal crashes in 2020 were speeding, the highest among the age groups [35]. This pattern might cause a conflict when drivers from these cohorts encounter AVs that operate with speed limitations on public roads.
Accidents involving AVs, which reduce driver intervention, are mostly caused by human error. Waymo released safety reports after its vehicles had driven more than one million miles by 2023 [36,37,38]. According to these reports, in more than half of all contact accidents, human drivers hit stationary Waymo vehicles. In addition, every vehicle-to-vehicle event involved one or more road rule violations and/or instances of dangerous behavior by human drivers in other vehicles. The most severe accident occurred when a driver looked at their cell phone while approaching a red light. The most common patterns were other agents backing into the Waymo vehicle and front-to-rear contact. The Waymo vehicle was stationary or moving slowly at the moment of impact.
The following are the top five accident types.
  • Stopping in the aisle of a parking lot to drop off or pick up passengers.
  • Stopping on a road, yielding to a vehicle that is backing up or to road users.
  • Stopping or moving forward when preparing to turn right on a red traffic light.
  • Stopping or yielding to traffic when preparing to merge onto another roadway.
  • Making a gradual stop to yield to traffic that has stopped at a red traffic light.
Accidents involving AVs are more frequent than those involving conventional vehicles. An analysis of traffic accidents involving AVs in the US state of California in the period from 2015 to 2017 [39] found that the “rear-end” type of collision was more frequent with AVs than with conventional vehicles. The most common errors made by the drivers of conventional vehicles in accidents with AVs were “unsafe speed”, “following too closely”, and “traffic signal and sign violations”.
Considering these updates from the perspective of the actual operation of AVs, it would be much safer if eHMIs were used to attract the voluntary participation and attention of road users.

1.4. Aim of the Study

Previous studies have demonstrated the main effects of the eHMI in AVs: efficiency and psychological assurance in the decisions made by pedestrians. These findings were derived by investigating the decisions made by pedestrians at road crossings. However, actual traffic conflicts with AVs on public roads are caused by the human drivers of conventional cars approaching from the rear. Considering that an estimated 94% of motor vehicle accidents are caused by human driver errors such as improper lookout, excessive speed, and inattention, the front-to-rear eHMI has to play a more significant role in preventing such “human errors”.
Our research question was whether eHMIs are effective in guiding the behavior of road users. To address this question, we developed a Unity game in which the players drove a conventional car and AVs with eHMIs operated on the same road. The eHMIs communicated three types of visual information (textual, graphical, and anthropomorphic), and the behavior of the players was compared with their behavior towards an AV with no eHMI. The AVs, with or without an eHMI, met road users in sociocultural contexts simulated in five scenarios, with the aim of eliciting positive behavior and thereby improving traffic flow.
Our hypothesis is that the eHMI is effective in improving traffic flow and that anthropomorphic eHMIs will demonstrate a greater emotional influence on road users’ goodwill than textual or graphical eHMIs in the above-mentioned socio-cultural contexts.

2. Materials and Methods

This section explains how we developed a Unity game in which participants drove a conventional car and AVs operated on the same road with or without eHMIs, and how we set up the experiments.

2.1. Development of the Game

2.1.1. AV with eHMI

Three types of visual interfaces were designed: (1) English text messages, (2) graphical animation, and (3) anthropomorphic (facial expressions and gestures using emojis). An AV with an eHMI on the rear windshield, as shown in Figure 1, was placed in our game and developed using Unity.
Textual eHMIs are less ambiguous than graphical or anthropomorphic eHMIs, but they convey less emotion. Graphical eHMIs that use universal symbols from traffic conventions may be less precise than text, but they are quick to understand. In terms of accessibility, textual eHMIs are limited to those who read the language in which they are presented. Anthropomorphic eHMIs that use emojis from familiar messaging services may engage people in emotion-based communication, although such communication is ambiguous and may be understood differently depending on the culture.

2.1.2. Scenarios

By reflecting on the lessons learned from the actual operation of AVs on public roads [36,37,38,39], use cases from a previous study [40] were elaborated within more specific scenarios with regard to other human drivers. In all cases, the eHMIs act on human violations or the potential temptation for negative behavior. It would be best if the eHMI could turn this into positive behavior or emotion toward the AV. In Table 1, the causes of each conflict and the expected results are described.

2.1.3. eHMI Design

Both the English text messages and the graphical animations explain the intent of the AV and suggest how road users can easily react. The principal terms and pictorials were chosen from existing traffic symbols (see Table 2). Textual information comprises two alternating messages. Graphical information is composed of two scenes in a GIF format. Anthropomorphic features are known to increase trust and influence voluntary behavior [41,42], and smiles can promote careful driving [43]. In our study, additional emotional messages were analyzed according to the scenarios. Emojis were chosen from Microsoft Word [44,45].
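As a minimal illustration of the textual design described above, the Python sketch below stores the alternating message pairs from Table 2 and cycles through them. The data structure and helper function are assumptions for illustration only, not the authors' Unity implementation.

```python
# Illustrative sketch of the textual eHMI content (messages taken from Table 2);
# the cycling helper is an assumption for illustration, not the study's software.
import itertools

TEXTUAL_EHMI = {
    1: ("School Zone", "Slow Down"),
    2: ("Waiting for Green", "Wait to Go"),
    3: ("Yield to Pedestrian", "Wait to Go"),
    4: ("Waiting for Passenger", "Sorry to Obstruct"),
    5: ("e-Scooter Approaching", "Watch Out"),
}

def alternating_messages(scenario: int, n: int = 4) -> list:
    """Return the first n messages shown by the textual eHMI of a scenario,
    alternating between its two messages."""
    return list(itertools.islice(itertools.cycle(TEXTUAL_EHMI[scenario]), n))

if __name__ == "__main__":
    print(alternating_messages(1))
    # ['School Zone', 'Slow Down', 'School Zone', 'Slow Down']
```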

2.2. Experimentation

2.2.1. Apparatus

The Unity game was played on a PC in a test room. A 27-inch monitor (the maximum screen size supported by the Tobii Spark Pro eye tracker [46]) was used. The test room was kept dimly lit to minimize unexpected interference with eye tracking; therefore, a backlit keyboard was required. The distance between the monitor and the participant was maintained at approximately 65 cm [47]. The experimental settings are shown in Figure 2. The participant’s screen and the participant themselves (using a webcam) were recorded for further analysis.
The test game was developed using Unity 2022.3.13f1 LTS, and the AVs were operated using the Mobile Traffic System in Unity. Figure 3 illustrates the key scenes in the five scenarios. Participants had to follow the route indicated in the pop-up at the beginning of each session and control their car using the typical keys for PC games (A, W, S, D or arrows for left, forward, backward, and right, and space for stopping or jumping).

2.2.2. Study Design and Procedure

The tests were within-subject. Five scenarios were simulated; the participants repeated each of the scenarios three times for a total of 15 sessions.
The eHMI type was randomized for each session. Had it not been, the test might have become too easy from the second trial onward, because the participants could easily have identified that the same scenario was being repeated; by randomly assigning eHMI types, we expected to minimize this learning effect. The order of the scenarios played by each participant was also random. The test design was therefore within-subject, although the number of times each eHMI type was shown across the sessions may have differed between participants.
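As a concrete illustration of this randomization scheme, the Python sketch below builds one participant's 15-session schedule. It assumes that the scenario order is shuffled within each of the three repetitions and that the eHMI type is drawn independently for every session; the function and label names are illustrative, not taken from the study's software.

```python
# Minimal sketch of the within-subject design described above (not the authors' code):
# five scenarios x three repetitions, scenario order shuffled per repetition and
# eHMI type drawn at random for every session.
import random

SCENARIOS = [
    "school safety zone",
    "yellow traffic light",
    "traffic island",
    "waiting for passenger",
    "e-scooter approaching",
]
EHMI_TYPES = ["none", "textual", "graphical", "anthropomorphic"]

def build_schedule(repetitions: int = 3, seed=None) -> list:
    rng = random.Random(seed)
    schedule = []
    for rep in range(repetitions):
        order = SCENARIOS[:]
        rng.shuffle(order)                       # random scenario order per repetition
        for scenario in order:
            schedule.append({
                "repetition": rep + 1,
                "scenario": scenario,
                "ehmi": rng.choice(EHMI_TYPES),  # random eHMI type per session
            })
    return schedule

if __name__ == "__main__":
    for session in build_schedule(seed=42):
        print(session)
```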
The steps for a participant to complete a test were as follows:
  • Introduction of the test goal and procedure.
  • Agreement on consent.
  • Pre-interview.
  • Watch a short video of the real operation of Waymo One in San Francisco (YouTube).
  • Eye-tracker configuration.
  • Free drive exercise.
  • Play the game (five scenarios in random order × three times using a random eHMI type).
  • Post-interview.
  • Wrap-up (30,000 KRW was given as an incentive).

2.2.3. Participants

Participants were recruited using an internal network of researchers involved in the project. For several reasons, we tested the youngest group eligible to obtain a driver’s license. The game that we developed was intended to virtually expose this young audience to AVs in mixed traffic before they become involved in actual traffic with AVs on public roads. Younger groups have higher levels of acceptance than older groups [48] and, as early adopters, are at the right stage to learn how to coexist with the new technology on public roads. If they have a tendency to speed in actual driving situations, virtual experience with future AVs might give them a prior understanding of how the technology works and how it is limited.
In the pilot test sessions, participants aged over 30 were also tested. These tests failed when participants were not accustomed to gaming. In the test procedure (Step 6), a learning session was provided so that they could practice the keystrokes until they felt comfortable using them before the real test. In most cases, this proved not very effective, and one person who did not play games even felt dizzy during the test. These experiences during the pilot sessions led us to the decision to focus on young participants after screening out inexperienced game users.
All the participants had experience playing games such as “KartRider”, “Battleground”, “Need for Speed”, or similar titles, to avoid bias due to different levels of agility in controlling virtual vehicles. A total of 18 people (eight female and ten male), ranging in age from 21 to 26 years, with a mean age of 23.50 years, participated in the tests in February and March 2024. All of them had at least heard of AVs when they were recruited, and their understanding was more or less aligned by watching the video of the Waymo One service (Step 4 in the procedure). In the pre-interview (Step 3 in the procedure), they were asked to describe their attitudes as a road user (either pedestrian, cyclist, or driver). Five participants mentioned that they were inattentive as a pedestrian or impatient (not tending to yield) as a driver.
Their characteristics are summarized in Figure 4.

2.2.4. Data and Analysis

Both behavioral and attitudinal data were collected using relevant UX research methods [49]. Eye tracking is frequently used as a method of gathering behavioral data, especially regarding road safety [50]. Tobii Pro Lab Version 1.232.52758 was used. In our simulation, both the participant’s viewpoint and the AVs in the vicinity were constantly moving rather than static, reflecting actual driving conditions [51]. For this reason, gaze data such as fixation count, fixation duration, and saccades [51] were used instead of a heatmap [52]. Using these data, participants’ behavior, especially violations, could be filtered according to whether they had seen the eHMI at the moment of the incident. Keystrokes also provided evidence of the participants’ intentions.
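The sketch below illustrates this kind of filtering: for each logged violation, it checks whether any fixation on the eHMI area of interest overlapped a short window before the incident. The column names, AOI label, and two-second window are assumptions about the exported data, not the actual Tobii Pro Lab log format used in the study.

```python
# Illustrative sketch (assumed column names and a 2 s look-back window; the
# study's Tobii Pro Lab export format may differ): flag whether the driver
# had fixated the eHMI shortly before each logged violation.
import pandas as pd

def ehmi_seen_before_violation(fixations: pd.DataFrame,
                               violations: pd.DataFrame,
                               window_s: float = 2.0) -> pd.DataFrame:
    """fixations: columns [start_s, end_s, aoi]; violations: columns [time_s, type]."""
    ehmi_fix = fixations[fixations["aoi"] == "eHMI"]
    seen = []
    for t in violations["time_s"]:
        # True if any eHMI fixation overlaps the window [t - window_s, t].
        overlap = ((ehmi_fix["end_s"] >= t - window_s) & (ehmi_fix["start_s"] <= t)).any()
        seen.append(bool(overlap))
    out = violations.copy()
    out["ehmi_seen"] = seen
    return out
```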
Three attitudinal datasets were collected using a survey: (1) the helpfulness of the eHMI in each scenario (on a 7-point Likert scale) to determine how well the eHMI design met people’s needs in the traffic situations they were in; (2) the most preferred eHMI type to cover these scenarios; and (3) their positive or negative feelings (on a 7-point Likert scale) about AVs before and after the test.
Statistical analyses (ANOVA, paired Student’s t-test) were performed using Jamovi Version 2.3.26.0.
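For readers who prefer scripted analyses, the sketch below shows how the same tests could be run in Python: a repeated-measures ANOVA on the helpfulness ratings (cf. Table 4) and a paired Student's t-test on the before/after attitude scores (cf. Table 5). The long-format column names are assumptions about how the raw data are organized.

```python
# Minimal sketch of the statistical analyses (performed in Jamovi in the study).
# Column names and the long-format layout are assumptions about the raw data.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def analyze(helpfulness_long: pd.DataFrame, attitudes: pd.DataFrame) -> None:
    # Repeated-measures ANOVA with scenario as the within-subject factor,
    # analogous to the within-subjects effect reported in Table 4.
    rm_results = AnovaRM(data=helpfulness_long, depvar="helpfulness",
                         subject="participant", within=["scenario"]).fit()
    print(rm_results)

    # Paired Student's t-test on attitudes toward AVs before vs. after the test,
    # analogous to Table 5.
    t_stat, p_value = stats.ttest_rel(attitudes["before"], attitudes["after"])
    print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
```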

3. Results

3.1. Effect of the eHMI

The behavior of the participants was observed and the violations were counted. The following subsections provide an interpretation of violation data in relation to eHMI type and design.

3.1.1. Frequency of Violations and Accidents

A total of 45 violations were observed. The following list summarizes the types of violation that comprised the 45 cases:
  • Exceeding the speed limit in school safety zones (37.8%).
  • Collision (33.3%).
  • Ignoring the red light (13.3%).
  • Crossing the stop line (13.3%).
  • Crossing the central line (2.2%).
The bar chart in Figure 5 breaks each violation type down, using different colors, by the eHMI type that was active at the instant the incident took place.
Regarding Scenario 1, namely school safety zones, our assumption was that the eHMI would be effective in guiding the speed of the driver. Contrary to expectations, most cases occurred in the presence of an eHMI. Twelve participants violated the speed limits. Two of them did not seem to realize that they had to slow down drastically even when their eyes were fixed on the road signs and eHMIs. In the case of textual eHMIs, seven participants violated the speed limits once during the session. Close monitoring revealed that most participants did not recognize the road signs indicating speed limits and passed them by. However, as soon as they found the textual information on the rear of the other AVs, they slowed down immediately and did not violate the speed limits again in that particular session (there were two pedestrian crossings between which the speed had to be kept within the limit). In this case, even though the violation was counted once, the textual eHMI was clearly effective.
Rear-end collisions in Scenario 3, the traffic island, were the most frequent (9 collisions in 15 sessions). The participants tended to miss the right moment to brake when looking at the emoji on the eHMI (Figure 6). The conspicuous (in size and color) and unfamiliar (in the traffic context) design caught the drivers’ eyes and seemed to prevent them from noticing the pedestrian ahead. It should also be noted that the unfamiliar use of emojis seemed to cause longer gazing and fixation times, as was the case in experiments with unfamiliar traffic signs [53].
In other cases, collisions occurred frequently in the absence of an eHMI. In Scenario 4, waiting for a passenger, the eHMI conveyed that the AV was parked so that the driver behind would change lanes and move on. As a result, some drivers were nearly hit by a car approaching in the next lane.
Ignoring the red light and crossing the stop line were observed in Scenario 2, the yellow traffic light scenario, and half of these cases occurred in the absence of an eHMI. Notably, some people, as soon as they recognized the AV’s intention, briefly sought ways to avoid it (e.g., changing lanes or speeding up), during which accidents occurred.
The chart in Figure 7 was obtained by rearranging the same data by eHMI type. The textual eHMI was the most effective for two reasons: first, it had the lowest number of violations and accidents; second, no fatal accidents such as collisions occurred while it was shown. Collisions occurred most frequently when the eHMI showed emojis, followed by cases where no eHMI or a graphical eHMI was used.

3.1.2. Personal Differences

When analyzing the patterns of violations and accidents, we found that the same person tended to repeat the same violation. We cannot rule out that some of the participants were not good at adapting to the sensitivity of the keys mapped to the driving task or simply lacked focus. We therefore examined the extent of these personal differences.
Figure 8 presents a histogram of the total number of violations and accidents with group differences by gender. The mean score of all participants was 2.500 (SD = 2.007). Dividing by sex, the mean of the male participants was 2.900 (SD = 2.424), while it was 2.000 (SD = 1.309) for the female group. A one-way ANOVA was performed, and the difference was not statistically significant (mean difference = 0.900, F = 1.010, df = 1, p-value = 0.332).

3.2. Perception of the eHMI

3.2.1. Helpfulness of the eHMIs

How strongly did the participants feel that the eHMI was helpful in each scenario? After their gaming tasks, the participants rated each scenario from 1 (extremely unhelpful) to 7 (extremely helpful). By the time they finished their tasks in the game, they had a good understanding of the scenarios and could determine how they would feel about other drivers in mixed traffic.
The means and standard deviations are presented in Table 3. Scenarios 1 and 2 had lower means than the other three scenarios. In Scenarios 1 and 2, people tended to believe that they could obtain the information from the environment themselves. In contrast, in Scenarios 3, 4, and 5, the eHMI provided information that they could not otherwise see or be aware of.
ANOVA was performed to compare the means (n = 18). The results showed no statistically significant differences in helpfulness between scenarios. The interaction effect of scenario and gender was not significant, with only one notable pattern: in Scenario 3, women’s scores tended to be higher than those of men. The difference in means was not statistically significant (Table 4), but the gap between the yellow and blue lines diverges in Scenario 3, as seen in Figure 9.
In our study, female participants tended to drive more slowly than male participants. The eHMI displayed on the traffic island was more recognizable to people who drove slowly and allowed sufficient time to decelerate by braking. The estimated marginal means of the helpfulness of using eHMIs with 95% confidence intervals and the observed scores (stacked dots) are shown in Figure 9.

3.2.2. Preference for eHMI Type

When asked to select an eHMI type that they thought was best for scenarios covered in the test or even wider traffic scenarios, 11 (61.1%) participants chose graphical eHMIs, followed by textual (22.2%) and anthropomorphic eHMIs (16.7%). None of the participants reported preferring not to use eHMIs.
However, most of them had difficulty choosing only one because they sometimes thought one was appropriate, but at other times, another seemed better. Most preferred the graphical eHMI to the textual one because they could understand the message faster. In contrast, when the situation was not as urgent as queuing at a traffic island or approaching a robot taxi waiting for its passengers, they said the textual eHMI would be better.
Emojis in traffic situations can be unfamiliar and ambiguous, but our anthropomorphic eHMI using emoji facial expressions with hand gestures covered a large area of the rear windshield. It was clearly visible because of the space it covered and its distinct color (bright yellow).

3.2.3. Attitude toward Future AVs

We compared the participants’ attitudes toward future AVs before and after the test. In the pre-interview, they were asked whether they had a positive or negative view of future AVs. Before the test, five participants scored either very negative or negative (Figure 10); autonomous vehicles are, after all, still under development and imperfect. The same question was asked again at the end of the test, after the participants had completed the entire procedure described in Section 2.2.2.
Three participants were less positive after the test. One male participant was very positive, but turned negative (decreased from 6 to 3). He was one of the more frequent drivers (more than once per week). After learning about AVs and indirectly experiencing them (watching Waymo One’s riding video and driving the car in the game), he came to think that AVs were convenient for service users but did not share the public road as well as other human drivers. For him, communication using the eHMI could not overcome his impatience (as a driver) at the low speed and conservative driving style of AVs. A female participant (with a driving license, but who did not drive more than once a week) also lowered the rating from 7 to 5 (still positive) because she felt annoyance about the driving style of the AVs around her. The other female participant who had a driving license but did not drive more than once a week lowered the rating from 6 to 5 (still positive) and mentioned that mixed traffic did not seem to be the answer for her, after experiencing the “e-scooter approaching” scenario, for example.
Four participants remained the same, while the other 11 participants increased their score. Those who gave more positive ratings said that the advances made in AVs were better than their prior perception (which was based on fatal accidents they had seen in the news). Even though they did not believe the technology was ready, its ability to handle unexpected situations in unpredictable traffic helped them understand it better, although many of them still felt annoyed by the driving style.
A paired-sample Student’s t-test was performed to determine whether the attitudes before and after the test were the same (n = 18). The difference was statistically significant (p-value = 0.049, see Table 5). This reflected the participants’ general feelings about AVs, and we assume that the effect of the eHMI was only a part of it.

4. Discussion and Limitations

The tested participants were not a representative sample of road users in general. For several reasons, we focused on the youngest group eligible to obtain a driver’s license. The game had the purpose of virtually exposing this young audience to AVs in mixed traffic before they become involved in actual traffic with AVs on public roads. We expect this type of game to be used to provide preliminary experience of how to coexist with the new technology. However, the limitations of this experiment stem from the game environment that we developed. The eHMIs of future AVs should be usable by road users of any age, but the gaming environment limited our ability to cover broader age ranges. The game could have been made more playable for elderly users, for example [54], and a test protocol considering that specific population would need to be approved by an Institutional Review Board (IRB).
Our Unity program activated the eHMIs of AVs depending on their distance from the driven vehicle. In addition, each eHMI stayed on continuously until its triggering condition was over, meaning that the smiling emoji intended to induce the driver to slow down remained on even after a collision had occurred. Although it was voted the most desirable, the graphical eHMI left much to be desired in terms of coded functionality because it did not reflect the details of the situation. For example, the GIF image of a pedestrian crossing is simply a repetition that does not show actual progress. Updating these details may help meet the expectations of other road users. Adaptive interaction using AI can improve realism by determining the best activation time and type of eHMI, depending on the distance and speed of the interacting vehicles as well as the possibility of violations and accidents.
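To make the adaptive idea concrete, the sketch below chooses whether, and which type of, eHMI to activate from the following vehicle's distance and closing speed, preferring the quickly parsed graphical display in urgent situations and the unambiguous textual display otherwise, consistent with the participants' stated preferences. The thresholds and the time-to-contact heuristic are illustrative assumptions, not values derived from the study.

```python
# Illustrative sketch of the adaptive activation idea discussed above: pick an
# eHMI type from the following vehicle's distance and closing speed. Thresholds
# and the time-to-contact heuristic are assumptions, not values from the study.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FollowingVehicle:
    distance_m: float         # gap between the AV's rear and the following car
    closing_speed_mps: float  # positive when the gap is shrinking

def choose_ehmi(v: FollowingVehicle) -> Optional[str]:
    ttc = v.distance_m / v.closing_speed_mps if v.closing_speed_mps > 0 else float("inf")
    if ttc < 2.0:
        return "graphical"    # urgent: participants found graphics fastest to parse
    if ttc < 6.0 or v.distance_m < 30.0:
        return "textual"      # enough time to read an unambiguous message
    return None               # far away: keep the display off to avoid distraction

if __name__ == "__main__":
    print(choose_ehmi(FollowingVehicle(distance_m=15.0, closing_speed_mps=10.0)))  # graphical
    print(choose_ehmi(FollowingVehicle(distance_m=80.0, closing_speed_mps=2.0)))   # None
```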
From close observation of the participants’ eye-tracking data, we found that the more accustomed they were to looking at eHMIs, the less they tended to pay attention to road elements such as pedestrians, traffic light signals, and road signs. When participants seemed to understand the safety-oriented driving style of the AVs (slow speed and conservative decision-making, in particular), they tried to deviate and accelerate to move farther away. The effectiveness of eHMIs can generate a dependency in the behavior of other drivers. If this trend results in a loss of attention and control, the side effects can be fatal.
Five scenarios were tested. For each scenario, the most appropriate eHMI type may differ, as mentioned by most participants during the post-interview survey. The eHMI can assist novice drivers to react quickly on approaching a yellow traffic light next to an AV, or let them understand whether a parked robotaxi will move soon. Regardless of driving proficiency, eHMIs can be useful for other drivers when visibility is limited owing to tightly queued vehicles or vehicles hidden by pillars, etc. In addition, an urgent warning about an invisible moving object cannot be achieved without the existence of an eHMI of any type.

5. Conclusions

We tested our hypothesis on whether the eHMI is effective in improving traffic flow (causing fewer violations and accidents) on a virtual public road, and whether anthropomorphic eHMIs demonstrate a greater emotional influence on road users’ goodwill than textual and graphical eHMIs in the above-mentioned socio-cultural contexts.
The results showed that the textual eHMI was the most effective in the sense that it produced the lowest number of violations and no collisions occurred while it was in use. Fatal accidents, such as collisions, occurred most frequently when the eHMI showed emojis or when there was no eHMI. Immediate correction of behavior was observed for some participants as soon as they looked at the eHMI. When they encountered a new scenario for the first time, they could understand the traffic situation using the eHMIs of the AVs around them.
Fewer violations and accidents occurred when the eHMIs were textual and graphical than when there was no eHMI. Nevertheless, it would become a concern if vigilance toward road signs were lowered in line with dependency on surrounding eHMIs. When the eHMIs were anthropomorphic (the tested emoji design stood out in size and color) or graphical (the GIF images had movement), the eye-catching design caused some collisions (e.g., the emoji eHMI when approaching the traffic island).
When an eHMI existed, it caught the eyes of other drivers and was effective in decreasing violations and accidents. However, when the design was eye-catching, it could cause loss of attention and control in other drivers. Finally, we propose an adaptive approach to determining the activation timing and type of eHMI depending on the distance and speed of the interacting vehicles, as well as on the possibility of violations and accidents.

Author Contributions

Conceptualization and methodology, D.L.; software and experimentation, Y.K.; hardware and experimentation, Y.S. and M.S.Y.; data curation and analysis, D.L., Y.S., M.S.Y. and Y.K.; writing—from review and editing to final revision, supervision, project administration, and funding acquisition, D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Korea Institute of Design Promotion (KIDP) grant funded by the Korean Government (MOE, MOTIE) (2023-Design and Emerging Technology Integrated Education Program for Cultivating Innovative Talents-0006) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1F1A1047870).

Informed Consent Statement

The protocol for this study was approved on 5 September 2023 by the Hongik University Institutional Review Board (approval number 7002340-202309-HR-015).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. J3016C: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles—SAE International. Available online: https://www.sae.org/standards/content/j3016_202104/ (accessed on 30 July 2021).
  2. Llorca, D.F. From Driving Automation Systems to Autonomous Vehicles: Clarifying the Terminology. arXiv 2021, arXiv:2103.10844. [Google Scholar]
  3. Taeihagh, A.; Lim, H.S.M. Governing Autonomous Vehicles: Emerging Responses for Safety, Liability, Privacy, Cybersecurity, and Industry Risks. Transp. Rev. 2019, 39, 103–128. [Google Scholar] [CrossRef]
  4. Martínez-Díaz, M.; Soriguera, F. Autonomous Vehicles: Theoretical and Practical Challenges. Transp. Res. Procedia 2018, 33, 275–282. [Google Scholar] [CrossRef]
  5. Tabone, W.; de Winter, J.; Ackermann, C.; Bärgman, J.; Baumann, M.; Deb, S.; Emmenegger, C.; Habibovic, A.; Hagenzieker, M.; Hancock, P.A.; et al. Vulnerable Road Users and the Coming Wave of Automated Vehicles: Expert Perspectives. Transp. Res. Interdiscip. Perspect. 2021, 9, 100293. [Google Scholar] [CrossRef]
  6. D6.2 InterACT Evaluation Report on On-Board User and Road Users Interaction with AVs Equipped with the InterACT Technologies. Available online: https://www.interact-roadautomation.eu/wp-content/uploads/interACT_D6_2_v1.0_FinalWebsite.pdf (accessed on 15 March 2023).
  7. Diaz, J. People Don’t Trust Autonomous Vehicles, so Jaguar Added Googly Eyes. Available online: https://www.fastcompany.com/90231563/people-dont-trust-autonomous-vehicles-so-jaguar-is-adding-googly-eyes (accessed on 17 June 2024).
  8. Ross, W.P.; Liu, C. Intention Signaling for an Autonomous Vehicle. U.S. Patent 9,969,326, 24 August 2017. [Google Scholar]
  9. The Smiling Car—Self Driving Car That Sees You. Available online: https://semcon.com/smilingcar/ (accessed on 10 July 2020).
  10. Rouchitsas, A.; Alm, H. External Human–Machine Interfaces for Autonomous Vehicle-to-Pedestrian Communication: A Review of Empirical Work. Front. Psychol. 2019, 10, 2757. [Google Scholar] [CrossRef] [PubMed]
  11. Dey, D.; Habibovic, A.; Löcken, A.; Wintersberger, P.; Pfleging, B.; Riener, A.; Martens, M.; Terken, J. Taming the eHMI Jungle: A Classification Taxonomy to Guide, Compare, and Assess the Design Principles of Automated Vehicles’ External Human-Machine Interfaces. Transp. Res. Interdiscip. Perspect. 2020, 7, 100174. [Google Scholar] [CrossRef]
  12. Carmona, J.; Guindel, C.; Garcia, F.; de la Escalera, A. eHMI: Review and Guidelines for Deployment on Autonomous Vehicles. Sensors 2021, 21, 2912. [Google Scholar] [CrossRef] [PubMed]
  13. Chang, C.-M.; Toda, K.; Sakamoto, D.; Igarashi, T. Eyes on a Car: An Interface Design for Communication between an Autonomous Car and a Pedestrian. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, New York, NY, USA, 24–27 September 2017; pp. 65–73. [Google Scholar]
  14. Holländer, K.; Colley, A.; Mai, C.; Häkkilä, J.; Alt, F.; Pfleging, B. Investigating the Influence of External Car Displays on Pedestrians’ Crossing Behavior in Virtual Reality. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, Taipei Taiwan, 1–4 October 2019; pp. 1–11. [Google Scholar]
  15. Fridman, L.; Mehler, B.; Xia, L.; Yang, Y.; Facusse, L.Y.; Reimer, B. To Walk or Not to Walk: Crowdsourced Assessment of External Vehicle-to-Pedestrian Displays. arXiv 2017, arXiv:1707.02698. [Google Scholar]
  16. de Clercq, K.; Dietrich, A.; Núñez Velasco, J.P.; de Winter, J.; Happee, R. External Human-Machine Interfaces on Automated Vehicles: Effects on Pedestrian Crossing Decisions. Hum. Factors 2019, 61, 1353–1370. [Google Scholar] [CrossRef]
  17. Bazilinskyy, P.; Dodou, D.; de Winter, J. Survey on eHMI Concepts: The Effect of Text, Color, and Perspective. Transp. Res. Part F Traffic Psychol. Behav. 2019, 67, 175–194. [Google Scholar] [CrossRef]
  18. Bazilinskyy, P.; Kooijman, L.; Dodou, D.; de Winter, J.C.F. How Should External Human-Machine Interfaces Behave? Examining the Effects of Colour, Position, Message, Activation Distance, Vehicle Yielding, and Visual Distraction among 1434 Participants. Appl. Ergon. 2021, 95, 103450. [Google Scholar] [CrossRef] [PubMed]
  19. Rouchitsas, A.; Alm, H. Ghost on the Windshield: Employing a Virtual Human Character to Communicate Pedestrian Acknowledgement and Vehicle Intention. Information 2022, 13, 420. [Google Scholar] [CrossRef]
  20. Rouchitsas, A.; Alm, H. Smiles and Angry Faces vs. Nods and Head Shakes: Facial Expressions at the Service of Autonomous Vehicles. Multimodal Technol. Interact. 2023, 7, 10. [Google Scholar] [CrossRef]
  21. Clamann, M.; Aubert, M.; Cummings, M. Evaluation of Vehicle-to-Pedestrian Communication Displays for Autonomous Vehicles. In Proceedings of the 96th Annual Transportation Research Board Meeting, Washington DC, USA, 10 January 2017. [Google Scholar]
  22. Bazilinskyy, P.; Kooijman, L.; Dodou, D.; Mallant, K.; Roosens, V.; Middelweerd, M.; Overbeek, L.; de Winter, J. Get Out of The Way! Examining eHMIs in Critical Driver-Pedestrian Encounters in a Coupled Simulator. In Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seoul, Republic of Korea, 17–20 September 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 360–371. [Google Scholar]
  23. Lim, D.; Kwon, Y. How to Design the eHMI of AVs for Urgent Warning to Other Drivers with Limited Visibility? Sensors 2023, 23, 3721. [Google Scholar] [CrossRef] [PubMed]
  24. Aramrattana, M.; Habibovic, A.; Englund, C. Safety and Experience of Other Drivers While Interacting with Automated Vehicle Platoons. Transp. Res. Interdiscip. Perspect. 2021, 10, 100381. [Google Scholar] [CrossRef]
  25. Stoklosa, A. Autonomous Car Pulled Over by Cops, Makes a Run For It. Available online: https://www.motortrend.com/news/gm-cruise-av-self-driving-car-pulled-over-police/ (accessed on 17 June 2024).
  26. Stumpf, R. Some Arizonians Express Frustration Over Waymo’s Semi-Autonomous Cars. Available online: https://www.thedrive.com/news/23232/arizonians-absolutely-hate-waymos-semi-autonomous-cars (accessed on 17 June 2024).
  27. Randazzo, R. A Slashed Tire, a Pointed Gun, Bullies on the Road: Why Do Waymo Self-Driving Vans Get so Much Hate? Available online: https://www.azcentral.com/story/money/business/tech/2018/12/11/waymo-self-driving-vehicles-face-harassment-road-rage-phoenix-area/2198220002/ (accessed on 17 June 2024).
  28. Elias, J. Taking a Driverless Waymo in Phoenix over the Holidays Was Fun But Unsettling. Available online: https://www.cnbc.com/2022/01/08/heres-what-it-was-like-to-ride-in-a-waymo-with-no-driver-in-phoenix.html (accessed on 17 June 2024).
  29. Stumpf, R. A Swarm of Self-Driving Cruise Taxis Blocked San Francisco Traffic for Hours. Available online: https://www.thedrive.com/news/a-swarm-of-self-driving-cruise-taxis-blocked-san-francisco-traffic-for-hours (accessed on 17 June 2024).
  30. Day, L. Cruise’s Self-Driving Cars Keep Blocking Traffic in San Francisco. Available online: https://www.thedrive.com/news/cruises-self-driving-cars-keep-blocking-traffic-in-san-francisco (accessed on 17 June 2024).
  31. Kerr, D. Armed with Traffic Cones, Protesters Are Immobilizing Driverless Cars. NPR, 26 August 2023. [Google Scholar]
  32. Luscombe, R. Driverless Taxi Vandalized and Set on Fire in San Francisco’s Chinatown. The Guardian, 12 February 2024. [Google Scholar]
  33. U.S. Department of Transportation Releases. Traffic Safety Facts. Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey. March 2018. Available online: https://www.nhtsa.gov/press-releases/usdot-releases-2016-fatal-traffic-crash-data (accessed on 17 June 2024).
  34. Almost One-Third of Traffic Fatalities Are Speed-Related Crashes|NHTSA. Available online: https://www.nhtsa.gov/press-releases/speed-campaign-speeding-fatalities-14-year-high (accessed on 17 June 2024).
  35. NHTSA DOT HS 813 320 Traffic Safety Facts 2020 Data: Speeding (June 2022). Available online: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813320 (accessed on 15 July 2024).
  36. Victor, T.; Kusano, K.; Gode, T.; Chen, R.; Schwall, M. Safety Performance of the Waymo Rider-Only Automated Driving System at One Million Miles; Waymo LLC: Mountain View, CA, USA, 2023. [Google Scholar]
  37. Waymo Safety Report. Available online: https://waymo.com/safety/ (accessed on 17 June 2024).
  38. Waypoint—The Official Waymo Blog: First Million Rider-Only Miles: How the Waymo Driver Is Improving Road Safety. Available online: https://waymo.com/blog/2023/02/first-million-rider-only-miles-how.html (accessed on 17 June 2024).
  39. Petrović, Đ.; Mijailović, R.; Pešić, D. Traffic Accidents with Autonomous Vehicles: Type of Collisions, Manoeuvres and Errors of Conventional Vehicles’ Drivers. Transp. Res. Procedia 2020, 45, 161–168. [Google Scholar] [CrossRef]
  40. Lim, D.; Kim, B. UI Design of eHMI of Autonomous Vehicles. Int. J. Hum.-Comput. Interact. 2022, 38, 1944–1961. [Google Scholar] [CrossRef]
  41. Waytz, A.; Heafner, J.; Epley, N. The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117. [Google Scholar] [CrossRef]
  42. Darling, K. Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy. Robot. Ethics 2015, 2. [Google Scholar] [CrossRef]
  43. Guéguen, N.; Eyssartier, C.; Meineri, S. A Pedestrian’s Smile and Drivers’ Behavior: When a Smile Increases Careful Driving. J. Saf. Res. 2016, 56, 83–88. [Google Scholar] [CrossRef]
  44. Lim, D.; Kim, Y.; Gwon, H.; Shin, Y. Anthropomorphic External Human-Machine Interface Design of Autonomous Vehicles in Roblox to Change Road Users’ Behavior. In Proceedings of the 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), Cambridge, MA, USA, 10–13 September 2023; pp. 1–4. [Google Scholar]
  45. Fluent Emoji—1|Figma Community. Available online: https://www.figma.com/community/file/1138254942249677742 (accessed on 17 June 2024).
  46. Tobii Pro Lab User Manual v 1.217 June 2023. 1. Available online: https://connect.tobii.com/s/lab-downloads?language=en_US (accessed on 17 June 2024).
  47. Tobii Customer Portal. Available online: https://connect.tobii.com (accessed on 17 June 2024).
  48. Weigl, K.; Steinhauser, M.; Riener, A. Gender and Age Differences in the Anticipated Acceptance of Automated Vehicles: Insights from a Questionnaire Study and Potential for Application. Gend. Technol. Dev. 2023, 27, 88–108. [Google Scholar] [CrossRef]
  49. World Leaders in Research-Based User Experience. When to Use Which User-Experience Research Methods. Available online: https://www.nngroup.com/articles/which-ux-research-methods/ (accessed on 18 July 2024).
  50. Ojsteršek, T.C.; Topolšek, D. Eye Tracking Use in Researching Driver Distraction: A Scientometric and Qualitative Literature Review Approach. J. Eye Mov. Res. 2019, 12, 1–30. [Google Scholar] [CrossRef] [PubMed]
  51. Vetturi, D.; Tiboni, M.; Maternini, G.; Bonera, M. Use of Eye Tracking Device to Evaluate the Driver’s Behaviour and the Infrastructures Quality in Relation to Road Safety. Transp. Res. Procedia 2020, 45, 587–595. [Google Scholar] [CrossRef]
  52. Gerber, M.A.; Schroeter, R.; Johnson, D.; Janssen, C.P.; Rakotonirainy, A.; Kuo, J.; Lenné, M. An Eye Gaze Heatmap Analysis of Uncertainty Head-Up Display Designs for Conditional Automated Driving. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1–16. [Google Scholar]
  53. Babić, D.; Dijanić, H.; Jakob, L.; Babić, D.; Garcia-Garzon, E. Driver Eye Movements in Relation to Unfamiliar Traffic Signs: An Eye Tracking Study. Appl. Ergon. 2020, 89, 103191. [Google Scholar] [CrossRef]
  54. Rienzo, A.; Cubillos, C. Playability and Player Experience in Digital Games for Elderly: A Systematic Literature Review. Sensors 2020, 20, 3958. [Google Scholar] [CrossRef]
Figure 1. Four types of visual interfaces designed.
Figure 2. Experiment settings.
Figure 3. Five scenarios: (a) school safety zone; (b) traffic island (yield to pedestrian); (c) yellow traffic light; (d) waiting for passenger; (e) e-scooter approaching.
Figure 4. Demographic representation of participants (n = 18).
Figure 5. Frequency of violations and accidents: (a) by violation; (b) by scenario.
Figure 6. Driver violations: (a) exceeding the speed limit; (b) rear-end collision; (c) side collision; (d) ignoring the red light; (e) crossing the stop line.
Figure 7. Frequency of violations and accidents by eHMI type.
Figure 8. Frequency of violations and accidents by gender.
Figure 9. Estimated marginal means of helpfulness of using eHMIs by scenario and by gender (7-point Likert scale, n = 18).
Figure 10. Attitude toward future AVs before and after the test (7-point Likert scale, n = 18): (a) comparison of the frequencies of the answer; (b) comparison of the means.
Table 1. User scenarios.
No. | Scenario | Cause of Conflict | Intended Behavior
1 | In a school safety zone, the AV informs other drivers behind it to drive slowly and within the speed limit (30 km/h). | The driver behind it may follow too closely or exceed the speed limit. | The drivers following cautiously keep within the speed limit all the way through the zone.
2 | When the AV approaches a yellow traffic light, it informs other drivers behind it to slow down and stop until the light turns green. | The driver following rushes to pass when the light is orange or urges the AV before the light turns green. | The drivers following are prepared to gradually slow down and stop.
3 | At a traffic island, the AV informs other drivers behind it to wait while yielding to pedestrians crossing. | The driver following does not slow down or urges the AV impatiently. | The drivers following courteously wait till the AV moves again.
4 | The AV is parked on the side of the road to wait for a passenger, apologizing to other drivers for obstructing the traffic flow. | Other drivers become irritated and show anger to the AV. | Other drivers in the vicinity excuse the AV tolerantly and bypass it.
5 | The AV approaching an e-scooter informs other drivers behind it to be cautious. | Sudden braking of the AV causes fright in the drivers following. | The following drivers are wide awake and feel grateful for the visibility of the AV system.
Table 2. eHMI design by type and by scenario.
No. | Textual | Graphical | Anthropomorphic
1 | “School Zone” ⇔ “Slow Down” | (image) | (image)
2 | “Waiting for Green” ⇔ “Wait to Go” | (image) | (image)
3 | “Yield to Pedestrian” ⇔ “Wait to Go” | (image) | (image)
4 | “Waiting for Passenger” ⇔ “Sorry to Obstruct” | (image) | (image)
5 | “e-Scooter Approaching” ⇔ “Watch Out” | (image) | (image)
Table 3. Mean and standard deviation of helpfulness by scenario.
No. | Scenario | Min | Max | Mean | SD
1 | School Safety Zone | 1 | 7 | 5.638 | 1.577
2 | Yellow Traffic Light | 3 | 7 | 5.900 | 1.132
3 | Traffic Island | 1 | 7 | 6.025 | 1.862
4 | Waiting for Passenger | 5 | 7 | 6.300 | 0.840
5 | E-Scooter Approaching | 1 | 7 | 6.388 | 1.501
Table 4. Within-subjects effect by ANOVA.
Effect | Sum of Squares | df | Mean Square | F | p
Scenario | 6.572 | 4 | 1.643 | 1.297 | 0.281
Scenario × Gender | 10.128 | 4 | 2.532 | 1.999 | 0.105
Residual | 81.050 | 64 | 1.266 | |
Table 5. Paired samples t-test.
Before | After | Student’s t Statistic | df | p
Mean = 4.833 | Mean = 5.722 | 2.120 | 17.000 | 0.049
SD = 1.543 | SD = 0.958 | | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
