**External Human–Machine Interfaces: The Effect of Display Location on Crossing Intentions and Eye Movements**

**Y. B. Eisma, S. van Bergen, S. M. ter Brake, M. T. T. Hensen, W. J. Tempelaar and J. C. F. de Winter \***

Department of Cognitive Robotics, Delft University of Technology, Mekelweg, 2628 CD Delft, The Netherlands; Y.B.Eisma@tudelft.nl (Y.B.E.); S.vanBergen@student.tudelft.nl (S.v.B.); S.M.terBrake@student.tudelft.nl (S.M.t.B.); M.T.T.Hensen@student.tudelft.nl (M.T.T.H.); W.J.Tempelaar@student.tudelft.nl (W.J.T.)

**\*** Correspondence: j.c.f.dewinter@tudelft.nl

Received: 19 November 2019; Accepted: 17 December 2019; Published: 24 December 2019

**Abstract:** In the future, automated cars may feature external human–machine interfaces (eHMIs) to communicate relevant information to other road users. However, it is currently unknown where on the car the eHMI should be placed. In this study, 61 participants each viewed 36 animations of cars with eHMIs on either the roof, windscreen, grill, above the wheels, or a projection on the road. The eHMI showed 'Waiting' combined with a walking symbol 1.2 s before the car started to slow down, or 'Driving' while the car continued driving. Participants had to press and hold the spacebar when they felt it safe to cross. Results showed that, averaged over the period when the car approached and slowed down, the roof, windscreen, and grill eHMIs yielded the best performance (i.e., the highest spacebar press time). The projection and wheels eHMIs scored relatively poorly, yet still better than no eHMI. The wheels eHMI received a relatively high percentage of spacebar presses when the car appeared from a corner, a situation in which the roof, windscreen, and grill eHMIs were out of view. Eye-tracking analyses showed that the projection yielded dispersed eye movements, as participants scanned back and forth between the projection and the car. It is concluded that eHMIs should be presented on multiple sides of the car. A projection on the road is visually effortful for pedestrians, as it causes them to divide their attention between the projection and the car itself.

**Keywords:** eHMI; eye-tracking; attention distribution; road safety; automated driving; driverless vehicles

#### **1. Introduction**

In recent years, a substantial number of studies have emerged on external human–machine interfaces (eHMIs) for automated cars. In automated driving, non-verbal communication between the driver and other road users is often impossible, because the driver is not physically present in the driver's seat, or because the driver is engaged in a non-driving task. One reason for employing eHMIs would be to compensate for the lack of eye contact and other types of non-verbal communication. A second reason for using eHMIs is to transmit information about the future state of the automated vehicle to other traffic participants. For example, if the path-planning software of the automated driving system knows that the vehicle will slow down for an upcoming intersection, the eHMI could accordingly communicate that the vehicle is about to slow down [1]. Thus, eHMIs could communicate information that is not apparent from implicit forms of communication, such as the car's acceleration and deceleration.

So far, a number of different eHMIs have been designed. Bazilinskyy et al. [2] provided an overview of 22 eHMI concepts from industry, whereas Rasouli and Tsotsos [3] and Schieben et al. [4] presented surveys of eHMIs studied in academic contexts. The eHMIs proposed so far come in a variety of modalities, for example as text and light strips (e.g., as in [5]), and in many colours (green, red, cyan; [6,7]). Research has found that text-based eHMIs are regarded as easily understood without learning [1,8], but that text has disadvantages related to legibility from a distance and cross-national interpretability [2]. A scientific consensus regarding the most efficient modality for eHMIs has not yet been reached.

A lesser-studied question is where on the car the eHMI should be positioned to attain maximum compliance and decision-making efficiency. A variety of locations for eHMIs have been proposed, including:

- the roof of the car;
- the windscreen;
- the grill;
- above the wheels;
- a projection on the road in front of the car.
The positioning of the eHMI is important because pedestrians (and other road users) visually sample the road environment in an intermittent manner [34]. The presented information may be critical to road safety and should therefore be understood early in time.

From the existing body of literature, an eHMI on the front (grill) or roof of the car appears to be the most frequently used option. These locations are justifiable because they easily allow for mounting a communication device. An eHMI that projects a message on the road, or one that is integrated with the windscreen, is more challenging to manufacture. However, these types of eHMIs hold promise because they can be made larger than regular screen-based eHMIs, enhancing their visibility from a distance. This notion is supported by a self-report study by Ackermann et al. [9], who showed that eHMIs projecting their messages on the windscreen or the ground were regarded as more recognisable than display-based eHMIs. Ackermann et al. [9] pointed out that the relatively large size of the projections was probably an underlying reason for these effects.

Even though research (e.g., [35]) shows that pedestrians and drivers do not make direct eye contact very often, an eye-tracking study by Dey et al. [36] showed that pedestrians tend to look at the windscreen when an approaching car is close by, "likely to seek the intention or information about the situational awareness of the driver" (p. 375). Accordingly, a windscreen-based eHMI may be an attractive location for presenting a message. In the same way, Bazilinskyy et al. [37] found that pedestrians often look at the wheels of parked cars; this provides motivation for using a wheel-based eHMI.

At present, it is unclear which location of the eHMI results in the best-perceived clarity and behavioural compliance among pedestrians. This lack of knowledge impedes the standardisation of eHMI designs. In the present study, we let participants view animated video clips in which automated vehicles drove with an eHMI at one of the five abovementioned locations. Participants were asked to hold the spacebar when they felt safe to cross. Consequently, we examined which type of eHMI resulted in the highest time-percentage of spacebar pressings while the automated vehicle slowed down for the participant. This is a continuous behavioural measurement method that was introduced by De Clercq et al. [1]. Additionally, we used eye-tracking to infer which type of eHMI yields the most concentrated gaze patterns.

A survey of eHMI concepts proposed by the automotive industry indicated that about 50% of the concepts contained a text message of some kind [2]. Research has also shown that the commanding text 'Walk' can be understood without particular training or prior exposure [1,2]. However, the development of commanding-text eHMIs is technologically challenging, because such a design requires that the automated vehicle knows for which road user the command is meant. Another disadvantage of commanding texts concerns liability: if an automated vehicle displays 'Walk', and a pedestrian walks onto the road and collides with a third road user, the manufacturer of the automated vehicle may be at fault.

It has further been shown that a light-based eHMI can be perceived as ambiguous without learning [1,8]. For example, it may be unclear whether a green or red light signal applies to the pedestrian (egocentric perspective) or the automated vehicle (allocentric perspective; [2]).

Our eHMIs consisted of non-commanding text ('Waiting' or 'Driving') combined with an icon. The text on the eHMI was white to avoid the above-mentioned red/green dilemma. We opted for a relatively salient (i.e., large display/projection) and redundant (i.e., text combined with an icon) eHMI to ensure that participants would have no difficulty understanding what the eHMI message means. We do not aim to suggest that a text-based eHMI would be the optimal solution in real traffic. However, because the present study is concerned with examining the effect of eHMI location, we selected an eHMI design that was shown to be effective in previous research in virtual environments.

#### **2. Methods**

#### *2.1. Participants*

The participants were 51 males and 10 females, aged between 19 and 27 years (*M* = 23.0, *SD* = 1.8). All participants were BSc or MSc students at the Faculty of Mechanical, Maritime and Materials Engineering at the Delft University of Technology, the Netherlands. About half of the participants were recruited through opportunity sampling within the faculty building, whereas the other half participated for course credit. All participants provided written informed consent. The research was approved by the TU Delft Human Research Ethics Committee.

#### *2.2. Apparatus*

Eye movements were recorded at 2000 Hz using the Eyelink 1000 Plus eye-tracker v5.15 (SR-Research; Ottawa, ON, Canada). Participants were asked to place their head in the head support during the entire experiment. The stimuli were shown on a 24-inch BENQ monitor (Taipei, Taiwan) with a resolution of 1920 × 1080 pixels (531 × 298 mm). The refresh rate of the monitor was set at 60 Hz. The distance between the monitor and the head support was 95 cm. Accordingly, the monitor subtended 31 deg and 18 deg horizontal and vertical viewing angles, respectively. The experimental setup is shown in Figure 1.

**Figure 1.** Experimental setup. In the actual experiment, the windows were blinded with aluminium foil.
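As a quick sanity check, the reported 31 deg and 18 deg viewing angles follow from the monitor dimensions and viewing distance via the standard subtended-angle formula. The short Python sketch below is an illustration and was not part of the original analysis.

```python
import math

def visual_angle_deg(size_mm: float, distance_mm: float) -> float:
    # Angle subtended by a flat surface viewed head-on:
    # 2 * atan(size / (2 * distance)).
    return math.degrees(2 * math.atan(size_mm / (2 * distance_mm)))

# Monitor of 531 x 298 mm viewed from 950 mm (values from Section 2.2).
horizontal = visual_angle_deg(531, 950)  # ~31.2 deg
vertical = visual_angle_deg(298, 950)    # ~17.8 deg
```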

#### *2.3. Independent Variable*

The independent variable was the eHMI type. Six eHMI conditions were used: Roof, Windscreen, Grill, Projection, Wheels, and No eHMI. Figure 2 shows a car that combines all five eHMIs. In the experiment, only one eHMI condition was used at a time. The eHMI could show either 'Waiting' or 'Driving' (Figure 3). The 'Driving' message turned on when the approaching car would not stop for the pedestrian. The 'Waiting' message turned on when the approaching car would stop for the pedestrian.

This study was designed to examine participants' responses when the car was stopping and the eHMI showed 'Waiting'. The responses to the non-stopping vehicles were not analysed herein. The non-stopping vehicles were included to ensure that participants would not come to expect that all cars would stop for them. Note that stopping vehicles had a dominant effect on participants' spacebar-pressing behaviour, whereas no meaningful differences in spacebar-pressing behaviour between the eHMI conditions occurred for non-stopping vehicles. For example, when the stopping vehicle drove off, it became unsafe to cross, and participants released the spacebar. A non-stopping vehicle approaching at that time could not affect spacebar-pressing behaviour because participants had already released the spacebar. We used white text together with a symbol on a black background to achieve the highest possible contrast, because colours (e.g., red and green) already have a meaning, yet this meaning becomes ambiguous when the colour is presented on an approaching vehicle [2].

**Figure 2.** Car combining all five external human–machine interfaces (eHMIs). In the experiment, the car showed only one eHMI at a time. Here, the car has stopped for the pedestrian. The distance between the centre of the car and the camera (pedestrian) is 7 m longitudinal (i.e., parallel to the direction of the road) and 4.5 m lateral (i.e., perpendicular to the road). The white markings on the road were intended to create a pedestrian crossing on the road, without designated priority to the pedestrian.

**Figure 3.** (**a**) Image presented on the eHMI when the approaching car stopped for the pedestrian, (**b**) Image presented on the eHMI when the approaching car did not stop for the pedestrian.

#### *2.4. Design of the Animated Video Clips*

The experiment consisted of 36 non-interactive animated video clips: 6 virtual environments × 6 eHMI conditions. All cars drove at a speed of about 35 km/h unless slowing down for the pedestrian. The videos were 25 s long and played at 60 frames/s. Three environments were used: a straight road, a T-junction and an intersection, with two different preprogrammed traffic behaviours per eHMI. Accordingly, there were six videos per eHMI condition. The lane width was 3.66 m (a standard lane width, e.g., [38]). The camera perspective was from the eyes of a pedestrian waiting to cross the road at a crossing with a traffic island. The field of view of the animation was 80 deg, which ensured that a large part of the environment could be seen (e.g., cars making a right turn, cars driving straight on, and cars making a left turn). In each video, cars were driving on both lanes. The cars did not contain a driver or passenger. This was done to resemble future driverless vehicles, which may transport goods rather than people.

Within a video, all cars featured the same eHMI type. The eHMI could show one of two messages: if the approaching car passed without slowing down, the eHMI changed from blank to 'Driving' (Figure 3, right). If the approaching car did stop for the participant, the eHMI changed from blank to 'Waiting' (Figure 3, left). The change of state from blank to 'Waiting' occurred when the longitudinal distance between the centre of the car and the pedestrian was 23 m. After 1.2 s, when the longitudinal distance had reduced to 11 m, the car started to decelerate to a full stop. The car came to a full stop 2.0 s after the eHMI had switched on, at a longitudinal distance of 7 m between the centre of the car and the pedestrian (Figure 2). About 2 s after the car had come to a full stop, the eHMI switched to blank again. About 1.2 s later, the car drove off and passed the participant. These timing and distance parameters yielded a scenario in which cars drove by and stopped in rapid succession. The traffic was not created according to actual traffic data or models of human behaviour.
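The timing and distance parameters above are mutually consistent under a constant-deceleration assumption, which the paper does not state explicitly. The sketch below reconstructs the approach timeline under that assumption; the braking profile is therefore hypothetical.

```python
v = 35 / 3.6  # approach speed in m/s (~9.72), from the reported ~35 km/h

# Phase 1: eHMI switches on at 23 m; constant speed for 1.2 s.
d_brake_onset = 23 - v * 1.2          # ~11.3 m, matching the reported 11 m

# Phase 2: assumed uniform braking from v to 0 in 0.8 s (stop at t = 2.0 s).
brake_time = 0.8
brake_dist = v * brake_time / 2       # mean speed is v/2 during uniform braking
d_stop = d_brake_onset - brake_dist   # ~7.4 m, matching the reported 7 m
decel = v / brake_time                # ~12.2 m/s^2: an abrupt stop (cf. Section 4.4)
```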

As stated above, there were six videos per eHMI condition, with each video showing a different traffic environment. The traffic environments were the same for each eHMI, except for a temporal offset (up to 10 s) of the starting and corresponding ending moments of the video clips. This offset was included to prevent participants from recognising or memorising the behaviour of the cars in the video. In each of the six traffic-environment videos for a particular eHMI condition, one or two of the approaching cars stopped and subsequently drove away. In total, across the six traffic-environment videos per eHMI condition, ten approaching cars stopped for the participant. Details about the video clips and data exclusions are available in the Supplementary Material (Figures S1–S6).

#### *2.5. Procedure and Task*

Participants first read and signed an informed consent form. Next, the eye-tracker was calibrated. Then, participants performed two 10 s training scenarios. These concerned an empty straight road, showing a single car without eHMI; this car approached, stopped and drove off. The participants' task was to press and hold the spacebar whenever they felt it was safe to cross the road. Subsequently, the participants viewed the 36 animated video clips in random order. After each scenario, the participants were asked to rate their perceived clarity with the statement: 'It was clear when I could cross' on a scale from 0 (completely disagree) to 10 (completely agree).

#### *2.6. Dependent Variables*

The dependent variables were: (1) self-reported clarity, rated after each video clip; (2) the performance score, i.e., the percentage of time the spacebar was pressed (for approaching cars) or released (for cars driving off) within a 3-s window; and (3) an eye-movement dispersion score, introduced in the Results.
#### *2.7. Statistical Analyses*

The effects of eHMI type on the dependent variables were assessed using a repeated-measures analysis of variance (ANOVA), after averaging the performance scores of the individual vehicle approaches per participant. Significant differences between conditions were assessed with MATLAB's *multcompare* function, using the Tukey–Kramer critical value.
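The analysis itself was run in MATLAB. As an illustration of the repeated-measures ANOVA step, here is a minimal Python sketch on invented toy data (4 participants × 3 conditions; these are not the study's scores).

```python
def rm_anova(scores):
    """One-way repeated-measures ANOVA.
    scores[i][j]: mean score of participant i in condition j.
    Returns (F, df_conditions, df_error)."""
    n = len(scores)      # participants
    k = len(scores[0])   # conditions
    grand = sum(sum(row) for row in scores) / (n * k)
    cond_means = [sum(row[j] for row in scores) / n for j in range(k)]
    subj_means = [sum(row) / k for row in scores]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_error = ss_total - ss_cond - ss_subj   # subject variance is removed
    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df_cond) / (ss_error / df_error), df_cond, df_error

# Toy data: 4 participants x 3 conditions (illustrative only).
toy = [[8, 7, 2], [9, 8, 3], [7, 6, 1], [8, 8, 2]]
F, df1, df2 = rm_anova(toy)
```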

#### **3. Results**

#### *3.1. Self-Reported Clarity*

Figure 4 shows the results for self-reported clarity per eHMI condition. There was a significant difference between the six eHMI conditions, *F*(5,300) = 114.4, *p* < 0.001, η<sub>p</sub><sup>2</sup> = 0.66. Pairwise comparisons showed that Roof, Windscreen, and Grill were not significantly different from each other. The mean clarity scores of all other combinations differed significantly.

**Figure 4.** Mean self-reported clarity rating per participant. An average is taken of the scores of six scenarios per participant.

#### *3.2. Performance for Approaching Cars*

Figure 5 shows the performance scores, averaged for the nine approaches where the car drove straight on or made a left turn before stopping for the pedestrian. The six eHMI conditions were significantly different from each other, *F*(5,300) = 130.1, *p* < 0.001, η<sub>p</sub><sup>2</sup> = 0.68. Again, Roof, Windscreen, and Grill were not significantly different from each other, whereas all other combinations differed significantly.

**Figure 5.** Mean performance score per participant for car approaches. The performance score is defined as the percentage of time that the spacebar was pressed, from the moment the eHMI turned on until 3 s later. The average is taken for the nine approaches where the car drove straight on or made a left turn before stopping for the pedestrian.
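A possible implementation of this performance score is sketched below, assuming the spacebar log is stored as press/release intervals; the paper does not describe its actual logging format.

```python
def performance_score(press_intervals, t_on, window=3.0):
    """Percentage of the analysis window [t_on, t_on + window] during
    which the spacebar was held.
    press_intervals: list of (press_time, release_time) tuples."""
    t_end = t_on + window
    held = 0.0
    for start, stop in press_intervals:
        # Clip each press interval to the analysis window.
        held += max(0.0, min(stop, t_end) - max(start, t_on))
    return 100.0 * held / window

# Spacebar held from 0.5 s to 2.6 s after eHMI onset -> 70% of the window.
score = performance_score([(0.5, 2.6)], t_on=0.0)
```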

Figure 6 illustrates participants' spacebar-pressing behaviour as a function of elapsed time since the moment of eHMI onset at *t* = 0 s. It can be seen that initially (between 0 and 0.5 s), the percentage of participants pressing the spacebar dropped with time, which can be explained by the fact that the approaching car kept getting closer; hence, it became less safe to cross. The Roof, Windscreen, and Grill caused participants to start pressing the spacebar from about 0.5 s after the eHMI turned on. The Projection and especially the Wheels triggered a later spacebar-press response, presumably because these eHMIs were poorly visible from a distance; see Figure 7 for an illustration. Figure 6 also shows that for No eHMI, participants only started to press the spacebar once they could detect that the car was decelerating (the car decelerated between 1.2 and 2.0 s).

**Figure 6.** Percentage of participants who pressed the spacebar during car approaches. The average was taken for the nine approaches where the car drove straight on or made a left turn. *t* = 0 s: the eHMI turns on. *t* = 2 s: the car has come to a stop.

**Figure 7.** Screenshot of the animation in a straight approach case with the Projection eHMI. The yellow markers represent the gaze positions of all of the participants. The projection in front of the car is difficult to discern from a distance.

Figure 8 shows the performance score for one selected approach condition: a case where the approaching car made a right turn. Again, the difference in performance scores was significant, *F*(5,300) = 10.6, *p* < 0.001, η<sub>p</sub><sup>2</sup> = 0.15. All five eHMIs differed significantly from the No eHMI condition, and Wheels differed significantly from Roof and Grill. In other words, in straight and left-turn approach cases, Wheels yielded the lowest performance (Figures 5 and 6), whereas in the right-turn case, Wheels yielded the highest performance (Figure 8).

**Figure 8.** Mean performance score per participant for car approaches where the car made a right turn before stopping for the pedestrian. The performance score is defined as the percentage of time that the spacebar was pressed, from the moment the eHMI turned on until 3 s later.

The high performance for Wheels, and to a lesser extent for Projection, can be explained by the visibility of the sign in the right-turn case (Figure 9). The Roof, Windscreen, and Grill, however, only became visible after the car had made the turn.

**Figure 9.** Screenshot of the animation in the right-turn approach case with the Wheels eHMI. The yellow markers represent the gaze positions of the participants.

The analyses above showed similar patterns for self-reported clarity and objective performance. To describe the degree of similarity, we averaged the performance scores and clarity scores over all participants per eHMI condition. The results, shown in Figure 10, reveal a strong association (*r* = 0.99). In other words, in the aggregate, clarity and performance appear to be affected by the same mechanism, which we think is the visibility/readability of the display.

**Figure 10.** Overall mean self-reported clarity versus overall mean performance score during car approaches. The performance score is defined as the percentage of time that the spacebar was pressed, from the moment the eHMI turns on until 3 s later.
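The aggregate association in Figure 10 is a plain Pearson correlation over six condition-level means. The sketch below uses invented numbers chosen only to mimic the ordering of conditions; they are not the study's values.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# One point per eHMI condition (illustrative values, not the study's data):
clarity = [8.1, 8.0, 7.9, 5.5, 4.8, 2.0]   # mean clarity ratings (0-10)
performance = [62, 61, 60, 45, 40, 15]     # mean performance scores (%)
r = pearson_r(clarity, performance)
```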

#### *3.3. Eye-Movements for Approaching Cars*

A visual inspection of the participants' eye movements indicated that these were often goal-directed, focusing on future interactions. For example, in Figure 11, the majority of participants looked at the approaching car even before the eHMI had turned on; participants did not necessarily look towards the nearest or most salient car. Furthermore, we found that participants' attention distribution was sometimes dispersed (e.g., when multiple cars were visible) and at other times concentrated (e.g., when a relevant car approached the participant, e.g., Figure 9). Herein, we introduce a new measure to describe the degree of gaze dispersion. We defined dispersion as the mean distance from the participants' overall mean gaze coordinate for that particular animated video clip. A dispersion score of, for example, 200 pixels means that participants' gaze was, on average, 200 pixels away from the mean gaze position of all participants.

**Figure 11.** Screenshot of the animation in an intersection scenario. The yellow markers represent the gaze position of the participants.
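The dispersion measure defined above (mean Euclidean distance from the mean gaze coordinate) can be sketched as follows. For simplicity the sketch computes it for a single set of gaze samples, whereas the paper averages per animated video clip.

```python
def gaze_dispersion(points):
    """Mean Euclidean distance (pixels) of gaze samples from their mean.
    points: list of (x, y) gaze coordinates in screen pixels."""
    n = len(points)
    cx = sum(p[0] for p in points) / n   # mean gaze x
    cy = sum(p[1] for p in points) / n   # mean gaze y
    return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in points) / n

# Four gaze samples, each 100 px from their common centre (960, 540).
samples = [(860, 540), (1060, 540), (960, 440), (960, 640)]
d = gaze_dispersion(samples)   # 100.0 px
```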

The results of the gaze dispersion analysis (Figure 12) show that approaching cars attracted attention, as evidenced by low dispersion (<150 pixels) for the No eHMI condition while the car was approaching (0 to 2 s). The Wheels attracted attention, especially just before the car came to a stop (from 1 to 2 s). The Projection, on the other hand, resulted in diversified attention, as illustrated in Figure 13. The Windscreen yielded a low gaze dispersion when the car was standing still. The eye-movement dispersion was significantly different between the six eHMI conditions, *F*(5,300) = 31.4, *p* < 0.001, η<sub>p</sub><sup>2</sup> = 0.34. The Projection yielded a significantly higher dispersion than all five other conditions. The Wheels yielded a significantly lower dispersion than all conditions, except for Windscreen. The Windscreen yielded a significantly lower dispersion than Roof and Projection.

**Figure 12.** Eye movement dispersion score during car approaches. The average was taken of the nine approaches where the car drove straight on or made a left turn. *t* = 0 s: the eHMI turned on. *t* = 2 s: the car has come to a stop.

**Figure 13.** Screenshot of the animation in a straight approach scenario with the Projection eHMI. The yellow markers represent the gaze positions of the participants. The Projection results in dispersed eye gaze, with some participants looking at the eHMI on the asphalt and other participants looking at the car.

#### *3.4. Performance for Cars Driving off*

So far, we examined only the performance of eHMIs for approaching cars. Another relevant aspect of eHMI evaluation is how participants respond after the eHMI switches off, before the car drives away. Figure 14 shows that all eHMIs resulted in improved performance compared to No eHMI; that is, participants were more likely to release the spacebar before the car drove off. Initially (at *t* = 0 s), participants in the five eHMI conditions had the spacebar pressed, because the eHMI displayed 'Waiting' until that point. It took about 0.2 s for the first participants to release the spacebar after this eHMI message disappeared. Participants in the No eHMI condition started to release the spacebar only after the car drove off (at 1.4 s), see Figure 14.

An analysis of the performance scores (Figure 15) showed a significant difference between the six eHMI conditions, *F*(5,300) = 37.4, *p* < 0.001, η<sub>p</sub><sup>2</sup> = 0.38. The No eHMI condition differed significantly from the five other eHMI conditions; there were no significant differences between Roof, Windscreen, Grill, Projection, and Wheels. In other words, participants responded similarly to the eHMI turning off, regardless of the type of eHMI.

**Figure 14.** Percentage of participants who pressed the spacebar while the car was driving off. The average is taken of nine times driving off. *t* = 0 s: the eHMI turned off. *t* = 1.4 s: the car started to accelerate.

**Figure 15.** Mean performance score per participant for cases where the car drove off. The performance score is defined as the percentage of time that the spacebar was released, from the moment the eHMI turned off until 3 s later. For each participant, the average is taken of nine times driving off.

#### **4. Discussion**

In this study, five eHMI locations, together with a baseline No eHMI condition, were compared in a within-subjects design using a total of 61 participants. The participants viewed animated video clips and were asked to press and hold the spacebar when they thought it was safe to cross, while their eye-movements were recorded using an eye-tracker.

#### *4.1. Performance*

The results showed that the Roof, Windscreen, and Grill-based eHMIs yielded the best performance, defined in terms of the pressing time of the spacebar when it was safe to cross. However, this finding did not hold in all scenarios; the eHMI right above the wheel was found to be the best-performing eHMI when the car approached from a corner. In this specific scenario, the eHMIs on the front (Roof, Windscreen, and Grill) were not visible, and therefore failed to communicate their messages to the pedestrian. Together, our findings suggest that eHMIs should be omnidirectional if they are to be applied in traffic scenarios where cars can approach from multiple directions. Vlakveld et al. [26] showed animations of cars with an omnidirectional eHMI on the roof, whereas drive.ai [27] used multiple displays on the car's exterior. Another solution to ensure visibility from all sides is to use a light emitting diode (LED) strip as in Cefkin et al. [39], or LED patterns on the lateral surfaces of the car [40].

The Projection yielded poor spacebar-pressing performance when the car was approaching. This finding can be explained by the poor visibility of the projection at a far distance due to the shallow viewing angle. We do not mean to suggest that our results generalise to all possible projections. In a virtual reality study, Löcken et al. [31] tested different animations of eHMIs, including a projection which they dubbed F015 (after the name of the concept car presented by Mercedes–Benz USA [33]). Their results showed that the F015 yielded high ratings (5.7 on a scale from 1 to 7) on the User Experience Questionnaire. The concept of Löcken et al. [31] differed from ours, as their projection was highly salient, consisting of a bright green zebra message for the pedestrian. Our findings point to limitations in the use of projections that move with the car, as such a projection may not be clear from a distance. We expect that these limitations will be more severe in real traffic: although technologically feasible (e.g., [41]), powerful lasers may be required to ensure that a projection is visible on the road in daylight. An eHMI on a windscreen may also be technologically challenging to achieve, and may have variable contrast depending on whether the eHMI is mounted on a transparent windscreen or the windscreen is blinded (as in level 5 autonomous vehicles).

For the events where the car was driving away and the eHMI switched from 'Waiting' to a blank display, all five eHMI locations were found to yield equivalent performance. These findings can be explained by the fact that the removal of the message was a salient event, which participants could detect independent of eHMI location or even message content.

Our findings indicate that it is possible to persuade pedestrians to cross, or not to cross, before the car slows down or drives away. In other words, all eHMI locations were shown to evoke a more accurate response compared to the No eHMI condition.

#### *4.2. Eye-Tracking*

The eye-tracking results showed that the Windscreen eHMI yielded a concentrated gaze pattern, which can be explained by the fact that this eHMI is embedded in the centre of the car. This finding is in line with Dey et al. [36], who showed that pedestrians are inclined to look at the windscreen when an oncoming car gets close to the pedestrian. The Wheels eHMI also yielded a concentrated gaze pattern, but only for a brief period of about 1 s before the car came to a full stop. This finding may be explained by the fact that the Wheels eHMI was poorly visible from a distance; when the car came close to the participant, they were inclined to fixate on the eHMI to read its message.

We found that the Projection eHMI yielded a dispersed eye-movement pattern, a finding that can be attributed to participants looking at both the projection and the car itself. These results are consistent with Powelleit et al. [42], who tested a projection in front of the car showing the predicted vehicle trajectory and found that drivers considered such a display distracting. Similarly, we see a risk that a projection on the road may cause distraction, where road users fixate on the projection at the expense of attention towards the car itself, and may therefore miss relevant implicit cues.

Similar results have been found for augmented visual feedback in air traffic control: Eisma et al. [43] found that such feedback helps to achieve better task performance, but also has distraction potential.

#### *4.3. Self-Reports*

An interesting result was that, in the aggregate, self-reported clarity was strongly associated with objective performance, with a correlation of 0.99. This strong correlation may be due to a single underlying factor, such as the legibility of the display. In other words, the Projection and Wheels eHMIs were hard to read from a distance, as a result of which participants pressed the spacebar late and gave a low clarity rating. The strong correlation between subjective and objective performance is promising for those who examine eHMIs using self-reports (e.g., [8]).

#### *4.4. Limitations and Recommendations*

The present study was conducted in rather constrained conditions. We used a computer monitor that offered a physical field of view of 31 deg and a virtual field of view of 80 deg. The 36 videos followed each other in quick succession, and the cars in the videos did not behave according to a realistic traffic flow model. Furthermore, participants were given a straightforward task to press the spacebar when feeling that it was safe to cross.

It would be worthwhile to employ more ecologically-valid methods, such as a virtual reality headset combined with a motion suit [44] or a field test using a Wizard of Oz approach [39]. It remains to be investigated how participants would respond to eHMIs in real traffic, in which situations arise more naturally and in which pedestrians may be in a hurry or lack the concentration to focus on a particular eHMI. We especially recommend testing eHMIs in traffic environments that involve competing visual demands. It is possible that pedestrians in complex traffic rely on peripheral vision without sustained visual attention towards the eHMI [39,45]. Wide fields of view could be achieved using a head-mounted display or surround projections. An advantage of our setup, in which head movement was constrained, is that we were able to measure eye movements with high accuracy.

Our computer monitor had a standard resolution of 1920 × 1080 pixels. The text-based eHMIs may have been hard to read when the virtual car drove at a large distance, especially for near-sighted participants. As discussed above, the Projection eHMI was relatively difficult to perceive just after it had appeared. However, despite the limited display resolution, participants rated the Roof, Windscreen, and Grill eHMIs as clear, with scores of about 8 on a scale from 0 to 10, as shown in Figure 4. Furthermore, our experiment proved highly sensitive in detecting differences between eHMI conditions. To illustrate, 1.5 s after the eHMI turned on, over 70% of the participants pressed the spacebar for the Roof, Windscreen, and Grill eHMIs, compared to only 4% without an eHMI. The limitation of display quality also applies to other simulation environments, such as CAVE simulations and head-mounted displays (e.g., [1]). In real traffic, legibility will be affected by other visual factors, such as direct sunlight, rain, or smog.

Our simulation did not feature sound. In reality, pedestrians may rely on auditory information to establish the state and relative position of oncoming vehicles. Participants in the simulation were not moving through the virtual environment, and the oncoming car decelerated abruptly while not interacting with the participant. These limitations should be addressed in future research.

For the present experiment, we selected an eHMI consisting of a non-commanding text message combined with an icon. We do not suggest that this type of eHMI is optimal in real-life applications. Clamann et al. [14] mounted a 32-inch screen on the front of a vehicle, depicting messages that were legible from about 75 m distance. Such large screens, or even multiple screens (see [27]), may not be desirable from an aesthetic and aerodynamic point of view and will require careful system integration. Because display clarity is an essential factor for performance, we recommend that future research examine highly salient eHMIs, such as a blinking LED strip.

A final limitation is that the present experiment was conducted using young engineering students, who can be expected to have a relatively high spatial ability [46] and perceptual speed [47]. It remains to be investigated whether older people would be able to intuitively understand eHMIs, such as the ones tested in the present study.

#### **5. Conclusions**

In conclusion, eHMIs on the Grill, Windscreen, and Roof were subjectively regarded as the clearest and evoked the highest rate of compliance for approaching cars. A projection-based eHMI has limitations in the form of poor legibility and dispersion of participants' visual attention. Based on our results, we recommend that eHMIs be visible from multiple directions.

**Supplementary Materials:** The following are available online at http://www.mdpi.com/2078-2489/11/1/13/s1, Figure S1. Percentage of participants who pressed the spacebar during the videos of Traffic environment 1; Figure S2. Percentage of participants who pressed the spacebar during the videos of Traffic environment 2; Figure S3. Percentage of participants who pressed the spacebar during the videos of Traffic environment 3; Figure S4. Percentage of participants who pressed the spacebar during the videos of Traffic environment 4; Figure S5. Percentage of participants who pressed the spacebar during the videos of Traffic environment 5; Figure S6. Percentage of participants who pressed the spacebar during the videos of Traffic environment 6. Raw data, videos, and scripts are accessible here: https://www.dropbox.com/sh/egpd8kgk9bs9yee/AABi8sbwAvfbiyVxPhKVkuota?dl=0.

**Author Contributions:** Conceptualization, all authors; Methodology, all authors; Software, all authors; Validation, all authors; Formal analysis, all authors; Investigation, all authors; Resources, J.C.F.d.W.; Data curation, Y.B.E. & J.C.F.d.W.; Writing—original draft preparation, Y.B.E. & J.C.F.d.W.; Writing—review and editing, Y.B.E. & J.C.F.d.W.; Visualization, Y.B.E. & J.C.F.d.W.; Supervision, Y.B.E. & J.C.F.d.W.; Project administration, J.C.F.d.W.; Funding acquisition, J.C.F.d.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by the research program VIDI with grant number TTW 016.Vidi.178.047 (2018–2022; "How should automated vehicles communicate with other road users?"), which is financed by the Netherlands Organisation for Scientific Research (NWO).

**Conflicts of Interest:** The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **References**


#### *Information* **2020**, *11*, 13


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

### **Efficient Paradigm to Measure Street-Crossing Onset Time of Pedestrians in Video-Based Interactions with Vehicles**

#### **Stefanie M. Faas <sup>1,2,\*</sup>, Stefan Mattes <sup>1</sup>, Andrea C. Kao <sup>3</sup> and Martin Baumann <sup>2</sup>**


Received: 28 May 2020; Accepted: 29 June 2020; Published: 11 July 2020

**Abstract:** With self-driving vehicles (SDVs), pedestrians can no longer rely on a human driver. Previous research suggests that pedestrians may benefit from an external Human–Machine Interface (eHMI) displaying information to surrounding traffic participants. This paper introduces a natural methodology to compare eHMI concepts from a pedestrian's viewpoint. To measure eHMI effects on traffic flow, previous video-based studies instructed participants to indicate their crossing decision with intrusive data-collection devices, for example by pressing a button or moving a slider. We developed a quantifiable concept that allows participants to naturally step off a sidewalk to cross the street. Hidden force-sensitive resistor sensors recorded their crossing onset time (COT) in response to real-life videos of approaching vehicles in an immersive crosswalk simulation environment. We validated our method in an initial study of *N* = 34 pedestrians by showing (1) that it is able to detect significant eHMI effects on COT as well as on subjective measures of perceived safety and user experience. The approach is further validated by (2) replicating the findings of a test track study and (3) participants' reports that it felt natural to take a step forward to indicate their street crossing decision. We discuss the benefits and limitations of our method with regard to related approaches.

**Keywords:** pedestrians; self-driving vehicles; automated driving; external human-machine interface; test methods; evaluation; user studies

#### **1. Introduction**

Highly (SAE Level 4) and fully (SAE Level 5) automated vehicles no longer require a driver [1]. With self-driving vehicles (SDVs) and human road users sharing the road, a "mixed traffic" transition period will emerge, demanding pedestrians interact with both SDVs and conventional vehicles (CVs) [2]. The related complexity could negatively affect pedestrian safety [2]. In today's traffic, pedestrians rely on a set of elaborate communication strategies when a CV approaches to decide whether it is safe to cross, including vehicle speed [3–5], distance of the vehicle [6], and eye contact with the driver [5,7]. While pedestrians can rely on traffic lights at signalized crossings, right of way can be ambiguous at unsignalized crossings, where human drivers frequently fail to yield to pedestrians. As a consequence, pedestrians are more risk-averse and seek more eye contact with the driver at unsignalized crossings [8,9]. As a substitute to communicating with a human driver, equipping SDVs with an external Human–Machine Interface (eHMI) has been proposed, to provide information to surrounding traffic participants [10]. An eHMI may be particularly important to reduce pedestrians' uncertainty at ambiguous crossings [11]. Preceding studies showed that pedestrians feel uncomfortable when encountering a driverless vehicle [12–14]. Limiting the scope to pedestrians' crossing decisions, previous
research shows that the presence of an eHMI has positive effects on perceived safety [12,13,15–18], calmness [18], trust [12], comfort [19,20], user experience [12], and crossing decisions [13,17,21–23]. It can be argued that the necessity of an eHMI has been demonstrated, but the type of information and the means of conveying this information need to be further examined to reach the goal of a standardized eHMI. While subjective measures such as pedestrians' perceived safety can be assessed with a questionnaire after each trial (e.g., [12,13,24]), the assessment of eHMIs' effect on traffic flow poses a challenge. Traffic flow is an objective measure that can be quantified. The sooner a pedestrian initiates street crossing, the less time s/he has to wait on the curb and the less time a quickly approaching vehicle has to remain stopped, resulting in faster traffic for both the pedestrian and the approaching vehicle. In addition to the improved time efficiency associated with a smooth flow of traffic, there are also environmental benefits such as decreased emissions and fuel consumption [25].

In the following, we give an overview of previously applied methods to measure pedestrians' street-crossing decisions. Then, we explain the motivation for our method.

#### *1.1. Previously Applied Research Methods to Capture Pedestrian Crossing Decisions*

In the following, we provide an overview of methodologies applied in preceding studies to capture pedestrians' street crossing decisions, discussing their benefits and limitations.

One approach is to capture the decision-making process of street crossing as a function of the distance between the pedestrian and the approaching vehicle (e.g., [26–28]). For example, in a field study by Walker et al. [26], participants were instructed to express their feeling of safety to cross the road at any moment, from 0 ("not at all willing to cross") to 100 ("totally willing to cross"), on an input device held in their hand while a vehicle approached. While this approach is promising for forming a better understanding of the underlying factors influencing a street crossing decision, we believe that it is not suitable for capturing traffic flow. Pedestrians' street crossing is an actual behavior with a binary character: either a pedestrian is waiting on the curb or crossing the street. Thus, traffic flow cannot be measured on a continuous scale.

A further approach is to measure the binary crossing decision (yes/no), i.e., whether pedestrians would be willing to cross the street in front of an approaching vehicle (e.g., [15,24,29,30]). For example, Song et al. [15] conducted an online survey in which pedestrians watched videos of a vehicle approaching from an ego-perspective and had to decide after each trial whether they wanted to cross (pressing the space key) or let the vehicle pass (not pressing the space key). We argue that this approach does not give any indication regarding traffic flow, since it does not capture the point in time at which participants would initiate street crossing.

Another approach is to compute crossing onset time (COT) by capturing the time at which a pedestrian decides to cross in relation to the vehicle's action. We argue that this is the only approach that allows conclusions about eHMI effects on traffic flow. Regarding COT, the preceding methods can be divided into unnatural approaches, which require participants to indicate their decision to cross in an explicit manner by pressing a button [11,17,22,23] or raising their hand [31], and natural approaches, which allow participants to indicate their decision to cross with the actual behavior of taking a step forward [12,21,32,33]. We believe that methods requiring participants to imagine how they would act or feel make their decision explicit, which might limit their validity. We argue that, in terms of ecological validity, the natural behavior of stepping forward constitutes the best approach to measure COT. For example, in a test track study by Faas et al. [12], pedestrians watched an approaching vehicle coming to a stop at an intersection and had to cross the street as soon as they felt safe to do so. The vehicle encounters were video-recorded for later analysis to estimate the time gap between the vehicle coming to a stop and the pedestrians' COT. Street crossing can be seen as an unreflective skillful action [34]. When crossing a street, pedestrians often act adequately, yet without deliberation. Street-crossing decisions are not guided by explicit reasoning, but constitute a form of embodied intelligence or cognition. Bodily processes or so-called "gut feelings" might be of enormous importance for street crossing decision making [35]. It can be argued that pedestrians make their decision to cross
unconsciously as soon as they feel that it is safe to cross, which is usually as soon as they are sure that the vehicle intends to yield to them. Their embodied nature makes individuals' street crossing decisions sensitive to aspects of the situation [34], such as the presence of a visible driver or an eHMI. However, to date, only a few test track studies [12,33] and VR studies [21,32] have assessed COT by allowing pedestrians to take a step forward.

#### *1.2. Proposed Concept to Capture Street Crossing Onset Time (COT)*

In this paper, we propose a parsimonious, safe, and reproducible paradigm for video-based lab studies that can capture COT in a natural way to test the efficacy of eHMI concepts for SDV–pedestrian interaction. We present a method in which participants indicate their COT by actually stepping off a "sidewalk" onto a "crosswalk". We conducted the experiment in a lab environment where participants were immersed using two large TV screens providing a panoramic street view. With adhesive tape, we sketched a sidewalk and a crosswalk onto the floor. Under the sidewalk, we hid two force-sensitive resistor sensors to capture COT. When the participant stepped onto the sidewalk, the videos were triggered and the COT timer was started. The COT was recorded when the participant stepped off the sidewalk to enter the crosswalk; the sensor logs made data analysis time-efficient.

For the experiment, we contrasted three eHMI variants (no eHMI, status eHMI, status+intent eHMI) to address the research question of which information an eHMI should communicate. We used two light-based eHMI concepts adapted from Faas et al. [12]. The status eHMI is a steady blue-green light indicating the automated driving mode, as recommended by the SAE [36]. For the status+intent eHMI, an additional slowly flashing blue-green light (adapted from [37]) indicated the SDV's intent to yield as soon as the vehicle was braking, thus resembling the frontal brake light concept of previous eHMI studies [13,18,24,38]. We put the encounters with a driverless SDV in relation to encounters with a CV steered by a driver. Data were collected at three measuring points to study the stability of eHMI effects. The results of the study are published in Faas et al. [39]. The study showed that pedestrians benefit from an eHMI communicating the SDV's status, and that additionally communicating the SDV's intent adds further value. These eHMI effects lasted (acceptance, user experience) or even increased (COT, perceived safety, trust, learnability, reliance) with time.

The present paper focuses on the description and validation of the applied research method. For the present paper, we specifically re-evaluated the data of the first measuring time of the longitudinal study of Faas et al. [39], since we argue that our method is able to compare the efficacy of eHMI variants with one measuring time only. Furthermore, the present paper includes additional procedures that were not reported in Faas et al. [39] to validate the applied research method. To this end, we compared participants' responses in the lab study of Faas et al. [39] with participants' responses in the test track study of Faas et al. [12] to investigate potential differences attributed to the applied experimental methodology. Additionally, we analyzed participants' self-reported naturalism in the study setup. In this paper, we provide a detailed description of our method to allow others to adopt it. We validate our method by showing that it is able to detect significant eHMI effects on COT (and thus traffic flow) and subjective measures of perceived safety and user experience. Our approach is further validated by replicating findings of a test track study. Finally, participants reported that it felt natural to take a step forward to indicate that they would cross the street. We conclude that our paradigm allows relative comparisons of eHMI variants.

#### **2. Materials and Methods**

#### *2.1. Participants*

Thirty-four pedestrians (19 male, 15 female) in the age range of 22 to 69 years (*M* = 41.5, *SD* = 15.8 years) took part in the study. A third-party agency recruited the participants. For screening, potential participants specified which modes of transportation they use during a typical work week by distributing a total of 100% among driving, public transit, biking, walking, and other. Those who allocated at least 20% to walking received an invitation to participate in the study. All participants were living in the San Francisco Bay Area, CA, USA. All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the RD Ethical Clearing Committee of Daimler AG.

#### *2.2. Independent Variable*

Figure 1 gives an overview of the study procedure, including the independent variable, that is, vehicle type.

**Figure 1.** Procedure. The top row represents the study flow and the independent variable (IV). The driverless self-driving vehicle (SDV) (automated mode) is equipped with no eHMI, status eHMI or status+intent eHMI (test conditions 1, 2, 3). Both the SDV steered by a driver (conventional mode) and the conventional vehicle (CV) are either yielding (test conditions 4, 5) or non-yielding (filler test conditions 4b, 5b). In a randomized order, each participant experienced all seven test conditions once for habituation (wave 1) and once for data collection (wave 2). The bottom row represents the dependent variables (DVs) assessed in wave 2. The crossing onset time (COT) data were recorded by an Arduino Uno through the logs of two force-sensitive resistor sensors. For the subjective measures, participants filled in questionnaires after each trial. While perceived safety was measured for all yielding vehicle trials, we applied the user experience scales only after trials with a driverless SDV. Reproduced with permission from Faas, Kao and Baumann, A longitudinal video study on communicating status and intent for self-driving vehicle–pedestrian interaction, *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '20)*; published by ACM, 2020, doi: 10.1145/3313831.3376484.

Three eHMI test conditions were contrasted in a driverless, i.e., self-driving, setup (Figure 2):


The driverless SDV was shown to always yield to pedestrians. We chose a driverless setup to resemble a future automated vehicle on its way to pick up a passenger. Furthermore, to realize a mixed traffic environment, we incorporated encounters with vehicles steered by a visible driver. The human-driven vehicles (Figure 3) were either yielding (test conditions 4, 5) or non-yielding (filler test conditions 4b, 5b):


**Figure 2.** The study compared three eHMI test conditions within a driverless self-driving vehicle (SDV). This figure shows the status+intent eHMI (test condition 3). Two steady lights at the fake sensors indicate the automated status; a slowly flashing light at the windshield indicates its intent to yield to the pedestrian. For the status eHMI (test condition 2), the two steady lights were engaged. Without an eHMI (test condition 1), no lights were engaged. Reproduced with permission from Faas, Kao and Baumann, A longitudinal video study on communicating status and intent for self-driving vehicle–pedestrian interaction, *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '20)*; published by ACM, 2020, doi: 10.1145/3313831.3376484.

**Figure 3.** To provide a mixed-traffic environment, a visible driver steered the self-driving vehicle (SDV; top row) or a conventional vehicle (CV; bottom row). The vehicles were either (**a**) yielding to let the pedestrian cross first (test conditions 4, 5); or (**b**) non-yielding (test conditions 4b, 5b), so that the pedestrian had to wait for the vehicle to go first and could safely cross the empty street afterwards.

This study was designed to examine participants' responses when the car was yielding. Thus, the responses to the non-yielding vehicles (test conditions 4b, 5b) were not analyzed. The non-yielding vehicles were included to ensure that participants would not habituate to all cars stopping for them, which might lower their attention and bias their COT. We deliberately chose not to include any non-yielding driverless SDV encounters. While it can be argued that human drivers differ in their driving style, vehicle automation is programmed to adhere to traffic laws, thus always yielding at a pedestrian crossing.

#### *2.3. Materials and Equipment*

The experiment took place at the lab facilities of Mercedes-Benz Research and Development North America in Sunnyvale, CA, USA. We immersed participants with two large TV screens (25.5 inches (width) by 44 inches (length)) displaying the real-life video clips. The TV screens were set up at an angle of 60 degrees to create a panoramic view. With adhesive tape and a mat, we sketched a "sidewalk" and a "crosswalk" onto the floor (Figure 4). Under the "sidewalk", we fixed two force-sensitive resistor sensors with the dimensions 44.45 × 44.45 mm (1.75 × 1.75 in). On the "sidewalk", we sketched two footprints at the same level as the force-sensitive resistor sensors. An Arduino Uno analog-to-digital converter was used to read the variable resistance of the force-sensitive resistor sensors. A 1 kΩ resistor was used to create a voltage divider. The software Arduino IDE (version 1.8.9) was used to program the data logging. A timer was added to display the elapsed time. When participants stepped onto the footprints (i.e., applied force to both sensors), the COT timer started and the video clips were triggered, starting with a three-second countdown. To provoke natural behavior, the participants' task was to cross the street when they felt safe to do so by entering the "crosswalk". When participants stepped off the "sidewalk" (i.e., released force from either sensor), the COT timer stopped.
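The trigger logic described above (timer starts when both sensors are loaded, stops when either is released) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the ADC threshold, sampling period, and function names are our own assumptions.

```python
# Hypothetical sketch of the force-sensitive resistor (FSR) trigger logic:
# each FSR sits in a voltage divider with a 1 kOhm resistor; pressing the
# sensor lowers its resistance and raises the ADC reading.

ADC_MAX = 1023       # Arduino Uno 10-bit ADC range
THRESHOLD = 300      # ADC counts treated as "foot on sensor" (assumed value)

def is_loaded(adc_reading: int) -> bool:
    """A loaded FSR yields a divider voltage above the assumed threshold."""
    return adc_reading >= THRESHOLD

def crossing_onset_time(samples, sample_period_s=0.01):
    """samples: list of (left_adc, right_adc) tuples at a fixed rate.
    Returns seconds from both-feet-on to first-foot-off, or None."""
    start = None
    for i, (left, right) in enumerate(samples):
        both_on = is_loaded(left) and is_loaded(right)
        if start is None:
            if both_on:
                start = i                        # timer starts: on "sidewalk"
        elif not both_on:
            return (i - start) * sample_period_s  # timer stops: step off
    return None

# Example: both sensors loaded for 500 samples (5 s), then the left foot lifts.
trace = [(600, 620)] * 500 + [(50, 610)] * 10
print(crossing_onset_time(trace))  # ~5.0 s
```

The same two-state logic (armed once both sensors are loaded, fired as soon as either drops below threshold) is what makes the raw timer directly usable for COT analysis.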

**Figure 4.** Study setup. Reproduced with permission from Faas, Kao and Baumann, A longitudinal video study on communicating status and intent for self-driving vehicle–pedestrian interaction, *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '20)*; published by ACM, 2020, doi: 10.1145/3313831.3376484.

For the real-life videos with the SDV, we created a Wizard-of-Oz setup [40]. On the roof of a silver Mercedes-Benz S-Class (Series W222), we mounted fake Lidar sensors similar to those of SDVs currently test-driving on public roads (e.g., [41,42]) as a reminder of the vehicle's ability to drive automated (see [43]). On the fake sensors, we attached LED light stripes to simulate the eHMIs. To create the deception of a driverless vehicle (test conditions 1, 2, 3), the driver controlling the vehicle wore a seat costume (adapted from [14]). For the videos in conventional driving mode (test conditions 4, 4b), the driver steering the vehicle was visible. For the videos with the CV and a visible driver (test conditions 5, 5b), we used three silver sedan models, namely a Chevrolet Impala, a Dodge Charger, and a Kia Optima. The occurrence of these models was randomized. All videos were cropped to a
length of 15 s. Five observers who were not associated with the study checked the videos to ensure that they all displayed the same driving behavior.

#### *2.4. Real-World Video Clips*

#### 2.4.1. Real-World Crossing Scenario

For the traffic scenario, we chose an intersection that requires pedestrians to cross an expressway exit lane while a vehicle approaches. The crossing has no traffic lights, but the request "YIELD" is painted on the street. In a preceding workshop, this traffic scenario was identified as ambiguous for pedestrians. Workshop participants reported that, while the law designates priority to pedestrians, the norm is that some approaching vehicles do not stop. In ambiguous traffic scenarios, communication strategies with the driver become especially prominent [9].

The video clips were recorded on a sunny day on a public highway. The camera perspective was from the viewpoint of a pedestrian standing on the sidewalk waiting to cross the road (see Figures 2 and 3). Specifically, the approaching vehicle was exiting Central Expressway to enter North Mary Avenue in Sunnyvale, CA, USA. Figure 5 shows the traffic scenario from a bird's eye-view.

**Figure 5.** Traffic scenario from a bird's-eye view. The crossing scenario at the exit lane is framed in black. Copyright: Imagery ©2020 Google, Map data ©2020.

#### 2.4.2. Video Flow

The experiment employed seven test conditions with yielding (test conditions 1, 2, 3, 4, 5) and non-yielding (test conditions 4b, 5b) vehicles in a within-subjects design. Test conditions were randomized according to a Latin square. Table 1 shows an overview of the video flow. The left TV screen showed the street with the approaching vehicles, and the right screen showed the crosswalk. To allow participants time to focus their attention back on the TV screens, each test condition started with a 3 s countdown on the left screen. Then, the video of the corresponding test condition was triggered. In each video, a vehicle approaches at a constant speed of 25 mph.
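The Latin-square ordering mentioned above can be illustrated with a short sketch. A cyclic Latin square guarantees that each of the seven test conditions occupies every serial position exactly once across seven order lists; this is illustrative code under that assumption, not the randomization procedure actually used in the study.

```python
# Sketch: a cyclic Latin square over the paper's seven condition labels,
# so each condition appears once in every serial position across rows.

conditions = ["1", "2", "3", "4", "4b", "5", "5b"]

def latin_square(items):
    """Row r is the item list rotated by r positions (cyclic Latin square)."""
    n = len(items)
    return [[items[(r + c) % n] for c in range(n)] for r in range(n)]

square = latin_square(conditions)
for row in square:
    print(row)

# Each column also contains every condition exactly once:
n = len(conditions)
assert all(len({square[r][c] for r in range(n)}) == n for c in range(n))
```

Assigning each participant one row balances serial-position effects across conditions.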


#### **Table 1.** Participants' task and video flow.

For the yielding videos (test conditions 1, 2, 3, 4, 5), the vehicle approached at a constant speed for 3 s, decelerated over 8 s to come to a stop at the intersection, and waited for the pedestrian to cross for 4 s. After participants stepped off the "sidewalk", we provided visual feedback on their crossing decision through a street crossing video from an ego perspective on the right screen. On the left screen, the vehicle was waiting for the pedestrian to cross.

For the non-yielding videos (test conditions 4b, 5b), the vehicle slightly decelerated to make a right turn, but did not yield to the pedestrian. If participants correctly waited for the car to pass before entering the street, a video was triggered on the right screen showing a street crossing from an ego perspective, while on the left screen the road was empty. If participants entered the crosswalk while the vehicle was still approaching, a red screen with the message "not safe to cross!" (left screen) and a video of a passing car (right screen) were triggered. In this case, the test condition was repeated.

#### *2.5. Procedure and Participants' Task*

Prior to the experiment, participants provided written informed consent and completed a demographic questionnaire. Then, participants were introduced to the definition of high driving automation (SAE Level 4). Participants were told that the SDV they would encounter *"has both an automated and a manual driving mode. The vehicle can thus either be self-driving or be controlled manually by a driver."* Next, the three eHMI concepts were explained to participants. Subsequently, participants' understanding of the eHMI concepts was tested by asking them *"What does the light signal indicate?"*

Next, the experimenter familiarized participants with the study setup by walking through the participants' task. Participants were shown an example scenario with the status+intent eHMI (test condition 3). First, they were asked to imagine that the mat was a "sidewalk".

Then, they were instructed as follows: "The next slide lets you know that at this time, you can step on the sidewalk to begin the scenario. When you step on the sidewalk, please make sure your feet are aligned with the footprints. Once both feet are on these footprints, the scenario will begin." Participants were told that in each scenario a vehicle would be approaching, but not all vehicles were going to yield. The participants' task was "to safely cross the road at an intersection as a pedestrian while different vehicles approach. As soon as you feel safe to cross, please do so. You must cross for all scenarios. To cross, just step off the sidewalk as if you're going to enter the crosswalk." Thus, with each trial, participants indicated their COT by stepping off the "sidewalk" to enter the "crosswalk" (see Table 1). The field of view was panoramic in that pedestrians had to turn their head to the left to observe the approaching vehicle and step forward to initiate street crossing.

Subsequently, the room lights were dimmed to provide better contrast so that participants could see the contents of the TV screens clearly. Participants encountered two waves of seven trials each, with vehicles yielding to the pedestrian in five trials (test conditions 1, 2, 3, 4, 5) and non-yielding vehicles in two trials (test conditions 4b, 5b). Participants experienced one wave for habituation. After habituation, the second wave followed for data acquisition. We assessed participants' COTs and subjective measures for all yielding vehicle trials. The crossing onset data were recorded by an Arduino Uno. After each trial, participants filled in a questionnaire to indicate subjective measures of perceived safety and user experience (see Figure 1).

After all trials, participants were asked to rate the naturalism of our paradigm. We informed participants that the encountered vehicle had not been driving automated at any time. Total testing time was about 30 min per participant.

#### *2.6. Dependent Variables*

In this paper, we report the following objective measure:

• Crossing Onset Time (COT): After each yielding vehicle trial (test conditions 1, 2, 3, 4, 5), we determined COT. COT indicates the time in seconds between the vehicle yielding and the pedestrian stepping off the "sidewalk". Hence, to calculate the COT, we subtracted the time between the pedestrian entering the "sidewalk" and the vehicle yielding (3 s countdown + 3 s of the vehicle approaching at constant speed). We used COT as an index of traffic flow. Shorter times indicate an earlier crossing decision. The earlier pedestrians cross when it is safe to do so, the more efficiently the traffic flows. We excluded extreme cases from data analysis, defined as more than three times the interquartile range (IQR) beyond the upper or lower quartile (2 values of *N* = 1 participant excluded).
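The COT computation and the 3 × IQR exclusion rule can be sketched as follows. The raw-timer values are invented for illustration, and the quartile convention (median of halves) is our assumption, as the paper does not report which definition was used.

```python
# Sketch of the COT derivation and extreme-value rule described above.
# Raw timer = time from stepping onto the "sidewalk" to stepping off;
# the vehicle starts yielding 6 s in (3 s countdown + 3 s constant approach).

LEAD_IN_S = 3.0 + 3.0   # countdown + constant-speed approach

def cot(raw_timer_s: float) -> float:
    return raw_timer_s - LEAD_IN_S

def quartiles(values):
    """Median-of-halves quartiles (one common convention of several)."""
    v = sorted(values)
    n = len(v)
    def median(x):
        m = len(x) // 2
        return x[m] if len(x) % 2 else (x[m - 1] + x[m]) / 2
    return median(v[: n // 2]), median(v[(n + 1) // 2 :])

def exclude_extremes(values):
    """Drop values more than 3 x IQR beyond the quartiles, as in the text."""
    q1, q3 = quartiles(values)
    iqr = q3 - q1
    lo, hi = q1 - 3 * iqr, q3 + 3 * iqr
    return [x for x in values if lo <= x <= hi]

raw = [12.9, 13.2, 13.3, 13.5, 13.6, 13.8, 14.1, 25.0]  # invented raw timers
cots = [cot(t) for t in raw]
print([round(x, 1) for x in exclude_extremes(cots)])
# [6.9, 7.2, 7.3, 7.5, 7.6, 7.8, 8.1] -- the 19.0 s case is dropped
```

With these numbers, the quartiles of the COTs are 7.25 and 7.95 s (IQR 0.7 s), so anything above 7.95 + 2.1 = 10.05 s counts as extreme.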

Furthermore, we report the following subjective measures, all measured on a scale from −3 (very negative) to +3 (very positive):


#### *2.7. Data Analysis*

We used repeated measures ANOVAs to test the effect of vehicle type (test conditions 1, 2, 3, 4, 5) on COT and perceived safety. As an additional analysis, we performed cluster analyses to categorize the participating pedestrians into groups according to the COT obtained for each yielding test condition. To classify pedestrians into groups, we used Ward's method in combination with squared Euclidean distances (see [46,47]). As a hierarchical procedure, Ward's method successively merges cases into clusters such that the increase in within-cluster variance is as small as possible (see [46,47]).
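The merge criterion of Ward's method can be sketched in a few lines: at each step, merge the pair of clusters whose union least increases the error sum of squares, which for squared Euclidean distances reduces to a weighted squared distance between centroids. The code below is an illustrative pure-Python version with invented COT profiles, not the statistical software used in the study.

```python
# Minimal sketch of Ward's agglomerative clustering on per-participant
# COT profiles (one vector of per-condition COTs per participant).

def ward_clusters(profiles, k):
    """Agglomerate `profiles` (equal-length lists) into k clusters."""
    clusters = [[i] for i in range(len(profiles))]

    def centroid(cluster):
        dims = len(profiles[0])
        return [sum(profiles[i][d] for i in cluster) / len(cluster)
                for d in range(dims)]

    def merge_cost(a, b):
        # Ward's increase in error sum of squares for merging a and b:
        # |a||b| / (|a|+|b|) * squared distance between centroids.
        ca, cb = centroid(a), centroid(b)
        dist2 = sum((x - y) ** 2 for x, y in zip(ca, cb))
        return len(a) * len(b) / (len(a) + len(b)) * dist2

    while len(clusters) > k:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda p: merge_cost(clusters[p[0]], clusters[p[1]]))
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Two obvious groups: "early crossers" (participants 0, 1) and "late" (2, 3).
profiles = [[6.5, 6.8, 7.0], [6.6, 6.9, 7.1],
            [9.5, 9.8, 10.0], [9.4, 9.9, 10.1]]
print(sorted(sorted(c) for c in ward_clusters(profiles, 2)))  # [[0, 1], [2, 3]]
```

Standard implementations (e.g., SciPy's `linkage(method="ward")`) use an equivalent recurrence rather than recomputing centroids, but the merge criterion is the same.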

Next, we used repeated measures ANOVAs to test the effects of eHMI type (test conditions 1, 2, 3) on UX qualities (HQ and PQ).

Finally, we compared the responses to the PQ scale of our participants and of the participants in the test track study of Faas et al. [12] to investigate potential differences attributable to the applied experimental methodology. For this purpose, we used the data of the no eHMI, status eHMI, and status+intent eHMI test conditions, which were assessed with *N* = 30 participants in an intersection traffic scenario on a test track in Immendingen, Germany. We believe that this comparison is valuable, although the experiments differ regarding participants' nationality (U.S. vs. German) and traffic scenario (exit lane vs. four-way intersection). The participants of this lab study and the test track study did not differ regarding age, *t*(57) = −0.37, *p* = 0.714, or gender, χ²(1) = 0.04, *p* = 0.838. We chose the PQ scale for this comparison, since it is the only standardized questionnaire that was applied in both studies. We used two-sample *t*-tests to investigate whether pedestrians' subjective PQ ratings of the three eHMI variants (no eHMI, status eHMI, status+intent eHMI) differ between experimental methodologies (lab study vs. test track study).

For all ANOVAs, the data were checked for sphericity using Mauchly's test and, where sphericity was violated, Greenhouse–Geisser and Huynh–Feldt corrections were applied (as recommended by [48]). Where needed, we used Bonferroni-corrected post-hoc *t*-tests.
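A minimal sketch of such Bonferroni-corrected post-hoc tests, assuming within-subject data (the condition names and values are illustrative; `scipy.stats.ttest_rel` performs the paired *t*-test):

```python
from itertools import combinations
from scipy import stats

def bonferroni_posthoc(data_by_condition):
    """Pairwise paired t-tests; each p-value is multiplied by the number
    of comparisons (Bonferroni) and capped at 1.0."""
    pairs = list(combinations(data_by_condition, 2))
    results = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(data_by_condition[a], data_by_condition[b])
        results[(a, b)] = (t, min(1.0, p * len(pairs)))
    return results
```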

#### **3. Results**

#### *3.1. Crossing Onset Time (COT)*

On COT, the one-way repeated measures ANOVA revealed a significant effect of vehicle, *F*(4, 128) = 12.47, *p* < 0.001, η*p*² = 0.28. Figure 6 shows the mean values. Post-hoc *t*-tests revealed that participants started crossing earlier if the driverless SDV (automated mode) was equipped with a status+intent eHMI (*M* = 6.74, *SD* = 2.17) compared to no eHMI (*M* = 7.86, *SD* = 1.40), *p* = 0.003, 95% CI [0.30, 1.95]. However, there was no improvement from no eHMI to the status eHMI (*M* = 7.46, *SD* = 1.35), *p* = 0.439, or from the status eHMI to the status+intent eHMI, *p* = 0.117. Regarding human-driven vehicles, there was no difference in COT between an SDV steered by a driver (*M* = 8.14, *SD* = 1.34) and a CV steered by a driver (*M* = 8.08, *SD* = 1.35), *p* = 1.000. When comparing driverless vehicles with vehicles steered by a driver, participants initiated street crossing at the same time for encounters with a driverless SDV without eHMI as with an SDV steered by a driver, *p* = 1.000, or a CV steered by a driver, *p* = 1.000. However, if the driverless SDV was equipped with a status eHMI, *p* = 0.005, 95% CI [0.15, 1.21], or a status+intent eHMI, *p* < 0.001, 95% CI [0.62, 2.19], participants initiated crossing earlier than when encountering an SDV steered by a driver. Analogously, if the driverless SDV was equipped with a status eHMI, *p* = 0.068, 95% CI [−0.03, 1.27], or a status+intent eHMI, *p* < 0.001, 95% CI [0.51, 2.18], participants (tended to) initiate crossing earlier than when encountering a CV steered by a driver.

To account for pedestrians' individual crossing strategies [12], we performed cluster analyses, classifying pedestrians into groups according to the COT obtained in each yielding test condition. A dendrogram graphically illustrates the formation of clusters at the individual fusion stages (Figure 7a). To determine the number of clusters into which pedestrians can be meaningfully grouped, we computed a structogram (Figure 7b). The structogram graphically illustrates that the fourth cluster contributes considerably less to the variance than the first three clusters. Because of the considerable drop in the Sum of Squared Errors (ΔSSE), it seems reasonable to assume a solution with three clusters. Figure 8 shows the individual COT for each participant, sorted by the three clusters derived from the cluster analyses. Visual inspection suggests the following description of the three clusters: The first cluster (*N* = 7) includes early crossers who cross before the vehicle comes to a stop and are strongly influenced by the test conditions, particularly by the presence of a status+intent eHMI. The second cluster (*N* = 20) describes intermediate crossers who initiate crossing at about the same time as the vehicle comes to a stop. They are slightly influenced by the test conditions and constitute the biggest cluster. The third cluster (*N* = 7) includes late crossers who wait for the vehicle to come to a stop before crossing the street. These late crossers are slightly influenced by the test conditions.

In summary, pedestrians initiated street-crossing soonest with a status+intent eHMI. Compared to a CV or an SDV steered by a driver, pedestrians initiated crossing at the same time if the driverless SDV was not equipped with an eHMI, and sooner if it was equipped with an eHMI displaying the SDV's status and intent (see also Faas et al. [39]). The significant effect of the status+intent eHMI seems to be carried by a cluster of pedestrians who are characterized by a tendency to cross the street early, also with human-driven vehicles.

**Figure 6.** Mean crossing onset time (COT) for all yielding test conditions. Error bars: ±1 SE.

**Figure 7.** Cluster analyses. (**a**) Dendrogram; the Sum of Squared Errors (ΔSSE) is plotted on the x-axis. (**b**) Structogram for numbers of clusters ranging from 1 to 7. The structogram indicates that splitting the 3 clusters into 4 clusters decreases the ΔSSE values only marginally. Thus, three clusters seem appropriate for this scheme.

**Figure 8.** Individual crossing onset time (COT) per participant for all yielding test conditions. Participants were sorted according to the three clusters derived from cluster analyses: (1) early crossers who are strongly influenced by the presence of the eHMIs, particularly the status+intent eHMI (*N* = 7) as well as (2) intermediate (*N* = 20) and (3) late crossers (*N* = 7), who are both slightly influenced by the eHMIs. Within each cluster, participants were sorted according to their average COT over all test conditions (e.g., within cluster 1, SP24 crosses the earliest, SP5 the latest over all yielding test conditions).

#### *3.2. Perceived Safety*

On perceived safety, the one-way repeated measures ANOVA found a significant effect of vehicle, *F*(2.59, 85.56) = 8.65, *p* < 0.001, η*p*² = 0.21. Figure 9 shows the results. Pedestrians felt significantly safer if the driverless SDV (automated mode) was equipped with a status eHMI (*M* = 0.31, *SD* = 1.67) than if it had no eHMI (*M* = −0.43, *SD* = 1.73), *p* = 0.011, 95% CI [0.12, 1.37]. With a status+intent eHMI (*M* = 1.17, *SD* = 1.32), pedestrians felt safer than with a status eHMI, *p* = 0.026, 95% CI [0.07, 1.65], and thus also safer than without eHMI, *p* < 0.001, 95% CI [0.69, 2.51], yielding the following pattern: status+intent eHMI > status eHMI > no eHMI. Regarding human-driven vehicles, participants felt equally safe with an SDV steered by a driver (*M* = 1.06, *SD* = 1.46) and a CV steered by a driver (*M* = 1.06, *SD* = 1.51), *p* = 1.000. Compared to an SDV or a CV steered by a driver, participants felt less safe encountering a driverless SDV without eHMI, all *p*s < 0.01. However, if the driverless SDV was equipped with a status eHMI or a status+intent eHMI, participants felt as safe as with a human-driven vehicle, all *p*s > 0.05.

**Figure 9.** Mean perceived safety scores for all yielding test conditions. Error bars: ±1 SE.

In summary, pedestrians felt safest with a status+intent eHMI. With either eHMI, pedestrians felt as safe as with human-driven vehicles. However, if the driverless SDV was not equipped with an eHMI, pedestrians felt less safe than with human-driven vehicles (see also Faas et al. [39]).

#### *3.3. User Experience*

The one-way repeated measures ANOVAs found a significant effect of eHMI on PQ, *F*(2, 66) = 54.27, *p* < 0.001, η*p*² = 0.62, and on HQ, *F*(1.60, 52.84) = 22.20, *p* < 0.001, η*p*² = 0.40. Figure 10 shows the results.
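As a consistency check, partial eta squared can be recovered from an *F* statistic and its degrees of freedom via η*p*² = *F*·df1 / (*F*·df1 + df2); plugging in the values reported for PQ and HQ reproduces the reported effect sizes:

```python
def partial_eta_squared(F, df1, df2):
    """Partial eta squared recovered from F and its degrees of freedom."""
    return (F * df1) / (F * df1 + df2)

# Values reported above: PQ F(2, 66) = 54.27, HQ F(1.60, 52.84) = 22.20
eta_pq = partial_eta_squared(54.27, 2, 66)        # ~0.62
eta_hq = partial_eta_squared(22.20, 1.60, 52.84)  # ~0.40
```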

Pedestrians rate PQ significantly higher if the driverless SDV (automated mode) is equipped with the status eHMI (*M* = 1.03, *SD* = 1.37) than without eHMI (*M* = −0.49, *SD* = 1.30), *p* < 0.001, 95% CI [0.98, 2.06]. With a status+intent eHMI (*M* = 1.93, *SD* = 0.86), pedestrians rate PQ higher than with a status eHMI, *p* = 0.001, 95% CI [0.32, 1.48], and, thus, higher than without eHMI, *p* < 0.001, 95% CI [1.77, 3.07], revealing the following pattern: status+intent eHMI > status eHMI > no eHMI.

Accordingly, pedestrians rate HQ significantly higher if the driverless SDV (automated mode) is equipped with the status eHMI (*M* = 1.43, *SD* = 1.34) than without eHMI (*M* = 0.76, *SD* = 1.67), *p* = 0.003, 95% CI [0.21, 1.13]. With a status+intent eHMI (*M* = 2.11, *SD* = 0.86), pedestrians rate HQ higher than with a status eHMI, *p* = 0.001, 95% CI [0.27, 1.09], and, thus, also higher than without eHMI, *p* < 0.001, 95% CI [0.71, 1.98], leading to the same pattern: status+intent eHMI > status eHMI > no eHMI.

Based on Hinderks et al. [49], the UX scores can be interpreted as bad (PQ) and below average (HQ) for no eHMI, below average (PQ) and good (HQ) for the status eHMI, and excellent (PQ, HQ) for the status+intent eHMI (see also Faas et al. [39]).

**Figure 10.** Mean UX scores for all driverless self-driving vehicle (SDV; automated mode) test conditions, as shown by the subscales: (**a**) Pragmatic Quality (PQ) and (**b**) Hedonic Quality (HQ). Error bars: ±1 SE.

#### *3.4. Comparison of Participants' PQ Ratings in This Lab Study and a Test Track Study*

We compared the PQ responses of this lab study to the PQ results of the test track study of Faas et al. [12] to investigate whether the different experimental methodologies lead to different results. We used two-sample *t*-tests to investigate whether pedestrians' PQ ratings of the three eHMI variants (no eHMI, status eHMI, status+intent eHMI) differ between the experimental methodologies (this lab study vs. test track study of Faas et al. [12]). Levene's test for equality of variances was not violated for any *t*-test. Table 2 and Figure 11 show the results. For no eHMI, participants' PQ ratings were significantly lower in this lab study than in the test track study of Faas et al. [12], *t*(62) = −2.10, *p* = 0.040, *r* = 0.26. However, both mean scores lead to the same interpretation of a bad user experience according to the benchmarks of Hinderks et al. [49]. Similarly, for the status eHMI there was a trend indicating that participants' PQ ratings were lower in this lab study than in the test track study of Faas et al. [12], *t*(62) = −1.71, *p* = 0.092, *r* = 0.21. For the status+intent eHMI, we found no significant difference between the studies, *p* = 0.822.
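The effect sizes reported here follow *r* = √(*t*²/(*t*² + df)). A sketch of this comparison pipeline, assuming the inputs are the per-participant PQ ratings of the two studies (the SciPy calls are real; the function name is illustrative):

```python
import numpy as np
from scipy import stats

def compare_between_studies(lab_ratings, track_ratings):
    """Levene's equality-of-variances check, two-sample t-test
    (equal variances assumed, as in the text), and effect size r."""
    levene_p = stats.levene(lab_ratings, track_ratings).pvalue
    t, p = stats.ttest_ind(lab_ratings, track_ratings)
    df = len(lab_ratings) + len(track_ratings) - 2
    r = np.sqrt(t**2 / (t**2 + df))  # r = sqrt(t^2 / (t^2 + df))
    return levene_p, t, p, r
```

For the reported no-eHMI comparison, *t*(62) = −2.10 gives *r* = √(2.10²/(2.10² + 62)) ≈ 0.26, matching the value in the text.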

Furthermore, both studies revealed the same results regarding participants' PQ ratings of the three eHMI conditions: status+intent eHMI > status eHMI > no eHMI, leading to similar conclusions (see also: Faas et al. [12], Faas et al. [39]).


**Table 2.** Two-sample *t*-tests.

<sup>1</sup> In this lab study, *N* = 34 participants experienced the three eHMIs within-subject through real-world video clips; <sup>2</sup> in the test track study of Faas et al. [12], *N* = 30 participants experienced the three eHMIs within-subject at an intersection with a real vehicle. \* *p* < 0.05.

**Figure 11.** Pedestrians' Pragmatic Quality (PQ) ratings of the three eHMI variants (test conditions 1, 2, 3) in this lab study and in the test track study of Faas et al. [12]. Error bars: ±1 SE.

#### *3.5. Self-Reported Naturalism*

After all trials, participants rated the naturalism of the experiment on a scale from −3 ("not at all") to +3 ("extremely"). The mean score to the question "How immersive was the study setup?" was *M* = 0.62 (*SD* = 1.37), suggesting a fair immersion. The mean score to the question "How natural was it to take a step forward to indicate that you would cross the street?" was *M* = 1.82 (*SD* = 1.03), suggesting good validity.

#### **4. Discussion**

This paper presents an innovative method to study SDV–pedestrian interactions in a safe, reproducible, and natural manner in video-based eHMI studies. We developed a cost-efficient setup that allows participants to show natural behavior (i.e., entering a street). Participants make an actual street-crossing decision; that is, they are instructed to step off a sketched "sidewalk" onto a sketched "crosswalk", which allows us to measure COT as a means to assess traffic flow. In the following, we discuss how the eHMI effects brought to light by our approach validate its application. Furthermore, we discuss our method with regard to related approaches as well as its limitations and possible improvements.

#### *4.1. Validation*

We showed that our method is able to detect statistically significant eHMI effects that are comparable to a real-life study on a test track, and further displays a good level of self-reported naturalism.

The results of the eHMI study, yielding significant and meaningful effects, validate the use of our approach. We found that, compared to human-driven vehicles, pedestrians feel less safe encountering a driverless SDV if it has no eHMI. However, pedestrians feel just as safe if the driverless SDV is equipped with an eHMI displaying its status and, additionally, its intent. When comparing the eHMI variants, all subjective measures (perceived safety, HQ, PQ) revealed the same pattern: status+intent eHMI > status eHMI > no eHMI. On COT, we found that pedestrians make earlier (thus more efficient) crossing decisions with a status+intent eHMI than with no eHMI. The significant effect of the status+intent eHMI seems to be carried by a cluster of participants, suggesting individual crossing strategies among pedestrians (comparable to different lane-changing strategies among drivers; see, for example, [50]). Thus, providing pedestrians with information on an SDV's automated status and imminent intent supports a feeling of safety and HQ. Pedestrians perceive an eHMI as useful information (PQ), supporting them in their decision to cross the road, as observed in earlier COTs (for a detailed discussion, see Faas et al. [39]).

The approach is further validated by the fact that the study outcomes confirm previous research showing eHMI effects on perceived safety [12,13,15–18] and crossing onset [13,17,21–23], suggesting that our method is as suitable as other approaches for detecting eHMI effects. This becomes particularly clear as our method replicates the findings of a test track study by Faas et al. [12]. Both studies compared the effect of light-based eHMI concepts on PQ in an ambiguous crossing traffic scenario, and both revealed the same significant pattern regarding pedestrians' rating of PQ: status+intent eHMI > status eHMI > no eHMI. Thus, both studies showed that communicating an SDV's intent adds further benefit for pedestrians over displaying the automated status alone. However, in the current lab study (Faas et al. [39]) pedestrians rated the no-eHMI test condition as significantly worse, and the status eHMI test condition as slightly worse, than participants of the test track study (Faas et al. [12]) did. We believe that these worse ratings emerged because, in the lab study, a vehicle without an eHMI could mean a real disadvantage, potentially representing a non-yielding vehicle. By contrast, in the test track study (Faas et al. [12]) all vehicles yielded, so the participants' safety was guaranteed. Further, a lab study is more controlled than a test track study. Thus, while showing the same pattern of eHMI ratings (status+intent eHMI > status eHMI > no eHMI), the lab study produced more variance in participants' ratings, leading to a more differentiated evaluation of the eHMI variants.

Finally, participants reported that it felt natural to take a step forward to indicate their street-crossing decisions (*M* = 1.82 on a scale from −3 to +3), suggesting a good validity.

#### *4.2. Benefits with Regard to Related Approaches*

The main benefit of our method is that it assesses COT in a natural, parsimonious, reproducible, and safe manner.

Most previous approaches assessed crossing decisions in an unnatural manner, instructing participants to indicate their decision by pressing a button [13,15,17,22,23,29,30], moving a slider [26–28], or raising their hand [31]. These approaches make the participants' crossing decisions explicit, creating an intermediary step that may affect their behavior: participants have to transfer their implicit crossing decision into an explicit motor action with their hand. Furthermore, participants may have to look at the button or slider, so they cannot observe the approaching vehicle at all times. For example, in the study of Walker et al. [26], 29% of the participants reported that they were not able to use the slider naturally and thus could not validly indicate their feeling of safety. Since street-crossing can be seen as an unreflective skillful action, which is a form of embodied intelligence or cognition [34,35], we argue that COT should be measured in a natural way, by actually stepping off a sidewalk onto a crosswalk. Our approach allows participants to show natural street-crossing behavior (i.e., entering a street) if they feel safe to cross. Thus, with our method, participants are closer to the processes that take place in real-world traffic situations, which improves ecological validity.

Only a few test track studies [12,33] and VR studies [21,32] allowed participants to indicate their decision to cross in a natural manner via the actual behavior of making a step forward. However, test track and VR studies require expensive apparatus and materials as well as time-consuming data analysis. For example, the resources required for an eHMI study on a test track include a test track location, a real vehicle, a light setup (e.g., LED strips), and a driver steering the vehicle, possibly in a seat costume. These resources are required for several days. For later analysis, videos of each vehicle encounter need to be analyzed visually to extract the crossing onset measure (e.g., [12,33,37]). Similarly, to conduct and analyze VR studies, researchers need technologically advanced software and hardware (for an overview, see [51]), and participants might suffer from simulation sickness [52]. Compared to previous studies on a test track or in VR, our approach requires only a few materials and is therefore cost-efficient. The materials required for our approach include two TV screens, adhesive tape, two force-sensitive resistor sensors, an Arduino Uno as analog-to-digital converter, and a laptop with the Arduino IDE software. For our real-world eHMI video clips, we needed a vehicle, fake Lidar sensors with LED light strips, and a seat costume to create the illusion of a driverless vehicle. If researchers do not have access to these materials, future studies could use animated videos instead, just as VR studies do (e.g., [17,21,32]). An advantage of animated videos is that they give researchers absolute control over any variable they might want to manipulate; however, their physical accuracy is lower than that of real-life videos [53]. Data analysis with our approach is time-efficient, as the Arduino Uno records COT in real time.

Furthermore, video-based studies allow for flexibility and variety in eHMI test conditions. Researchers need to record only one video of an approaching vehicle and can use animations to create the eHMI variants, which also makes the study reproducible.

Lastly, one advantage of video studies is the possibility of incorporating non-yielding vehicle encounters while ensuring participants' safety. In contrast, test track studies need to meet high ethical standards and safety provisions, limiting their representativeness for complex urban traffic scenarios. For example, to guarantee participants' safety, non-yielding vehicle encounters should not be incorporated. Our approach allows participants to experience safety-critical situations without actually endangering them. Although non-yielding vehicle encounters were not of research interest here, they prevent participants from habituating to all cars stopping for them, which might lower their attention and, thus, the validity of the study.

#### *4.3. Limitations and Recommendations*

While our approach is promising, we acknowledge limitations that require further attention. The first is the absence of a real safety risk. The fact that participants cannot be harmed ensures their safety, but it also limits the realism of our approach: since pedestrians do not have to fear any real risk from non-yielding vehicles, they might behave in a riskier manner than in normal traffic. The second limitation refers to participants' merely fair evaluation of the approach's immersiveness (*M* = 0.62 on a scale from −3 to +3), which might be rooted in the participants' constrained field of view. While real-life videos from the perspective of a pedestrian exhibit a high level of physical accuracy, viewing them is not equivalent to experiencing a traffic situation in a real environment [53]. Thus, our method is suitable for relative comparisons (i.e., detecting differences between eHMI concepts) but not for establishing the true value of COT for a certain eHMI concept. However, this limitation applies to all research studies that use simulation. To make the setup more realistic, future studies could equip the "sidewalk" with a real curb so that participants need to take a step down onto the "crosswalk", in contrast to the current setup with a flat lab floor (suggestion made by Kooijman et al. [21]). Moreover, the use of VR glasses instead of TV screens may increase the participants' degree of immersion. Despite these limitations, our approach proved its sensitivity to detect eHMI effects on pedestrians' COT, perceived safety, and user experience.

#### **5. Conclusions**

This paper introduces a novel paradigm to study SDV–pedestrian interaction that is relatively easy to implement and strikes a balance between a natural and a parsimonious study setup. We propose the use of two TV screens and a simulated sidewalk with hidden force-sensitive resistor sensors as the input device. We believe that street-crossing behavior should be captured by the actual action of stepping off a sidewalk onto a street. This design shows clear advantages over an artificial design in which participants watch videos on a screen in a sitting position and/or indicate their crossing decision with a button or slider. We believe that this experimental design can be valuable and effective for future video studies examining vehicle–pedestrian interaction.

With the presented approach, it was possible to demonstrate the need for an eHMI for communication between SDVs and pedestrians in an ambiguous traffic scenario. The eHMI concepts revealed significant differences in terms of COT, perceived safety, and user experience (for a detailed discussion, see Faas et al. [39]). Further, we validated our method's efficacy by showing that its results are not only comparable to, but more differentiated than, the results produced by a test track approach. Furthermore, our method displays a good level of self-reported naturalism. Thus, the presented method is validated as a suitable tool for relative comparisons between eHMI concepts. We conclude that the method can be applied in future studies comparing eHMI concepts from a pedestrian's point of view.

**Author Contributions:** Conceptualization, S.M.F., S.M., A.C.K. and M.B.; Data curation, S.M.F. and A.C.K.; Formal analysis, S.M.F.; Investigation, S.M.F. and A.C.K.; Methodology, S.M.F., S.M., A.C.K. and M.B.; Project administration, S.M.F. and A.C.K.; Resources, S.M.F. and A.C.K.; Software, A.C.K.; Supervision, S.M.F.; Validation, S.M.F. and A.C.K.; Visualization, S.M.F.; Writing—original draft, S.M.F.; Writing—review and editing, S.M.F., S.M., A.C.K. and M.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** We would like to thank Juergen Arnold, Jeff Bertalotto, Michael Boehringer, Sean Cannone, Katarina Carlos, Edwin Danner, Kevin Gee, Peter Goedecke, Ulrich Hipp, Ralf Krause, Eric Larsen, Laura Neiswander, Frank Ruff, and Ellen Tyler for their help with the study, as well as all study participants for their time and feedback.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

