Article

The Impact of Physical Motion Cues on Driver Braking Performance: A Clinical Study Using Driving Simulator and Eye Tracker

Department of Vehicle Technology, Faculty of Transportation Sciences, Czech Technical University in Prague, Konviktská 20, 110 00 Prague, Czech Republic
* Author to whom correspondence should be addressed.
Sensors 2023, 23(1), 42; https://doi.org/10.3390/s23010042
Submission received: 2 December 2022 / Revised: 14 December 2022 / Accepted: 17 December 2022 / Published: 21 December 2022
(This article belongs to the Section Physical Sensors)

Abstract

Driving simulators are increasingly being incorporated by driving schools into training processes for a variety of vehicles. The motion platform is a major component integrated into simulators to enhance the sense of presence and the fidelity of the driving simulator. However, less effort has been devoted to assessing the effect of motion cue feedback on trainee performance in simulators. To address this gap, we thoroughly study the impact of motion cues on braking at a target point as an elementary behavior that reflects the driver’s overall performance. In this paper, we use an eye-tracking device to evaluate driver behavior in addition to evaluating data from a driving simulator and considering participants’ feedback. Furthermore, we compare the effect of different motion levels (“No motion”, “Mild motion”, and “Full motion”) in two road scenarios, with and without pre-braking warning signs, alongside the speed feedback given by the speedometer. The results showed that a full level of motion cues had a positive effect on braking smoothness and gaze fixation on the track. In particular, the presence of full motion cues helped the participants to gradually decelerate from 5 to 0 m/s in the last 240 m before the stop line in both scenarios, without and with warning signs, compared to the hardest braking from 25 to 0 m/s produced under the no-motion-cues conditions. Moreover, the results showed that a combination of the mild motion conditions and warning signs led to an underestimation of the actual speed and greater fixation of the gaze on the speedometer. Questionnaire data revealed that 95% of the participants did not suffer from motion sickness symptoms, yet participants’ preferences did not indicate that they were aware of the impact of simulator conditions on their driving behavior.

1. Introduction

A driving simulator is a device with a highly sophisticated application of computer-aided kinematic and dynamic simulations that places the driver in an artificial environment intended to be a substitute for the main aspects of real driving [1]. This technology has the capacity to manipulate and control situations that are not feasible or dangerous in real traffic conditions [2], and it has prevalent use in a variety of applications, for instance, as a testbed for designing in-vehicle interfaces [3], validating autonomous vehicles [4,5], or studying drivers’ behavior [6].
Nowadays, driving simulators are increasingly being integrated by driving schools into various training processes involving four-wheeled vehicles [7], trailers, and rail vehicles [8] such as subways, tramways, and trains [9]. Nonetheless, research studies revealed that doubts exist regarding the objective validity of such tools in the context of transferring training to real driving [10,11,12,13]. To enhance simulator fidelity, efforts have been made in this research field to provide a more realistic sense of presence [14]. Thus, advanced artificial intelligence (AI) techniques, including deep neural networks (DNN) [15], fuzzy logic [16,17,18], or genetic algorithms [19], have been exploited to optimize platform motion cueing with a high number of degrees of freedom (DOF) in the roll, pitch, and yaw axes. However, other studies underlined the high cost of the software and hardware developed for such motion platforms and intended to reduce simulators’ cost by decreasing the degrees of freedom to 3-DOF [20], 2-DOF [21], or even to completely static simulators [22,23,24,25,26]. This disparity between research concerning “fidelity” and “cost” is primarily due to a lack of understanding of the role of motion cues in training efficacy in the simulator. To fill this gap, it is first necessary to address the question of when, why, and how exactly physical motion cues influence the driving performance of a trainee through a thorough evaluation of driver behavior.
Mclane et al. [27] conducted one of the earliest studies on motion cues and evaluated the effect of different combinations of yaw, pitch, and roll vibrations on lateral and velocity deviation in a primitive simulator while participants performed a mix of tasks involving changing velocity, changing lane, and decelerating. This study concluded that the presence of at least two motion cues could enhance driver performance, while omitting one of the three cues (i.e., roll) had a negligible effect. Focusing on the impact of the motion platform itself compared to a static simulator, Siegler et al. [28] evaluated driving behavior based on lateral acceleration, distance to the roadside, and linear velocity and revealed that turning the platform on prevented participants from performing overly unrealistic decelerations compared to switching the platform off. Nevertheless, Colombet et al. [29] found no effect of classical or adaptive motion cues on the ability of participants to perform a speed change task in a car-following scenario compared to the static platform mode, but their experiments involved only three subjects, which is too small a sample to yield valid conclusions. In the same context, Denjean et al. [30] asked drivers to accelerate to a target speed with a hidden speedometer, and the results highlighted that participants tended to underestimate their driving speed in the simulator, especially in daylight, when the acoustic feedback from the engine helped them to better perceive the vehicle’s motion and maintain a constant speed. Although motion cues were not a concern in that study, it draws attention to the importance of gaze behavior assessment as a part of driving performance, which was not included in previous motion-cues-related work. Moreover, another aspect that a motion platform may induce and thus affect driving performance is an increased risk of motion sickness, as stated in the work of Reinhard et al. [31], yet it was not considered in previous studies of motion cues.
Gaze estimation is a commonly used methodology [32] to obtain a better understanding of human cognition and behavior in various research fields [33], including driver’s distraction [34], safety [35], advanced driver assistance systems (ADAS) assessment [36], and development [37]. However, few papers have studied the distractions of overt visual attention of drivers in a simulator. Gomolka et al. [38] used Tobii Glasses to assess the visual attention of two distinct pilot groups with varying levels of flight training while performing flight tasks in the FNTP II MCC simulator. In the driving simulator context, Le et al. [39] used an eye tracker to observe and compare drivers’ eye movements in two different situations: in a simulator and on a real road and captured a higher level of cognitive distraction in the naturalistic case compared to the simulator. Nonetheless, eye sensors have not been employed in the context of studying the impact of simulator conditions, such as motion cues, on the trainee’s cognitive distraction.
All the surveyed papers had clear limitations: they used a primitive simulator, included an insufficient number of participants (fewer than 10), did not consider the motion sickness effect, or used a complicated list of driving tasks, which does not allow accurate explanations of how much, and in which way, motion cues impact driving performance. To the best of our knowledge, no previous work has presented a comprehensive study of the effect of motion cues on driving behavior that evaluates their importance during training in a simulator. To address this gap, we thoroughly study the impact of motion cues on braking at a target point as an elementary behavior [40] that reflects the driver’s overall performance. The main purpose of our study is to evaluate whether the feedback given by motion cues has a positive impact on the driver’s speed perception, and hence on braking performance, gaze focus, and driving comfort, compared to the insight provided by the speedometer and warning signs. The novelty of the present paper is the use of an eye-tracking device to assess gaze behavior in addition to evaluating driving performance in the simulator and considering participants’ feedback. Moreover, we compare the effect of different motion levels first without and then with pre-braking warning signs.

2. Materials and Methods

2.1. Participants

A total of 24 drivers in the age range of 20–65 years (M = 29.83 years, SD = 11.30 years), 5 females and 19 males, participated in the experiment. The participants included employees and students of CTU in Prague as well as volunteers from non-academic backgrounds; all were recruited via the university’s email service. Possession of a valid driving license of at least category B in Europe and an age between 19 and 70 years were required to participate in the study. The driving experience of the participants ranged from 0.5 to 50 years (M = 11.42 years, SD = 11.90 years), with 58.33% of the participants driving at least 4 times a week and the rest a few times a month. Further, 54.17% of the participants had previous simulator experience. One participant was unable to complete the experiment due to motion sickness; hence, the data of 23 participants were processed when evaluating the simulator data.

2.2. Instruments

2.2.1. Simulator and Motion Platform

The experiment was conducted in the laboratory simulator shown in Figure 1. The main hardware part consists of the front part of a Škoda Superb III with automatic transmission [41]. The car’s dynamics model is equivalent to that of a front-wheel-drive European middle-class car and was developed in our laboratory (R&D 4.0 LAB) at the Czech Technical University in Prague [42]. The vehicle is placed on a motion platform that is optimized with respect to the driving scenario and reproduces road irregularities and curves as well as acceleration with high accuracy. Three full HD TVs positioned to cover 100% of the area visible from the front and both side windows of the vehicle were used to project the scenario. These TVs are rigidly connected to the moving structure and thus replicate the movement of the entire driving simulator. The software part of the simulator consists of a virtual reality (VR) engine responsible for generating 3D graphics and spatial audio for the physics engine.
The motion platform, as described in Figure 2a, consists of six electric motors and six actuators mounted between the upper frame and the bottom frame, which are about two meters apart (Figure 2b). This hardware combination constitutes a platform able to move in the pitch, roll, and yaw axes, as illustrated in Figure 2c, giving the simulator motion cues in six degrees of freedom (6-DOF). Specifically, it is a hydraulic platform supplied by Pragolet, s.r.o. [43], and its movements are controlled by an optimal motion cueing algorithm consisting of a washout filter, the vehicle’s mathematical–physical model, and the kinematic transformation of the actuators’ positions, as previously described by our colleagues in [44].
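The exact motion cueing pipeline of this platform (washout filter, vehicle model, and actuator kinematics) is documented in [44]. Purely as an illustration of the general washout idea, the minimal Python sketch below scales a linear-acceleration signal and high-pass filters it so that the platform reproduces transient cues and then drifts back toward its neutral pose; the function name, filter order, and parameter values are our assumptions, not the simulator’s actual implementation.

```python
import numpy as np
from scipy.signal import butter, lfilter

def classical_washout(accel_ms2, fs_hz, cutoff_hz=0.5, gain=0.45):
    """Scale the linear acceleration and high-pass filter it (classical washout),
    so only transient cues are rendered and the platform returns to neutral."""
    b, a = butter(2, cutoff_hz / (fs_hz / 2), btype="highpass")
    return lfilter(b, a, gain * np.asarray(accel_ms2, dtype=float))

# Hypothetical braking pulse: 3 s of -4 m/s^2 deceleration sampled at 100 Hz
fs = 100
accel = np.concatenate([np.zeros(fs), -4.0 * np.ones(3 * fs), np.zeros(2 * fs)])
platform_cue = classical_washout(accel, fs)
print(platform_cue.min(), platform_cue[-1])  # strong onset cue, then washout toward 0
```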

2.2.2. Eye Tracker

To observe visual behavior in our experiment, we used the Tobii 2 50 Hz wireless eye tracker equipped with four eye tracking cameras monitoring the participants’ eyes and one Full HD camera monitoring the environment in front of the participant, as illustrated in Figure 3 [45]. Tobii is a binocular eye tracker based on a video-oculography method that enables video detection of the pupil position based on the reflection of infrared light shining on the surface of the eye. Accordingly, the proband’s gaze direction vector is calculated and mapped onto the image captured by the eye tracker’s front camera with an accuracy of approximately 0.6°.
The Tobii Pro Lab application, which implements the velocity-threshold identification (I-VT) fixation classification algorithm, is used to process the raw data recorded by the eye tracker; it classifies the individual states of the eye into saccades, fixations, and blinks based on the velocity of the sampled eye movement compared to a threshold parameter. In this context, eye movements below the velocity threshold are classified as fixations, while eye movements with higher velocities are considered saccades. In our laboratory, the velocity threshold for all experiments is generally set to 30°/s, a value frequently used in research [46].
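The actual classification was performed inside Tobii Pro Lab; as a rough illustration of the I-VT rule described above, the sketch below labels gaze samples as fixations or saccades from their angular velocity using the same 30°/s threshold. It deliberately omits blink detection, noise filtering, and fixation merging, which the real filter also performs, and the sample data are hypothetical.

```python
import numpy as np

def ivt_classify(gaze_deg, t_s, velocity_threshold=30.0):
    """Simplified I-VT rule: samples whose angular velocity (deg/s) stays below
    the threshold are fixations; faster samples are saccades."""
    gaze_deg = np.asarray(gaze_deg, dtype=float)
    t_s = np.asarray(t_s, dtype=float)
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / np.diff(t_s)
    labels = np.where(velocity < velocity_threshold, "fixation", "saccade")
    return np.concatenate([labels[:1], labels])  # first sample copies its neighbour

# Hypothetical 50 Hz recording: slow drift (fixation) followed by a fast jump (saccade)
t = np.arange(20) * 0.02
gaze = np.column_stack([np.r_[np.linspace(0, 0.2, 10), np.linspace(5, 5.2, 10)],
                        np.zeros(20)])
print(ivt_classify(gaze, t))
```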

2.2.3. Questionnaire

The participants’ feedback was obtained via a structured questionnaire, and then their subjective opinions were evaluated. After each run of the experiment with a different combination of road scenario and motion cues level, we asked the participants to rate their comfort with motion level and the difficulty of the task on a five-point Likert scale. Moreover, the participants were asked to select the area that caught their attention the most during the run, either the speedometer, the warning signs, or the track lane. After completing three runs of the experiment, when the participants had tried all levels of the motion platform, they were asked how they perceived the effect of the motion cues and how they thought it helped them in the task. To verify if the driving performance was affected by any motion sickness, the participants selected the statement describing their condition after the experiment based on the MIsery SCale (MISC) [47] presented in Table 1.

2.3. Data Analysis

Data from the driving simulator, eye tracker, and questionnaire were used as input for the analysis, applying both descriptive and inferential statistics to obtain the outputs and meet the objective of the present study. The driving simulator data were processed, the records of the motion-sick participant were excluded, and the mean speed profiles over distance were generated. The Tobii Pro Lab application was used to analyze the eye tracker data. The data were manually checked, including the AOI settings, and the proportion of total time spent in the AOIs was obtained. Data from the questionnaire were used for general information about the participants, for a descriptive overview of the participants’ opinions, and for statistical evaluation. Based on the characteristics of the data (group comparison, normal distribution, homogeneity of variances, etc.), a parametric one-way analysis of variance (ANOVA) test was performed on the data regarding participants’ preferences.
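As an illustration of how mean speed profiles over distance can be generated from raw simulator logs, the sketch below bins each run’s speed samples by distance to the stop line and averages the bins across runs. The assumed log layout (distance to the stop line in meters, speed in m/s) is ours; the simulator’s actual export format may differ.

```python
import numpy as np

def mean_speed_profile(runs, bin_m=10.0, track_length_m=4200.0):
    """Average speed across runs in distance-to-stop-line bins.
    `runs` is a list of (dist_to_stop_m, speed_ms) pairs of equally long arrays."""
    edges = np.arange(0.0, track_length_m + bin_m, bin_m)
    n_bins = edges.size - 1
    profiles = []
    for dist, speed in runs:
        idx = np.clip(np.digitize(dist, edges) - 1, 0, n_bins - 1)
        sums = np.bincount(idx, weights=speed, minlength=n_bins)
        counts = np.bincount(idx, minlength=n_bins)
        profiles.append(np.divide(sums, counts,
                                  out=np.full(n_bins, np.nan), where=counts > 0))
    return edges[:-1], np.nanmean(np.vstack(profiles), axis=0)

# Hypothetical example: two runs with speed decreasing linearly toward the stop line
dist = np.linspace(4200, 0, 500)
runs = [(dist, dist / 4200 * 33.3), (dist, dist / 4200 * 30.0)]
bin_starts, profile = mean_speed_profile(runs)
print(profile[0], profile[-1])  # ~0 m/s at the stop line, ~31.6 m/s far from it
```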

2.4. Experimental Design

The primary concern of the present research is to study the effect of motion cues on driving behavior with a focus on the ability to accomplish smooth braking to a target point under different motion cues levels compared to static simulator driving. Accordingly, the braking behavior can reflect drivers’ ability to correctly estimate their current speed in the simulator and interpret the distance to a fixed point on the road displayed on 2D screens.
For our experiment, a driving scenario consisting of a two-lane, two-way rural road of 4200 m in length was designed, as illustrated in Figure 4a. In this scene, the participants performed a braking task at a target point from a driving speed of 120 km/h. As shown in Figure 4b, the target stopping point involves a railway crossing on the road, and the participants were instructed to stop the vehicle directly at the stop line located in front of the closed crossing barrier marked with a STOP sign and flashing lights.
In order to test the braking behavior under different road conditions, a second scenario was created with one difference—by placing warning signs, as shown in Figure 4c, to alert the drivers of an upcoming railway crossing and prepare them to start decreasing their speed. As shown in Figure 4d, three different “Countdown Markers” with stripes were placed at distances of 80 m (three stripes sign), 160 m (two stripes sign), and 240 m (one stripe sign) in front of the railway crossing.
The warning sign distances used towards the end of the scene provide the time required to gradually decelerate to the target from a high initial speed (120 km/h) according to the AASHTO [48] stopping sight distance Equation (1):
S = 0.278 × t × v + v²/(254 × (f + G))
where “S” is stopping distance in (m), “t” is perception–reaction time in seconds with 2.5 s for the worst-case scenario, “v” is speed of the vehicle in km/h, “G” is grade or slope of the road expressed as a decimal, and “f” is the coefficient of friction between the wheels and the road: 0.7 for a dry road and from 0.3 to 0.4 for a wet road.
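For orientation, Equation (1) can be evaluated quickly for the scenario’s initial speed; the short sketch below (with the worst-case 2.5 s reaction time from the text and a flat road) is only an illustrative calculation, not part of the study’s analysis. It shows that the first countdown marker at 240 m leaves enough room to stop on a dry road, whereas the lowest wet-road friction value would require more than 240 m.

```python
def stopping_distance_m(v_kmh, t_reaction_s=2.5, f=0.7, grade=0.0):
    """AASHTO stopping sight distance (Equation (1)): reaction plus braking distance."""
    return 0.278 * t_reaction_s * v_kmh + v_kmh ** 2 / (254.0 * (f + grade))

print(round(stopping_distance_m(120.0), 1))         # dry road: ~164.4 m < 240 m
print(round(stopping_distance_m(120.0, f=0.3), 1))  # wet road, lowest friction: ~272.4 m
```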

2.5. Experiment Procedure

The experiment was carried out in a darkened and quiet room. Each participant was asked for general information at the beginning of the experiment, including age, driving experience, etc. Afterward, all the participants were given the necessary time to adapt to driving in the simulator, during which they tested all the relevant vehicle functions using a trial scenario, as recommended by previous research [49,50]. The inexperienced participants needed 5 to 10 min to adapt, while the experienced participants needed approximately 1 to 3 min. After the test drive, the participants were asked to put on the eye-tracking glasses, and the device was calibrated.
As aforementioned, the participants were asked to drive straight down the road (Figure 4a), accelerate to 120 km/h, and stop at the stop line in front of the railway crossing (Figure 4b). Each participant repeated the same task in six runs, each under a different combination of road and motion conditions. As summarized in Figure 5, these six conditions were based on a combination of two traffic scenarios (with and without warning signs) and three motion levels, namely no motion cues, a low level of motion, and a high level of motion.
Before each run, the motion platform’s parameters were tuned to the appropriate level of motion according to the measurement plan. The platform’s motion is controlled by the scale function in Figure 6, defined by the maximum acceleration and gain (factor) in three axes and the maximum angular velocity in the x and y axes. As summarized in Table 2, the parameters were set to 0 for the static level (gain is not considered in this case). For the high level of motion, the parameters were set to the highest values recommended for this simulator with a minimal motion sickness effect. These values were then halved for the low level of motion. Each participant’s measurement took an average of 40 min, including adaptation to the simulator, device calibration, six runs of the driving task (2.5 min each), changing the motion parameters, and completing the questionnaire described in Section 2.2.3.
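The exact shape of the scale function in Figure 6 is not reproduced here; the sketch below only illustrates the role of the Table 2 parameters under a simple assumption, namely that each axis signal is multiplied by its gain and then saturated at the configured maximum. Both the function and the saturation behavior are our simplification, not the platform’s published implementation.

```python
import numpy as np

def scale_cue(signal, gain, limit):
    """Illustrative per-axis scale function: apply the gain, then saturate at the
    configured maximum (an assumed shape; see Figure 6 for the real function)."""
    return np.clip(gain * np.asarray(signal, dtype=float), -limit, limit)

# High ("Full motion") x-axis acceleration settings from Table 2: gain 0.9, maximum 1.5
accel_x = [0.2, 1.0, 2.4, -3.0]
print(scale_cue(accel_x, gain=0.9, limit=1.5))  # [ 0.18  0.9   1.5  -1.5 ]
```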

3. Experimental Results

In this section, the results of six runs of the experiment (under different combinations of road and motion conditions) are presented, obtained from three different types of data: simulator data, eye tracker data, and questionnaire data.

3.1. Simulator Data

The participants’ speed behavior in the simulator was analyzed under six combinations of road and motion conditions. The mean speed profiles at different distances to the final stop line are plotted in Figure 7.
As shown in Figure 7a, the six mean speed profiles varied from the start point (4200 m) to the end point (0 m) following the same general pattern corresponding to the task design, which consisted of accelerating to ≈120 km/h (≈33.33 m/s), maintaining that speed, and stopping at the stop line.
Figure 7b zooms in on the last 1000 m before the stop line, where the profiles started slowing down in preparation for full braking, and shows the differences between the speed behaviors adopted under the different conditions. The graph shows that both “Full motion” profiles, with and without signs, as well as the “Mild motion without signs” profile, tended to reduce speed gradually and significantly, while the three other profiles maintained a high speed, leading to low braking performance. Overall, the “Full motion” profiles showed the smoothest change in speed between 1000 m and 240 m to the stop line. “Full motion with signs” achieved slightly better performance, which reflects the positive impact on the driver’s speed perception of the feedback given by both the high level of motion cues and the warning signs.
To further compare the different braking behaviors, we focused on the last 240 m before the stop line (Figure 7c), considered as the actual braking distance based on Equation (1) and where the warning signs were placed. As expected, both “Full motion” profiles continued to decelerate gradually and achieved smooth braking, with equivalent performance in this area and no impact of the warning signs. This means that the positive effect of high-level motion cues on speed behavior outweighs the feedback provided by the warning signs. Similarly, the “Mild motion” profiles also showed a smooth change in speed, but at higher speeds compared to “Full motion”.
Unexpectedly, we noticed that the “Mild motion with signs” profile was less smooth (changing speed from 22 to 0 m/s) than the “Mild motion without signs” profile (changing speed from 10 to 0 m/s), indicating that the warning signs had a negative impact on speed behavior under the mild motion conditions. The worst speed performance was observed for both “No motion” profiles. In fact, the “No motion with signs” profile produced the worst change of speed (from more than 25 to 0 m/s) over the short distance of 240 m, while the “No motion without signs” profile started from a lower speed (20 m/s) but delayed a significant reduction in speed until the last 80 m before the final stopping point, leading to hard braking. Overall, the braking performance increased significantly with the level of motion; the warning signs had no significant impact in the presence of the high motion level and had a negative impact with the mild motion level.

3.2. Eye Tracker Data

The participants’ gaze behavior was evaluated at three levels of motion by analyzing the eye tracker data, separately for the “Without signs” and “With signs” scenarios.

3.2.1. “Without Signs” Scenario

For the first scenario, “Without signs”, our methodology consisted of classifying the gaze interest into the “Track” area, representing the focus on the road, and the “Dashboard” area, representing the focus on the speedometer, according to the areas of interest (AOI) map presented in Figure 8a. We then extracted gaze fixation maps of the 23 participants during the driving task under the “No motion” (Figure 8c), “Mild motion” (Figure 8d), and “Full motion” (Figure 8e) conditions. The three gaze maps show a slight increase in focus on the “Track” area compared to the “Dashboard” area as the motion cue level increases.
To quantitatively evaluate the gaze interest, we analyzed the share of average fixation time between the “Track” and “Dashboard” areas, as presented in Figure 8b. As anticipated in our research assumptions, the graph shows a slight and gradual increase in fixation time on the “Track” area compared to the “Dashboard” area as the motion level increases: 81.89% under “No motion”, 83.29% under “Mild motion”, and 85.36% under “Full motion” conditions. Correspondingly, the increase in the motion level gradually reduced the fixation time on the “Dashboard” area. The results indicate that the participants’ perception of speed relied more on the motion cues feedback, especially at the high level, than on the feedback provided by the speedometer. Consequently, the presence of high-level motion cues may help improve safety by reducing the distraction caused by monitoring the HMI and allowing the driver to concentrate more on the track.
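The AOI statistics themselves were produced by Tobii Pro Lab; the following sketch only illustrates the underlying computation of the fixation-time share per AOI, assuming hypothetical rectangular AOIs in normalized scene-camera coordinates and a simple list of fixations given as (x, y, duration). Names and coordinates are placeholders.

```python
def aoi_time_share(fixations, aois):
    """Share of total fixation time spent inside each rectangular AOI.
    `fixations` holds (x, y, duration_s); `aois` maps name -> (x0, y0, x1, y1)."""
    totals = dict.fromkeys(aois, 0.0)
    for x, y, duration in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += duration
                break
    grand_total = sum(totals.values()) or 1.0
    return {name: t / grand_total for name, t in totals.items()}

# Hypothetical AOIs and fixations in normalized [0, 1] coordinates
aois = {"Track": (0.0, 0.0, 1.0, 0.6), "Dashboard": (0.35, 0.6, 0.65, 0.9)}
fixations = [(0.5, 0.3, 0.25), (0.55, 0.28, 0.40), (0.5, 0.75, 0.10)]
print(aoi_time_share(fixations, aois))  # {'Track': ~0.87, 'Dashboard': ~0.13}
```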

3.2.2. “With Signs” Scenario

For the “With signs” scenario, our methodology relied on the AOI classification according to the map presented in Figure 9a, consisting of the “Warning signs” area in addition to the “Dashboard” and “Track” areas. Similarly to the first scenario, we extracted the gaze maps of the warning signs scenario under the three levels of motion cues, namely under the “No motion” (Figure 9c), “Mild motion” (Figure 9d), and “Full motion” (Figure 9e) conditions.
As expected, the graph in Figure 9b indicates a slight, gradual decrease in fixation time on the “Warning signs” area: 6.25% under “No motion”, 6.15% under “Mild motion”, and 5.01% under “Full motion” conditions, meaning that the stronger the motion level, the lower the driver’s interest in the signs’ feedback; hence, the best eye fixation on the “Track” area (82.92%) was achieved under the full motion conditions. However, the presence of warning signs had a negative impact on the focus (eye fixation) on the track under the “Mild motion” condition. In other words, the low level of motion cues was not strong enough to help the drivers correctly perceive their actual speed, and with the warning signs present, the participants tended to check the actual speed on the speedometer more.

3.3. Questionnaire Data

We evaluated the feedback of the 24 participants on the experiment by analyzing data from the questionnaire. As aforementioned, one participant was not able to complete the six runs of the experiment; hence, we considered only the answers concerning the part that was accomplished. Figure 10 presents the mean scores given by the participants to the different motion cues levels on a five-point Likert scale, rating how closely each level approximated real car motion cues. Unlike the simulator data, the graph of the mean scores shows a clear preference for the “Mild motion” level, with a mean score of 3.57 pts and a standard deviation (SD) of 0.992. In comparison, the “Full motion” level had a lower score of 2.65 pts with an SD of 1.26, while the “No motion” level had the lowest score of 2.30 pts with an SD of 1.19. In other words, the participants did not prefer driving in the static simulator, but they did not favor the high level of motion cues either.
To assess whether the participants’ preferences depended on the motion level (the null hypothesis H0, Equation (2)), we conducted a one-way ANOVA test on one independent variable, the participants’ preference scores for “No Motion” µN–M, “Mild Motion” µM–M, and “Full Motion” µF–M presented in Figure 10:
H0: µN–M = µM–M = µF–M(2)
According to the results of the ANOVA test summarized in Table 3, the probability of confirming the null hypothesis (H0) is very low: F = 7.337 and the p-value is below the 5% threshold (p = 0.00133 < 0.05). This means that there was a difference between the drivers’ opinions about the motion levels and their approximation to reality based on their subjective feeling. Consequently, the subjects’ perception and preference of the motion level were not correlated with their performance under these conditions.
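For readers who wish to reproduce this type of test, a one-way ANOVA of preference scores can be run with SciPy as sketched below. The Likert ratings here are randomly generated placeholders (23 ratings per motion level, matching the residual degrees of freedom in Table 3), not the study’s actual data, so the resulting F and p values will differ from Table 3.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical 5-point Likert ratings, 23 participants per motion level
no_motion   = rng.integers(1, 6, size=23)
mild_motion = rng.integers(2, 6, size=23)
full_motion = rng.integers(1, 5, size=23)

f_stat, p_value = f_oneway(no_motion, mild_motion, full_motion)
print(f"F = {f_stat:.3f}, p = {p_value:.5f}")  # reject H0 when p < 0.05
```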
The motion sickness state of the participants was checked during each run, and at the end of the experiment the drivers rated their condition based on the 10-point MISC scale [47]. Figure 11 presents the distribution of the participants according to their motion sickness scores and shows that most participants (95%) were assigned a low MISC score (≤3 pts), with only two participants scoring higher. It is worth noting that the highest score was obtained by the participant who did not complete the experiment and whose simulator and eye tracker data were therefore excluded. In addition, 90% of the participants reported that their state of comfort improved with each run of the experiment, while the others reported that their state had not changed. Therefore, we can conclude that the output of our motion cues evaluation was not affected by motion sickness.

4. Discussion

During our experiment, the participants achieved smoother braking under the “Full motion” conditions than at the “Mild motion” and “No motion” levels, both in the presence and absence of warning signs, especially in the last 240 m before the final stop line. As expected, the high-level motion cues provided the participants with sufficient feedback about the actual driving speed and had a positive effect on driving performance in the simulator, confirming the findings of previous studies [28,29]. Moreover, the “Full motion” mode also had a slightly positive impact on the gaze behavior of the drivers in terms of focusing on the road and reduced their need to look at the speedometer or warning signs, suggesting that motion feedback acts more effectively on drivers’ perception than the speed information provided by a speedometer. These results confirm the positive impact of the driving simulator conditions on the visual concentration of the trainees, which is related to the findings of an eye-tracker-based study conducted by Le et al. [39] showing that participants had a lower level of visual distraction in the simulator compared to the real-world situation on the road. However, eye sensors have not been widely exploited in studies concerning driver behavior in the simulator.
In contrast, the braking performance of the participants at the “Mild motion” level was better in the first scenario, “Without warning signs”, than in the second, “With warning signs”. Similarly, the “Mild motion” cues had a negative impact on the participants’ gaze behavior in the presence of warning signs and increased their interest in watching the speedometer. One explanation for the low performance under these conditions is that the mild level of motion cues causes an underestimation of the actual driving speed, and with the warning signs present, the participants tend to check the HMI to make sure they are driving at an adequate speed.
The platform in the static mode had the worst impact on both the braking performance and the gaze behavior of the drivers, especially in the “Without warning signs” scenario. Likewise, the “No motion” level received the lowest score in terms of its approximation to reality according to the subjective assessment of the participants. Nevertheless, the “Mild motion” level was rated better by the participants than the “Full motion” level, which is not correlated with their driving performance under these conditions. Thus, we can state that the drivers had an incorrect awareness of the impact of the driving conditions on their performance, and their subjective preferences do not represent a correct perception of the situation in the simulator. This statement is supported by the findings of the study of Talsma et al. [47], which recommended basing the evaluation on objective or physiological data of the participants.
The evaluation of the motion sickness state showed that most of the participants (95%) had an extremely low level of sickness (MISC score ≤ 3 pts). In addition, 90% of the participants reported that their comfort in the simulator improved with each run. As a result, we concluded that driver behavior was not affected by motion sickness. Nonetheless, it should be noted that the eye-tracking outputs could not be sufficiently compared with prior findings because, to our knowledge, no previous research has evaluated the effect of motion cues on drivers’ gaze behavior. In conclusion, the results of our study can have a broad impact on research and industries related to various driving simulators by increasing the knowledge of the impact of simulator conditions and parameters on the quality of training, specifically supporting the use and improvement of motion platforms with a full level of motion cues.

5. Conclusions

The present paper aimed to study the impact of the feedback given by the simulator platform’s motion cues on driver behavior. In our experiment, we asked 24 participants to brake at a target point after reaching a certain speed under different motion levels (“No motion”, “Mild motion”, and “Full motion”) and scenarios (“With warning signs” and “Without warning signs”). In addition to the participants’ subjective feedback and their motion sickness state obtained from the questionnaire, we evaluated the participants’ driving performance in the simulator in terms of speed change behavior during braking, and their gaze behavior from the eye tracker in terms of fixation time on three areas of interest, namely “Track”, “Speedometer”, and “Warning signs”. The results showed that the participants achieved the smoothest braking, with greater eye focus on the track, at the “Full motion” level, without any significant impact of the feedback given by the speedometer and warning signs. The “Mild motion” level outperformed the “No motion” level in the “Without warning signs” scenario but had a negative effect in the presence of warning signs due to an underestimation of the actual speed and the need to keep checking the speedometer. Questionnaire data revealed that most participants did not suffer from motion sickness symptoms, yet participants’ preferences did not indicate that they were aware of the impact of simulator conditions on their driving behavior. As none of the previous studies have used an eye tracker in a similar context, we plan to further investigate the impact of motion cues on gaze behavior in more complex driving scenarios.

Author Contributions

S.E.H.: design of the study, development of the paper, writing of the original draft, data interpretation, and project administration. P.B.: conception of the research topic, design of the methodology, and supervision. T.K.: execution of the experiments, analysis of the questionnaire and eye tracker data, and editing of the manuscript. D.L.: analysis of the simulator data. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Global Postdoc Fellowship Program of the Czech Technical University in Prague.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bruck, L.; Haycock, B.; Emadi, A. A Review of Driving Simulation Technology and Applications. IEEE Open J. Veh. Technol. 2021, 2, 1–16. [Google Scholar] [CrossRef]
  2. Wynne, R.A.; Beanland, V.; Salmon, P.M. Systematic review of driving simulator validation studies. Saf. Sci. 2019, 117, 138–151. [Google Scholar] [CrossRef]
  3. Chen, J.; Wang, X.; Cheng, Z.; Gao, Y.; Tremont, P.J. Evaluation of the optimal quantity of in-vehicle information icons using a fuzzy synthetic evaluation model in a driving simulator. Accid. Anal. Prev. 2022, 176, 106813. [Google Scholar] [CrossRef]
  4. Yeo, D.; Kim, G.; Kim, S. Toward Immersive Self-Driving Simulations: Reports from a User Study across Six Platforms. In Proceedings of the CHI’20: 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–12. [Google Scholar] [CrossRef]
  5. Hock, P.; Kraus, J.; Babel, F.; Walch, M.; Rukzio, E.; Baumann, M. How to design valid simulator studies for investigating user experience in automated driving—Review and hands-on considerations. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018, Toronto, ON, Canada, 23–25 September 2018; pp. 105–117. [Google Scholar] [CrossRef]
  6. Fares, A.; Wickens, C.M.; Mann, R.E.; Di Ciano, P.; Wright, M.; Matheson, J.; Hasan, O.S.M.; Rehm, J.; George, T.P.; Samokhvalov, A.V.; et al. Combined effect of alcohol and cannabis on simulated driving. Psychopharmacology 2022, 239, 1263–1277. [Google Scholar] [CrossRef]
  7. Goode, N.; Salmon, P.M.; Lenné, M.G. Simulation-based driver and vehicle crew training: Applications, efficacy and future directions. Appl. Ergon. 2013, 44, 435–444. [Google Scholar] [CrossRef] [PubMed]
  8. Naweed, A. Simulator integration in the rail industry: The Robocop problem. Proc. Inst. Mech. Eng. Part F J. Rail Rapid Transit 2013, 227, 407–418. [Google Scholar] [CrossRef]
  9. Nikitenko, A.; Shvets, A. Software and hardware simulators for train drivers training: Overview of possibilities and effects of application. Prz. Elektrotechniczny 2020, 96, 198–201. [Google Scholar] [CrossRef]
  10. Allen, R.W.; Park, G.D.; Cook, M.L. Simulator fidelity and validity in a transfer-of-training context. Transp. Res. Rec. 2010, 2185, 40–47. [Google Scholar] [CrossRef]
  11. Rosenbloom, T.; Eldror, E. Effectiveness evaluation of simulative workshops for newly licensed drivers. Accid. Anal. Prev. 2014, 63, 30–36. [Google Scholar] [CrossRef]
  12. Freeman, P.; Neyens, D.M.; Wagner, J.; Switzer, F.; Alexander, K.; Pidgeon, P. A video based run-off-road training program with practice and evaluation in a simulator. Accid. Anal. Prev. 2015, 82, 1–9. [Google Scholar] [CrossRef]
  13. Meuleners, L.; Fraser, M. A validation study of driving errors using a driving simulator. Transp. Res. Part F Traffic. Psychol. Behav. 2015, 29, 14–21. [Google Scholar] [CrossRef]
  14. Cleij, D.; Venrooij, J.; Pretto, P.; Katliar, M.; Bülthoff, H.; Steffen, D.; Hoffmeyer, F.; Schöner, H.-P. Comparison between filter- and optimization-based motion cueing algorithms for driving simulation. Transp. Res. Part F Traffic. Psychol. Behav. 2019, 61, 53–68. [Google Scholar] [CrossRef]
  15. Koyuncu, A.B.; Ercelik, E.; Comulada-Simpson, E.; Venrooij, J.; Kaboli, M.; Knoll, A. A Novel Approach to Neural Network-based Motion Cueing Algorithm for a Driving Simulator. In Proceedings of the IEEE Intelligent Vehicles, Las Vegas, NV, USA, 19 October–13 November 2020; pp. 2118–2125. [Google Scholar] [CrossRef]
  16. Asadi, H.; Bellmann, T.; Mohamed, S.; Lim, C.P.; Khosravi, A.; Nahavandi, S. Adaptive Motion Cueing Algorithm using Optimized Fuzzy Control System for Motion Simulators. IEEE Trans. Intell. Veh. 2022, 8858, 1–13. [Google Scholar] [CrossRef]
  17. Khusro, Y.R.; Zheng, Y.; Grottoli, M.; Shyrokau, B. MPC-Based Motion-Cueing Algorithm for a 6-DOF Driving Simulator with Actuator Constraints. Vehicles 2020, 2, 625–647. [Google Scholar] [CrossRef]
  18. Asadi, H.; Lim, C.P.; Mohamed, S.; Nahavandi, D.; Nahavandi, S. Increasing motion fidelity in driving simulators using a fuzzy-based washout filter. IEEE Trans. Intell. Veh. 2019, 4, 298–308. [Google Scholar] [CrossRef]
  19. Asadi, H.; Lim, C.P.; Mohammadi, A.; Mohamed, S.; Nahavandi, S.; Shanmugam, L. A genetic algorithm–based nonlinear scaling method for optimal motion cueing algorithm in driving simulator. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2018, 232, 1025–1038. [Google Scholar] [CrossRef]
  20. Mohellebi, H.; Espié, S.; Kheddar, A.; Arioui, H.; Amouri, A. Design of Low-Clearance Motion Platform for Driving Simulators. In Mechatronics Safety, Secur Dependability a New Era; Elsevier: Amsterdam, The Netherlands, 2007; pp. 401–404. [Google Scholar] [CrossRef]
  21. Arioui, H.; Hima, S.; Nehaoua, L.; Bertin, R.J.V.; Espié, S. From design to experiments of a 2-DOF vehicle driving simulator. IEEE Trans. Veh. Technol. 2011, 60, 357–368. [Google Scholar] [CrossRef] [Green Version]
  22. Khadeir, A.M.; Saehood, Z.A.; Mutar, H.S.; Abduljabbar, A.S.; Al-Dahwi, A.M.; Abdulameer, R.H.; Mohammed, A.A. Building and validation of a low-cost driving simulator. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; p. 1973. [Google Scholar] [CrossRef]
  23. De Frutos, S.H.; Castro, M. Assessing sim racing software for low-cost driving simulator to road geometric research. Transp. Res. Procedia 2021, 58, 575–582. [Google Scholar] [CrossRef]
  24. Llopis-Castelló, D.; Camacho-Torregrosa, F.J.; Marín-Morales, J.; Pérez-Zuriaga, A.M.; García, A.; Dols, J.F. Validation of a low-cost driving simulator based on continuous speed profiles. Transp. Res. Rec. 2016, 2602, 104–114. [Google Scholar] [CrossRef] [Green Version]
  25. Huang, A.R.W.; Chen, C. A low-cost driving simulator for full vehicle dynamics simulation. IEEE Trans. Veh. Technol. 2003, 52, 162–172. [Google Scholar] [CrossRef]
  26. Mecheri, S.; Lobjois, R. Steering Control in a Low-Cost Driving Simulator: A Case for the Role of Virtual Vehicle Cab. Hum. Factors 2018, 60, 719–734. [Google Scholar] [CrossRef] [PubMed]
  27. Mclane, R.C.; Wierwille, W.W. The Influence of Motion and Audio Cues on Driver Performance in an Automobile Simulator. Hum. Factors 1975, 17, 488–501. [Google Scholar] [CrossRef] [Green Version]
  28. Siegler, I.; Reymond, G.; Kemeny, A.; Berthoz, A. Sensorimotor Integration in a Driving Simulator: Contributions of Motion Cueing in Elementary Driving Tasks. DSC Europe 2001, 21–32. Available online: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=66a310c99462f7652afb9ba3beb94c9b369771db (accessed on 16 December 2022).
  29. Colombet, F.; Dagdelen, M.; Kemeny, A. Motion Cueing: What’s the Impact on the Driver’s Behaviour? In Proceedings of the Driving Simulation Conference; 2008; pp. 171–181. Available online: https://www.researchgate.net/profile/Frederic-Merienne/publication/237305826_Motion_Cueing_what%27s_the_impact_on_the_driver%27s_behaviour/links/0deec52ebc25740a16000000/Motion-Cueing-whats-the-impact-on-the-drivers-behaviour.pdf (accessed on 16 December 2022).
  30. Denjean, S.; Roussarie, V.; Kronland-Martinet, R.; Velay, J.L. How does interior car noise alter driver’s perception of motion? Multisensory integration in speed perception. In Proceedings of the Acoustic 2012 Nantes Conference, Nantes, France, 23–27 April 2012. [Google Scholar]
  31. Reinhard, R.T.; Kleer, M.; Dreßler, K. The impact of individual simulator experiences on usability and driving behavior in a moving base driving simulator. Transp. Res. Part F Traffic. Psychol. Behav. 2019, 61, 131–140. [Google Scholar] [CrossRef]
  32. Pathirana, P.; Senarath, S.; Meedeniya, D.; Jayarathna, S. Eye gaze estimation: A survey on deep learning-based approaches. Expert Syst. Appl. 2022, 199, 116894. [Google Scholar] [CrossRef]
  33. Tanoubi, I.; Tourangeau, M.; Sodoké, K.; Perron, R.; Drolet, P.; Bélanger, M.; Morris, J.; Ranger, C.; Paradis, M.-R.; Robitaille, A.; et al. Comparing the visual perception according to the performance using the eye-tracking technology in high-fidelity simulation settings. Behav. Sci. 2021, 11, 31. [Google Scholar] [CrossRef] [PubMed]
  34. Ojsteršek, T.C.; Topolšek, D. Eye tracking use in researching driver distraction: A scientometric and qualitative literature review approach. J. Eye Mov. Res. 2019, 12, 1–30. [Google Scholar] [CrossRef] [PubMed]
  35. Carr, D.B.; Grover, P. The role of eye tracking technology in assessing older driver safety. Geriatrics 2020, 5, 36. [Google Scholar] [CrossRef]
  36. Khan, M.Q.; Lee, S. Gaze and eye tracking: Techniques and applications in ADAS. Sensors 2019, 19, 5540. [Google Scholar] [CrossRef] [Green Version]
  37. Ledezma, A.; Zamora, V.; Sipele, Ó.; Sesmero, M.; Sanchis, A. Implementing a gaze tracking algorithm for improving advanced driver assistance systems. Electronics 2021, 10, 1480. [Google Scholar] [CrossRef]
  38. Gomolka, Z.; Kordos, D.; Zeslawska, E. The application of flexible areas of interest to pilot mobile eye tracking. Sensors 2020, 20, 986. [Google Scholar] [CrossRef] [Green Version]
  39. Le, A.S.; Suzuki, T.; Aoki, H. Evaluating driver cognitive distraction by eye tracking: From simulator to driving. Transp. Res. Interdiscip. Perspect. 2020, 4, 100087. [Google Scholar] [CrossRef]
  40. Boer, E.R. A Multi-Sensory Cybernetic Driver Model of Stopping Behavior: Comparing Reality Against Simulators with Different Cue-Rendering Fidelities. IFAC-PapersOnLine 2016, 49, 349–354. [Google Scholar] [CrossRef]
  41. Bouchner, P.; Novotny, S. Development of advanced driving simulator: Steering wheel and brake pedal feedback. In Proceedings of the 2nd International Conference on Circuits, Systems, Control, Signals, CSCS’11, Prague, Czech Republic, 26–28 September 2011; pp. 170–174. [Google Scholar]
  42. Bouchner, P.; Novotný, S. Car dynamics model—Design for interactive driving simulation use. In Proceedings of the 2nd International Conference on Applied Informatics and Computing Theory, AICT’11, Prague, Czech Republic, 26–28 September 2011; pp. 285–289. [Google Scholar]
  43. Pragolet. Your Real Simulator; Pragolet s.r.o.: Mnichovice, Czech Republic, 2022; Available online: http://www.pragolet.cz (accessed on 16 December 2022).
  44. Wei, M.Y.; Chen, S.W. Optimal Control-based Motion Cueing Algorithm Design for 6DOF Motion Platform. In Proceedings of the 4th International Conference on Knowledge Innovation and Invention 2021, ICKII 2021, Taichung, Taiwan, 23–25 July 2021; pp. 216–222. [Google Scholar] [CrossRef]
  45. Tobii Pro Glasses 3. 2022. Available online: https://www.tobii.com/products/eye-trackers/wearables/tobii-pro-glasses-3 (accessed on 16 December 2022).
  46. Olsen, A. The Tobii I-VT Fixation Filter: Algorithm description. Tobii. Technol. 2012, 21, 4–19. [Google Scholar]
  47. Reuten, A.J.C.; Bos, J.E.; Smeets, J.B.J. The metrics for measuring motion sickness. Actes (IFSTTAR) 2020, 1, 183–186. [Google Scholar]
  48. Fitzpatrick, K.A.Y.; Mason, J.M. Review of AASHTO Green Book Procedures for Sight Distance at Ramp Terminals Vehicle Acceleration from a Stopped Position. Transp. Res. Rec. 1984, 1280, 190–198. [Google Scholar]
  49. Sahami, S.; Sayed, T. How drivers adapt to drive in driving simulator, and what is the impact of practice scenario on the research? Transp. Res. Part F Traffic. Psychol. Behav. 2013, 16, 41–52. [Google Scholar] [CrossRef]
  50. Ronen, A.; Yair, N. The adaptation period to a driving simulator. Transp. Res. Part F Traffic. Psychol. Behav. 2013, 18, 94–106. [Google Scholar] [CrossRef]
Figure 1. The driving simulator used for the present study developed in our laboratory (R&D 4.0 LAB) at the Faculty of Transportation Sciences, CTU in Prague.
Figure 2. Illustration of the motion platform design with six degrees of freedom of the used simulator: (a) illustrates the hardware components of the platform; (b) illustrates the size of the platform; (c) illustrates the pitch, roll, yaw axes enabling 6-DOF motion of the platform.
Figure 3. Illustration of the construction of the eye tracker used in our experiment [45].
Figure 4. The driving scenario used for our experiment: (a) illustrates the track road; (b) illustrates the stop line representing the target point; (c) illustrates the warning signs used as a deceleration signal in the scenario with warning signs and (d) illustrates the three “Countdown Markers” placed at distances to the Stop line.
Figure 5. Summary of the six tasks’ conditions based on a combination of different track designs and motion cues levels used in our experiment.
Figure 6. Representation of the scale function (defined by the parameters in Table 2), which scales the input signal from the mathematical model of the simulated vehicle to the motion platform.
Figure 7. Mean speed profiles at various distances from the stop line: (a) 4200 m before the stop line, (b) 1000 m before the stop line, and (c) 240 m before the stop line. The vertical red lines were inserted as reference points to indicate a significant change in speed behavior in the same area.
Figure 8. Eye tracker data analysis in the “Without signs” scenario. (a) The areas of interest (AOI) map consisting of the “Track” area highlighted in yellow and the “Dashboard” area in green. (b) The total share of gaze focus time on the different AOIs under different motion conditions. (c–e) The gaze maps under the “No motion”, “Mild motion”, and “Full motion” conditions, respectively.
Figure 9. Eye tracker data analysis in the “With signs” scenario. (a) The areas of interest (AOI) map consisting of the “Track” area highlighted in pink, the “Dashboard” area in blue, and the “Warning signs” area in green. (b) The total share of gaze focus time on the different AOIs under different motion conditions. (c–e) The gaze maps under the “No motion”, “Mild motion”, and “Full motion” conditions, respectively.
Figure 10. Mean scores given by the participants to the different levels of the motion platform’s cues.
Figure 11. Distribution of the participants according to their motion sickness state based on MISC score.
Table 1. MIsery SCale (MISC) included in the questionnaire for evaluating the motion sickness state of the participants in our experiment.
Symptom | Score
No problems | 0
Uneasiness (no typical symptoms) | 1
Dizziness, warmth, headache, stomach awareness, sweating, ...: vague | 2
Dizziness, warmth, headache, stomach awareness, sweating, ...: slight | 3
Dizziness, warmth, headache, stomach awareness, sweating, ...: fairly | 4
Dizziness, warmth, headache, stomach awareness, sweating, ...: severe | 5
Nausea: slight | 6
Nausea: fairly | 7
Nausea: severe | 8
Nausea: retching | 9
Vomiting | 10
Table 2. Summary of the motion platform parameters that define scale function (Figure 6) tuned for different motion conditions, namely “No motion”, “Mild motion”, and “High motion” levels of motion cues.
Motion Parameter | Description | Axis | No Motion | Mild Motion | High Motion
Maximum Acceleration [rad/s2] | Limit for input linear acceleration, which defines the Scale function. | xAcc | 0 | 0.7 | 1.5
 | | yAcc | 0 | 0.55 | 1.1
 | | zAcc | 0 | 0.7 | 1.5
Maximum Angular Velocity [rad/s] | Limit for input angular velocity, which defines the Scale function. | xAV | 0 | 0.07 | 0.15
 | | yAV | 0 | 0.07 | 0.15
Gain [-] | Multiplication factor (gain) of the linear acceleration that defines the Scale function. | xG | - | 0.45 | 0.9
 | | yG | - | 0.1 | 0.2
 | | zG | - | 0.15 | 0.3
Table 3. Summary of the results of the one-way ANOVA test conducted on the mean scores assigned to three levels of motion in Figure 10.
 | Df | Sum Sq | Mean Sq | F | p-Value 1
Motion Level Score | 2 | 19.51 | 9.754 | 7.337 | 0.00133
Residuals | 66 | 87.74 | 1.329 | |
1 A p-value < 0.05 is statistically significant.
