Article

Assessment of User Preferences for In-Car Display Combinations during Non-Driving Tasks: An Experimental Study Using a Virtual Reality Head-Mounted Display Prototype

1 Design Course, Graduate School of Chiba University, Yayoi-cho 1-33, Inage-ku, Chiba 263-8522, Japan
2 Huizhou Desay SV Automotive Co., Ltd., No. 6, Huitai North Road, Huinan Technology Park, Huizhou 516006, China
3 College of Mechanical and Vehicle Engineering, Hunan University, Lushan Road (S), Changsha 410082, China
* Author to whom correspondence should be addressed.
World Electr. Veh. J. 2024, 15(6), 264; https://doi.org/10.3390/wevj15060264
Submission received: 9 May 2024 / Revised: 9 June 2024 / Accepted: 11 June 2024 / Published: 17 June 2024

Abstract:

The goal of vehicular automation is to enhance driver comfort by reducing the necessity for active engagement in driving. This allows drivers to perform non-driving-related tasks (NDRTs), with attention shifted away from the driving process. Despite this, there is a discrepancy between current in-vehicle display configurations and the escalating demands of NDRTs. This study investigates drivers’ preferences for in-vehicle display configurations within highly automated driving contexts. Using virtual reality head-mounted displays (VR-HMDs) to simulate autonomous driving scenarios, this research employs Unity 3D to develop head movement tracking software. This setup enables the creation of virtual driving environments and the collection of data on visual attention distribution. Employing an orthogonal design, the experiment methodically analyses and categorizes the primary components of in-vehicle display configurations to determine their correlation with visual immersion metrics. Additionally, this study incorporates subjective questionnaires to identify the most immersive display configurations and the key factors shaping user experience. Statistical analysis reveals that a combination of a portrait display with a Windshield Head-Up Display (W-HUD) is favored under highly automated driving conditions, providing increased immersion during NDRTs. This finding underscores the importance of tailoring in-vehicle display configurations to individual needs to avoid distractions and enhance user engagement.

1. Introduction

With the advancement of autonomous driving technology, modern vehicles are gradually transforming from traditional means of transportation into multifunctional mobile spaces. This transformation aims not only to enhance road safety and reduce traffic congestion, but also to improve driving comfort and the passenger experience through vehicle automation [1,2]. In particular, Level 4 high-automation vehicles permit drivers to undertake non-driving-related tasks (NDRTs), such as watching videos or engaging in social interactions, most of the time, which imposes new requirements and challenges on in-vehicle display systems [3]. Alongside the increasing functionality of autonomous vehicles, in-vehicle display technologies are swiftly transitioning from single to multiple screens, aiming to satisfy users’ desires for customization and enhanced in-cabin experiences [4,5]. The challenge is that current in-vehicle display configurations, and the rate at which they are updated, cannot keep pace with the development of autonomous driving technology, especially in supporting NDRTs. Therefore, whether an optimal in-vehicle display configuration exists in an autonomous driving environment becomes a central issue. To address this, the present study explores the applicability and optimization strategies of in-vehicle display systems in autonomous driving environments, innovatively using a relatively inexpensive sensor (a virtual reality headset without eye-tracking) to determine the driver’s state and assess the distribution of the driver’s attention. Several limitations arise: first, constructing the experimental scenarios with the Cities: Skylines software enhances scenario diversity, but the immersion and realism may differ from those of a real closed-road autonomous driving environment.
Secondly, this study primarily measures visual attention through head movement trajectories, which are algorithmically less precise and more prone to fluctuation than traditional eye-tracking technologies. Looking forward, however, by integrating kinematic sensors, the findings of this research could offer generalized and practical recommendations.
In autonomous driving, firms like Uber, Tesla, and Waymo have achieved notable commercial progress by conducting open-road tests and developing virtual systems [6,7]. These technological advances have not only improved the functionalities of autonomous vehicles but have also accelerated the development of in-vehicle display technologies to meet the increasing demands for multitasking within driving environments. While existing research has examined the relationship between configurations of in-vehicle display systems and user experience to some extent, a systematic assessment of user satisfaction with NDRTs in autonomous driving contexts is still missing [8]. Moreover, the existing literature frequently fails to take adequate account of the speed of technological updates and market deployment when discussing optimization strategies for in-vehicle display configurations and their effects on user interaction experiences [9]. Despite considerable research comparing in-vehicle displays, critical research gaps remain: for instance, few studies have evaluated the efficacy of various in-vehicle display configurations in highly realistic virtual driving environments, particularly configurations that enhance the user experience of NDRTs. This research therefore first concentrates on determining whether an optimal in-vehicle display configuration exists, by performing virtual driving experiments that analyze these configurations in depth within an autonomous driving context.

2. Literature Review

2.1. In-Vehicle Display Configurations

As vehicles increasingly become a third space for users, in-vehicle displays aim to provide drivers with comprehensive internal and external information in an autonomous driving environment; this has become a key research direction in the field of intelligent vehicles [4]. The rapid increase in data from both inside and outside the vehicle has led to in-vehicle display content surpassing traditional vehicle dashboard information. A significant characteristic of the trend towards multi-screen display combinations is the diversity and dynamism of their display locations and methods, reflecting the continuous evolution of technological advancements and user demands [10]. Despite continuous technological advancements, given the large existing stock of automobiles, the form of in-vehicle displays is expected to remain relatively stable in the foreseeable future. Yan explored multiple layouts of vehicle information display interfaces in the smart driving environment, providing a foundation for the rational layout of individual in-vehicle display areas [11]. Olaverri-Monreal investigated the human–machine interaction issues of automotive glass projection display systems, examining their application in intelligent cockpits. This has significant implications for the layout of display information in autonomous driving environments [12].
Ristina’s research analyzed the impact of different in-vehicle display position preferences on driver performance and gaze behavior, offering crucial insights for the in-vehicle display configurations of interest here [13]. However, most research on in-vehicle displays focuses on comparisons between the performance and layout of single display devices. Research on the collaborative functioning of in-vehicle display configurations and the provision of a consistent experience is relatively scarce. Therefore, this study further explores the problem from the perspective of in-vehicle display configuration combinations, filling the gaps in existing research.
According to Tan et al. [14], who researched the effects of in-vehicle display size within autonomous driving environments, larger monitors can offer improved visual experiences and information rendering outcomes. Development trends in the in-vehicle display market from 2016 to 2023 show that the market share of small-sized displays declined drastically, while that of large displays ranging from 10 to 17 inches increased significantly. This indicates that larger displays are becoming increasingly popular in the design of autonomous vehicles. Miller believes that simply increasing the size of the display is not the optimal solution [15]. Chih-Chun Lai’s research, starting from the perspective of operational usability, found that the 12.3-inch display model is most popular among users within a shorter operation time [16]. Accordingly, this study focuses on display sizes with a higher market share, using the most popular sizes as experimental samples to ensure the general applicability and relevance of the research findings.

2.2. Head-Up Display

The head-up display (HUD) is a critical technology for minimizing driving distractions, encompassing several HUD types: Combiner HUDs (C-HUDs), Windshield HUDs (W-HUDs), and Augmented Reality HUDs (AR-HUDs) [17]. However, some views hold that the semi-transparent characteristic of HUDs can have a negative impact on drivers [18]. Given that AR-HUDs have not been extensively studied, this paper selects a 7-inch wide-viewing-angle W-HUD available on the market as the subject of study, to evaluate its performance and driver preferences in autonomous driving mode. Furthermore, given the modern automotive design trend of customizing dashboards according to user preferences, this study specifically considers the importance of monitor system size in vehicles. Drawing on industry sales figures and the related literature, this study posits that an 8-inch size for the monitoring system is preferable [19]. Automotive research firm SBD Automotive has reported that various configurations of instrument panels, central flat displays, and HUDs exist in the current market. Initially, the industry utilized a 6-inch display embedded in the central console; at present, car manufacturers and suppliers have implemented more than ten advanced configurations, featuring multiple sizes and display options [15]. While most HUD initiatives focus on enhancing driving safety and efficiency, research on display configurations for NDRTs in advanced autonomous driving environments is relatively scarce. The purpose of this study is to fill the research gap in in-vehicle displays for non-driving-related tasks by evaluating the display configurations of different vehicle models.

2.3. Non-Driving-Related Tasks

Since the six-level autonomous driving classification system was released by SAE International in 2014, it has been widely recognized as a key reference standard within the automotive industry [20]. In this system, the transition from L2 to L3 is seen as a milestone in autonomous driving technology, indicating that in vehicles at L3 and above, the driver does not need to continuously monitor the driving environment. This level of automation not only enables drivers to engage in NDRTs, but also aims to significantly improve their comfort and overall driving experience by reducing the demands of driving tasks [3]. In a highly automated driving environment, although direct control of the vehicle is not necessary, drivers engaging in NDRTs such as watching videos may experience increased fatigue [21]. Moreover, even in assisted driving modes where drivers are required to monitor the autonomous driving system, they may be tempted by non-driving tasks such as reading, grooming, using electronic devices, talking, or listening to music. As the level of autonomous driving technology increases, these activities should not only be designed not to distract the driver’s attention but should also be considered everyday activities that drivers might voluntarily engage in under autonomous driving conditions [22]. When assessing drivers’ obstacle avoidance performance after taking over control, activities such as writing emails and watching videos have been used as NDRTs [23]. Watching videos was chosen as the NDRT for this study, as it represents a common activity that drivers are thought to frequently engage in within autonomous driving environments to pass the time.

2.4. Virtual Driving Platform

VR head-mounted displays (VR-HMDs) have revolutionized the field of driving simulation, providing researchers with convenient control over experimental environments to cater to a diverse range of research requirements [24]. This advancement is crucial for propelling the development of autonomous driving technologies. Particularly in simulating in-vehicle environments and appraising passenger experiences, VR-HMDs have revealed considerable promise for the preliminary evaluation of user interface designs [25]. Research on preferences for augmented reality selection technologies has revealed the complexity and diversity of user needs, emphasizing the importance of considering diverse user preferences [26]. The potential of VR technology to capture behavioral data is particularly noteworthy: Robin et al. [27] demonstrated that head movement data collected with VR-HMDs can be used to accurately estimate the driver’s gaze area. Mungyeong Choe used head movement data to estimate the driver’s gaze area with an accuracy of approximately 72.1%, thereby determining the driver’s state through cost-effective sensors [28]. Amamra utilized VR technology for head tracking to simulate the driver’s line-of-sight movements, proposing a method that employs VR-HMDs to mimic autonomous driving environments and evaluate user preferences [29]. Andrew developed a VR system that adjusts the display content based on the user’s head and eye movement information, thereby evaluating the flight experience [30]. Virtual driving has fundamentally transformed the driving simulation field, offering a controllable experimental setting to meet diverse research requirements and facilitating simulations of autonomous driving. However, no studies have yet investigated the necessity of optimal in-vehicle display configurations in the field of virtual driving.
The use of a virtual driving platform to validate the importance of optimal in-vehicle display settings can address this void, creating a new branch in smart vehicle research and opening new possibilities for improving the accuracy of simulated in-vehicle environments and assessments of passenger experiences.

3. Experiment of Driver Immersion with In-Vehicle Display Configurations

Following our summary of in-vehicle display configurations, we selected various configurations for recombination and experimentation. Ensuring their collaborative operation and experience consistency, our goal was to quantify the differences in experience between display configurations, test the potential benefits and limitations of each combination, and identify the optimal in-vehicle display configuration for non-driving tasks in high-driving-automation scenarios.

3.1. Method and Parameters

This experimental design combined subjective experience with objective data. Previous research has demonstrated that head movement data can predict the driver’s gaze area with an accuracy of approximately 72.1%; moreover, head movement tracking not only captures the direction of the driver’s gaze but also provides information on how the driver directs attention in space [28]. Based on these insights, the primary immersion metric in this study is visual attention mapping data, generated by tracking the motion of the head’s central point and selected as the measure of the driver’s immersion while watching videos. The metric is quantified by recording the total instances of the driver’s attention falling on specific display areas: the higher the proportion of gaze samples in the video area, the more immersed the driver is considered to be in that area, allowing the strengths of an in-vehicle display combination to be deduced for specific situations.
To efficiently assess the impact of multiple factors on the driving experience, this study utilized an orthogonal design method for experimental grouping. The application of orthogonal design significantly reduced the number of tests required to assess multiple factors, while ensuring the accuracy and reliability of the experimental results. Through this method, we were able to systematically evaluate the impact of different in-vehicle display configurations on the driving experience with the minimum number of experiments [29]. Following each experimental group, participants were required to complete a preference-based experience survey to subjectively corroborate the experimental data. The integration of subjective evaluation and objective data is designed to offer a holistic view, aiming for a deeper comprehension and verification of drivers’ preferences for in-vehicle display configurations and their influence on immersive experiences.

3.2. Apparatus and Environment

3.2.1. Apparatus

This study was conducted in a usability laboratory. Participants were asked to sit in an ergonomically designed, height-adjustable chair at one side of the table. In front of them was a MacBook Pro (2.2 GHz M2, 16 GB 2400 MHz DDR4, and a 516 GB hard drive; Cupertino, CA, USA), and a Meta Quest 2 (Menlo Park, CA, USA) was mounted on their heads as the device for creating an immersive driving environment. This simulator is a VR-HMD officially released in September 2020 at Facebook Connect, capable of tracking the user’s head and hand movements without the need for external sensors and providing a visual experience with a 120-degree field of view.

3.2.2. Virtual Driving Tasks and Environment

To enhance the immersion of participants, the experiment utilized Unity 3D and Unity Vehicle Tools to create a highly realistic virtual autonomous driving environment, as shown in Figure 1a. The VR simulation ensures accurate physical simulation and driving dynamics, emulating the real driving experience and increasing the immersive quality of the experiment [31]. Using Cities: Skylines, a virtual environment incorporating various driving scenes from industrial outskirts to city centers was created. Figure 1b shows the digital assets, including the converted 3D car data, that were integrated into the VR scene using a 120-degree curved screen and a 1:1 urban model to guarantee the realism of the environment. The simulated driving experience features routes, timing, traffic, and driving styles, resulting in a two-minute driving recording. The conditions of the experiment can be dynamically modified to finely regulate the study environment.

3.2.3. Classification of In-Vehicle Display Configurations

To classify existing in-vehicle display setups, we gathered common configuration names from the market and marked the displays, based on SBD Automotive’s report [15], which analyzes current in-vehicle display configurations on the market. The information compiled is not all-encompassing, and not all listed brands provide every type of display. As illustrated in Appendix A, these configurations are broadly categorized into three types, disregarding variations in energy sources.
First, indicative of the trend towards simplified dashboards, is a configuration similar to the Tesla Model 3, which eliminates the traditional dashboard and consolidates all information (including entertainment and essential instrument data, such as the speedometer, tachometer, and power meter) into a single floating mega display, referred to here as the center console display. Talia Lavie’s research also points out that in-vehicle displays are evolving towards simplified dashboards [32]. Secondly, for configurations that maintain two display devices, the Volvo XC90 is selected as a typical example, based on sales and market trends. The portrait display and instrument cluster display of this model, merging entertainment and control features, exemplify this type of configuration. Finally, models equipped with three or more displays, represented by range-extended EVs such as Ideal Automobile and Xiaopeng Automobile, demonstrate a more varied display setup, incorporating a W-HUD. This configuration, markedly distinct from Tesla’s single large display, features direct coordination between display devices; thus, this research considers such configurations as well. Through the analysis of various types of in-vehicle display configurations, this study aims to propose more suitable display solutions for NDRT scenarios in advanced autonomous driving environments.

3.3. Experiment Design

3.3.1. Orthogonal Testing

Orthogonal experimental design is an efficient method for arranging multi-factor experiments and seeking optimal configurations [33]. It is commonly applied in scientific areas such as product design [33], aesthetic design [34], and architectural design [35], as it allows conclusions to be drawn from only a small subset of the full set of experiments, unlike other design methods. In our research, we selected the L4(2^3) orthogonal table (refer to Table 1), which accommodates three factors (center console display, HUD, and dashboard), each with two levels, thus halving the number of experimental trials relative to the full 2^3 = 8 factorial. Additionally, as interactions are evenly distributed across each factor, it allows the main effects to be ranked without interference from these interactions [36]. Four test cases were designed as four in-vehicle display configurations to be tested in virtual driving experiments. Table 1 illustrates how, through orthogonal design, the research built an experimental framework to evaluate the effects of various in-vehicle display combinations on NDRTs. In this framework, the presence of the instrument cluster display, the presence of a W-HUD, and the specification of the center console display (12.3-inch floating display or 12.3-inch portrait display) are treated as the crucial factors. Originally, considering standard configuration combinations, the experiment was to include nine distinct configurations.
Notably, the orthogonal design’s reconfiguration of in-vehicle displays includes the setups shown in Appendix A, reflecting trends from leading car manufacturers, such as Tesla’s move from a dashboard to a floating display, Li Auto’s introduction of a HUD, and Volvo’s distinctive portrait display. This three-factor, two-level design allows the orthogonal method to efficiently determine which factors have the most substantial effects on the NDRT experience.
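For reference, the grouping described above can be sketched with the standard L4(2^3) array. This is a minimal Python illustration under stated assumptions: the row ordering is the textbook L4 layout and the level labels are hypothetical, so the paper’s Table 1 may assign rows and labels differently.

```python
# Standard L4(2^3) orthogonal array: 4 runs covering 3 two-level factors,
# with each level appearing equally often in every column.
L4 = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]

# Hypothetical level labels for the three factors (assumption, not Table 1):
FACTORS = {
    "A": {1: "12.3-inch floating display", 2: "12.3-inch portrait display"},
    "B": {1: "with instrument cluster", 2: "without instrument cluster"},
    "C": {1: "with W-HUD", 2: "without W-HUD"},
}

def describe(run):
    """Translate one orthogonal-array row into a readable configuration."""
    a, b, c = run
    return (FACTORS["A"][a], FACTORS["B"][b], FACTORS["C"][c])
```

The balance property (each level appears twice per column, and level pairs are evenly spread across column pairs) is what lets four runs stand in for the full eight-run factorial.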

3.3.2. Prototype of Interface Interaction

Following the orthogonal design stage, the interfaces and configurations were adapted into interactive prototypes. This type of interactive prototype underscores the significance of consistency in Human–Machine Interface (HMI) prototypes [37]. Coordinating interactions among HMI elements, unifying information and entertainment functionalities, and offering system feedback help preserve driver situational awareness [38]. As shown in Appendix B, the primary task involves watching videos to evaluate the effects of tasks unrelated to driving, integrated with navigation and data-viewing tasks and aligned with the HMI configurations of the four groups. Figure 2 shows that the interface design accounts for drivers’ operational habits and visual focus, guaranteeing logical information presentation and consistency in visual elements such as fonts, colors, and icons. Both the dashboard and HUD concurrently present speed and navigation details, with the center console display using a split-screen format to offer video viewing and essential navigation information simultaneously. Simulating typical scenarios of autonomous driving, the experiment gathers data to assess driver preferences.

3.3.3. Questionnaire

The questionnaire employs an innovative perspective to evaluate participant preferences, applying Dual-Factor Theory to guarantee the thoroughness of the assessment. Divided into rational and emotional dimensions, the questionnaire focuses on functionality, efficiency, and usability in the former and on pleasure and comfort in the latter. Question design delves into the analysis of experiences and their interactions across both dimensions, using a 5-point scale to comprehensively collect feedback. Employing the expert method, automotive specialists from DESAY SV were invited to review and refine the questionnaire content, removing redundancies to ensure alignment with the theoretical core and efficiency. This measure aims to accurately capture experiential preferences, supporting the consistency of the experimental results.

3.4. Participants and Experiment Procedure

This study involved 26 participants (15 males and 11 females), all right-handed and with no history of repetitive injuries to the upper limbs or hands. Every participant possessed a valid driving license and at least one year of driving experience, with recruitment conducted via social media. Prior to the experiment, each participant signed an informed consent form verifying their voluntary participation in this study and was informed that all gathered data would be compiled and reported anonymously.
To ensure participants could quickly familiarize themselves with the experimental tasks, a 30 s practice session was provided before the experiment, aimed at adjusting the alignment between the visual focal point and the head-mounted device. Moreover, participants were comprehensively informed about the experimental procedure, including possible discomfort in the virtual reality setting and the right to withdraw from the experiment at any moment. The design and execution of this study did not encroach upon areas that necessitate an ethical review; nonetheless, the research strictly followed ethical principles and industry norms.
During the experiment, participants experienced four different in-vehicle display configurations in a virtual driving environment, with each configuration lasting for 2 min. The experimental tasks included watching videos in fully autonomous driving mode, while monitoring driving mileage information and responding to abnormal traffic conditions. During standard tasks, participants were required to stay informed about the speed and focus on the map while partaking in NDRTs. It is noteworthy that participants did not need to take control of the vehicle, even in emergency scenarios. Upon completion of the experiment, preferences for each configuration were gathered from participants via a satisfaction questionnaire. The full experiment comprised preparation, experiencing the configurations, interruption resets, short breaks, and questionnaire completion, amounting to a total of about 0.5 h.

3.5. Data Collection and Analysis

As shown in Figure 3, using this hardware and software developed with Unity3D, head movement data are transformed into tracking data (visual attention mapping data). The principle is to cast a ray from a central point extended in front of the VR-HMD, with data sampled approximately 7 times per second in the VR environment; determining whether the user’s gaze intersects three-dimensional objects is crucial [39]. Each sample thus generates three-dimensional coordinate data. While accurate behavioral data are captured, head behavior data are automatically visualized and numbered in real time, and the data are also converted into a heatmap. The specific formula is given in Appendix C: the gaze duration relative to the total time (a percentage) is mapped to colors; through a color mapping such as red–green–blue, areas with more concentrated and longer gaze durations are shown in warm colors (such as red), whereas the opposite areas are shown in cool colors [40].
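The red–green–blue color mapping described above can be sketched as a simple dwell-fraction-to-RGB ramp. This is an illustrative Python function only, not the authors’ implementation; the exact colormap defined in Appendix C may differ.

```python
def dwell_to_rgb(fraction):
    """Map a dwell-time fraction (0..1) to an RGB triple on a
    blue -> green -> red ramp: long dwell = warm, short dwell = cool."""
    f = max(0.0, min(1.0, fraction))   # clamp to the valid range
    if f < 0.5:                        # cool half: blue fades into green
        t = f / 0.5
        return (0, int(255 * t), int(255 * (1 - t)))
    t = (f - 0.5) / 0.5                # warm half: green fades into red
    return (int(255 * t), int(255 * (1 - t)), 0)
```

Applying this per display region turns the dwell percentages into the warm/cool heatmap described in the text.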
By analyzing participants’ head movement data, visual attention mapping data were generated, visually presenting the attention distribution under each configuration. Specifically, with sampling seven times per second for two minutes, 840 data coordinate points are generated. Then, dividing the number of dwell points in the video area of each configuration by the total data coordinate points yields the percentage of the video area, representing the cumulative distribution of user attention. This represents the number of gaze points from multiple users divided by the total number of visual points, which can also be understood as the percentage of time that gaze points persist relative to total time.
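The percentage computation just described can be sketched as follows — a minimal Python example assuming each of the roughly 840 head-ray samples has already been labeled with the display region it hit (the region names here are illustrative):

```python
SAMPLE_RATE_HZ = 7
DURATION_S = 120  # a two-minute recording yields 7 * 120 = 840 samples

def attention_share(gaze_regions, region="video"):
    """Percentage of sampled gaze points falling in one display region.

    `gaze_regions` is a list with one region label per head-ray sample,
    e.g. "video", "hud", "cluster", "other" (hypothetical labels)."""
    total = len(gaze_regions)
    if total == 0:
        return 0.0
    return 100.0 * sum(1 for r in gaze_regions if r == region) / total
```

Dividing dwell points in the video area by the total sample count in this way gives the cumulative attention percentage used as the immersion index.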
Regarding data analysis, the obtained visual attention mapping data are analyzed using range analysis of orthogonal experiments. This method assesses the impact of different factors on experimental outcomes and identifies the optimal combination of factor levels.
(1) Calculate the sum K for each level: For each level of each factor, calculate the sum of all corresponding response values y. For factor A at level 1, for example, K_A1 = Σy_A1;
(2) Calculate the average k for each level: Divide the sum for each level by the number of occurrences of that level (two in the L4 table): k_A1 = K_A1/2;
(3) Calculate the range R: R is the difference between the highest and lowest average values across the levels of each factor, indicating how much the factor influences the experimental outcomes: R = max(k) − min(k);
(4) Compare the ranges: The greater the range R of a factor, the greater its impact on the experimental results; this determines the ranking of factors by effectiveness;
(5) Select the optimal levels: Based on the range analysis and the objective of the experiment, compare the averages k to select the best level for each factor, yielding the best in-car display combination.
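The five steps above can be sketched in Python. The response values in the usage note below are illustrative placeholders, not the paper’s measured data, and the run tuples assume a standard L4(2^3) row ordering.

```python
def range_analysis(runs, responses, n_factors=3):
    """Range analysis for a two-level orthogonal experiment.

    `runs` holds one level tuple per trial, `responses` the measured
    index (here: % of gaze points in the video area). Returns, per
    factor, the level averages k, the range R, and the best level."""
    result = {}
    for f in range(n_factors):
        sums, counts = {}, {}
        for run, y in zip(runs, responses):
            lvl = run[f]
            sums[lvl] = sums.get(lvl, 0.0) + y       # step (1): K per level
            counts[lvl] = counts.get(lvl, 0) + 1
        k = {lvl: sums[lvl] / counts[lvl] for lvl in sums}  # step (2)
        R = max(k.values()) - min(k.values())               # step (3)
        result[f] = {"k": k, "R": R,                        # step (4): rank by R
                     "best": max(k, key=k.get)}             # step (5)
    return result
```

For example, `range_analysis([(1,1,1),(1,2,2),(2,1,2),(2,2,1)], [73.4, 80.0, 85.0, 90.9])` averages each factor’s two trials per level and ranks the factors by the resulting ranges.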

4. Results

4.1. Analysis of Ranges

As shown in Figure 4, this study reveals the significant impact of the center console display (Factor A), the instrument cluster display (Factor B), and the HUD (Factor C) on visual attention under different configurations (in Figure 4, * indicates p < 0.05). Specifically, as shown in Figure 4a, the visual attention mapping value of the portrait display (A2) in configuration 4 is 90.934%, significantly higher than the 73.400% of the floating display (A1) in configuration 1 (p < 0.05), indicating that configuration 4 has a significant advantage and higher immersion in attracting visual attention. Figure 4b shows a significant difference in the gaze point ratio on the instrument cluster display (Factor B) between configurations 1 and 2: 9.58% in configuration 1 versus 6.42% in configuration 2 (p < 0.05), showing a significant advantage of configuration 2 for this factor. As shown in Figure 4c, in the comparison of HUDs (Factor C), the gaze point ratio on the W-HUD in configuration 4 is 2.1%, significantly lower than the 6.5% in configuration 1 (p < 0.05), further proving that the HUD in configuration 4 more effectively reduces attention interference. In summary, configuration 4, with the portrait display and W-HUD, performs excellently in attracting and maintaining visual attention, while configuration 2 has a significant advantage for the instrument cluster display.
Using orthogonal experiments, this study analyzed the impact of the four configurations on visual attention in an autonomous driving environment, with the balanced design ensuring statistically sound comparisons. The range analysis (Table 2) indicated that the order of influence of the three factors on the evaluation index was C > A > B: the HUD (Factor C) had the most significant impact (R = 9.257), while the instrument cluster display (Factor B) had the least (R = 5.721). Based on the average effect values (k values), the theoretically optimal combination was A2B2C2.
To further examine the performance of each factor, the analysis in Figure 5 shows that, among all tested configurations, the A2 12.3-inch portrait center console screen (configuration 4) performed best, while the combination A2B2C2, without a HUD or an instrument cluster display, showed the best predicted visual attention effect. These results indicate that the HUD is the most significant factor in reducing attention interference, whereas the influence of the instrument cluster is relatively minor. However, since the A2B2C2 combination was not included in the orthogonal experiment design table, its superior predicted performance requires supplementary validation.

4.2. Supplementary Experiment

To further verify the superiority of configuration 5 (A2B2C2), we recalled 26 subjects and utilized the advantages of the virtual driving platform to validate configuration 5. Table 3 shows the comparative experimental results of configurations 4 and 5. In the comparison of Factor A, the head movement gaze point ratio of configuration 5 is 83.7592%, while that of configuration 4 (A2B2C1) is 90.9342%, indicating that configuration 4 still has a significant advantage in attracting visual attention.
The difference test results show that, based on the independent-samples analysis and comprehensive consideration of the factors, configuration 4 was selected as the best combination, with p < 0.05 indicating a statistically significant difference. This suggests that although configuration 5 is theoretically the optimal combination, the actual verification results show that configuration 4 performs better in attracting visual attention.
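The two-sample comparison in Table 3 can be reproduced from its summary statistics alone. The following is a minimal standard-library sketch of the pooled-variance t statistic (means, SDs, and group sizes taken from Table 3):

```python
import math

# Two-sample t statistic with pooled variance, computed from
# summary statistics (n = 26 subjects per group, as in Table 3).
def pooled_t(m1, s1, n1, m2, s2, n2):
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

t = pooled_t(90.9342, 6.59021, 26,   # configuration 4 (A2B2C1)
             83.7592, 9.93348, 26)   # configuration 5 (A2B2C2)
print(f"t = {t:.3f} on {26 + 26 - 2} degrees of freedom")  # t ≈ 3.069
```

The resulting t ≈ 3.069 on 50 degrees of freedom matches the t and p values reported in Table 3.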

4.3. Heatmap

By organizing the visual attention mapping data of the subjects, we conducted an overlap analysis of the heatmaps, excluding inconsistent parts. This process revealed areas of common focus among the subjects, which usually correspond to key content or salient elements in the video, helping to identify critical elements of the video. The more concentrated the color, the more immersed the subjects were in the video-watching task. In comparison, configuration 4 in Figure 6d showed a higher concentration of attention, indicating that configuration 4 has an advantage in attracting and maintaining attention compared to other configurations, which is consistent with the results of non-parametric tests.
Specifically, in Figure 6a, the attention hotspots of configuration 1 are more scattered, with B1 and C1 occupying a large amount of visual resources, dispersing attention to multiple areas of the video. In Figure 6c, although configuration 3 shows some concentration of attention, the road environment disperses a portion of the visual attention. In Figure 6b, the visual distribution of configuration 2 is also relatively scattered, mainly concentrated in front of the driver. Therefore, configuration 4 performs the best in attracting and maintaining visual attention, as verified by the visual attention heatmaps and statistical analysis results.
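The overlap analysis described above can be sketched with NumPy. This is a hypothetical sketch: the grid size, the per-subject attention threshold, and the 50% agreement cutoff are illustrative choices, not the study's parameters.

```python
import numpy as np

# Each subject's gaze map: a 2D grid of fixation densities
# (random data stands in for the 26 subjects' real maps).
rng = np.random.default_rng(0)
subject_maps = [rng.random((40, 60)) for _ in range(26)]

# Per-subject binary "attended" masks, then overlap across subjects.
masks = [m > 0.8 for m in subject_maps]   # top ~20% of each subject's map
overlap = np.mean(masks, axis=0)          # fraction of subjects attending each cell

# Keep only regions most subjects share; inconsistent parts are excluded.
common_focus = overlap >= 0.5
print("shared-attention cells:", int(common_focus.sum()))
```

Cells where `common_focus` is true correspond to the commonly attended regions that would appear as concentrated color in the overlaid heatmaps of Figure 6.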

4.4. User Experience

Table 4 shows that the median scores (P25, P75) for the four configurations are as follows: 22 (20, 24.5), 33 (31, 36), 28 (24, 31), and 36 (29.5, 41). A non-parametric test yielded H = 64.181 with p < 0.001, indicating statistically significant differences among the scores for the four configurations. Pairwise comparisons showed statistical differences between configuration 1 and the other three configurations (p < 0.05), and significant differences between configuration 3 and both configurations 2 and 4 (p < 0.001). Judging by the medians and corresponding quartiles, configuration 4 scored best, which aligns with the findings from the user behavior experiments.
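The Kruskal–Wallis H statistic behind Table 4 can be computed from raw scores as follows. This is a self-contained sketch with mid-rank handling of ties (the study's per-subject scores are not reproduced here, only the test procedure):

```python
# Kruskal–Wallis H statistic for k independent groups.
def kruskal_h(*groups):
    data = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n = len(data)
    # Assign ranks, averaging over ties (mid-ranks).
    rank_of = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1
        mid = (i + 1 + j) / 2            # average of ranks i+1 .. j
        for k in range(i, j):
            rank_of[k] = mid
        i = j
    # Sum of ranks per group.
    sums = [0.0] * len(groups)
    for (x, gi), r in zip(data, rank_of):
        sums[gi] += r
    return (12 / (n * (n + 1))
            * sum(s * s / len(g) for s, g in zip(sums, groups))
            - 3 * (n + 1))

# Fully separated groups give the maximal H for their sizes:
print(kruskal_h([1, 2, 3], [4, 5, 6]))  # ≈ 3.857
```

Applied to the four configurations' score vectors (26 subjects each), this procedure yields the H = 64.181 reported in Table 4; the follow-up pairwise comparisons use Bonferroni-adjusted significance levels.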

5. Discussion

5.1. Center Console Display

In an advanced autonomous driving environment, subjects performed a video-watching task under four different configurations. When comparing individual configuration factors, we found significant differences in visual attention attraction between the center console display (Factor A) in configuration 4 (A2B2C1) and in configuration 1 (A1B1C1). This result may be due to the portrait display providing a more immersive experience than the floating display during video watching in advanced autonomous driving, especially for entertainment tasks. This finding aligns with the research by Pankok et al., who noted that reducing visual clutter improves driver attention and task performance by lowering cognitive load and visual distraction [41]. However, it contrasts sharply with the findings of Mathur et al., who approached display design from the perspectives of driving safety and visual search time and concluded that the floating display is more efficient in those respects [42]. This difference may stem from the different research contexts and task types: our study focuses on entertainment tasks, whereas Mathur et al. primarily targeted driving safety and visual search time. Entertainment tasks might therefore depend more on immersion, while driving or visual search tasks require faster information acquisition and processing. Furthermore, when comparing display quantity across the four configurations, we found that the three displays of configuration 1 (A1B1C1) were less immersive than configurations with one or two screens. This can be explained by the significant influence of the number and type of displays on driver attention; J.-B. Ryu et al. also support this view, noting that multiple screens increase driver distraction behaviors [43].

5.2. Dashboard

In the comparison of the dashboard (Factor B), a significant difference in the gaze point distribution ratio was found between configuration 1 and configuration 2, suggesting that the combination of a W-HUD and the dashboard did not provide a satisfactory experience. Previous studies have shown that the combination of a HUD and the dashboard might cause conflicts in user preference and satisfaction [44]. On the other hand, comparing configurations with and without the dashboard revealed that configurations lacking the dashboard allowed subjects to achieve better immersion in advanced autonomous driving scenarios. This finding may be partially supported by Conaghan et al., who reported that the dashboard increases mental load when performing navigation tasks during driving, thus presenting challenges in reducing visual interference and enhancing immersion [45].

5.3. HUD

In the single-factor analysis, the effect of the HUD diverged: in the range analysis, the level without the W-HUD (configurations 2 and 3) showed the strongest factor influence, yet in the actual experience comparison, configuration 4 with the W-HUD achieved the highest visual attention. This may be attributed to the better combined effect of the W-HUD and the portrait display in configuration 4. This resembles Weinberg et al.'s finding on driving preferences, namely that drivers prefer a W-HUD over traditional dashboards; our study further provides evidence that combining the W-HUD with a portrait display can offer better immersion and reduce distraction [46]. It is also consistent with the study by Lagoo et al., which emphasizes that a HUD can minimize distractions [47]. However, it contrasts sharply with the findings of Conaghan et al., whose focus was on driving risks [45]. These results suggest that user preference for specific HUDs may be influenced by the usage environment.

5.4. User Experience

This study measured subjective ratings, including comfort, satisfaction, and usability. In comparing the ratings of the four configurations, the portrait display and W-HUD configuration performed best. This finding is consistent with our visual attention mapping measurements, in which the portrait display and W-HUD configuration showed higher immersion in video-watching tasks, and the supplementary experiment also confirmed its superior experience. The heatmaps show that the attention distribution of configuration 1 is more scattered than the concentrated distribution of configuration 4. This may be because, in advanced autonomous driving stages, multi-screen displays increase the cognitive load required to process information, thereby diluting the concentration of visual attention; this is supported by the research of Engström et al. [48]. In contrast, the advantage of configuration 4 lies in the W-HUD's ability to integrate road information and display information effectively, ensuring readability and minimizing interference with the driver's line of sight [17]. The observed superiority of this configuration may also reflect users' previous driving experience and technological perceptions, especially as traditional electronic dashboards have been the primary driving information display for the past twenty years (Westin et al.; Kim et al.) [49,50]. Nevertheless, in our comparisons the HUD outperformed the dashboard regardless of the level of driving automation.

6. Implication

This study contributes to the theory of human–computer interaction in highly automated driving environments. By exploring the influence of different in-vehicle display configurations on drivers’ immersion and attention during non-driving-related tasks (NDRTs), this study fills an important gap in the literature. Previous studies by Pankok et al. emphasized the importance of reducing cognitive load and visual clutter to improve driver performance [41], but this study extends this understanding by specifically focusing on entertainment tasks in autonomous vehicles. This study found that portrait display configurations and a W-HUD are more popular in enhancing user immersion, providing new insights for optimal display design in autonomous driving scenarios. This emphasizes that display design should be customized according to specific user activities, enriching the design discourse on personalized and situational awareness interfaces in automotive environments.
From a practical application perspective, the results of this study are significant for the design and implementation of in-vehicle display systems in highly automated vehicles. This research indicates that drivers prefer portrait displays when engaging in entertainment tasks, suggesting that automotive manufacturers should consider integrating this display mode to enhance user experience and satisfaction. This aligns with the trend of creating more personalized and user-centered in-vehicle environments. Furthermore, this study found that fewer display screens can sometimes lead to higher immersion, indicating that automotive manufacturers may achieve cost savings and simplified designs without sacrificing user experience.

7. Conclusions

In this study, we conducted a comprehensive comparative experiment to evaluate the differences between various in-vehicle display configurations in enhancing the execution of non-driving-related tasks (NDRTs). The main conclusions drawn from this study are as follows:
We confirmed that the experimental parameters could effectively differentiate between the four types of in-vehicle displays. The results indicated that, compared to multi-screen configurations, the setup using a portrait display and a W-HUD was more effective in attracting and maintaining the driver's visual attention, thereby enhancing the immersive experience for non-driving-related tasks. This finding emphasizes that, in the context of highly autonomous driving, one should not merely seek to increase the number of display devices but should focus on precise configurations based on specific driving scenarios and needs, as excessive displays may instead reduce visual immersion and degrade the user experience.

Additionally, the orientation of the displays (landscape vs. portrait) significantly affects the optimization of visual experiences. Although landscape screens are commonly assumed to offer better immersion for activities such as video watching, from an overall configuration perspective the portrait display promoted more effective user immersion, scored higher, and performed better in the virtual environment, suggesting that the portrait display might be the better choice for non-driving tasks, for instance in the design of future smart cockpits. This finding challenges the conventional understanding of in-vehicle display configurations. This study also emphasized that display orientation and quantity are crucial considerations in designing in-vehicle display configurations, significantly affecting user experience and satisfaction.
Our study has some limitations that should be addressed in the future. First, our research focused on a specific set of in-vehicle display configurations; although orthogonal experimental design provides a convenient method for assessing multivariate configurations, the supplementary experiment revealed possibilities beyond the scope of orthogonal designs, highlighting the importance of flexible and open experimental approaches in the design of display systems within autonomous vehicles. Second, in terms of experimental equipment, future studies should integrate eye-tracking technology to obtain more detailed metrics and build a more comprehensive visual attention assessment system. Third, our study only evaluated short-term performance and user experience metrics; to understand users' attitudes towards in-vehicle display configurations more comprehensively, long-term evaluations during the pre-popularity phase of autonomous driving would be very valuable. Fourth, future research should further explore the performance of these configurations in different driving scenarios and evaluate their combined impact on driving safety and user experience. By studying the interactions between these factors in depth, we can better optimize in-vehicle display systems, enhance the driving experience, and improve road safety.

Author Contributions

Conceptualization, L.L. and K.O.; methodology, L.L. and K.O.; software, L.L. and C.Q.J.C.; validation, L.L. and Z.Y. and C.Q.J.C.; formal analysis, L.L.; investigation, L.L.; resources, L.L. and C.Q.J.C.; data curation, L.L.; writing—original draft preparation, L.L.; writing—review and editing, L.L.; visualization, L.L.; supervision, C.Q.J.C.; project administration, L.L. and C.Q.J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This experiment received approval from the Ethics Review Committee of Huizhou Desay SV Automotive Co., Ltd. (Huizhou, China).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available in this article.

Conflicts of Interest

Zijiang Yang is an employee of Huizhou Desay SV Automotive Co., Ltd. The paper reflects the views of the scientists, and not the company.

Appendix A

Figure A1. Overview of mainstream in-vehicle display configuration types by current automakers.

Appendix B

Figure A2. Prototypes of interface interaction for four configurations.

Appendix C

T_i = T_{i−1} + ((S − D)/S) · (F_i · H / B), if (D − S) ≤ 0; otherwise, T_i = T_{i−1}
Note: n represents the number of gaze points, and the transparency of a single pixel on the heatmap after overlay at the ith gaze point is Ti (i = 1, 2, ⋯, n), and the distance from each pixel to the ith gaze point is D; Fi is the duration of the ith gaze point; B is the minimum duration of gaze points for complete transparency of the heatmap; H represents the degree of “red-green-blue” coverage over the background image in the heatmap (H = 100% when completely obscured); and S is the sensitivity (the maximum radius at which a single pixel affects surrounding pixels). B, H, S, and D have been predefined according to the actual experiment.
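The per-pixel transparency recurrence defined above can be expressed directly in code. This is an illustrative sketch: the parameter values chosen for S, H, and B are examples, not the values predefined in the experiment.

```python
# Per-pixel transparency update for the gaze heatmap (Appendix C recurrence):
# pixels within radius S of the i-th gaze point accumulate opacity
# proportional to the gaze duration F_i; other pixels are unchanged.
def update_transparency(T_prev, D, F_i, S=50.0, H=1.0, B=2.0):
    if D - S <= 0:                       # pixel inside the gaze point's radius
        return T_prev + (S - D) / S * (F_i * H / B)
    return T_prev                        # outside the radius: unchanged

# Accumulate over three gaze points at distances 10, 60, and 0 pixels
# with durations 0.5 s, 1.0 s, and 1.5 s (illustrative values).
t = 0.0
for dist, dur in [(10.0, 0.5), (60.0, 1.0), (0.0, 1.5)]:
    t = update_transparency(t, dist, dur)
print(f"accumulated opacity: {t:.3f}")   # → 0.950
```

The second gaze point (distance 60 > S = 50) contributes nothing, while the point landing exactly on the pixel (D = 0) contributes its full duration-weighted opacity F_i · H / B.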

References

  1. Leander, K.M.; Phillips, N.C.; Taylor, K.H. The Changing Social Spaces of Learning: Mapping New Mobilities. Rev. Res. Educ. 2010, 34, 329–394. [Google Scholar] [CrossRef]
  2. Bellet, T.; Cunneen, M.; Mullins, M.; Murphy, F.; Pütz, F.; Spickermann, F.; Braendle, C.; Baumann, M.F. From Semi to Fully Autonomous Vehicles: New Emerging Risks and Ethico-Legal Challenges for Human-Machine Interactions. Transp. Res. Part F Psychol. Behav. 2019, 63, 153–164. [Google Scholar] [CrossRef]
  3. Dattatreya, P.H. 22-1: Invited Paper: Future Automotive Interiors—The 3rd Living Space. SID Symp. Dig. Tech. Pap. 2016, 47, 263–266. [Google Scholar] [CrossRef]
  4. Choi, J.; Yoo, S.; Park, H.; Lee, C.-G. Performance Analysis of an Embedded Chipset on a Multi-Screen Based Automotive Applications Environment. In Proceedings of the 2022 5th International Conference on Information and Computer Technologies (ICICT), New York, NY, USA, 4–6 March 2022; IEEE: New York, NY, USA, 2022; pp. 39–42. [Google Scholar]
  5. Ran, L.; Luo, H.; Yan, Y.; Yu, M.; Zhang, X. Intelligent Driving Interface Layout and Design Research. In Advances in Human Aspects of Transportation; Stanton, N.A., Ed.; Springer International Publishing: Los Angeles, CA, USA, 2018; pp. 407–414. [Google Scholar]
  6. Hawkins, A.J. Waymo Is First to Put Fully Self-Driving Cars on US Roads without a Safety Driver—The Verge. Available online: https://www.theverge.com/2017/11/7/16615290/waymo-self-driving-safety-driver-chandler-autonomous (accessed on 8 January 2024).
  7. Lawson, G.; Salanitri, D.; Waterfield, B. Future Directions for the Development of Virtual Reality within an Automotive Manufacturer. Appl. Ergon. 2016, 53, 323–330. [Google Scholar] [CrossRef] [PubMed]
  8. Coppola, R.; Morisio, M. Connected Car: Technologies, Issues, Future Trends. ACM Comput. Surv. 2017, 49, 46. [Google Scholar] [CrossRef]
  9. Akamatsu, M.; Green, P.; Bengler, K. Automotive Technology and Human Factors Research: Past, Present, and Future. Int. J. Veh. Technol. 2013, 2013, 526180. [Google Scholar] [CrossRef]
  10. Jarosch, O.; Kuhnt, M.; Paradies, S.; Bengler, K. It’s out of Our Hands Now! Effects of Non-Driving Related Tasks during Highly Automated Driving on Drivers’ Fatigue. In Proceedings of the 9th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design: Driving Assessment 2017, Manchester Village, VT, USA, 26–29 June 2017; University of Iowa: Manchester Village, VT, USA, 2017; pp. 319–325. [Google Scholar]
  11. Zhu, Y.; Geng, Y.; Huang, R.; Zhang, X.; Wang, L.; Liu, W. Driving towards the Future: Exploring Human-Centered Design and Experiment of Glazing Projection Display Systems for Autonomous Vehicles. Int. J. Hum. Comput. Interact. 2023. [Google Scholar] [CrossRef]
  12. Olaverri-Monreal, C.; Hasan, A.E.; Bulut, J.; Korber, M.; Bengler, K. Impact of In-Vehicle Displays Location Preferences on Drivers’ Performance and Gaze. IEEE Trans. Intell. Transp. Syst. 2014, 15, 1770–1780. [Google Scholar] [CrossRef]
  13. Tan, D.S.; Gergle, D.; Scupelli, P.; Pausch, R. Physically Large Displays Improve Performance on Spatial Tasks. ACM Trans. Comput. Hum. Interact. 2006, 13, 71–99. [Google Scholar] [CrossRef]
  14. Tan, H.; Sun, J.; Wenjia, W.; Zhu, C. User Experience & Usability of Driving: A Bibliometric Analysis of 2000-2019. Int. J. Hum. Comput. Interact. 2021, 37, 297–307. [Google Scholar] [CrossRef]
  15. Miller, B. The Big Screen Dilemma. Available online: https://www.sbdautomotive.com/post/the-big-screen-dilemma-vpp (accessed on 2 October 2023).
  16. Lai, C.-C.; Wu, C.-F. Display and Device Size Effects on the Usability of Mini-Notebooks (Netbooks)/Ultraportables as Small Form-Factor Mobile PCs. Appl. Ergon. 2014, 45, 1106–1115. [Google Scholar] [CrossRef] [PubMed]
  17. Park, H.S.; Park, M.W.; Won, K.H.; Kim, K.; Jung, S.K. In-vehicle AR-HUD System to Provide Driving-safety Information. ETRI J. 2013, 35, 1038–1047. [Google Scholar] [CrossRef]
  18. Oh, H.J.; Ko, S.M.; Ji, Y.G. Effects of Superimposition of a Head-up Display on Driving Performance and Glance Behavior in the Elderly. Int. J. Hum. Comput. Interact. 2016, 32, 143–154. [Google Scholar] [CrossRef]
  19. Perisoara, L.A.; Sacaleanu, D.L.; Vasile, A. Instrument Clusters for Monitoring Electric Vehicles. In Proceedings of the 2017 IEEE 23rd International Symposium for Design and Technology in Electronic Packaging (SIITME), Constanta, Romania, 26–29 October 2017; IEEE: Constanta, Romania, 2017; pp. 379–382. [Google Scholar]
  20. Charissis, V.; Papanastasiou, S. Human–Machine Collaboration through Vehicle Head up Display Interface. Cogn. Technol. Work 2010, 12, 41–50. [Google Scholar] [CrossRef]
  21. Marberger, C.; Mielenz, H.; Naujoks, F.; Radlmayr, J.; Bengler, K.; Wandtner, B. Understanding and Applying the Concept of “Driver Availability” in Automated Driving. In Advances in Human Aspects of Transportation; Stanton, N.A., Ed.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2018; Volume 597, pp. 595–605. ISBN 978-3-319-60440-4. [Google Scholar]
  22. Li, L.; Yang, Z.; Zeng, J.; Carlos, C.Q.J. Evaluating Driver Preferences for In-Vehicle Displays during Distracted Driving Using Driving Simulators. Electronics 2024, 13, 1428. [Google Scholar] [CrossRef]
  23. Dogan, E.; Honnêt, V.; Masfrand, S.; Guillaume, A. Effects of Non-Driving-Related Tasks on Takeover Performance in Different Takeover Situations in Conditionally Automated Driving. Transp. Res. Part F Psychol. Behav. 2019, 62, 494–504. [Google Scholar] [CrossRef]
  24. Ma, R.H.Y.; Morris, A.; Herriotts, P.; Birrell, S. Investigating What Level of Visual Information Inspires Trust in a User of a Highly Automated Vehicle. Appl. Ergon. 2021, 90, 103272. [Google Scholar] [CrossRef]
  25. Brewster, S. Building Virtual and Augmented Reality Passenger Experiences. In Proceedings of the Companion of the 2022 ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Sophia Antipolis, France, 21–24 June 2022; ACM: Sophia Antipolis, France, 2022. [Google Scholar]
  26. McGill, M.; Wilson, G.; Medeiros, D.; Brewster, S.A. PassengXR: A Low Cost Platform for Any-Car, Multi-User, Motion-Based Passenger XR Experiences. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, Bend, OR, USA, 29 October–2 November 2022; ACM: Bend, OR, USA, 2022; p. 2. [Google Scholar]
  27. Schramm, R.C.; Sasalovici, M.; Hildebrand, A.; Schwanecke, U. Assessing Augmented Reality Selection Techniques for Passengers in Moving Vehicles: A Real-World User Study. In Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ingolstadt, Germany, 18–21 September 2023; ACM: Ingolstadt, Germany, 2023; pp. 22–31. [Google Scholar]
  28. Choe, M.; Choi, Y.; Park, J.; Kim, J. Head Mounted IMU-Based Driver’s Gaze Zone Estimation Using Machine Learning Algorithm. Int. J. Hum. Comput. Interact. 2023. [Google Scholar] [CrossRef]
  29. Qin, R. The Applied Progress of Orthogonal Experiment in Wastewater Treatment. IOP Conf. Ser. Earth Environ. Sci 2018, 170, 32041. [Google Scholar] [CrossRef]
  30. Duchowski, A.T.; Medlin, E.; Cournia, N.; Gramopadhye, A.; Melloy, B.; Nair, S. 3D Eye Movement Analysis for VR Visual Inspection Training. In Proceedings of the Symposium on Eye Tracking Research & Applications—ETRA’02, New Orleans, LA, USA, 25 March 2002; pp. 103–110. [Google Scholar]
  31. Amamra, A. Smooth Head Tracking for Virtual Reality Applications. Signal Image Video Process. 2017, 11, 479–486. [Google Scholar] [CrossRef]
  32. Lavie, T.; Oron-Gilad, T.; Meyer, J. Aesthetics and Usability of In-Vehicle Navigation Displays. Int. J. Hum. Comput. Stud. 2011, 69, 80–99. [Google Scholar] [CrossRef]
  33. Liu, H.; Cui, T.; He, M. Product Optimization Design Based on Online Review and Orthogonal Experiment under the Background of Big Data. Proc. Inst. Mech. Eng. Part E J. Process Mech. Eng. 2021, 235, 52–65. [Google Scholar] [CrossRef]
  34. Guo, F.; Li, M.; Hu, M.; Li, F.; Lin, B. Distinguishing and Quantifying the Visual Aesthetics of a Product: An Integrated Approach of Eye-Tracking and EEG. Int. J. Ind. Ergon. 2019, 71, 47–56. [Google Scholar] [CrossRef]
  35. Liu, Z.; Hou, J.; Zhang, L.; Dewancker, B.J.; Meng, X.; Hou, C. Research on Energy-Saving Factors Adaptability of Exterior Envelopes of University Teaching-Office Buildings under Different Climates (China) Based on Orthogonal Design and EnergyPlus. Heliyon 2022, 8, e10056. [Google Scholar] [CrossRef] [PubMed]
  36. Yang, S.; Zhou, D.; Wang, Y.; Li, P. Comparing Impact of Multi-Factor Planning Layouts in Residential Areas on Summer Thermal Comfort Based on Orthogonal Design of Experiments (ODOE). Build. Environ. 2020, 182, 107145. [Google Scholar] [CrossRef]
  37. Li, W.; Zhou, Y.; Luo, S.; Dong, Y. Design Factors to Improve the Consistency and Sustainable User Experience of Responsive Interface Design. Sustainability 2022, 14, 9131. [Google Scholar] [CrossRef]
  38. Kircher, K.; Ahlstrom, C. Minimum Required Attention: A Human-Centered Approach to Driver Inattention. Human factors. 2017, 59, 471–484. [Google Scholar] [CrossRef] [PubMed]
  39. Ryabinin, K.V.; Belousov, K.I. Visual Analytics of Gaze Tracks in Virtual Reality Environment. Sci. Vis. 2021, 13, 50–66. [Google Scholar] [CrossRef]
  40. Špakov, O.; Miniotas, D. Visualization of Eye Gaze Data Using Heat Maps. Elektron. Elektrotech. 2007, 74, 55–58. [Google Scholar]
  41. Pankok, C.; Kaber, D. The Effect of Navigation Display Clutter on Performance and Attention Allocation in Presentation- and Simulator-Based Driving Experiments. Appl. Ergon. 2018, 69, 136–145. [Google Scholar] [CrossRef]
  42. Mathur, P.; Moallem, A. Tesla Model 3: Impact of Vertical Segmentation on Visual Search Time. In Proceedings of the 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022), New York, NY, USA, 24–28 July 2022; Volume 39, pp. 1–9. [Google Scholar]
  43. Ryu, J.-B.; Sihn, Y.-K.; Yu, S.-B. Assessment of Risky Driving Caused by the Amount of Time Focused on an In-Vehicle Display System. Int. J. Automot. Technol. 2013, 14, 259–264. [Google Scholar] [CrossRef]
  44. Beck, D.; Jung, J.; Park, J.; Park, W. A Study on User Experience of Automotive HUD Systems: Contexts of Information Use and User-Perceived Design Improvement Points. Int. J. Hum. Comput. Interact. 2019, 35, 1936–1946. [Google Scholar] [CrossRef]
  45. Conaghan, M.; Colwill, I.; Elton, E.; Charchalakis, P.; Stipidis, E. Measuring the Cognitive Demands of In-Vehicle Dashboard and Centre Console Tasks. In Advances in Human Aspects of Transportation; Stanton, N.A., Ed.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2018; Volume 597, pp. 503–510. ISBN 978-3-319-60440-4. [Google Scholar]
  46. Weinberg, G.; Harsham, B.; Medenica, Z. Investigating HUDs for the Presentation of Choice Lists in Car Navigation Systems. In Proceedings of the 6th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design: Driving Assessment 2011, Olympic Valley, CA, USA, 27–30 July 2011; University of Iowa: Olympic Valley, CA, USA, 2011; pp. 195–202. [Google Scholar]
  47. Lagoo, R.; Charissis, V.; Harrison, D.K. Mitigating Driver’s Distraction: Automotive Head-Up Display and Gesture Recognition System. IEEE Consum. Electron. Mag. 2019, 8, 79–85. [Google Scholar] [CrossRef]
  48. Engström, J.; Markkula, G.; Victor, T.; Merat, N. Effects of Cognitive Load on Driving Performance: The Cognitive Control Hypothesis. Hum. Factors J. Hum. Factors Ergon. Soc. 2017, 59, 734–764. [Google Scholar] [CrossRef] [PubMed]
  49. Westin, M.; Dougherty, R.; Depcik, C.; Hausmann, A.; Sprouse, C. Development of an Adaptive Human-Machine-Interface to Minimize Driver Distraction and Workload. In Proceedings of the ASME 2013 International Mechanical Engineering Congress and Exposition, San Diego, CA, USA, 15–21 November 2013; Safety, Reliability and Risk; Virtual Podium (Posters). American Society of Mechanical Engineers: San Diego, CA, USA, 2013; Volume 15, p. 65141. [Google Scholar]
  50. Kim, S.; Dey, A.K.; Lee, J.; Forlizzi, J. Usability of Car Dashboard Displays for Elder Drivers. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; ACM: Vancouver, BC, Canada, 2011; pp. 493–502. [Google Scholar]
Figure 1. Virtual driving environment. (a) Map and environment of virtual driving experience: commuter driving scene sections and functions. (b) Integration into the Unity 3D virtual driving platform and experiment environment.
Figure 2. A sample of the interface item.
Figure 3. The principle of generating visual attention mapping through head movements using VR-HMD.
Figure 4. Attention distribution data for in-vehicle display combinations: (a) Factor A, center console display; (b) Factor B, dashboard; (c) Factor C, HUD.
Figure 5. Trends of factors in in-vehicle display configurations.
Figure 6. Heatmap of in-vehicle display configurations. (a) Configuration 1. (b) Configuration 2. (c) Configuration 3. (d) Configuration 4.
Table 1. Factors and levels of the orthogonal design L4(32).

Factor    A: Center console display    B: Dashboard                  C: HUD
Level 1   Floating display             Instrument cluster display    W-HUD
Level 2   Portrait display             Without                       Without
Table 2. Analysis of range in orthogonal testing for in-vehicle display configurations.

No    A    B    C    Percentage of fixations mapped by head movement for visual attention (%)
1     1    1    1    73.400
2     1    2    2    88.378
3     2    2    1    87.398
4     2    1    2    90.934

            A           B           C
K1          4206.210    4272.680    4180.730
K2          4636.630    4570.160    4662.110
k1          80.889      82.167      80.399
k2          89.166      87.888      89.656
R           8.277       5.721       9.257
Factor order: C > A > B
Optimum combination: A2B2C2

Note: “K” denotes the sum of response values at each level, and “k” the average response value at each level (the K value for that level divided by the number of observations at that level). “R” is the range of each factor, calculated as the difference between the maximum and minimum k values across the factor’s levels; a larger “R” value indicates a more significant impact of that factor on the experimental results.
Table 3. Independent samples test for immersive indicators between A2B2C1 and A2B2C2.
| Group | Number of cases | Mean | SD | t | p |
|---|---|---|---|---|---|
| A2B2C1 | 26 | 90.934 | 26.59021 | 3.069 | 0.003 |
| A2B2C2 | 26 | 83.759 | 29.93348 | | |
Table 4. Non-parametric test of user evaluation feedback data.
| | Configuration 1 (N = 26) | Configuration 2 (N = 26) | Configuration 3 (N = 26) | Configuration 4 (N = 26) | H | p |
|---|---|---|---|---|---|---|
| Score | 22 (20, 24.5) | 33 (31, 36) ac | 28 (24, 31) a | 36 (29.5, 41) ac | 64.181 | 0.000 |
Note: The symbol “a” denotes a significant difference compared with configuration 1, and “c” denotes a significant difference compared with configuration 3 (p < 0.05); all pairwise comparisons are Bonferroni adjusted.
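The H statistic in Table 4 comes from a Kruskal–Wallis test across the four configurations' score samples, followed by Bonferroni-adjusted pairwise comparisons. A minimal pure-Python sketch of the H statistic, demonstrated on toy samples rather than the study's data, and assuming no tied values:

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples.

    Assumes all values are distinct (no tie correction applied).
    """
    pooled = sorted(chain.from_iterable(groups))
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # rank of each value
    n = len(pooled)
    # H = 12 / (n(n+1)) * sum(R_g^2 / n_g) - 3(n+1),
    # where R_g is the rank sum of group g.
    s = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 * s / (n * (n + 1)) - 3 * (n + 1)

# Toy check with three fully separated samples (not the study's data):
h = kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
print(round(h, 6))  # ≈ 7.2
```

In practice a library routine such as `scipy.stats.kruskal` would be used, since it handles ties and returns the p-value from the chi-squared approximation directly.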
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, L.; Carlos, C.Q.J.; Yang, Z.; Ono, K. Assessment of User Preferences for In-Car Display Combinations during Non-Driving Tasks: An Experimental Study Using a Virtual Reality Head-Mounted Display Prototype. World Electr. Veh. J. 2024, 15, 264. https://doi.org/10.3390/wevj15060264
