Article

Usability Evaluation of Co-Pilot Screen Based on Fuzzy Comprehensive Evaluation Method

1 School of Automotive Studies, Tongji University, Shanghai 201804, China
2 College of Design and Innovation, Tongji University, Shanghai 200092, China
* Author to whom correspondence should be addressed.
World Electr. Veh. J. 2023, 14(8), 219; https://doi.org/10.3390/wevj14080219
Submission received: 9 July 2023 / Revised: 31 July 2023 / Accepted: 7 August 2023 / Published: 15 August 2023

Abstract:
In this study, a usability evaluation model is constructed for the co-pilot screen, the impact factors are analyzed, and optimization recommendations are made based on the evaluation results. Firstly, based on usability design principles, interaction ease, interaction efficiency, visual comfort, driving safety, and their corresponding secondary indicators are defined, and the subjective weight of each indicator is determined using the analytic hierarchy process (AHP). Then, usability evaluation is carried out on four vehicles via simulated driving experiments and driving experiments on the road, and the objective weight of the indicators is determined using the CRITIC method. Finally, the usability evaluation model for co-pilot screens is established by applying the fuzzy comprehensive evaluation method. The results indicate that the overall usability comprehensive scores of co-pilot screens are convergent and mainly concentrated in the range of 50–65 points, with two vehicles rated good and two vehicles rated average; the overall usability still falls far short of an excellent level. The usability evaluation model of co-pilot screens established in this article can quantify the HMI usability design of co-pilot screens. The results of this study are significant for the four tested vehicles in terms of guiding the usability design of co-pilot screens and promoting the rapid iteration of co-pilot screen development. Moreover, the production-vehicle-connected driving simulation platform, together with the usability evaluation model, can be used to test and evaluate more screen designs, interaction models, tasks, and infotainment applications, thus guiding further user experience design.

1. Introduction

With the boom in the intelligent cockpit industry, users are placing higher demands on in-vehicle information systems; therefore, the functions and interaction layouts of the cockpit should not only focus on the driver but should also take into account the needs of the front passenger. In line with this trend, vehicle manufacturers are launching several vehicles with co-pilot screens.
The co-pilot screen not only changes the screen layout of the cockpit, giving users a direct sense of advanced technology, but also reflects how new hardware and software architectures can further enhance the user experience. However, due to its low level of maturity, the co-pilot screen has not yet given rise to relevant design principles or industry standards. The design style varies greatly from vehicle to vehicle, resulting in a wide range of individual experiences. At the same time, the video and audio played on the co-pilot screen may occupy the driver’s visual and auditory resources, leading to driving distraction and increasing driving safety hazards.
Currently, the research on cockpit screens has focused on the usability of the human-machine interface (HMI) of the center screen, the HUD, etc. It is important to evaluate the usability of the prototype HMI design before it is fitted into a production vehicle [1]. Yating Su et al. [2] established a research framework, using the literature research method, for the usability of automated vehicle HMIs from criteria such as usability issues and tools. Tinga et al. [3] designed prototype HMIs in different automation modes based on ambient light, icons, LED bar lengths, and timers; this was achieved using virtual reality and the RITE (rapid iterative testing and evaluation) method to evaluate and improve the HMI design, which was verified to have good system usability. Jiajie He [4] selected the ease of operation, operation load, user satisfaction, and other indicators to measure the usability of the HMI of the center screen. Rui Li et al. [5] found that menu architecture impacted the usability of in-vehicle information systems by using the completion time of tasks, error rate, NASA-TLX, and the system usability scale (SUS). In addition, the content of the HMI display can also lead to changes in the usability of the HMI [6].
Keeping driving safe is a fundamental design principle for vehicle usability. Many papers have studied the effect of performing secondary tasks such as touchscreen interaction [7], the use of mobile phones [8], in-vehicle information system engagement [9], etc., on driving distraction. Strayer et al. [10] studied the visual and cognitive demands of vehicle information systems under secondary tasks such as audio entertainment, calling and dialing, text messaging, and navigation. Grahn et al. [11] pointed out that the user interface design has a relatively high impact on visual demand and visual distraction potential compared with screen size. Different types of interaction modes can be used to reduce the effects of driving distraction in tasks of varying complexity and difficulty [12,13]. Driving data and visual data are important objective indicators in the field of HMI and driver distraction research. Driving performance, such as speed deviation, lane departure standard deviation, etc., is commonly combined with the support vector machine (SVM), random forest, or other models to evaluate the degree of driving distraction [14,15,16]. The visual data of drivers are collected via eye trackers, such as areas of interest, glance, gaze, etc., and these are widely used to evaluate the usability of the HMI, determine the state of driving distraction, and guide HMI design in the field of the automotive and aerospace industries [17,18,19]. Furthermore, most of the data were obtained under simulation or a lab environment [20].
As a trend in intelligent cockpit development, the design of co-pilot screens needs to ensure the effectiveness, efficiency, and satisfaction of HMIs while minimizing the impact of driving distraction; thus, it is important to build a comprehensive usability evaluation model of co-pilot screens. However, there are gaps in the research on co-pilot screens in two respects. Firstly, there is no evaluation indicator system for the HMIs of co-pilot screens in terms of assessing ease of use and efficiency. Secondly, whether the images and sounds of a co-pilot screen can distract the driver and threaten driving safety remains unstudied. Therefore, in order to address these gaps, this article first built a multi-layer evaluation indicator system, then calculated the indicator weights using the AHP and CRITIC methods. Finally, the fuzzy comprehensive evaluation method was applied to build a usability evaluation model.

2. Methodology

A comprehensive evaluation indicator system is the core of the usability evaluation model, and this can reflect the advantages and shortcomings of the co-pilot screen from multiple aspects, as well as guide the optimization of co-pilot screens. After the multi-layer evaluation indicators are determined, this article adopted the comprehensive weight method based on the AHP and CRITIC methods to calculate the evaluation indicator weights. This method can reflect the objective data value of each evaluation indicator and can reduce the subjective influence. The calculation steps are shown in Figure 1. The fuzzy comprehensive evaluation is widely used in industrial production evaluation. Qiong Zhang et al. [21] established evaluation indicators based on the principles of comparability, scientificity, unity, and practicability, as well as adopted the fuzzy comprehensive evaluation method to conduct safety evaluations on the whole process of C919 single-test flights. Shuai Wang et al. [22] adopted the fuzzy comprehensive evaluation to evaluate the application effectiveness of the connected vehicle system in tunnel scenarios. Zhijie Zhu et al. [23] developed a prediction method for coal burst that was based on the AHP and fuzzy comprehensive evaluation methods, and this made coal burst prevention more effective. Therefore, this article uses a fuzzy comprehensive evaluation to evaluate the usability of the co-pilot screen.
The final co-pilot screen usability evaluation model is shown in Figure 1, based on which experimental validation of the co-pilot screen usability is carried out.

2.1. Evaluation Indicator System

According to ISO 9241 [24], the usability of the co-pilot screen is defined as the degree of effectiveness, efficiency, and user satisfaction when using the co-pilot screen to complete the specified tasks in the specified scenarios. Considering that the HMI design of the vehicle must be based on the premise of driving safety, the usability design principles of the co-pilot screen are summarized as follows:
  • Clear and easy to operate: appropriate layout of visual elements, clear images without text errors, and timely, adequate feedback and guidance, enabling users to perform the target tasks correctly;
  • Efficiency: the most concise operation steps, the shortest time to complete the target tasks;
  • Visual comfort: optimal screen size and position to reduce the sense of dizziness in the process of completing the target task;
  • Safety: low driving distraction and stable driving performance.
Based on the usability design principles of the co-pilot screen, the evaluation indicator system is categorized into four primary indicators: ease of interaction, efficiency of interaction, visual comfort, and driving safety. Then, several secondary indicators are defined, as shown in Table 1.

2.1.1. Ease of Interaction

To facilitate users’ use of the product, Hao Yang et al. [25] argued that HMI interface designs should follow four principles: high resolution, clear design features, all elements included in the image as much as possible, and high similarity between images. Jun Ma et al. [26] comprehensively evaluated ease of use using subjective evaluation indicators such as reasonable information layout and icon readability, as well as objective evaluation indicators such as the operation error rate. Because the design of the co-pilot screen closely resembles that of the center screen, some habits of using the center screen can carry over to the co-pilot screen. During the process of completing the target task, the user’s learning cost and familiarity with the system also affect convenience. Therefore, five indicators are chosen to evaluate the ease of interaction: the clarity of information display, the reasonability of information layout, user habituation, the perceptibility of feedback, and the smoothness of operations.

2.1.2. Efficiency of Interaction

Jinfei Huang [27] used subjective evaluation indicators of the SUS, such as interface complexity and operation complexity, to measure efficiency. Shucong Yu et al. [28] evaluated efficiency in terms of five indicators, including dynamic operation time, the number of operation steps, and the operation displacement. Because the co-pilot screen is currently installed far from the passenger seat, the user has to maintain a forward-leaning, unstable posture when using it, and the efficiency of completing the target task suffers when there are too many operation steps or the interaction layout is too scattered. Therefore, three indicators are used to evaluate the efficiency of interaction: operation displacement, operation steps, and task elapsed time.

2.1.3. Visual Comfort

While the vehicle is in motion, watching videos or pictures may induce motion sickness due to conflicting human visual and vestibular perceptions [29]. Chengming Chen et al. [30] used a combination of the eye tracker data, the electrocardiogram, and a streamlined version of the simulator sickness questionnaire (SSQ) to investigate the effects of LCD and OLED on visual fatigue. Cai Li et al. [31] conducted a twenty-participant experiment using the SSQ and postural stability tests to verify that the consistency of motion and visual state had a large effect on the degree of motion sickness. In order to measure the user’s viewing experience of the co-pilot screen in a vehicle motion situation, this article uses the SSQ scale to evaluate visual comfort.

2.1.4. Driving Safety

Although the driver does not directly operate the co-pilot screen while the vehicle is in motion, the video and audio played on the co-pilot screen may cause visual and cognitive distraction for the driver, threatening driving safety. Yang Zhou et al. [32] used vehicle data such as the speed and lane position deviation, longitudinal and lateral acceleration, and the distance to the front vehicle, as well as steering wheel data such as the steering wheel angle and steering wheel slew rate, combined with eye and head movement indicators. Through this approach, they built a random forest model that can identify different types of distraction. Strayer et al. [33] introduced the detection response task (DRT) to assess the cognitive demand of different information systems by analyzing the reaction time. Huimin Ge et al. [34] comprehensively summarized driving distraction recognition indicators and analyzed their strengths and weaknesses through literature research, which included not only driving performance, eye tracker, and reaction time indicators, but also physiological-psychological indicators such as the electrocardiogram, electroencephalogram, and electrodermal activity. However, the physiological-psychological indicators are greatly influenced by the users’ physical condition, and the individual data variability is high. Therefore, the standard deviations of lane departure and vehicle speed departure are selected as driving performance indicators. The number of gaze points, the number of sweeps, and the average sweep time are selected as eye tracker indicators and, combined with the DRT reaction time, are used to evaluate driving safety.

2.2. Evaluation Indicator Weights

2.2.1. Subjective Weights

The AHP is a subjective assignment method proposed by Saaty, which decomposes complex problems with multiple objectives and elements into objective, criterion, and indicator layers and analyzes them layer by layer [35]. Although the AHP can reflect the importance that experts attach to different evaluation indicators, the results are influenced by the experts’ access to information and degree of professionalism. Because its subjective randomness is strong, the AHP alone cannot fully reflect the value of each evaluation indicator.
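As an illustration of the AHP step, the sketch below uses a hypothetical 3 × 3 judgment matrix on the one-to-nine scale (not the matrices actually collected in this study) to derive weights from the principal eigenvector and check Saaty's consistency ratio:

```python
import numpy as np

def ahp_weights(pairwise):
    """Weights from the principal eigenvector of an AHP pairwise-comparison
    matrix, plus Saaty's consistency ratio (CR < 0.1 is acceptable)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvalues, eigvectors = np.linalg.eig(A)
    k = int(np.argmax(eigvalues.real))        # index of principal eigenvalue
    w = np.abs(eigvectors[:, k].real)
    w = w / w.sum()                           # normalize so weights sum to 1
    lam_max = eigvalues[k].real
    ci = (lam_max - n) / (n - 1)              # consistency index
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    cr = ci / random_index                    # consistency ratio
    return w, cr

# Hypothetical judgment matrix on the one-to-nine scale
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(A)
```

A judgment matrix whose CR is 0.1 or greater would need to be revised by the expert before its weights are used.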

2.2.2. Objective Weights

The CRITIC method is an objective assignment method proposed by Diakoulaki that takes into account the comparative strength and conflicting nature of the data [36]. Although the CRITIC method can reflect the intrinsic value of the data itself, it is also limited in that it relies entirely on the content of the sample data for analysis.
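A minimal sketch of one common formulation of the CRITIC calculation (contrast intensity as the sample standard deviation, conflict as one minus the correlation), using hypothetical standardized scores rather than the experimental data of this study:

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weights for a (samples x indicators) matrix of
    min-max standardized data."""
    X = np.asarray(X, dtype=float)
    sigma = X.std(axis=0, ddof=1)             # contrast intensity per indicator
    R = np.corrcoef(X, rowvar=False)          # pairwise correlation matrix
    conflict = (1.0 - R).sum(axis=0)          # conflict with the other indicators
    info = sigma * conflict                   # information carried by each indicator
    return info / info.sum()                  # normalize to weights

# Hypothetical standardized scores: 4 vehicles x 3 indicators
X = [[0.2, 0.9, 0.4],
     [0.8, 0.1, 0.5],
     [0.5, 0.6, 0.9],
     [1.0, 0.3, 0.0]]
w = critic_weights(X)
```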

2.2.3. Comprehensive Weights

Xinglai Xu [37] proposed the comprehensive assignment method that combined the AHP with the CRITIC method, which considers the advantages and disadvantages of subjective and objective weights. Therefore, the comprehensive assignment method is chosen for this article, and the calculation formula is shown in Equation (1).
W = αWAHP + (1 − α) WCRITIC (0 ≤ α ≤ 1)
where W is the comprehensive weight, α is the subjective weight preference coefficient, WAHP is the subjective weight calculated using the AHP, and WCRITIC is the objective weight calculated using the CRITIC method. By analyzing the relevant literature [26] and the actual situation, α = 0.5 is taken.
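Equation (1) reduces to a weighted blend of the two vectors. The sketch below uses the primary-indicator AHP weights reported later in Section 4; the CRITIC vector here is a hypothetical placeholder, not the measured one:

```python
import numpy as np

def comprehensive_weights(w_ahp, w_critic, alpha=0.5):
    """Equation (1): W = alpha * W_AHP + (1 - alpha) * W_CRITIC."""
    w = alpha * np.asarray(w_ahp, float) + (1 - alpha) * np.asarray(w_critic, float)
    return w / w.sum()                        # renormalize against rounding error

w_ahp = [0.2064, 0.1161, 0.0482, 0.6293]      # AHP primary weights (Section 4)
w_critic = [0.25, 0.25, 0.25, 0.25]           # hypothetical CRITIC placeholder
w = comprehensive_weights(w_ahp, w_critic, alpha=0.5)
```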

2.3. The Fuzzy Comprehensive Evaluation Model

Since fuzzy theory enables decision makers to make interval judgments while considering uncertainty or fuzziness, the fuzzy comprehensive evaluation method combined with the AHP is widely used [38]. Considering the fuzzy boundaries and subordination relationships between indicators at all levels in this article, the fuzzy comprehensive evaluation model is constructed using the subordination function. The formula for calculating the affiliation function is shown in Equation (2).
$$
r_1(c)=\begin{cases}
1, & 0 \le c \le 0.1\\
\frac{1}{2}\left[\sin\left(\left(\frac{c-0.1}{0.2}+\frac{1}{2}\right)\pi\right)+1\right], & 0.1 < c < 0.3\\
0, & c \ge 0.3
\end{cases}
\qquad
r_2(c)=\begin{cases}
0, & 0 \le c \le 0.1\\
\frac{1}{2}\left[\sin\left(\left(\frac{c-0.3}{0.2}+\frac{1}{2}\right)\pi\right)+1\right], & 0.1 < c < 0.5\\
0, & c \ge 0.5
\end{cases}
$$

$$
r_3(c)=\begin{cases}
0, & 0 \le c \le 0.3\\
\frac{1}{2}\left[\sin\left(\left(\frac{c-0.5}{0.2}+\frac{1}{2}\right)\pi\right)+1\right], & 0.3 < c < 0.7\\
0, & c \ge 0.7
\end{cases}
\qquad
r_4(c)=\begin{cases}
0, & 0 \le c \le 0.5\\
\frac{1}{2}\left[\sin\left(\left(\frac{c-0.7}{0.2}+\frac{1}{2}\right)\pi\right)+1\right], & 0.5 < c < 0.9\\
0, & c \ge 0.9
\end{cases}
$$

$$
r_5(c)=\begin{cases}
0, & 0 \le c \le 0.7\\
\frac{1}{2}\left[\sin\left(\left(\frac{c-0.9}{0.2}+\frac{1}{2}\right)\pi\right)+1\right], & 0.7 < c < 0.9\\
1, & c \ge 0.9
\end{cases}
$$
where r1 is the indicator affiliation ‘poorer’, r2 is the indicator affiliation ‘poor’, r3 is the indicator affiliation ‘normal’, r4 is the indicator affiliation ‘good’, r5 is the indicator affiliation ‘excellent’, and c is the indicator value.
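A possible Python transcription of the Equation (2) membership functions, with the five grade centres at 0.1, 0.3, 0.5, 0.7, and 0.9 and half-width 0.2 taken from the equation:

```python
import math

def _bump(c, centre):
    """Sine segment of Equation (2) centred at `centre` with half-width 0.2."""
    return 0.5 * (math.sin(((c - centre) / 0.2 + 0.5) * math.pi) + 1.0)

def membership(c):
    """Degrees of membership of an indicator value c (0 <= c <= 1) in the
    five grades [poorer, poor, normal, good, excellent]."""
    r1 = 1.0 if c <= 0.1 else (_bump(c, 0.1) if c < 0.3 else 0.0)
    r2 = _bump(c, 0.3) if 0.1 < c < 0.5 else 0.0
    r3 = _bump(c, 0.5) if 0.3 < c < 0.7 else 0.0
    r4 = _bump(c, 0.7) if 0.5 < c < 0.9 else 0.0
    r5 = 1.0 if c >= 0.9 else (_bump(c, 0.9) if c > 0.7 else 0.0)
    return [r1, r2, r3, r4, r5]
```

Adjacent grades overlap, so an indicator value such as 0.2 belongs half to ‘poorer’ and half to ‘poor’, which is the interval-judgment behavior the fuzzy method relies on.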

3. Usability Experiment of Co-Pilot Screens

3.1. Participants

In this experiment, twelve participants aged between twenty and twenty-six years were selected, with a male-to-female ratio of 1:1. They were required to hold valid driving licenses for at least one year, to be in good health, and to have normal or corrected-to-normal vision. Corrective lenses compatible with the eye tracker were provided to participants who wore glasses. The experiment was conducted in accordance with the Declaration of Helsinki and approved by the Science and Technology Ethics Committee of Tongji University (tjdxsr012) for studies involving humans.

3.2. The Design of Experiments

3.2.1. Vehicle Simulated Driving Experiment

The simulated driving experiment can not only collect the required data quickly and accurately but also explore more extreme driving conditions while ensuring driver safety [39]. Čubranić-Dobrodolac et al. [40] used a simulator with three LCD monitors, three connected computers, and a driver cockpit to study driving behaviors. Yujia Liu et al. [41] built a detachable small and medium-sized test rig based on the Unity3D platform and a driving simulator. Jun Ma et al. [26] innovatively built a simulated driving platform consisting of a ring screen, a data acquisition system, and a vehicle, which further improved the rationality of the simulated driving experiment. To ensure the accuracy of the experimental data and take into account the safety and reproducibility of the experiment, this article used a production vehicle connected to the driving simulation platform to collect the driver-oriented driving safety data, as shown in Figure 2.
The platform consisted of a simulated driving environment, a vehicle, and a data acquisition system as follows:
  • The driving simulation scenario: a circular three-lane highway experimental route was built based on the SCANeR platform, with multiple high-definition projectors projecting simulated driving scenarios onto the curved screen. The diameter of the screen is six meters, which provides a horizontal field of vision of two hundred and forty degrees.
  • The vehicle: the front wheels of the production vehicle were positioned on the steering base, which was connected to the angle sensor to output the vehicle steering signal. The accelerator and brake pedals were equipped with synchronous sensors to output data. The signals from the steering wheel, the accelerator pedal, and the brake pedal were transmitted into the ACQUISITION module of the SCANeR platform to control the host vehicle.
  • The data acquisition system: the driving data were collected via a vehicle-mounted angle sensor. The data of an auditory detection response task (DRT) were obtained using a microswitch attached to the left of the steering wheel. The participants were required to wear eye trackers to collect the visual data. The product model of the eye tracker is Tobii Pro Glasses 3 with a sampling frequency of fifty Hz.

3.2.2. Driving Experiments on the Road

Because the most frequent use scenario of the co-pilot screen is the passenger operating and viewing the screen while the vehicle is in motion, and this behavior does not pose a threat to driving safety, this article used driving experiments on the road to collect the data on ease of interaction, efficiency of interaction, and visual comfort for the passenger, in order to ensure the accuracy and reliability of the data.

3.3. The Experimental Process

The experimental process was divided into three stages as follows:
  • The pre-experiment preparation: after checking the participant’s identity information, the tester introduced the background of the experiment, the process, and the devices and asked the participant to complete the adaptation training of the co-pilot screen, the simulated driving platform, the eye tracker, and the DRT device until they could use them proficiently.
  • Vehicle simulated driving experiments: on the production-vehicle-connected driving simulation platform, the tester was positioned in the passenger seat to record the data. Then, the co-pilot screen played a video with audio, and the participant wore an eye tracker and maintained a speed of sixty km/h while completing the DRT task; the duration was two minutes. After the simulated driving was over, the participants answered questions related to the video played on the co-pilot screen.
  • Driving experiments on the road: the tester was positioned in the driver seat at sixty km/h, and the participant opened the video app by clicking on the co-pilot screen, entered the name of the specified video in the search box, searched, and played the video in full screen, then gave the relevant subjective evaluation score; the participant filled in the SSQ before viewing and again after ten minutes of watching the video.

4. Results

In this article, three automotive HMI industry experts, four doctoral and postgraduate students in the field of driving distraction, and three drivers with five years of driving experience were invited to fill in the AHP questionnaire based on the one-to-nine scale method. The respondents compared the indicators pairwise to obtain the judgment matrix for each level of indicator weights, and the weight vector was obtained from the matrix. After passing the consistency test, the weight vector W was normalized to obtain the AHP-based primary indicator weight vector WAHP1 = [20.64% 11.61% 4.82% 62.93%]T and the secondary indicator weight vector WAHP2 = [5.81% 2.85% 1.66% 0.64% 0.64% 2.11% 3.55% 14.98% 4.82% 11.55% 4.68% 11.55% 4.25% 28.85% 2.07%]T.
In this article, twelve participants were invited to conduct experiments on four vehicles equipped with co-pilot screens, and the test data of fifteen evaluation indicators were obtained in forty-eight sets of experiments. To eliminate the impact of differing data dimensions and ensure comparability, the data of each evaluation indicator were standardized. The positive indicators were processed by applying Equation (3); negative indicators were processed by applying Equation (4).
x′ = (x − min(x))/(max(x) − min(x))
x′ = (max(x) − x)/(max(x) − min(x))
where x′ is the standardized indicator data, min(x) is the minimum value of the data for the same indicator, and max(x) is the maximum value of the data for the same indicator.
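Equations (3) and (4) are the standard min-max mappings; a short sketch with hypothetical task-time data (the actual experimental values are not reproduced here):

```python
def minmax(values, positive=True):
    """Equations (3)-(4): min-max standardization onto [0, 1].
    Positive indicators keep larger-is-better; negative ones are reversed."""
    lo, hi = min(values), max(values)
    span = hi - lo
    if positive:
        return [(v - lo) / span for v in values]
    return [(hi - v) / span for v in values]

# Hypothetical task-elapsed times (seconds); shorter is better, so this
# is a negative indicator and Equation (4) applies.
task_times = [12.4, 8.1, 15.0, 9.7]
scores = minmax(task_times, positive=False)
```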
The secondary indicator weights WCRITIC = [7.71% 6.42% 6.96% 7.16% 7.83% 10.9% 0 6.73% 5.53% 7.37% 7.37% 6.05% 6.13% 7.54% 6.30%]T, which were calculated using SPSSPRO based on the CRITIC method.
The comprehensive weights of the indicators at each level were calculated based on Equation (1), as shown in Table 2.
From the affiliation function Equation (2), the affiliation degree of indicators at each level was calculated and the fuzzy judgment matrixes for four vehicles (A1, A2, A3, and A4) were obtained, as shown in Table 3, Table 4, Table 5 and Table 6.
Since the comprehensive weights of the indicators at each level were clearly defined, the multiplicative and bounded operator M (*, ⨁), which took into account the magnitude of all factors, was chosen to calculate the usability evaluation vector of the co-pilot screen. To obtain a more intuitive comprehensive usability score, the collection of comments was assigned the corresponding score N = [0 25 50 75 100] and combined with the evaluation vector to calculate the comprehensive score value S. The obtained fuzzy comprehensive evaluation results of co-pilot screens for A1, A2, A3, and A4 are shown in Table 7.
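The scoring step can be sketched as follows, reading M (*, ⨁) as multiplication followed by bounded (capped-at-one) addition; the weights and fuzzy judgment matrix below are hypothetical illustrations, not the values of Tables 2–6:

```python
import numpy as np

def fuzzy_score(W, R, N=(0, 25, 50, 75, 100)):
    """Fuzzy comprehensive evaluation with the M(*, bounded-sum) operator:
    B_j = min(1, sum_i W_i * R_ij), then comprehensive score S = B . N."""
    W = np.asarray(W, dtype=float)
    R = np.asarray(R, dtype=float)
    B = np.minimum(1.0, W @ R)                # evaluation vector over 5 grades
    S = float(B @ np.asarray(N, dtype=float)) # comprehensive score
    return B, S

# Hypothetical comprehensive weights and fuzzy judgment matrix
# (4 indicators x 5 grades: poorer, poor, normal, good, excellent)
W = [0.2, 0.1, 0.1, 0.6]
R = [[0.0, 0.1, 0.5, 0.4, 0.0],
     [0.0, 0.0, 0.4, 0.6, 0.0],
     [0.0, 0.0, 0.2, 0.5, 0.3],
     [0.1, 0.2, 0.5, 0.2, 0.0]]
B, S = fuzzy_score(W, R)
```

The largest component of B gives the main affiliation grade, and S places the vehicle on the 0–100 scale used in Section 5.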

5. Discussion

Overall, the comprehensive usability scores for the co-pilot screens of the four vehicles tend to be the same, concentrated in the 50-to-65-point range.
The main affiliation degree of the A1 is 0.389, with an affiliation grade of good. By analyzing the fuzzy judgment matrix of the A1, its affiliation grade is good or excellent in several indicators of ease of interaction, efficiency of interaction, and visual comfort, which indicates that the HMI design of the co-pilot screen of the A1 is reasonable and in line with the user’s usage habits. In several indicators of driving safety, the affiliation grade is normal, with the number of gaze points and the number of sweeps scoring significantly lower than other vehicles. The root of the problem lay in the limited anti-peeping effect of the co-pilot screen film, which only reduced the brightness of the screen from the drivers’ viewpoint but did not completely block the content of the screen. When the video attracted the driver to watch, the driver could not fully access the information of the video played on the co-pilot screen in a short period of time through a single sweep, so the number of gazing points and sweeps increased.
The main affiliation degree of the A2 is 0.400, with an affiliation grade of good. By analyzing the fuzzy judgment matrix of the A2, its affiliation grade is good or excellent in several indicators of driving safety, which is significantly better than other vehicles, indicating that the A2’s co-pilot screen has less influence on driving distraction. The affiliation grade is poor or normal in several indicators of ease of interaction and efficiency of interaction. Interaction design flaws such as small fonts resulted in unclear viewing from a distance. The lack of color, sound, and vibration feedback and the lack of a back button did not match the user’s usage habits. During the participants’ execution of the tasks, the slow response time of the co-pilot screen and the long time taken for the tasks resulted in their low subjective evaluation scores.
The main affiliation degree of the A3 is 0.462, with an affiliation grade of normal. By analyzing the fuzzy judgment matrix of the A3, most of the indicators have normal or good affiliation grades, which indicates that the HMI design of the co-pilot screen is reasonable and has a slight impact on driving distraction. This is due to its low functional content and simple design, with low learning costs for the user. However, due to the small display area of the co-pilot screen, the clarity of the information display is poor.
The main affiliation degree of the A4 is 0.607, with an affiliation grade of normal. By analyzing the fuzzy judgment matrix of the A4, its affiliation grade is good in the indicator of visual comfort. This is probably due to the large display area of its co-pilot screen, which made it easy for participants to quickly capture key information displayed on the co-pilot screen, with a strong sense of audio-visual immersion and a better viewing experience. However, it scored poorly in the task elapsed time of efficiency of interaction because of the lack of fluency of its vehicle information system.

6. Conclusions

This article combines the AHP and the CRITIC method for comprehensive weights, and a usability evaluation model for the co-pilot screen is established based on the fuzzy comprehensive evaluation method. A vehicle driving simulated experiment and driving experiments on the road were carried out with twelve participants. A comprehensive usability evaluation of four vehicles with the co-pilot screen was also carried out, leading to the following conclusions:
  • The usability evaluation model of co-pilot screens established in this article can quantify the usability of the HMI design of the co-pilot screen of each vehicle. For the four tested vehicles, there are relatively small differences in the usability of the co-pilot screen, but the overall usability is still far from the excellent level. Compared to the more mature design of the center screen, some of the weaker indicators such as the clarity of information display, the reasonability of information layout, and user habituation still need further improvement.
  • The smoothness of the vehicle information system is likely to be a key factor in the smoothness of operation and task elapsed time, which determines the level of ease of interaction and efficiency of interaction on the co-pilot screen of each vehicle.
  • The visual comfort aspect remains relatively consistent across the four tested vehicles, and user motion sickness was only slightly affected by the co-pilot screen.
  • The driving safety of the co-pilot screen should remain a key concern in the HMI design. A purely visual anti-peep design is not a reliable way to reduce driving distractions. It is recommended to adjust the size of the co-pilot screen and its depth of insertion into the cockpit, combined with headrest speakers or Bluetooth headphones to reduce visual and cognitive distraction.
From the research aspect, this article innovatively selects the co-pilot screen as the study object to provide a new study direction for the HMI field. The production vehicle-connected driving simulation platform and the usability evaluation model can be used to test and evaluate more screen designs, interaction models (voice, gesture control, and buttons), interaction tasks, and infotainment applications to improve the evaluation system of intelligent cockpits. From the practice aspect, the results and optimization recommendations of this study for the four tested vehicles can guide these car companies to update and iterate their co-pilot screens, as well as provide scientific and systematic usability design principles for other car companies to develop new models equipped with co-pilot screens.
Although this article provides some new insights, there are still some research limitations. Firstly, the number of tested vehicles was limited because few production vehicles are currently equipped with co-pilot screens. Secondly, the participants and tested vehicles in this article were all from mainland China, so the conclusions obtained may not be applicable to the global market. Finally, due to the length limitation of this paper, the effects of demographic attributes such as the participant’s gender, personality, and driving style on the experimental results were not analyzed.
Future research can further enrich the evaluation indicator system by collecting physiological data such as electrocardiogram, electroencephalogram, and heart rate signals, and by incorporating interaction tasks such as voice and gesture control into the evaluation experiments. In practice, car companies can combine the usability evaluation of the multiple in-car screens, including the co-pilot screen, the center screen, and the rear screen, with large-scale user surveys to improve user experience design.

Author Contributions

Conceptualization and methodology, J.M.; Investigation and writing—original draft, W.W. and J.L.; Writing—review & editing, W.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Spyridakos, P.D.; Merat, N.; Boer, E.R.; Markkula, G.M. Behavioural Validity of Driving Simulators for Prototype HMI Evaluation. IET Intell. Transp. Syst. 2020, 14, 601–610. [Google Scholar] [CrossRef]
  2. Su, Y.; Tan, Z.; Dai, N. Changes in Usability Evaluation of Human-Machine Interfaces from the Perspective of Automated Vehicles. In Advances in Usability, User Experience, Wearable and Assistive Technology, Proceedings of the AHFE 2021 Virtual Conferences on Usability and User Experience, Human Factors and Wearable Technologies, Human Factors in Virtual Environments and Game Design, and Human Factors and Assistive Technology, Online, 25–29 July 2021; Ahram, T.Z., Falcao, C.S., Eds.; Springer International Publishing AG: Cham, Switzerland, 2021; Volume 275, pp. 886–893. [Google Scholar]
  3. Tinga, A.M.; van Zeumeren, I.M.; Christoph, M.; van Grondelle, E.; Cleij, D.; Aldea, A.; van Nes, N. Development and Evaluation of a Human Machine Interface to Support Mode Awareness in Different Automated Driving Modes. Transp. Res. Part F Traffic Psychol. Behav. 2023, 92, 238–254. [Google Scholar] [CrossRef]
  4. He, J. Comparative Research on User Experience Evaluation of Different Types of HMI Interface Based on Multiple Indicators. Master’s Thesis, Zhejiang Sci-Tech University, Hangzhou, China, 2018. [Google Scholar]
  5. Li, R.; Chen, Y.V.; Sha, C.; Lu, Z. Effects of Interface Layout on the Usability of In-Vehicle Information Systems and Driving Safety. Displays 2017, 49, 124–132. [Google Scholar] [CrossRef]
  6. Fu, R.; Liu, W.; Zhang, H.; Liu, X.; Yuan, W. Adopting an HMI for Overtaking Assistance-Impact of Distance Display, Advice, and Guidance Information on Driver Gaze and Performance. Accid. Anal. Prev. 2023, 191, 107204. [Google Scholar] [CrossRef] [PubMed]
  7. Zhao, X.; Li, Z.; Zhao, C.; Wang, C.; Fu, R. Distraction Pattern Classification and Comparisons under Different Conditions in the Full-Touch HMI Mode. Displays 2023, 78, 102413. [Google Scholar] [CrossRef]
  8. Tejero, P.; Roca, J. Messages beyond the Phone: Processing Variable Message Signs While Attending Hands-Free Phone Calls. Accid. Anal. Prev. 2021, 150, 105870. [Google Scholar] [CrossRef]
  9. Cooper, J.M.; Wheatley, C.L.; McCarty, M.M.; Motzkus, C.J.; Lopes, C.L.; Erickson, G.G.; Baucom, B.R.W.; Horrey, W.J.; Strayer, D.L. Age-Related Differences in the Cognitive, Visual, and Temporal Demands of In-Vehicle Information Systems. Front. Psychol. 2020, 11, 1154. [Google Scholar] [CrossRef]
  10. Strayer, D.L.; Cooper, J.M.; Goethe, R.M.; McCarty, M.M.; Getty, D.J.; Biondi, F. Assessing the Visual and Cognitive Demands of In-Vehicle Information Systems. Cogn. Res. 2019, 4, 18. [Google Scholar] [CrossRef]
  11. Grahn, H.; Kujala, T. Impacts of Touch Screen Size, User Interface Design, and Subtask Boundaries on In-Car Task’s Visual Demand and Driver Distraction. Int. J. Hum. Comput. Stud. 2020, 142, 102467. [Google Scholar] [CrossRef]
  12. Ma, J.; Li, J.; Gong, Z. Evaluation of Driver Distraction from In-Vehicle Information Systems: A Simulator Study of Interaction Modes and Secondary Tasks Classes on Eight Production Cars. Int. J. Ind. Ergon. 2022, 92, 103380. [Google Scholar] [CrossRef]
  13. Graichen, L.; Graichen, M.; Krems, J.F. Effects of Gesture-Based Interaction on Driving Behavior: A Driving Simulator Study Using the Projection-Based Vehicle-in-the-Loop. Hum. Factors 2022, 64, 324–342. [Google Scholar] [CrossRef]
  14. Ma, Y.; Gu, G.; Gao, Y.; Ma, Y. Driver Distraction Judging Model Under In-Vehicle Information System Operation Based on Driving Performance. China J. Highw. Transp. 2016, 29, 123–129. [Google Scholar] [CrossRef]
  15. Ma, J.; Gong, Z.; Tan, J.; Zhang, Q.; Zuo, Y. Assessing the Driving Distraction Effect of Vehicle HMI Displays Using Data Mining Techniques. Transp. Res. Part F Traffic Psychol. Behav. 2020, 69, 235–250. [Google Scholar] [CrossRef]
  16. Papantoniou, P. Structural Equation Model Analysis for the Evaluation of Overall Driving Performance: A Driving Simulator Study Focusing on Driver Distraction. Traffic Inj. Prev. 2018, 19, 317–325. [Google Scholar] [CrossRef]
  17. Carter, B.T.; Luke, S.G. Best Practices in Eye Tracking Research. Int. J. Psychophysiol. 2020, 155, 49–62. [Google Scholar] [CrossRef] [PubMed]
  18. Mao, R.; Li, G.; Hildre, H.P.; Zhang, H. A Survey of Eye Tracking in Automobile and Aviation Studies: Implications for Eye-Tracking Studies in Marine Operations. IEEE Trans. Hum. Mach. Syst. 2021, 51, 87–98. [Google Scholar] [CrossRef]
  19. Azimian, A.; Catalina Ortega, C.A.; Maria Espinosa, J.; Angel Mariscal, M.; Garcia-Herrero, S. Analysis of Drivers’ Eye Movements on Roundabouts: A Driving Simulator Study. Sustainability 2021, 13, 7463. [Google Scholar] [CrossRef]
  20. Koay, H.V.; Chuah, J.H.; Chow, C.-O.; Chang, Y.-L. Detecting and Recognizing Driver Distraction through Various Data Modality Using Machine Learning: A Review, Recent Advances, Simplified Framework and Open Challenges (2014–2021). Eng. Appl. Artif. Intell. 2022, 115, 105309. [Google Scholar] [CrossRef]
  21. Qiong, Z.; Junjie, W.; Hao, Z.; Chun, J. Research on Quantitative Evaluation Method of Test Flight Risk Based on Fuzzy Theory. In Proceedings of the International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2022), Guangzhou, China, 23–25 December 2022; SPIE: Bellingham, WA, USA, 2023; Volume 12604, pp. 1114–1122. [Google Scholar]
  22. Wang, S.; Wen, J.; Li, H.; Rao, C.; Zhao, X. A Novel Fuzzy Comprehensive Evaluation Model for Application Effect of Connected Vehicle System in a Tunnel Scenario. Int. J. Fuzzy Syst. 2022, 24, 1986–2004. [Google Scholar] [CrossRef]
  23. Zhu, Z.; Wu, Y.; Han, J. A Prediction Method of Coal Burst Based on Analytic Hierarchy Process and Fuzzy Comprehensive Evaluation. Front. Earth Sci. 2022, 9, 834958. [Google Scholar] [CrossRef]
  24. ISO 9241-210:2019; Ergonomics of Human-System Interaction-Part 210: Human-Centered Design for Interactive Systems. International Standards Organization: Geneva, Switzerland, 2019.
  25. Yang, H.; Zhang, J.; Wang, Y.; Jia, R. Exploring Relationships between Design Features and System Usability of Intelligent Car Human–Machine Interface. Robot. Auton. Syst. 2021, 143, 103829. [Google Scholar] [CrossRef]
  26. Ma, J.; Pan, W.T.; Xu, W.X. Usability Evaluation of In-Car Instant Messaging Applications with Multimodal Interactions. Trans. Beijing Inst. Technol. 2022, 42, 1264–1272. [Google Scholar] [CrossRef]
  27. Huang, J.F. Research on HMI Interactive Design of Electric Vehicle under the Background of Automatic Driving. Master’s Thesis, East China Normal University, Shanghai, China, 2022. [Google Scholar]
  28. Yu, S.C.; Meng, J.; Hao, B. Research on Ergonomic Evaluation of Driver-based Intelligent Cabin. Automot. Eng. 2022, 44, 36–43. [Google Scholar] [CrossRef]
  29. Bos, J.E.; Ledegang, W.D.; Lubeck, A.J.A.; Stins, J.F. Cinerama Sickness and Postural Instability. Ergonomics 2013, 56, 1430–1436. [Google Scholar] [CrossRef]
  30. Chen, C.M.; Yu, L.J.; Li, J.; Zhang, L.Z. Research on Comparison Experiment of Visual Fatigue Caused by LCD and OLED Tablet PCs. Ind. Eng. Manag. 2014, 19, 100–106. [Google Scholar] [CrossRef]
  31. Cai, L.; Weng, D.D.; Zhang, Z.L.; Yu, X.Y. Impact of Consistency Between Visually Perceived Movement and Real Movement on Cybersickness. J. Syst. Simul. 2016, 28, 1950–1956. [Google Scholar] [CrossRef]
  32. Zhou, Y.; Fu, R.; Liu, Z.F. Distracted Driving Recognition Considering Distraction Types. J. Transp. Syst. Eng. Inf. Technol. 2022, 22, 132–139. [Google Scholar] [CrossRef]
  33. Strayer, D.L.; Cooper, J.M.; McCarty, M.M.; Getty, D.J.; Wheatley, C.L.; Motzkus, C.J.; Goethe, R.M.; Biondi, F.; Horrey, W.J. Visual and Cognitive Demands of CarPlay, Android Auto, and Five Native Infotainment Systems. Hum. Factors 2019, 61, 1371–1386. [Google Scholar] [CrossRef]
  34. Ge, H.M.; Zheng, M.Q.; Lv, N.C.; Lu, Y.; Sun, H. Review on driving distraction. J. Traffic Transp. Eng. 2021, 21, 38–55. [Google Scholar] [CrossRef]
  35. Liu, Y.; Eckert, C.M.; Earl, C. A Review of Fuzzy AHP Methods for Decision-Making with Subjective Judgements. Expert Syst. Appl. 2020, 161, 113738. [Google Scholar] [CrossRef]
  36. Diakoulaki, D.; Mavrotas, G.; Papayannakis, L. Determining Objective Weights in Multiple Criteria Problems: The Critic Method. Comput. Oper. Res. 1995, 22, 763–770. [Google Scholar] [CrossRef]
  37. Xinglai, X. Performance Evaluation System of Automatic Target Recognition Algorithm Based on AHP-CRITIC Comprehensive Weighting; Huazhong University of Science & Technology: Wuhan, China, 2021. [Google Scholar]
  38. Ho, W.; Ma, X. The State-of-the-Art Integrations and Applications of the Analytic Hierarchy Process. Eur. J. Oper. Res. 2018, 267, 399–414. [Google Scholar] [CrossRef]
  39. Ali, Y.; Sharma, A.; Haque, M.M.; Zheng, Z.; Saifuzzaman, M. The Impact of the Connected Environment on Driving Behavior and Safety: A Driving Simulator Study. Accid. Anal. Prev. 2020, 144, 105643. [Google Scholar] [CrossRef] [PubMed]
  40. Čubranić-Dobrodolac, M.; Švadlenka, L.; Čičević, S.; Trifunović, A.; Dobrodolac, M. A Bee Colony Optimization (BCO) and Type-2 Fuzzy Approach to Measuring the Impact of Speed Perception on Motor Vehicle Crash Involvement. Soft Comput. 2022, 26, 4463–4486. [Google Scholar] [CrossRef]
  41. Liu, Y.J.; Wang, J.M.; Wang, W.J.; Zhang, X.L. Experiment research on HMI Usability test environment based on driving simulator. Trans. Beijing Inst. Technol. 2020, 40, 949–955. [Google Scholar] [CrossRef]
Figure 1. Usability evaluation model of co-pilot screens.
Figure 2. A production vehicle connected driving simulation platform.
Table 1. Usability evaluation indicators for co-pilot screens.

| Primary Indicator | Secondary Indicator | Definition |
|---|---|---|
| Ease of interaction | The clarity of information display (C1) | Evaluate the clarity of information display based on the brightness and contrast of the screen and the size, spacing, and color of characters and other elements |
| | The reasonability of information layout (C2) | Evaluate the reasonableness of the information layout based on the principle of user-friendliness |
| | User habituation (C3) | Whether the system layout and interaction settings are consistent with the user's usage habits |
| | The perceptibility of feedback (C4) | The user's perceptibility of system feedback when performing target tasks |
| | The smoothness of operations (C5) | The timeliness of feedback, without delay or frustration |
| Efficiency of interaction | Operation displacements (C6) | Total linear distance between the bottom right corner of the co-pilot screen and the bottom right corner of the corresponding function button |
| | Operation steps (C7) | The number of steps taken by the user between the start and the end of the target task |
| | Task elapsed time (C8) | Time taken by the user between the start and the end of the target task |
| Visual comfort | Visual comfort (C9) | The difference in the user's visual comfort score before and after performing the target task |
| Driving safety | Lane keeping (C10) | The standard deviation of lane departure |
| | Speed keeping (C11) | The standard deviation of vehicle speed departure |
| | The number of gaze points (C12) | The number of points at which the user's eyes are directed to the co-pilot screen |
| | The number of sweeps (C13) | The number of times the user's eyes drifted to the co-pilot screen |
| | Average sweep time (C14) | Average time spent by the user in each eye drift |
| | Reaction time (C15) | Time taken by the user from hearing the warning tone to pressing the button |
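The driving safety indicators C10 and C11 are standard-deviation measures computed from the simulator's driving logs. A minimal sketch of how such measures can be computed (the variable names and sample values below are illustrative, not the paper's data):

```python
import statistics

# Hypothetical per-frame simulator log samples (not the paper's data):
# lateral offset from lane center (m) and vehicle speed (km/h)
lane_offset = [0.12, 0.05, -0.08, 0.20, 0.15, -0.02, 0.10]
speed = [60.2, 59.8, 61.0, 58.5, 60.5, 59.0, 60.1]

# C10, lane keeping: standard deviation of lane departure (SDLP)
sdlp = statistics.stdev(lane_offset)

# C11, speed keeping: standard deviation of vehicle speed departure
sd_speed = statistics.stdev(speed)

print(f"SDLP = {sdlp:.3f} m, speed SD = {sd_speed:.3f} km/h")
```

Larger values of either measure indicate poorer lane or speed keeping while the secondary task is performed on the co-pilot screen.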
Table 2. The comprehensive weights of the indicators at each level.

| Secondary Indicator | Ease of Interaction (23.84%) | Efficiency of Interaction (19.14%) | Visual Comfort (5.18%) | Driving Safety (51.85%) | Comprehensive Weight |
|---|---|---|---|---|---|
| The clarity of information display | 6.76% | 0 | 0 | 0 | 6.76% |
| The reasonability of information layout | 4.64% | 0 | 0 | 0 | 4.64% |
| User habituation | 4.31% | 0 | 0 | 0 | 4.31% |
| The perceptibility of feedback | 3.90% | 0 | 0 | 0 | 3.90% |
| The smoothness of operations | 4.23% | 0 | 0 | 0 | 4.23% |
| Operation displacements | 0 | 6.50% | 0 | 0 | 6.50% |
| Operation steps | 0 | 1.78% | 0 | 0 | 1.78% |
| Task elapsed time | 0 | 10.86% | 0 | 0 | 10.86% |
| Visual comfort | 0 | 0 | 5.18% | 0 | 5.18% |
| Lane keeping | 0 | 0 | 0 | 9.46% | 9.46% |
| Speed keeping | 0 | 0 | 0 | 6.03% | 6.03% |
| The number of gaze points | 0 | 0 | 0 | 8.80% | 8.80% |
| The number of sweeps | 0 | 0 | 0 | 5.19% | 5.19% |
| Average sweep time | 0 | 0 | 0 | 18.19% | 18.19% |
| Reaction time | 0 | 0 | 0 | 4.18% | 4.18% |
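The objective weights that feed into Table 2 come from the CRITIC method [36], which scores each indicator by its contrast intensity (standard deviation) and its conflict with the other indicators (one minus pairwise correlation). A minimal sketch with illustrative data (not the paper's measurements), assuming the decision matrix is already normalized:

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weights (Diakoulaki et al., 1995).

    X: (m alternatives, n indicators) decision matrix, assumed
    normalized so that larger values are better."""
    sigma = X.std(axis=0, ddof=1)         # contrast intensity per indicator
    R = np.corrcoef(X, rowvar=False)      # pairwise correlation matrix
    conflict = (1.0 - R).sum(axis=0)      # conflict with other indicators
    C = sigma * conflict                  # information content
    return C / C.sum()                    # normalize to weights

# Illustrative: 4 vehicles x 3 indicators (hypothetical values)
X = np.array([[0.8, 0.6, 0.9],
              [0.5, 0.7, 0.4],
              [0.9, 0.2, 0.6],
              [0.3, 0.8, 0.5]])
w = critic_weights(X)
print(w, w.sum())  # positive weights summing to 1
```

In the paper, these objective weights are combined with the subjective AHP weights to produce the comprehensive weights shown in Table 2; the combination scheme itself is described in the methodology section.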
Table 3. The fuzzy judgment matrix of A1.

| Indicator | Poorer | Poor | Normal | Good | Excellent |
|---|---|---|---|---|---|
| C1 | 0 | 0 | 0 | 1 | 0 |
| C2 | 0 | 0 | 0.067 | 0.933 | 0 |
| C3 | 0 | 0 | 0 | 0.500 | 0.500 |
| C4 | 0 | 0 | 0 | 0.067 | 0.933 |
| C5 | 0 | 0 | 0 | 0 | 1 |
| C6 | 1 | 0 | 0 | 0 | 0 |
| C7 | 0 | 0 | 1 | 0 | 0 |
| C8 | 0 | 0 | 0 | 0.194 | 0.806 |
| C9 | 0 | 0 | 0.001 | 0.999 | 0 |
| C10 | 0 | 0 | 0.013 | 0.987 | 0 |
| C11 | 0 | 0 | 0.020 | 0.980 | 0 |
| C12 | 0 | 0.160 | 0.840 | 0 | 0 |
| C13 | 0 | 0.294 | 0.706 | 0 | 0 |
| C14 | 0 | 0 | 0.892 | 0.108 | 0 |
| C15 | 0 | 0 | 0.785 | 0.215 | 0 |
Table 4. The fuzzy judgment matrix of A2.

| Indicator | Poorer | Poor | Normal | Good | Excellent |
|---|---|---|---|---|---|
| C1 | 0 | 0 | 0.933 | 0.067 | 0 |
| C2 | 0 | 0.067 | 0.933 | 0 | 0 |
| C3 | 0 | 0.854 | 0.146 | 0 | 0 |
| C4 | 0.500 | 0.500 | 0 | 0 | 0 |
| C5 | 1 | 0 | 0 | 0 | 0 |
| C6 | 0 | 0 | 0 | 0 | 1 |
| C7 | 0 | 0 | 1 | 0 | 0 |
| C8 | 0.105 | 0.895 | 0 | 0 | 0 |
| C9 | 0 | 0 | 0.717 | 0.283 | 0 |
| C10 | 0 | 0 | 0 | 0.076 | 0.924 |
| C11 | 0 | 0 | 0.007 | 0.993 | 0 |
| C12 | 0 | 0 | 0 | 0.979 | 0.021 |
| C13 | 0 | 0 | 0.000 | 1.000 | 0 |
| C14 | 0 | 0 | 0 | 0.967 | 0.033 |
| C15 | 0 | 0 | 0.991 | 0.009 | 0 |
Table 5. The fuzzy judgment matrix of A3.

| Indicator | Poorer | Poor | Normal | Good | Excellent |
|---|---|---|---|---|---|
| C1 | 0.067 | 0.933 | 0 | 0 | 0 |
| C2 | 0 | 0 | 0 | 0.500 | 0.500 |
| C3 | 0 | 0 | 0.500 | 0.500 | 0 |
| C4 | 0 | 0 | 0 | 0.933 | 0.067 |
| C5 | 0 | 0 | 0.854 | 0.146 | 0 |
| C6 | 0 | 0 | 0.690 | 0.310 | 0 |
| C7 | 0 | 0 | 1 | 0 | 0 |
| C8 | 0 | 0 | 0.007 | 0.993 | 0 |
| C9 | 0 | 0.035 | 0.965 | 0 | 0 |
| C10 | 0 | 0 | 0.860 | 0.140 | 0 |
| C11 | 0 | 0.455 | 0.545 | 0 | 0 |
| C12 | 0 | 0 | 0.957 | 0.043 | 0 |
| C13 | 0 | 0 | 0.989 | 0.011 | 0 |
| C14 | 0 | 0 | 0.011 | 0.989 | 0 |
| C15 | 0 | 0.052 | 0.948 | 0 | 0 |
Table 6. The fuzzy judgment matrix of A4.

| Indicator | Poorer | Poor | Normal | Good | Excellent |
|---|---|---|---|---|---|
| C1 | 0 | 0.067 | 0.933 | 0 | 0 |
| C2 | 0 | 0.067 | 0.933 | 0 | 0 |
| C3 | 0 | 0.854 | 0.146 | 0 | 0 |
| C4 | 0 | 0 | 0.933 | 0.067 | 0 |
| C5 | 0 | 0.500 | 0.500 | 0 | 0 |
| C6 | 0 | 0 | 0 | 0.178 | 0.822 |
| C7 | 0 | 0 | 1 | 0 | 0 |
| C8 | 0.007 | 0.993 | 0 | 0 | 0 |
| C9 | 0 | 0 | 0.425 | 0.575 | 0 |
| C10 | 0 | 0 | 0.825 | 0.175 | 0 |
| C11 | 0 | 0 | 0 | 0.071 | 0.929 |
| C12 | 0 | 0.168 | 0.832 | 0 | 0 |
| C13 | 0 | 0 | 0.744 | 0.256 | 0 |
| C14 | 0 | 0 | 0.979 | 0.021 | 0 |
| C15 | 0 | 0 | 0.699 | 0.301 | 0 |
Table 7. The fuzzy comprehensive evaluation results of co-pilot screens.

| Vehicle | Poorer | Poor | Normal | Good | Excellent | Score |
|---|---|---|---|---|---|---|
| A1 | 0.065 | 0.029 | 0.329 | 0.389 | 0.188 | 65.12 |
| A2 | 0.073 | 0.156 | 0.209 | 0.400 | 0.160 | 60.45 |
| A3 | 0.004 | 0.094 | 0.462 | 0.413 | 0.026 | 59.02 |
| A4 | 0 | 0.188 | 0.607 | 0.094 | 0.109 | 53.09 |
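The scores in Table 7 follow from the weighted fuzzy operator B = w·R, where w holds the comprehensive weights of Table 2 and R is a vehicle's fuzzy judgment matrix, followed by defuzzification against numeric grade values. The sketch below reproduces vehicle A1's row; the grade values [0, 25, 50, 75, 100] are an assumption on our part, chosen because they reproduce the reported scores to within rounding:

```python
import numpy as np

# Comprehensive indicator weights from Table 2 (C1..C15), as fractions
w = np.array([6.76, 4.64, 4.31, 3.90, 4.23, 6.50, 1.78, 10.86,
              5.18, 9.46, 6.03, 8.80, 5.19, 18.19, 4.18]) / 100.0

# Fuzzy judgment matrix of vehicle A1 from Table 3
# (rows C1..C15; columns: poorer, poor, normal, good, excellent)
R = np.array([
    [0, 0,     0,     1,     0],
    [0, 0,     0.067, 0.933, 0],
    [0, 0,     0,     0.500, 0.500],
    [0, 0,     0,     0.067, 0.933],
    [0, 0,     0,     0,     1],
    [1, 0,     0,     0,     0],
    [0, 0,     1,     0,     0],
    [0, 0,     0,     0.194, 0.806],
    [0, 0,     0.001, 0.999, 0],
    [0, 0,     0.013, 0.987, 0],
    [0, 0,     0.020, 0.980, 0],
    [0, 0.160, 0.840, 0,     0],
    [0, 0.294, 0.706, 0,     0],
    [0, 0,     0.892, 0.108, 0],
    [0, 0,     0.785, 0.215, 0],
])

B = w @ R                                # membership over the five grades
grades = np.array([0, 25, 50, 75, 100])  # assumed grade values
score = B @ grades                       # ~65.13, vs. reported 65.12
print(B.round(3), round(score, 2))
```

Repeating the same computation with the matrices of Tables 4–6 yields the A2–A4 rows of Table 7, with the small discrepancies attributable to rounding of the published membership values.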

Ma, J.; Wang, W.; Li, J.; Xu, W. Usability Evaluation of Co-Pilot Screen Based on Fuzzy Comprehensive Evaluation Method. World Electr. Veh. J. 2023, 14, 219. https://doi.org/10.3390/wevj14080219
