Article

An Empirical Investigation on the Visual Imagery of Augmented Reality User Interfaces for Smart Electric Vehicles Based on Kansei Engineering and FAHP-GRA

by
Jin-Long Lin
1 and
Meng-Cong Zheng
2,*
1
Doctoral Program in Design, College of Design, National Taipei University of Technology, 1, Sec. 3, Chung-hsiao E. Rd., Taipei 10608, Taiwan
2
Department of Industrial Design, National Taipei University of Technology, 1, Sec. 3, Chung-hsiao E. Rd., Taipei 10608, Taiwan
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(17), 2712; https://doi.org/10.3390/math12172712
Submission received: 28 June 2024 / Revised: 27 August 2024 / Accepted: 27 August 2024 / Published: 30 August 2024

Abstract:
Smart electric vehicles (SEVs) hold significant potential for alleviating the energy crisis and environmental pollution. The augmented reality (AR) dashboard, a key feature of SEVs, is attracting considerable attention due to its ability to enhance driving safety and user experience through real-time, intuitive driving information. This study innovatively integrates Kansei engineering, factor analysis, fuzzy systems theory, the analytic hierarchy process, grey relational analysis, and factorial experimentation to evaluate AR dashboards’ visual imagery and subjective preferences. The findings reveal that designs featuring blue planar and unconventional-shaped dials perform best in terms of visual imagery. Subsequent factorial experiments confirmed these results, showing that drivers most favor blue-dominant designs. Furthermore, in unconventional-shaped dial designs, the vertical 3D visual effect is more popular with drivers than horizontal 3D, while the opposite holds for round dials. This study provides a scientific evaluation method for assessing the emotional experience of AR dashboard interfaces. Additionally, these findings will help reduce subjectivity in interface design and enhance the overall competitiveness of SEVs.

1. Introduction

By 2030, the number of cars worldwide is expected to increase from 1.3 billion to 2 billion [1]. This enormous increase will place great environmental pressure on regions and the world as a whole, especially in terms of air pollution and the greenhouse effect [2,3]. Recently, smart electric vehicles (SEVs) have become a hot research topic for the Sustainable Development Goals [4,5]; they have great potential to alleviate the energy crisis and environmental pollution and appeal to many consumers by emphasizing the user experience. Annual sales of new energy passenger vehicles in China reached 7.736 million units in 2023, of which SEVs accounted for 6.619 million units, a penetration rate of 85.6% [6]. As a way to reduce carbon emissions and improve the driver experience, SEVs have a bright future in China and worldwide. With the development of augmented reality (AR) and In-Vehicle Information Systems (IVISs), in-vehicle AR display technology has been widely applied and researched as an important functional system for SEVs.
AR is an advanced form of Human-Computer Interaction (HCI) that provides intuitive and rich interface information by embedding and overlaying virtual elements onto the real environment [7]. The applications of AR technology are extensive, spanning various domains such as gaming, education, entertainment, and manufacturing [8]. In the automotive sector, AR applications are primarily categorized into two types: one is in-vehicle display systems designed to provide information to drivers, and the other is auxiliary systems used during the automotive development process [7]. This study focuses on AR dashboard display systems intended for drivers. An AR dashboard combines augmented reality technology with dashboard displays, overlaying driving data, navigation, and assisted driving information onto live road video and presenting it to the driver through the dashboard display [9]. If AR is appropriately integrated with real-time road video, it can enhance drivers’ situational awareness and thereby improve driving safety [10]. Some automakers have already adopted this technology in their production models. For example, China’s SAIC Group launched the Rongwei MARVEL X in 2018, which pioneered the rendering of visual recognition results, fused positioning results, and map navigation information into AR images displayed in the in-vehicle dashboard. In addition, many scholars have conducted research on in-vehicle AR for the human-machine interface (HMI). Calvi and D’Amico [11] found that in-vehicle AR warnings significantly enhance the safety of left turns. Liu and Yin found through eye-tracking experiments that the reading performance on blue AR interfaces was the poorest, while green and adaptive colors demonstrated the most stable performance [12]. Zhong and Cheng [13] studied how environmental illuminance, interface color, and speed font design affect driver visual fatigue and visibility. 
Li and Wang [14] examined the impact of AR interface color combinations on the visual search performance and cognitive efficiency of drivers, considering gender and driving scenarios. However, most HMI research on in-vehicle AR has focused on driver safety [7,11,14,15,16], neglecting the experiential and emotional aspects of the driver. User experience is a pivotal determinant in enhancing user engagement and overall usage [17]. It encompasses both the behavior and emotions that users exhibit towards a particular object or system [18]. In addition, the user’s emotional experience significantly affects purchase intention [19], usage intention, and user satisfaction [20]. Given its importance, this study aims to evaluate the emotional experience elicited by the user interface of an in-vehicle AR dashboard. Simultaneously, this study developed a method for evaluating the visual imagery and subjective preferences related to in-vehicle HMIs.

2. Theoretical Background

2.1. Kansei Engineering

Kansei engineering is a product design and development method based on human emotions and needs. Its core lies in the quantitative analysis of user emotions and feelings [21]. Kansei engineering focuses on capturing users’ “Kansei” during the design process, which refers to their perceptions of aspects such as the color and shape of products or interfaces. This approach emphasizes addressing users’ emotional needs, enabling designers to create products that better align with users’ expectations and thus enhance user satisfaction. Kansei engineering typically involves matching adjectives (emotional words) with visual imagery and using inferential calculations to identify the most suitable design solutions [22]. This process includes four steps: selection of visual imagery and adjectives, semantic space expansion, properties space expansion, and relationship modeling [22]. Semantic space expansion refers to the rational screening, categorization, and evaluation of perceptual adjectives [23] and can usually be performed through factor analysis, cluster analysis, principal component analysis, or other methods [24]. Properties space expansion refers to the systematic definition and description of the specific properties of visual images; its purpose is to create a detailed properties space that enables features of visual images to be associated with the semantic space. Relationship modeling refers to establishing a mathematical or statistical model to describe and quantify the relationship between the attributes of the visual image (properties space) and the user’s emotional response (semantic space). Kansei engineering has been widely used in product development [24,25,26,27], user interfaces [28,29,30], and service design [31,32,33,34] and has achieved remarkable results. In the field of in-vehicle HMI, the dashboard is the visual element with which drivers interact most frequently.
For this reason, conducting a Kansei engineering study on the visual imagery of in-vehicle AR dashboards is necessary.

2.2. Multi-Criteria Decision Making (MCDM)

MCDM is a structured framework used to analyze decision problems with multiple complex objectives [35]. The core of MCDM lies in systematically analyzing and simplifying complex decision problems into manageable criteria and deriving the optimal decision through the trade-off of these criteria [36]. Handling uncertainty and subjectivity is a common challenge in the decision-making process, and MCDM methods provide decision makers with effective and robust decision support [37]. In practice, MCDM encompasses various methods, some of which include the analytic hierarchy process (AHP), grey relational analysis (GRA), the technique for order preference by similarity to the ideal solution (TOPSIS), and fuzzy TOPSIS [37]. These methods each offer unique advantages in addressing different types of decision problems and are suitable for various decision-making scenarios. The AHP, known for its structured approach and interpretability, is widely applied to various decision-making problems and remains one of the most commonly used MCDM methods to date [38]. Additionally, the AHP can be combined with triangular fuzzy numbers from fuzzy theory to make the decision-making process more realistic. However, when using the AHP or fuzzy AHP alone, the decision maker’s judgment holds a dominant position within the hierarchy, which can lead to personal biases influencing the results [39]. GRA, on the other hand, is a flexible and adaptable MCDM method, but it is recommended to use weighted GRA, as it offers greater reliability and estimation accuracy compared to unweighted GRA [40]. TOPSIS is an MCDM method with relatively low computational complexity, making it suitable for handling large-scale decision problems. However, TOPSIS has certain limitations, such as the potential for rank reversal [41], and its use of Euclidean distance does not account for correlation, which may affect the results due to overlapping information [42].
Fuzzy TOPSIS, similar to the fuzzy AHP, incorporates triangular or trapezoidal fuzzy numbers to enhance decision-making accuracy.
In the field of Kansei engineering, evaluating visual imagery is a typical MCDM problem. For example, Jia and Tung [24] combined Kansei engineering with fuzzy theory to assess the visual imagery of wrist-worn wearable devices. Additionally, Lin and Zhai [26] applied TOPSIS within Kansei engineering to evaluate the visual imagery of automotive central touchscreens. In the realm of HMI, Li and Chen [28] conducted similar decision evaluations for visual imagery of waiting indicators. Wang and Yang [43] employed the GRA method to extract Kansei words related to wickerwork lamp products and conducted a study on Miryoku engineering. However, research combining MCDM with Kansei engineering in the automotive HMI domain remains relatively scarce. Although some scholars have used the AHP and GRA methods to evaluate the usability of automotive AR head-up displays (HUDs), they have not incorporated Kansei engineering methods [44]. In other research fields, fuzzy theory and TOPSIS have achieved certain successes in the study of visual imagery [24,26,28]. However, these methods are also limited by their respective theoretical foundations, and thus, they require a more comprehensive perspective. For instance, combining multiple MCDM methods can effectively address the limitations of individual methods, further enhancing the accuracy and reliability of decision analysis. Overall, MCDM methods have become important tools in modern decision analysis due to their scientific and systematic nature, with broad application prospects in complex systems or multi-criteria decision-making problems.

2.3. Fuzzy Analytic Hierarchy Process and Grey Relational Analysis (FAHP-GRA)

GRA is a significant method within MCDM. Its fundamental principle involves calculating grey relational degrees among variables to ascertain the degree of influence each factor has on the target variable, facilitating subsequent ranking and selection processes [45]. Compared to other decision-making methods, GRA exhibits clear advantages in handling uncertainty and fuzziness in MCDM processes [40]. In this study, the visual imagery of the AR dashboard is evaluated based on drivers’ perceptions; because human judgment is inherently subjective and fuzzy, the GRA method is particularly suitable here. In practical applications, GRA has been widely employed in decision-making problems across various fields, such as engineering, management, and design. Its outstanding flexibility and effectiveness have been well demonstrated [46,47,48].
In addition, when conducting MCDM, we need to consider the weight value of each factor to achieve a more accurate and reliable assessment [49]. Specifically, when using GRA for decision making, weighted GRA is the optimal choice [40]. The fuzzy analytic hierarchy process (FAHP) [50] is a weight calculation method that combines fuzzy theory [51] and the analytic hierarchy process [52]. Because of the characteristics of human thinking and cognition in actual decision making, a purely quantitative numerical approach may not accurately reflect the cognitive preferences of the decision maker [53]. If cognitive preferences are expressed through fuzzy semantic variables, they can provide a more flexible way of judgment [54]. Therefore, combining the FAHP and GRA methods can solve the standardized weighting problem inherent in the GRA model and improve the accuracy and scientific rigor of MCDM assessment [55]. The characteristics and advantages of these methods in MCDM have been discussed above; the FAHP-GRA method will be applied in the Kansei engineering process to construct and analyze relational models, providing decision support and design guidance for the visual imagery of in-vehicle AR dashboards.

2.4. Research Objectives

Interface evaluation is a typical MCDM problem. Additionally, the influence of HMI on drivers’ subjective preferences is complex and ambiguous. Therefore, this study employs a variety of rigorous analytical methods to conduct a comprehensive assessment of AR dashboard information design types. These methods include Kansei engineering, factor analysis, fuzzy theory, AHP, GRA, and factorial experiments. This study utilizes these objective research methods to review user perceptions and preferences regarding existing AR dashboard design types and conducts a design of experiments study on the main color, visual effects, and dial styling of AR dashboards. The objectives of this study are as follows:
  • To establish evaluation dimensions and indicator weights for the visual imagery of AR dashboards.
  • To rank the optimal design solutions for AR dashboards based on the visual imagery evaluation dimensions.
  • To investigate the effects of three independent variables—main color, visual effects, and dial styling—on drivers’ preferences.
  • To discuss the cross-validation results between drivers’ subjective evaluations and their visual imagery assessments of AR dashboards.

3. Methodological Procedures

The evaluation process for the AR dashboard HMI in this study is illustrated in Figure 1. Next, we will provide a detailed description of the three stages of Kansei engineering.

3.1. Phase 1: Selection and Expansion of Visual Imagery

The researchers collected 335 samples of dashboard interface design pictures and invited 12 in-vehicle HMI design experts to systematically analyze and discuss these picture samples. Among them were three user interface design experts, three user experience experts, three human factors researchers, and three product managers. Based on the experts’ discussion results, the researchers carried out the factorial experiment planning and HMI design for the AR dashboard. After systematic analysis, the main color, visual effect, and dial styling of the AR dashboard were used as the independent variables in the factorial experiment. Previous studies have also indicated that the color and shape of a product are major factors influencing users’ emotional responses [56,57]. The visual effects and dial styling in this study are key aspects of the AR dashboard’s shape, making them highly relevant for studying the visual imagery of automotive dashboards. The main color and visual effect are within-subjects factors, while dial styling is a between-subjects factor. The main color is divided into three levels: blue (H:200, S:100, B:100), green (H:120, S:100, B:100), and yellow (H:60, S:100, B:100); the visual effect is divided into three levels: plane, vertical 3D, and horizontal 3D; and the dial styling is divided into two levels: round and unconventionally shaped. In a usability study of speedometers, Francois and Crave [58] noted that combination dials outperformed both analog and digital dials in the tasks of reading information and detecting dynamic speed changes. A combination dial is a design that uses both numeric and indicator elements to convey speed information. Therefore, we redesigned the AR dashboard interface of the SEV based on the speedometer design guidelines proposed by Francois and Crave [58].
The SEV-AR dashboard interface information in this experiment mainly consists of the speedometer and the power-to-weight ratio (PWR) dial. In designing the AR dashboard interface, we adhered to Nielsen’s principles of consistency, aesthetics, and minimalism [59]. The specific design proposal is shown in Figure 2. Based on the different levels of the three independent variables, we developed 18 AR dashboard interface design proposals. For example, the first design in the first row (Proposal 1) of Figure 2 features a blue planar and round dial.
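The factorial structure above (3 main colors × 3 visual effects × 2 dial stylings) can be enumerated directly; the following minimal Python sketch generates the 18 proposal combinations (the level labels are ours, paraphrasing the levels described in the text):

```python
from itertools import product

# Factor levels as described in Section 3.1 (labels are illustrative)
main_colors = ["blue (H:200)", "green (H:120)", "yellow (H:60)"]    # within-subjects
visual_effects = ["plane", "vertical 3D", "horizontal 3D"]          # within-subjects
dial_stylings = ["round", "unconventionally shaped"]                # between-subjects

# The full crossing yields the 18 AR dashboard design proposals
proposals = list(product(dial_stylings, main_colors, visual_effects))
print(len(proposals))  # 2 x 3 x 3 = 18
```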
The design proposals were formatted for a 12.3-inch (292.528 mm × 109.698 mm) display and shown on a liquid-crystal display. The evaluation task was carried out in a laboratory environment. We strictly followed the driver sight distance criterion proposed by Dreyfuss and Associates [60]: participants were asked to sit at a viewing distance of 550 mm from the dashboard screen. While observing the design proposals, participants were able to swipe left and right to view each design proposal while completing the scale questionnaire. The scoring was on a 7-point Likert scale (1 for very low, 4 for average, 7 for very high). Figure 3 illustrates the process by which participants switched between different design schemes during the experiment.

3.2. Phase 2: Selection and Expansion of Adjectives

Visual imagery adjectives can effectively reflect users’ mental feelings [28]. For instance, shapes and colors within visual imagery can have different impacts on users’ psychological responses [56]. One of the key tasks for designers is to evoke specific emotional responses from users by manipulating visual imagery elements such as shapes and colors [57]. Therefore, controlling the visual imagery of AR dashboards is a critical means for designers to convey information to drivers and elicit emotional responses. At this stage, we first collected a large number of adjectives related to the visual imagery of dashboards from automotive portals, design resource websites, and the relevant research literature. For example, we extracted adjectives from user reviews of dashboard HMIs on automotive portals and design resource websites. Subsequently, after expert focus group discussions, adjectives unsuitable for describing the in-vehicle HMI were eliminated, leaving 130 adjectives for subsequent experiments. In the following study phase, we invited 12 designers and researchers related to the vehicle HMI to participate in the experiment. The participants were asked to select 40 to 50 adjectives from the aforementioned 130 that best describe the AR dashboard interface. Finally, the researchers selected the 40 most recognized adjectives based on voting frequency for use in the factor analysis scale.
Factor analysis is a statistical method that uses a system of indicators to analyze or measure the extent to which multiple factors influence an objective phenomenon [61]. Factor analysis is the most widely used analysis method in Kansei engineering; it can extract key perceptual factors from a large number of Kansei words, which can then guide subsequent design. Recently, many scholars have demonstrated that factor analysis is a scientific and reliable method for studying visual imagery [24,28,62]. Therefore, we used factor analysis in this phase to extract imagery adjectives for the AR dashboard interface. Specifically, participants were invited to experience the AR dashboard design samples from Phase 1 and then asked to evaluate 40 imagery adjectives using a 5-point Likert scale (1 for very inappropriate, 3 for average, 5 for very appropriate). The collected data were then factor analyzed to extract the adjectives that match the AR dashboard interface.
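The factor-retention logic used in this phase (retain components of the adjective correlation matrix with eigenvalue greater than 1, then report cumulative explained variance) can be sketched with NumPy alone. The ratings below are synthetic and purely illustrative, not the study’s data:

```python
import numpy as np

def kaiser_retained_factors(ratings):
    """Count factors to retain from a (respondents x adjectives) rating matrix
    using the eigenvalue-greater-than-1 (Kaiser) criterion on the correlation
    matrix, and report the cumulative explained variance of those factors."""
    corr = np.corrcoef(ratings, rowvar=False)        # adjective correlation matrix
    eigvals = np.linalg.eigvalsh(corr)[::-1]         # eigenvalues, descending
    n_keep = int(np.sum(eigvals > 1.0))              # Kaiser criterion
    explained = eigvals[:n_keep].sum() / eigvals.sum()
    return n_keep, explained

# Illustrative synthetic data: 123 respondents rating 23 adjectives,
# generated from 5 hidden "imagery" factors plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(123, 5))
loadings = rng.normal(size=(5, 23))
ratings = latent @ loadings + 0.5 * rng.normal(size=(123, 23))

n_keep, explained = kaiser_retained_factors(ratings)
print(n_keep, round(explained, 3))
```

A full replication would also apply varimax rotation and inspect factor loadings (as in Table 7); this sketch covers only the retention criterion.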

3.3. Phase 3a: Relationship Modeling—Fuzzy Analytic Hierarchy Process (FAHP) to Determine Visual Imagery Evaluation Dimension Weights

In this stage, FAHP weights are calculated for the adjectives (assessment dimensions) derived from the factor analysis, and the specific calculation steps are as follows.
Step 1: Perform a pairwise comparison of the assessment dimensions.
This study invited user interface designers, user experience designers, and product managers to form an evaluation team and compare the importance of the visual imagery dimensions in pairs. The measurement scale uses a semantic scale with 1–9 level pairwise comparisons [52], which is then converted into triangular fuzzy numbers [50,63], as shown in Table 1 and Figure 4.
Table 1. Triangular fuzzy conversion scale.

Linguistic Scale          AHP Scale    Triangular Fuzzy Number (Left, Middle, Right)
Equal importance          1            (1, 1, 3)
Slight importance         3            (1, 3, 5)
Important                 5            (3, 5, 7)
Strong importance         7            (5, 7, 9)
Extreme importance        9            (7, 9, 9)
Slight unimportance       1/3          (1/5, 1/3, 1)
Unimportant               1/5          (1/7, 1/5, 1/3)
Strong unimportance       1/7          (1/9, 1/7, 1/5)
Extreme unimportance      1/9          (1/9, 1/9, 1/7)
Figure 4. Linguistic variables describing weights of the FAHP.
Step 2: Create a pairwise comparison matrix.
The pairwise comparison results of the n imagery adjective dimensions are placed in the upper triangular part of the pairwise comparison matrix A, and each lower triangular entry is the reciprocal of the corresponding upper triangular entry, that is, aji = 1/aij. Matrix A can be expressed as follows:
$$A = \begin{bmatrix} 1 & a_{12} & \cdots & a_{1n} \\ 1/a_{12} & 1 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 1/a_{1n} & 1/a_{2n} & \cdots & 1 \end{bmatrix} \tag{1}$$
Step 3: Calculate the maximum eigenvalue and conduct consistency identification.
$$\bar{W}_i = \left( \prod_{j=1}^{n} A_{ij} \right)^{1/n} \tag{2}$$
$$W_i = \bar{W}_i \Big/ \sum_{i=1}^{n} \bar{W}_i \tag{3}$$
Next, the maximum eigenvalue λmax is computed from Wi and the comparison matrix A, as shown in Formula (4). Finally, the consistency ratio CR required in this step is calculated. When the CR value is not greater than 0.1, the importance matrix has satisfactory consistency. CI is the consistency index, and RI is the average random consistency index; the values of RI are shown in Table 2.
$$\lambda_{\max} = \frac{1}{n} \sum_{i=1}^{n} \frac{(AW)_i}{W_i} \tag{4}$$
$$CI = \frac{\lambda_{\max} - n}{n - 1} \tag{5}$$
$$CR = CI / RI \tag{6}$$
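Steps 2 and 3 can be sketched as a short NumPy routine implementing Formulas (2) through (6) under the row-geometric-mean reading of the weight formula. The 3 × 3 comparison matrix below is hypothetical, and the RI values follow Saaty's commonly published table:

```python
import numpy as np

# Saaty's average random consistency index (RI) by matrix order
RI_TABLE = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
            6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """AHP priority weights via row geometric means (Formulas (2)-(3)),
    with the lambda_max / CI / CR consistency check (Formulas (4)-(6))."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w_bar = A.prod(axis=1) ** (1.0 / n)   # row geometric means
    w = w_bar / w_bar.sum()               # normalized weights
    lam_max = ((A @ w) / w).mean()        # maximum eigenvalue estimate
    CI = (lam_max - n) / (n - 1)
    CR = CI / RI_TABLE[n]
    return w, CR

# Hypothetical pairwise comparison of three imagery dimensions
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, CR = ahp_weights(A)
print(np.round(w, 3), round(CR, 3))   # CR <= 0.1 means acceptable consistency
```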
Step 4: Convert the original scores into triangular fuzzy numbers and establish a fuzzy pairwise comparison matrix.
After passing the consistency test, each entry of the pairwise comparison matrix A is converted into a triangular fuzzy number $\tilde{M}_{ij} = (L_{ij}, M_{ij}, R_{ij})$, the fuzzy number of evaluation dimension i relative to evaluation dimension j, and a fuzzy pairwise comparison matrix M is established; the lower triangular part of the matrix satisfies $\tilde{M}_{ji} = 1/\tilde{M}_{ij}$. The matrix M can be expressed as follows:
$$M = \begin{bmatrix} (1,1,1) & \tilde{M}_{12} = (L_{12}, M_{12}, R_{12}) & \cdots & \tilde{M}_{1j} = (L_{1j}, M_{1j}, R_{1j}) \\ \tilde{M}_{21} = 1/\tilde{M}_{12} & (1,1,1) & \cdots & \tilde{M}_{2j} = (L_{2j}, M_{2j}, R_{2j}) \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{M}_{j1} = 1/\tilde{M}_{1j} & \tilde{M}_{j2} = 1/\tilde{M}_{2j} & \cdots & (1,1,1) \end{bmatrix} \tag{7}$$
Step 5: Calculate triangular fuzzy numbers and fuzzy weights.
Perform a geometric mean operation on each row of the fuzzy pairwise comparison matrix M to obtain the geometric mean triangular fuzzy number $\tilde{M}'_i = (L'_i, M'_i, R'_i)$ for each evaluation dimension, and then sum the geometric mean triangular fuzzy numbers over all dimensions. To ensure that the left boundary of the triangular fuzzy weight is smaller than the right boundary, the summed triangular fuzzy number must be converted to its reciprocal, $\left( 1/\textstyle\sum_i R'_i,\; 1/\sum_i M'_i,\; 1/\sum_i L'_i \right)$. Finally, the triangular fuzzy weight is calculated as $\tilde{W}_i = \left( L'_i/\textstyle\sum_i R'_i,\; M'_i/\sum_i M'_i,\; R'_i/\sum_i L'_i \right)$. The weight calculation of the FAHP parallels that of the AHP and can be deduced with reference to Formulas (2) and (3), so it is not repeated here.
Step 6: Defuzzification and normalization.
Defuzzification is performed on the obtained triangular fuzzy weight $\tilde{W}_i$; that is, it is converted into a crisp value $DW_i$. Normalization is then performed again so that the importance of the evaluation dimensions sums to 1, yielding the final fuzzy weight value $DW'_i$ of each element. Assuming the triangular fuzzy weight $\tilde{W}_i = (W_{Li}, W_{Mi}, W_{Ri})$, the defuzzification and normalization formulas are as follows:
$$DW_i = \frac{(W_{Ri} - W_{Li}) + (W_{Mi} - W_{Li})}{3} + W_{Li} \tag{8}$$
$$DW'_i = DW_i \Big/ \sum_{i=1}^{n} DW_i \tag{9}$$
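Steps 4 through 6 amount to a geometric-mean (Buckley-style) FAHP. The following NumPy sketch uses a hypothetical 2 × 2 fuzzy matrix; note that the centroid defuzzification of a triangular number (L, M, R) in Formula (8) reduces algebraically to the component mean (L + M + R)/3, which the code exploits:

```python
import numpy as np

def fahp_weights(F):
    """Geometric-mean FAHP weights from an n x n matrix of triangular fuzzy
    numbers (L, M, R): row geometric means, fuzzy normalization via the
    reciprocal of the column sums, centroid defuzzification (Formula (8)),
    and renormalization (Formula (9))."""
    F = np.asarray(F, dtype=float)           # shape (n, n, 3)
    n = F.shape[0]
    gm = F.prod(axis=1) ** (1.0 / n)         # row geometric means, per component
    total = gm.sum(axis=0)                   # (sum of L', sum of M', sum of R')
    # Multiply by the reciprocal of the total: (1/sum R', 1/sum M', 1/sum L')
    fuzzy_w = gm * np.array([1 / total[2], 1 / total[1], 1 / total[0]])
    dw = fuzzy_w.mean(axis=1)                # centroid defuzzification
    return dw / dw.sum()                     # normalized crisp weights

# Hypothetical example using the paper's scale: dimension 1 is slightly more
# important than dimension 2, i.e. (1, 3, 5), with reciprocal (1/5, 1/3, 1).
F = [[(1, 1, 1),     (1, 3, 5)],
     [(1/5, 1/3, 1), (1, 1, 1)]]
w = fahp_weights(F)
print(np.round(w, 3))
```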

3.4. Phase 3b: Relationship Modeling—Evaluating Visual Imagery Using FAHP-GRA

In this stage, a second user questionnaire was constructed based on the design proposals and adjectives derived from Phases 1 and 2 to measure drivers’ visual imagery evaluations of the AR dashboard design proposals. The questionnaire adopted a 7-point Likert scale (1 for very low, 4 for medium, and 7 for very high). Subsequently, the questionnaire data were processed with FAHP-GRA to obtain the score of each AR dashboard design proposal. The calculation steps of FAHP-GRA are as follows.
Step 1: Construct reference and comparison sequences.
Based on the final scores of the 18 AR dashboards, the optimal value in each evaluation dimension is selected as the reference sequence C0. At the same time, the scores of the 18 AR dashboards will be used as the comparison sequences C1, C2, C3, …, C18.
Step 2: Perform non-dimensionalization.
Although all assessment dimensions use a 7-point Likert scale, differences in the resulting data range may lead to numerical instability or calculation accuracy issues. Therefore, the non-dimensionalization of reference and comparison sequences is required to improve the stability of data calculations.
$$X_i(k) = \frac{C_i(k)}{\bar{C}(k)} \tag{10}$$
where $\bar{C}(k) = \frac{1}{n+1} \sum_{i=0}^{n} C_i(k)$, $k = 1, 2, \ldots, m$.
Step 3: Determine the optimal value range of the distinguishing coefficient ρ.
Before performing the grey relational calculation, the value of the distinguishing coefficient ρ must be determined. The value range of ρ is (0, 1), and ρ = 0.5 is usually taken. However, since the distinguishing coefficient affects the ranking of the related sequences, we should not simply apply ρ = 0.5 or any other fixed value; the necessary calculations must be performed to determine ρ [64]. Therefore, this step adopts the formula for the value range of the distinguishing coefficient ρ proposed by Guo and Guo [65] as follows:
Let $\Delta_{0i}(k) = \left| X_0(k) - X_i(k) \right|$, with mean $\bar{\Delta}_{0i}$ and maximum $\Delta_{0i}(k)_{\max}$ over all i and k. Then
$$\rho_1 = \frac{\bar{\Delta}_{0i}}{\Delta_{0i}(k)_{\max}} \cdot \frac{1}{e-1} \tag{11}$$
$$\rho_2 = (e-1) \cdot \rho_1 \tag{12}$$
For this grey relational system, the distinguishing coefficient value is optimal within the interval [ρ1, ρ2].
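Under our reading of this rule (ρ1 as the mean-to-maximum ratio of the absolute differences scaled by 1/(e − 1), with ρ2 = (e − 1)·ρ1, which is an assumption reconstructed from the surrounding formulas), the interval can be computed as:

```python
import numpy as np

def rho_interval(delta):
    """Optimal distinguishing-coefficient interval [rho1, rho2] for GRA,
    assuming rho1 = (mean/max of |X0(k) - Xi(k)|) / (e - 1) and
    rho2 = (e - 1) * rho1 -- our reading of Formulas (11)-(12)."""
    delta = np.abs(np.asarray(delta, dtype=float))
    ratio = delta.mean() / delta.max()
    rho1 = ratio / (np.e - 1.0)
    rho2 = (np.e - 1.0) * rho1   # equals the mean/max ratio itself
    return rho1, rho2

# Hypothetical non-dimensionalized absolute differences |X0(k) - Xi(k)|
rho1, rho2 = rho_interval([0.05, 0.12, 0.30, 0.21])
print(round(rho1, 3), round(rho2, 3))
```

As a sanity check, the ratio ρ2/ρ1 = e − 1 ≈ 1.718 matches the interval 0.300 to 0.516 reported in Section 4.3.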
Step 4: Calculate the grey relational degree of each sequence.
After determining the ρ value, the grey relational coefficient can be calculated [66]. The formula is as follows:
$$\xi_i(k) = \frac{\min_i \min_k \left| X_0(k) - X_i(k) \right| + \rho \, \max_i \max_k \left| X_0(k) - X_i(k) \right|}{\left| X_0(k) - X_i(k) \right| + \rho \, \max_i \max_k \left| X_0(k) - X_i(k) \right|} \tag{13}$$
Step 5: Calculate the weighting grey relational degree.
To compare different AR dashboard design proposals more scientifically and comprehensively, it is necessary to integrate the weight values and grey relational degree of each evaluation dimension [67]. Let the grey relational degree of each design proposal be γi, and its formula is as follows:
$$\gamma_i = \sum_{k=1}^{n} W_k \cdot \xi_i(k) \tag{14}$$
When the γi value is larger, it indicates that the visual image of the AR dashboard design proposal is better.
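Steps 1 through 5 combine into a short weighted-GRA routine. In the sketch below, the proposal scores are hypothetical, the weights are the FAHP values reported in Section 4.2, and the reference sequence is taken as the per-dimension maximum (since higher ratings are better here):

```python
import numpy as np

def weighted_gra(scores, weights, rho=0.5):
    """Weighted grey relational analysis (Formulas (10), (13), (14)).
    scores: (n_proposals, m_dimensions) mean ratings;
    weights: per-dimension weights (e.g., from the FAHP)."""
    scores = np.asarray(scores, dtype=float)
    ref = scores.max(axis=0)                            # reference sequence C0
    seq = np.vstack([ref, scores])                      # C0 included in the mean
    X = seq / seq.mean(axis=0)                          # non-dimensionalization
    delta = np.abs(X[0] - X[1:])                        # |X0(k) - Xi(k)|
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    return xi @ np.asarray(weights)                     # weighted relational degree

# Hypothetical ratings of 3 proposals on the 5 imagery dimensions,
# weighted with the FAHP weights from Section 4.2
scores = [[5.2, 5.1, 5.3, 4.8, 4.9],
          [4.1, 4.3, 4.0, 4.2, 4.1],
          [4.8, 4.9, 5.0, 4.5, 4.6]]
weights = [0.074, 0.152, 0.299, 0.268, 0.207]
gamma = weighted_gra(scores, weights, rho=0.4)
print(np.round(gamma, 3))   # larger gamma = closer to the ideal design
```

In this toy example, the first proposal attains the per-dimension maximum everywhere, so its relational degree equals the sum of the weights (1.0).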

4. Analysis and Results

4.1. Visual Imagery Adjective Extraction Results

After reviewing the HMI designs of AR dashboards, the expert team screened and selected the 40 most common adjectives for evaluating AR dashboard interfaces from the 130 imagery adjectives, all of which are positive (see Table 3).
Next, we combined the 40 adjectives with the 18 AR dashboard HMI design proposals. Using purposive sampling, a total of 140 questionnaires were collected. All participants were required to have a driver’s license and possess certain driving information recognition capabilities. In the end, 123 valid questionnaires were obtained, 60 from males and 63 from females, with an average age of 28.61 years (SD = 5.46). Subsequently, we conducted a first factor analysis on the questionnaire data and a second factor analysis on the 23 adjectives with factor loadings greater than 0.6 (see Table 4).
As shown in Table 5, after the second factor analysis, KMO = 0.896 and Bartlett’s test value = 1949.631 with p < 0.001 (df = 253), indicating that Bartlett’s test of sphericity was statistically significant. This result suggests that the correlation matrix has common factors and the data are suitable for factor analysis.
In addition, according to the principal component method and the eigenvalue principle, there are five factors with an eigenvalue greater than 1, and their total explained variance is 71.675% (see Table 6). Generally speaking, a value higher than 70% indicates a good level of explanation. Therefore, five groups of similar factors were extracted at this stage, as shown in Figure 5.
In Table 7, the differences between each adjective’s loadings across the five components are clear, and no adjective loads heavily on multiple components. Moreover, the factor loadings are all above the excellent standard of 0.6, indicating that these adjectives have very high construct validity for evaluating the AR dashboard HMI design.
After factor analysis, twenty-three adjectives in five groups were obtained in this stage. Since the adjectives in each group are related, we invited language and literature experts to rename each group. The results are shown in Table 8. These results created five basic visual image evaluation dimensions for the next stage of research.

4.2. Visual Imagery Weighting Results

At this stage, 12 experts were invited to rate important pairs of evaluation dimensions. Experts include in-vehicle HMI user interface (UI) designers, user experience (UE) designers, ergonomics researchers (ERs), and product managers (PMs). Next, we performed an AHP weight calculation (referring to Formulas (2) and (3)) and a consistency test (referring to Formulas (4), (5), and (6)) with the ratings of the 12 experts.
In Table 9, the CR values of all expert ratings are less than 0.1; that is, the weight matrix of each expert has satisfactory consistency. Therefore, the importance of weight calculation, defuzzification (refer to Formula (8)), and regularization (refer to Formula (9)) of the FAHP are performed again, and the results are shown in Table 10.
The results show that the weight of novel and splendid (N&S) is 0.074, technological and aesthetic (T&A) is 0.152, visible and concise (V&C) is 0.299, agile and reliable (A&R) is 0.268, and smooth and natural (S&N) is 0.207.
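As a rough illustration of the FAHP steps described above (consistency check against the RI values in Table 2, Buckley-style fuzzy geometric-mean weighting, centroid defuzzification, and normalization), here is a minimal sketch; the pairwise matrix and the symmetric-spread fuzzification factor `d` are illustrative assumptions, not the experts' actual judgments.

```python
# Sketch of the FAHP steps: (i) CR consistency check on a crisp pairwise
# matrix (RI values from Table 2), (ii) Buckley's fuzzy geometric mean with
# triangular fuzzy numbers, centroid defuzzification, and normalization.
# The judgment matrix below is illustrative, not an expert's actual ratings.
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def consistency_ratio(A):
    n = A.shape[0]
    lam_max = np.max(np.real(np.linalg.eigvals(A)))
    ci = (lam_max - n) / (n - 1)          # consistency index
    return ci / RI[n]                     # consistency ratio

# Illustrative 5x5 pairwise matrix over (N&S, T&A, V&C, A&R, S&N).
A = np.array([[1,   1/3, 1/5, 1/3, 1/3],
              [3,   1,   1/3, 1/2, 1/2],
              [5,   3,   1,   2,   2  ],
              [3,   2,   1/2, 1,   1  ],
              [3,   2,   1/2, 1,   1  ]])
assert consistency_ratio(A) < 0.1         # acceptable consistency

# Triangular fuzzification a_ij -> (a/d, a, a*d), diagonal kept at (1, 1, 1).
d = 1.5
low, mid, up = A / d, A, A * d
np.fill_diagonal(low, 1)
np.fill_diagonal(up, 1)

def geo_mean_rows(M):
    return M.prod(axis=1) ** (1 / M.shape[0])

# Buckley's method: fuzzy weights from row geometric means.
gl, gm, gu = geo_mean_rows(low), geo_mean_rows(mid), geo_mean_rows(up)
wl, wm, wu = gl / gu.sum(), gm / gm.sum(), gu / gl.sum()

crisp = (wl + wm + wu) / 3                # centroid defuzzification
weights = crisp / crisp.sum()             # normalization
print(np.round(weights, 3))
```

With this illustrative matrix, the "visible and concise" dimension receives the largest weight and "novel and splendid" the smallest, matching the qualitative ordering reported in Table 10.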

4.3. FAHP-GRA Calculation Results

At this stage, 100 drivers (60 males and 40 females) were invited to evaluate the five dimensions of N&S, T&A, V&C, A&R, and S&N of 18 AR instrument panels. Their average age was 29.76 years (SD = 5.03). The evaluation score results are processed according to the average, and the AR dashboard design solution’s performance value and comparison sequence are obtained (see Table 11).
Table 11 shows the reference sequence C0 = (5.220, 5.260, 5.380, 4.820, 4.980) for the AR dashboard and the comparison sequences for Proposals 1 to 18. First, we non-dimensionalized the reference and comparison sequences according to Formula (10). Next, we calculated the bounds of the distinguishing coefficient, ρ1 and ρ2, according to Formulas (11) and (12); the results show that the distinguishing coefficient is optimal between 0.300 and 0.516, so we took the intermediate value ρ = 0.4 for the subsequent calculations. Finally, we calculated the grey relational degree of each proposal according to Formula (13) and substituted it, together with the FAHP weights, into Formula (14) to obtain the overall relational degree γi (see Table 12).
The closer γi is to 1, the closer the proposal is to the ideal proposal (the reference sequence). In Table 12, the blue planar, unconventional-shaped dial design (Proposal 10) has the γi value closest to 1 (γ10 = 0.863); therefore, this proposal is the best AR dashboard design.
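The grey relational computation can be sketched as follows, using the reference sequence and FAHP weights reported above with ρ = 0.4. The two comparison sequences are hypothetical stand-ins for the 18 proposals, and the paper's non-dimensionalization step (Formula (10)) is omitted here since all five dimensions share one rating scale.

```python
# Sketch of the FAHP-GRA scoring step: grey relational coefficients with
# distinguishing coefficient rho = 0.4, weighted by the FAHP weights.
import numpy as np

# Reference (ideal) sequence over (N&S, T&A, V&C, A&R, S&N) from Table 11,
# and the FAHP weights from Table 10.
x0 = np.array([5.220, 5.260, 5.380, 4.820, 4.980])
w  = np.array([0.074, 0.152, 0.299, 0.268, 0.207])

# Hypothetical comparison sequences (mean ratings of two illustrative
# proposals); the study evaluated 18 such proposals.
X = np.array([[5.10, 5.05, 5.30, 4.70, 4.90],
              [4.20, 4.00, 4.50, 3.90, 4.10]])

rho = 0.4                                         # distinguishing coefficient
delta = np.abs(X - x0)                            # absolute differences
dmin, dmax = delta.min(), delta.max()
xi = (dmin + rho * dmax) / (delta + rho * dmax)   # grey relational coefficients
gamma = xi @ w                                    # weighted relational degree
print(np.round(gamma, 3))                         # closer to 1 = closer to ideal
```

The first (closer-to-ideal) proposal obtains the larger γ, which is the ranking logic behind selecting Proposal 10 in Table 12.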

4.4. Factorial Experiment Results

This experiment used a 3 (main color) × 3 (visual effect) × 2 (dial styling) mixed factorial design. During the experiment, drivers rated both the visual imagery of each AR dashboard proposal and their subjective preference. We conducted a three-way ANOVA on the subjective preference data, followed by LSD post hoc tests on statistically significant effects. Note that for repeated-measures ANOVA, the sphericity of the data must first be examined [68]. In this study, Mauchly's test of sphericity was significant (p < 0.05), so we applied the Greenhouse-Geisser correction to adjust the degrees of freedom [69,70]. The results are shown in Table 13.
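A minimal sketch of the Greenhouse-Geisser correction used here: epsilon is estimated from the covariance matrix of the repeated conditions and then used to shrink both degrees of freedom. The `scores` data are hypothetical, not the experiment's ratings.

```python
# Sketch of the Greenhouse-Geisser epsilon used to correct repeated-measures
# degrees of freedom when sphericity is violated (Mauchly p < .05).
# `scores` is hypothetical subject x condition data, not the study's ratings.
import numpy as np
from scipy.linalg import null_space

def gg_epsilon(scores):
    """scores: (n_subjects, k_conditions) repeated-measures data."""
    k = scores.shape[1]
    S = np.cov(scores, rowvar=False)        # condition covariance matrix
    C = null_space(np.ones((1, k))).T       # orthonormal contrasts, (k-1) x k
    M = C @ S @ C.T
    lam = np.linalg.eigvalsh(M)             # eigenvalues of contrast covariance
    return lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum())

rng = np.random.default_rng(1)
# 50 hypothetical drivers x 3 conditions, with a per-subject random effect.
scores = rng.normal(size=(50, 3)) + rng.normal(size=(50, 1))
eps = gg_epsilon(scores)
df1 = (3 - 1) * eps                         # corrected numerator df
df2 = (3 - 1) * (50 - 1) * eps              # corrected denominator df
print(round(eps, 3), round(df1, 3), round(df2, 1))
```

Epsilon is bounded between 1/(k−1) and 1, and the fractional numerator degrees of freedom it produces (e.g., 1.704 for the main-color effect below) are exactly the corrected df reported with the F-statistics.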
The main color had a significant main effect (F(1.704, 98) = 30.710, p < 0.001, η² = 0.239). LSD post hoc tests showed significant differences in subjective preference between blue (M = 4.587, SE = 0.096), green (M = 4.143, SE = 0.110), and yellow (M = 3.770, SE = 0.131): blue was preferred significantly more than green and yellow, and green significantly more than yellow. However, the main effect of visual effect was not significant (F(1.819, 98) = 0.154, p = 0.838, η² = 0.002), and neither was the main effect of dial styling (F(1.000, 98) = 0.549, p = 0.461, η² = 0.006).
The interaction between main color and visual effect was significant (F(3.484, 98) = 3.426, p = 0.013, η² = 0.034). To test differences between groups within a given level of an independent variable [71], we conducted a simple effects analysis. The simple effects analysis of main color revealed that visual effect differed significantly only for green (see Figure 6): when the main color is green, the vertical 3D visual effect is rated significantly worse than the flat and horizontal 3D effects.
There was also a significant interaction between visual effect and dial styling (F(1.819, 98) = 3.249, p = 0.046, η² = 0.032). The simple effects analysis of dial styling revealed significant differences in visual effect between the two dial styles: for round dials, the vertical 3D effect was rated significantly better than the horizontal 3D effect, whereas for unconventional-shaped dials, the horizontal 3D effect was rated significantly better than the vertical 3D effect (see Figure 7).
Overall, main color (η² = 0.239) has a greater impact on subjective preference than dial styling (η² = 0.006) or visual effect (η² = 0.002), and dial styling has a greater impact than visual effect. Consistent with this, the FAHP-GRA results show that the top five design solutions are all blue, with the highest-ranked being the blue planar, unconventional-shaped dial design (Proposal 10).

5. Discussion

5.1. Discussion of the Results

This study examined the effects of the main color, visual effect, and dial styling of AR dashboards on drivers' visual imagery evaluations and subjective preferences. Previous studies have shown that in in-vehicle AR user interfaces, blue, green, and yellow exhibit superior robustness and response efficiency compared to other colors [72]. Through visual imagery experiments, this study further found that among these main colors, drivers prefer blue the most, followed by green, with yellow the least preferred. However, recent work has shown that AR head-up displays (HUDs) with green as the main color yield the shortest response times [12], and other studies have found that red, yellow, green, and orange outperform other colors in visual search performance and cognitive efficiency [14]. These inconsistent evaluations of dominant colors may stem from experimental factors such as display technology [73], ambient illumination [13], and driving scenes [14]. For example, the photophysical properties of blue luminescent materials are worse than those of other colors [74], particularly the luminous efficiency, maximum brightness, and operating lifetime of blue quantum dots [75]. Nevertheless, previous research has mostly focused on the objective effectiveness of in-vehicle AR display technology and has not examined the impact of in-vehicle AR interface displays on the driver's emotional experience. This research therefore identifies a new direction for the automotive AR interface display field: from the dimension of the driver's emotional experience, blue is the best choice. Although blue light display technology is more challenging to develop than that of other colors, we recommend that automobile manufacturers and related engineers prioritize advancing blue light display technology to meet the emotional needs of most drivers.
In addition, the interaction between visual effect and dial styling was significant. This is consistent with Chen et al. [76], who found no significant difference between round and unconventional-shaped (hexagonal) designs in balanced-aesthetics experiments, but a significant difference between vertical and horizontal ellipse images. Our results indicate a significant difference between the vertical 3D and horizontal 3D designs of the round dial: the round vertical 3D design was more popular among drivers, and its visual imagery was rated higher. Our study further revealed the reverse pattern for unconventional-shaped dials, where the vertical 3D design was significantly less preferred than the horizontal 3D design; drivers preferred the unconventional-shaped horizontal 3D design, and its visual imagery was rated higher. This finding complements the theory of Chen et al. [76] on the interaction between styling and visual effect in visual aesthetics. Research on the emotional aspects of products suggests that positive emotional experiences help increase product utilization and influence future purchase choices [77]. Therefore, in developing in-vehicle AR dashboard HMIs, automobile manufacturers should pay attention to the effects of the main color, visual effect, and dial styling on the driver's emotional experience.

5.2. Methodological Contributions

One of the important innovations of this study is the combination of multiple MCDM and Kansei engineering methods to evaluate the visual imagery of vehicle HMIs. Visual imagery evaluation of vehicle HMIs is influenced by participants' personal preferences, culture, and knowledge level, and thus exhibits typical grey-system characteristics and ambiguity [78]. A decision-making method based on FAHP-GRA is therefore particularly suitable. Recently, Cheng and Zhong [44] confirmed, in a study of a vehicle-mounted AR-HUD, that AHP-GRA determines weights more reliably than entropy-weighted TOPSIS. Although that study described the implementation of AHP-GRA for HMIs in great detail, it did not use Kansei engineering and factor analysis to construct the evaluation indicators, which may affect the objectivity of the procedure. In previous research, we made a preliminary attempt to apply AHP-GRA to the usability evaluation of mobile application interfaces and established the feasibility of this method in the HMI field through a triangulation model [79]. This study further combines fuzzy systems theory with the AHP-GRA method and introduces Kansei engineering to study the visual imagery of vehicle-mounted AR instrument panels. Additionally, we cross-validated the subjective preference results against the visual imagery assessment results through factorial experiments to ensure the reliability of the MCDM results.

5.3. Limitations and Future Directions

Despite the rigorous examination conducted in this study, several limitations should be considered. First, this research only explored the visual imagery of three variables within AR dashboards. Future studies could expand to include more variables, such as the size, layout, and brightness of elements in AR dashboards. Second, the participants in this study were primarily drivers from the Chinese region, so caution should be exercised when generalizing the findings to consumers in other countries or regions. Future research could involve a comparative analysis of drivers from different countries or regions. Finally, the results of this study have not yet been tested in real-world settings. Future research may need to conduct usability tests and incorporate eye-tracking data and driving performance into the evaluation.

6. Conclusions

This study is highly innovative, both from a methodological perspective and within the context of in-vehicle HMI research. Methodologically, this study integrates Kansei engineering, fuzzy system theory, AHP, and GRA, proposing a subjective evaluation method and process for assessing the visual imagery of in-vehicle HMIs. This approach helps reduce the uncertainty in HMI design and effectively addresses the ambiguity inherent in human factors. The FAHP results reveal that the dimensions affecting the visual imagery evaluation of AR dashboards include novelty and splendor, technological and aesthetic aspects, visibility and conciseness, agility and reliability, and smoothness and naturalness. Among these dimensions, “visibility and conciseness” received the highest weight, while “novelty and splendor” received the lowest. Further GRA analysis indicated that the design featuring blue planar and unconventional-shaped dials (Proposal 10) was the optimal choice based on visual imagery criteria. Conversely, the design with yellow vertical 3D and unconventional-shaped dials (Proposal 17) was the least favored, a finding that was corroborated by factorial experiments.
The factorial experiment results demonstrated that the main color of the AR dashboard had the most significant impact on drivers’ subjective preferences. Blue was the most favored main color, followed by green, with yellow being the least favored. Avoiding vertical 3D visual effects in green AR dashboards is also recommended. The interaction between visual effects and dial styling in AR dashboards also requires special attention. In round dials, drivers preferred vertical 3D effects more than horizontal 3D effects. Conversely, horizontal 3D effects were more favored in unconventional-shaped dials than vertical 3D effects. These findings provide scientific and detailed guidance for future SEV-AR dashboard HMI designs, helping to enhance the in-vehicle user experience and, consequently, improving SEV vehicles’ market competitiveness.

Author Contributions

Conceptualization, J.-L.L. and M.-C.Z.; Methodology, J.-L.L.; Validation, J.-L.L.; Investigation, J.-L.L.; Resources, M.-C.Z.; Data curation, M.-C.Z.; Writing—original draft, J.-L.L.; Writing—review & editing, J.-L.L. and M.-C.Z.; Visualization, J.-L.L.; Supervision, M.-C.Z.; Project administration, M.-C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Due to confidentiality agreements, supporting data can only be made available to bona fide researchers subject to a non-disclosure agreement. Details of the data and how to request access are available from Meng-Cong Zheng ([email protected]) at the National Taipei University of Technology.

Acknowledgments

The authors would like to sincerely thank the Desay SV Automotive Co., Ltd. CT_UXD Department (former name: IND) experts for their selfless support during the experimental process.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Martins, L.S.; Guimarães, L.F.; Junior, A.B.B.; Tenório, J.A.S.; Espinosa, D.C.R. Electric car battery: An overview on global demand, recycling and future approaches towards sustainability. J. Environ. Manag. 2021, 295, 113091. [Google Scholar] [CrossRef] [PubMed]
  2. Aijaz, I.; Ahmad, A. Electric vehicles for environmental sustainability. In Smart Technologies for Energy and Environmental Sustainability; Springer: Cham, Switzerland, 2022; pp. 131–145. [Google Scholar]
  3. Kumar, R.R.; Alok, K. Adoption of electric vehicle: A literature review and prospects for sustainability. J. Clean. Prod. 2020, 253, 119911. [Google Scholar] [CrossRef]
  4. Haque, T.S.; Rahman, M.H.; Islam, M.R.; Razzak, M.A.; Badal, F.R.; Ahamed, M.H.; Moyeen, S.I.; Das, S.K.; Ali, M.F.; Tasneem, Z. A review on driving control issues for smart electric vehicles. IEEE Access 2021, 9, 135440–135472. [Google Scholar] [CrossRef]
  5. Bhatti, G.; Mohan, H.; Singh, R.R. Towards the future of smart electric vehicles: Digital twin technology. Renew. Sustain. Energy Rev. 2021, 141, 110801. [Google Scholar] [CrossRef]
  6. EqualOcean. China’s SEV Annual Sales List Announced. 2024. Available online: https://www.iyiou.com/data/202401161059254 (accessed on 1 June 2024).
  7. Boboc, R.G.; Gîrbacia, F.; Butilă, E.V. The application of augmented reality in the automotive industry: A systematic literature review. Appl. Sci. 2020, 10, 4259. [Google Scholar] [CrossRef]
  8. Devagiri, J.S.; Paheding, S.; Niyaz, Q.; Yang, X.; Smith, S. Augmented Reality and Artificial Intelligence in industry: Trends, tools, and future challenges. Expert Syst. Appl. 2022, 207, 118002. [Google Scholar] [CrossRef]
  9. Choi, K.-H.; Park, S.-Y.; Kim, S.-H.; Lee, K.-S.; Park, J.-H.; Cho, S.-I.; Park, J.-H. Methods to detect road features for video-based in-vehicle navigation systems. J. Intell. Transp. Syst. 2010, 14, 13–26. [Google Scholar] [CrossRef]
  10. Akaho, K.; Nakagawa, T.; Yamaguchi, Y.; Kawai, K.; Kato, H.; Nishida, S. Route guidance by a car navigation system based on augmented reality. Electr. Eng. Jpn. 2012, 180, 43–54. [Google Scholar] [CrossRef]
  11. Calvi, A.; D’Amico, F.; Ferrante, C.; Ciampoli, L.B. Evaluation of augmented reality cues to improve the safety of left-turn maneuvers in a connected environment: A driving simulator study. Accid. Anal. Prev. 2020, 148, 105793. [Google Scholar] [CrossRef]
  12. Liu, S.; Yin, G. Research on Color Adaptation of Automobile Head-up Display Interface. In Proceedings of the 2021 IEEE 8th International Conference on Industrial Engineering and Applications (ICIEA), Chengdu, China, 23–26 April 2021. [Google Scholar]
  13. Zhong, X.; Cheng, Y.; Tian, L. Color Visibility Evaluation of In-Vehicle AR-HUD Under Different Illuminance. In Proceedings of the International Conference on Information Economy, Data Modeling and Cloud Computing, ICIDC 2022, Qingdao, China, 17–19 June 2022. [Google Scholar]
  14. Li, Y.; Wang, Y.; Song, F.; Liu, Y. Assessing Gender Perception Differences in Color Combinations in Digital Visual Interfaces Using Eye tracking–The Case of HUD. Int. J. Hum.–Comput. Interact. 2023, 1–17. [Google Scholar] [CrossRef]
  15. Kim, H.; Gabbard, J.L. Assessing distraction potential of augmented reality head-up displays for vehicle drivers. Hum. Factors 2022, 64, 852–865. [Google Scholar] [CrossRef] [PubMed]
  16. Abdi, L.; Meddeb, A. In-vehicle augmented reality system to provide driving safety information. J. Vis. 2018, 21, 163–184. [Google Scholar] [CrossRef]
  17. Chatzopoulos, D.; Bermejo, C.; Huang, Z.; Hui, P. Mobile augmented reality survey: From where we are to where we go. IEEE Access 2017, 5, 6917–6950. [Google Scholar] [CrossRef]
  18. Hassenzahl, M.; Tractinsky, N. User experience-a research agenda. Behav. Inf. Technol. 2006, 25, 91–97. [Google Scholar] [CrossRef]
  19. Nasermoadeli, A.; Ling, K.C.; Maghnati, F. Evaluating the impacts of customer experience on purchase intention. Int. J. Bus. Manag. 2013, 8, 128. [Google Scholar] [CrossRef]
  20. Deng, L.; Turner, D.E.; Gehling, R.; Prince, B. User experience, satisfaction, and continual usage intention of IT. Eur. J. Inf. Syst. 2010, 19, 60–75. [Google Scholar] [CrossRef]
  21. Nagamachi, M. Kansei engineering: A new ergonomic consumer-oriented technology for product development. Int. J. Ind. Ergon. 1995, 15, 3–11. [Google Scholar] [CrossRef]
  22. Nagamachi, M. Kansei engineering as a powerful consumer-oriented technology for product development. Appl. Ergon. 2002, 33, 289–294. [Google Scholar] [CrossRef]
  23. Simon, T.; Eklund, J.; Jan, R.A.; Nagamachi, M. Concepts, methods and tools in Kansei engineering. Theor. Issues Ergon. Sci. 2004, 5, 214–231. [Google Scholar]
  24. Jia, L.-M.; Tung, F.-W. A study on consumers’ visual image evaluation of wrist wearables. Entropy 2021, 23, 1118. [Google Scholar] [CrossRef]
  25. Chen, C.-H.; Lin, Z. The application of fuzzy theory in the evaluation of visual images of smartphone rear cameras. Appl. Sci. 2021, 11, 3555. [Google Scholar] [CrossRef]
  26. Lin, Z.; Zhai, W.; Li, S.; Li, X. Evaluating the impact of the center control touch screen of new energy vehicles on user visual imagery and preferences. Displays 2023, 78, 102435. [Google Scholar] [CrossRef]
  27. Wang, P.; Chu, J.; Yu, S.; Chen, C.; Hu, Y. A consumers’ Kansei needs mining and purchase intention evaluation method based on fuzzy linguistic theory and multi-attribute decision making method. Adv. Eng. Inform. 2024, 59, 102267. [Google Scholar] [CrossRef]
  28. Li, S.; Chen, C.-H.; Lin, Z. Evaluating the impact of wait indicators on user visual imagery and speed perception in mobile application interfaces. Int. J. Ind. Ergon. 2022, 88, 103280. [Google Scholar] [CrossRef]
  29. Cao, X.; Watanabe, M.; Ono, K. How character-centric game icon design affects the perception of gameplay. Appl. Sci. 2021, 11, 9911. [Google Scholar] [CrossRef]
  30. Guo, F.; Liu, W.L.; Cao, Y.; Liu, F.T.; Li, M.L. Optimization design of a webpage based on Kansei engineering. Hum. Factors Ergon. Manuf. Serv. Ind. 2016, 26, 110–126. [Google Scholar] [CrossRef]
  31. Oey, E.; Ngudjiharto, B.; Cyntia, W.; Natashia, M.; Hansopaheluwakan, S. Driving process improvement from customer preference with Kansei engineering, SIPA and QFD methods-a case study in an instant concrete manufacturer. Int. J. Product. Qual. Manag. 2020, 31, 28–48. [Google Scholar] [CrossRef]
  32. Chen, M.-C.; Hsu, C.-L.; Chang, K.-C.; Chou, M.-C. Applying Kansei engineering to design logistics services—A case of home delivery service. Int. J. Ind. Ergon. 2015, 48, 46–59. [Google Scholar] [CrossRef]
  33. Restuputri, D.P.; Indriani, T.R.; Masudin, I. The effect of logistic service quality on customer satisfaction and loyalty using kansei engineering during the COVID-19 pandemic. Cogent Bus. Manag. 2021, 8, 1906492. [Google Scholar] [CrossRef]
  34. Hartono, M. The modified Kansei Engineering-based application for sustainable service design. Int. J. Ind. Ergon. 2020, 79, 102985. [Google Scholar] [CrossRef]
  35. Zeleny, M. MCDM: Past Decade and Future Trends: A Source Book of Multiple Criteria Decision Making; JAI Press: London, UK, 1984. [Google Scholar]
  36. Keeney, R. Decisions with Multiple Objectives: Preferences and Value Trade-Offs; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  37. Sahoo, S.K.; Goswami, S.S. A comprehensive review of multiple criteria decision-making (MCDM) Methods: Advancements, applications, and future directions. Decis. Mak. Adv. 2023, 1, 25–48. [Google Scholar] [CrossRef]
  38. Munier, N.; Hontoria, E. Uses and Limitations of the AHP Method; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  39. Munier, N.; Hontoria, E.; Munier, N.; Hontoria, E. Shortcomings of the AHP Method. In Uses and Limitations of the AHP Method: A Non-Mathematical and Rational Analysis; Springer: Cham, Switzerland, 2021; pp. 41–90. [Google Scholar]
  40. Hsu, C.-J.; Huang, C.-Y. Comparison of weighted grey relational analysis for software effort estimation. Softw. Qual. J. 2011, 19, 165–200. [Google Scholar] [CrossRef]
  41. Shin, Y.B.; Lee, S.; Chun, S.G.; Chung, D. A critical review of popular multi-criteria decision making methodologies. Issues Inf. Syst. 2013, 14, 358–365. [Google Scholar]
  42. Çelikbilek, Y.; Tüysüz, F. An in-depth review of theory of the TOPSIS method: An experimental analysis. J. Manag. Anal. 2020, 7, 281–300. [Google Scholar] [CrossRef]
  43. Wang, T.; Yang, L. Combining GRA with a fuzzy QFD model for the new product design and development of Wickerwork Lamps. Sustainability 2023, 15, 4208. [Google Scholar] [CrossRef]
  44. Cheng, Y.; Zhong, X.; Ye, M.; Tian, L. Usability Evaluation of in-Vehicle AR-HUD Interface Applying AHP-GRA. Hum.-Centric Intell. Syst. 2022, 2, 124–137. [Google Scholar]
  45. Deng, J. Introduction to grey system theory. J. Grey Syst. 1989, 1, 1–24. [Google Scholar]
  46. Wang, P.; Meng, P.; Zhai, J.-Y.; Zhu, Z.-Q. A hybrid method using experiment design and grey relational analysis for multiple criteria decision making problems. Knowl.-Based Syst. 2013, 53, 100–107. [Google Scholar] [CrossRef]
  47. Kuo, Y.; Yang, T.; Huang, G.-W. The use of grey relational analysis in solving multiple attribute decision-making problems. Comput. Ind. Eng. 2008, 55, 80–93. [Google Scholar] [CrossRef]
  48. Wu, H.-H. A comparative study of using grey relational analysis in multiple attribute decision making problems. Qual. Eng. 2002, 15, 209–217. [Google Scholar] [CrossRef]
  49. Singh, M.; Pant, M. A review of selected weighing methods in MCDM with a case study. Int. J. Syst. Assur. Eng. Manag. 2021, 12, 126–144. [Google Scholar] [CrossRef]
  50. Van Laarhoven, P.J.; Pedrycz, W. A fuzzy extension of Saaty’s priority theory. Fuzzy Sets Syst. 1983, 11, 229–241. [Google Scholar] [CrossRef]
  51. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  52. Saaty, T.L. The analytic hierarchy process (AHP). J. Oper. Res. Soc. 1980, 41, 1073–1076. [Google Scholar]
  53. Liu, Y.; Eckert, C.M.; Earl, C. A review of fuzzy AHP methods for decision-making with subjective judgements. Expert Syst. Appl. 2020, 161, 113738. [Google Scholar] [CrossRef]
  54. Herrera, F.; Herrera-Viedma, E.; Chiclana, F. Multiperson decision-making based on multiplicative preference relations. Eur. J. Oper. Res. 2001, 129, 372–385. [Google Scholar] [CrossRef]
  55. Wang, T.-K.; Zhang, Q.; Chong, H.-Y.; Wang, X. Integrated supplier selection framework in a resilient construction supply chain: An approach via analytic hierarchy process (AHP) and grey relational analysis (GRA). Sustainability 2017, 9, 289. [Google Scholar] [CrossRef]
  56. Crozier, R.; Crozier, W.R. Manufactured Pleasures: Psychological Responses to Design; Manchester University Press: Manchester, UK, 1994. [Google Scholar]
  57. Hsiao, K.-A.; Chen, L.-L. Fundamental dimensions of affective responses to product shapes. Int. J. Ind. Ergon. 2006, 36, 553–564. [Google Scholar] [CrossRef]
  58. Francois, M.; Crave, P.; Osiurak, F.; Fort, A.; Navarro, J. Digital, analogue, or redundant speedometers for truck driving: Impact on visual distraction, efficiency and usability. Appl. Ergon. 2017, 65, 12–22. [Google Scholar] [CrossRef]
  59. Nielsen, J. Usability Heuristics for User Interface Design. Available online: https://www.nngroup.com/articles/ten-usability-heuristics/ (accessed on 25 May 2024).
  60. Dreyfuss, H.; Associates, H.D.; Tilley, A.R. The Measure of Man and Woman: Human Factors in Design; Whitney Library of Design: New York, NY, USA, 1993. [Google Scholar]
  61. Kline, P. An Easy Guide to Factor Analysis; Routledge: London, UK, 2014. [Google Scholar]
  62. Wu, F.; Lu, P.; Lin, Y.-C. Research on the Influence of Wheelsets on the Visual Imagery of City Bicycles. Sustainability 2022, 14, 2762. [Google Scholar] [CrossRef]
  63. Buckley, J.J. Fuzzy hierarchical analysis. Fuzzy Sets Syst. 1985, 17, 233–247. [Google Scholar] [CrossRef]
  64. Azzeh, M.; Neagu, D.; Cowling, P.I. Fuzzy grey relational analysis for software effort estimation. Empir. Softw. Eng. 2010, 15, 60–90. [Google Scholar] [CrossRef]
  65. Guo, Y.; Guo, W. Method for Determining the Distinguishing Coefficient in Grey Relational Analysis. Arid Environ. Monit. 1994, 8, 132–135. [Google Scholar]
  66. Deng, J.L. A Course on Grey System Theory; Huazhong University of Science and Technology Press: Wuhan, China, 1990. [Google Scholar]
  67. Mu, R.; Zhang, J. Research of hierarchy synthetic evaluation based on grey relational analysis. Syst. Eng. Theory Pract. 2008, 28, 125–130. [Google Scholar]
  68. Gamst, G.; Meyers, L.S.; Guarino, A. Analysis of Variance Designs: A Conceptual and Computational Approach with SPSS and SAS; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
  69. Blanca, M.J.; Arnau, J.; García-Castro, F.J.; Alarcón, R.; Bono, R. Repeated measures ANOVA and adjusted F-tests when sphericity is violated: Which procedure is best? Front. Psychol. 2023, 14, 1192453. [Google Scholar] [CrossRef]
  70. Zahmat Doost, E.; Zhang, W. The Impact of Different Interruptions on Perceived Stress: Developing a Multimodal Measurement for Early Detection. Int. J. Hum.–Comput. Interact. 2024, 1–21. [Google Scholar] [CrossRef]
  71. Coulombe, D. Two-way ANOVA with and without repeated measurements, tests of simple main effects, and multiple comparisons for microcomputers. Behav. Res. Methods Instrum. Comput. 1984, 16, 397–398. [Google Scholar] [CrossRef]
  72. Merenda, C.; Smith, M.; Gabbard, J.; Burnett, G.; Large, D. Effects of real-world backgrounds on user interface color naming and matching in automotive AR HUDs. In Proceedings of the 2016 IEEE VR 2016 Workshop on Perceptual and Cognitive Issues in AR (PERCAR), Greenville, SC, USA, 19 March 2016. [Google Scholar]
  73. Firth, M. Introduction to Automotive Augmented Reality Head-Up Displays Using TI DLP® Technology; Technical document; Texas Instruments Incorporated: Dallas, TX, USA, 2019. [Google Scholar]
  74. Shirasaki, Y.; Supran, G.J.; Bawendi, M.G.; Bulović, V. Emergence of colloidal quantum-dot light-emitting technologies. Nat. Photonics 2013, 7, 13–23. [Google Scholar] [CrossRef]
  75. Kim, T.; Kim, K.-H.; Kim, S.; Choi, S.-M.; Jang, H.; Seo, H.-K.; Lee, H.; Chung, D.-Y.; Jang, E. Efficient and stable blue quantum dot light-emitting diode. Nature 2020, 586, 385–389. [Google Scholar] [CrossRef]
  76. Chen, X.; Lu, Y.; Hao, G. Balanced Aesthetics: How Shape, Contrast, and Visual Force Affect Interface Layout. Int. J. Hum.–Comput. Interact. 2023, 1–14. [Google Scholar] [CrossRef]
  77. Jordan, P.W. Human factors for pleasure in product use. Appl. Ergon. 1998, 29, 25–33. [Google Scholar] [CrossRef] [PubMed]
  78. Yan, H.-B.; Huynh, V.-N.; Murai, T.; Nakamori, Y. Kansei evaluation based on prioritized multi-attribute fuzzy target-oriented decision analysis. Inf. Sci. 2008, 178, 4080–4093. [Google Scholar] [CrossRef]
  79. Lin, J.-L. Research on the Usability of Mobile Shopping Applications Based on Triangulation Model. National Cheng Kung University. 2021. Available online: https://nckur.lib.ncku.edu.tw/handle/987654321/204702 (accessed on 25 May 2024).
Figure 1. Assessment architecture diagram of this study.
Figure 2. Design proposal for 18 AR dashboards.
Figure 3. Schematic diagram of design proposal switching.
Figure 5. The scree plot of eigenvalues and the number of factors.
Figure 6. The results of the simple effects analysis of the main color within the interaction between the main color and visual effects. Error bars represent +1 SEM. (Notes: * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001).
Figure 7. The results of the simple effects analysis of dial styling within the interaction between visual effects and dial styling. Error bars represent +1 SEM. (Notes: * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001).
Table 2. Value of random indexes (RI).
Matrix size (n) | 3    | 4    | 5    | 6    | 7
RI value        | 0.58 | 0.90 | 1.12 | 1.24 | 1.32
Table 3. Forty most common adjectives for visual imagery.
Visual Imagery Adjectives
Detailed, Dynamic, Rich, Concise, Vivid
Responsive, Cool, Intelligent, Technological, Immersive
Innovative, Secure, Efficient, Futuristic, Aesthetic
Unique, Stunning, Abstract, Visual, Premium
Reliable, Smooth, Gorgeous, Personalized, Attractive
Harmonious, Natural, Interesting, Diverse, Practical
Orderly, Dreamlike, Clear, Sleek, Intuitive
Trustworthy, Grand, Rhythmic, Fresh, Agile
Table 4. The 23 adjectives with factor loadings higher than 0.6.
Adjectives    | Initial | Extraction | Adjectives  | Initial | Extraction
Premium       | 1.000   | 0.803      | Efficient   | 1.000   | 0.722
Natural       | 1.000   | 0.762      | Reliable    | 1.000   | 0.715
Unique        | 1.000   | 0.758      | Intelligent | 1.000   | 0.706
Aesthetic     | 1.000   | 0.758      | Trustworthy | 1.000   | 0.704
Attractive    | 1.000   | 0.751      | Concise     | 1.000   | 0.701
Personalized  | 1.000   | 0.741      | Diverse     | 1.000   | 0.699
Dreamlike     | 1.000   | 0.738      | Stunning    | 1.000   | 0.679
Visual        | 1.000   | 0.736      | Interesting | 1.000   | 0.677
Smooth        | 1.000   | 0.735      | Responsive  | 1.000   | 0.661
Gorgeous      | 1.000   | 0.734      | Innovative  | 1.000   | 0.646
Technological | 1.000   | 0.728      | Rich        | 1.000   | 0.605
Clear         | 1.000   | 0.726      |             |         |
Extraction method: principal component analysis.
Table 5. The KMO and Bartlett test results.
Visual Imagery Adjectives
Kaiser-Meyer-Olkin measure of sampling adequacy | 0.896
Bartlett's sphericity test | Approximate chi-squared | 1949.631
                           | df                      | 253
                           | p                       | <0.001 ***
* Significantly different at α = 0.05 level (p < 0.05). ** Significantly different at α = 0.01 level (p < 0.01). *** Significantly different at α = 0.001 level (p < 0.001).
Table 6. Total variance explained.
| Component | Initial eigenvalues: Total | Variance (%) | Cumulative (%) | Extraction sums of squared loadings: Total | Variance (%) | Cumulative (%) | Rotation sums of squared loadings: Total | Variance (%) | Cumulative (%) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 10.313 | 44.837 | 44.837 | 10.313 | 44.837 | 44.837 | 6.103 | 26.533 | 26.533 |
| 2 | 2.573 | 11.189 | 56.026 | 2.573 | 11.189 | 56.026 | 3.353 | 14.579 | 41.112 |
| 3 | 1.301 | 5.657 | 61.683 | 1.301 | 5.657 | 61.683 | 2.801 | 12.180 | 53.292 |
| 4 | 1.186 | 5.158 | 66.840 | 1.186 | 5.158 | 66.840 | 2.500 | 10.868 | 64.160 |
| 5 | 1.112 | 4.834 | 71.675 | 1.112 | 4.834 | 71.675 | 1.728 | 7.515 | 71.675 |
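The eigenvalue-to-variance bookkeeping in Table 6 follows from the fact that a correlation matrix of p standardized variables has eigenvalues summing to p, so each component explains λ/p × 100 percent of the variance. A short numpy illustration on synthetic data (the random matrix is a stand-in for the 23 rating variables):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 23))            # stand-in for 23 rating variables
R = np.corrcoef(X, rowvar=False)          # 23 x 23 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

# Eigenvalues of a correlation matrix sum to p (= 23 here), so explained
# variance per component is lambda / p * 100; Table 6 retains lambda > 1.
variance_pct = eigvals / eigvals.sum() * 100
cumulative = np.cumsum(variance_pct)
print(round(cumulative[-1], 1))  # -> 100.0
```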
Table 7. The transformed component matrices.
| Adjective | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 |
|---|---|---|---|---|---|
| Unique | 0.841 | 0.157 | 0.132 | 0.022 | 0.093 |
| Dreamlike | 0.829 | 0.097 | 0.070 | 0.139 | 0.129 |
| Gorgeous | 0.781 | 0.198 | −0.060 | 0.279 | 0.058 |
| Innovative | 0.767 | 0.161 | 0.136 | 0.032 | 0.110 |
| Diverse | 0.755 | 0.228 | 0.184 | 0.200 | 0.054 |
| Stunning | 0.751 | 0.200 | 0.028 | 0.195 | 0.190 |
| Personalized | 0.738 | 0.376 | 0.189 | 0.126 | 0.056 |
| Interesting | 0.704 | 0.356 | 0.046 | 0.057 | 0.222 |
| Rich | 0.612 | 0.309 | 0.195 | 0.312 | 0.012 |
| Premium | 0.260 | 0.755 | 0.054 | 0.320 | 0.248 |
| Aesthetic | 0.269 | 0.701 | 0.157 | 0.330 | 0.246 |
| Attractive | 0.312 | 0.698 | 0.236 | 0.322 | 0.084 |
| Technological | 0.403 | 0.692 | 0.262 | −0.133 | −0.018 |
| Intelligent | 0.428 | 0.673 | 0.225 | 0.117 | 0.069 |
| Visual | 0.118 | 0.334 | 0.774 | 0.056 | −0.090 |
| Efficient | 0.212 | 0.078 | 0.740 | 0.315 | 0.157 |
| Clear | 0.201 | 0.098 | 0.730 | 0.370 | 0.084 |
| Concise | −0.053 | 0.155 | 0.716 | 0.029 | 0.400 |
| Trustworthy | 0.047 | 0.227 | 0.285 | 0.743 | 0.132 |
| Responsive | 0.329 | 0.205 | 0.078 | 0.709 | −0.049 |
| Reliable | 0.254 | 0.085 | 0.310 | 0.672 | 0.310 |
| Natural | 0.264 | 0.101 | 0.103 | 0.021 | 0.819 |
| Smooth | 0.175 | 0.238 | 0.242 | 0.339 | 0.689 |
Extraction method: principal component analysis. Rotation method: varimax (maximum variance) with Kaiser normalization.
Table 8. Renamed adjectives and their codes.
| Factor | Adjective group | Factor renaming | Code |
|---|---|---|---|
| 1 | Unique, Dreamlike, Gorgeous, Innovative, Diverse, Stunning, Personalized, Interesting, Rich | Novel and Splendid | N&S |
| 2 | Premium, Aesthetic, Attractive, Technological, Intelligent | Technological and Aesthetic | T&A |
| 3 | Visual, Efficient, Clear, Concise | Visible and Concise | V&C |
| 4 | Trustworthy, Responsive, Reliable | Agile and Reliable | A&R |
| 5 | Natural, Smooth | Smooth and Natural | S&N |
Table 9. AHP weights and consistency test results of experts.
| Expert code | N&S | T&A | V&C | A&R | S&N | CI | CR |
|---|---|---|---|---|---|---|---|
| UI 1 | 0.049 | 0.100 | 0.443 | 0.193 | 0.214 | 0.050 | 0.045 |
| UI 2 | 0.050 | 0.050 | 0.367 | 0.379 | 0.155 | 0.070 | 0.063 |
| UI 3 | 0.056 | 0.485 | 0.255 | 0.107 | 0.097 | 0.086 | 0.076 |
| UE 1 | 0.043 | 0.071 | 0.057 | 0.335 | 0.494 | 0.076 | 0.068 |
| UE 2 | 0.073 | 0.061 | 0.172 | 0.394 | 0.300 | 0.062 | 0.055 |
| UE 3 | 0.055 | 0.173 | 0.239 | 0.318 | 0.216 | 0.039 | 0.035 |
| ER 1 | 0.039 | 0.094 | 0.567 | 0.216 | 0.084 | 0.071 | 0.064 |
| ER 2 | 0.046 | 0.078 | 0.228 | 0.206 | 0.441 | 0.072 | 0.064 |
| ER 3 | 0.129 | 0.245 | 0.499 | 0.083 | 0.045 | 0.069 | 0.062 |
| PM 1 | 0.037 | 0.171 | 0.439 | 0.182 | 0.171 | 0.026 | 0.024 |
| PM 2 | 0.112 | 0.192 | 0.192 | 0.239 | 0.265 | 0.081 | 0.072 |
| PM 3 | 0.046 | 0.079 | 0.166 | 0.355 | 0.355 | 0.060 | 0.053 |
Table 10. FAHP weights of experts.
| Expert code | N&S | T&A | V&C | A&R | S&N |
|---|---|---|---|---|---|
| UI 1 | 0.055 | 0.123 | 0.403 | 0.218 | 0.201 |
| UI 2 | 0.056 | 0.048 | 0.389 | 0.341 | 0.166 |
| UI 3 | 0.071 | 0.449 | 0.264 | 0.126 | 0.090 |
| UX 1 | 0.052 | 0.078 | 0.050 | 0.343 | 0.477 |
| UX 2 | 0.092 | 0.059 | 0.191 | 0.395 | 0.264 |
| UX 3 | 0.062 | 0.217 | 0.252 | 0.302 | 0.168 |
| ER 1 | 0.046 | 0.116 | 0.536 | 0.217 | 0.085 |
| ER 2 | 0.050 | 0.084 | 0.257 | 0.198 | 0.411 |
| ER 3 | 0.137 | 0.254 | 0.460 | 0.097 | 0.052 |
| PM 1 | 0.036 | 0.213 | 0.408 | 0.191 | 0.153 |
| PM 2 | 0.145 | 0.231 | 0.195 | 0.221 | 0.208 |
| PM 3 | 0.050 | 0.085 | 0.187 | 0.366 | 0.312 |
| Geometric mean | 0.065 | 0.132 | 0.260 | 0.232 | 0.180 |
| Normalized weight | 0.074 | 0.152 | 0.299 | 0.268 | 0.207 |
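The last two rows of Table 10 follow from aggregating the twelve experts' individual weights by geometric mean and then normalizing the means to sum to 1. A minimal numpy sketch that reproduces them (the matrix is transcribed from the table above):

```python
import numpy as np

# Individual FAHP weights of the 12 experts (rows) over the five factors
# N&S, T&A, V&C, A&R, S&N (columns), transcribed from Table 10.
W = np.array([
    [0.055, 0.123, 0.403, 0.218, 0.201],
    [0.056, 0.048, 0.389, 0.341, 0.166],
    [0.071, 0.449, 0.264, 0.126, 0.090],
    [0.052, 0.078, 0.050, 0.343, 0.477],
    [0.092, 0.059, 0.191, 0.395, 0.264],
    [0.062, 0.217, 0.252, 0.302, 0.168],
    [0.046, 0.116, 0.536, 0.217, 0.085],
    [0.050, 0.084, 0.257, 0.198, 0.411],
    [0.137, 0.254, 0.460, 0.097, 0.052],
    [0.036, 0.213, 0.408, 0.191, 0.153],
    [0.145, 0.231, 0.195, 0.221, 0.208],
    [0.050, 0.085, 0.187, 0.366, 0.312],
])

# Aggregate by the geometric mean across experts, then normalize to sum to 1.
geo_mean = np.exp(np.log(W).mean(axis=0))
weights = geo_mean / geo_mean.sum()
print(np.round(weights, 3))  # -> [0.074 0.152 0.299 0.268 0.207]
```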
Table 11. Performance scores of 18 proposals.
| Proposal code | N&S | T&A | V&C | A&R | S&N |
|---|---|---|---|---|---|
| Proposal 1 | 4.380 | 4.500 | 5.380 | 4.800 | 4.980 |
| Proposal 2 | 5.220 | 5.260 | 4.840 | 4.600 | 4.740 |
| Proposal 3 | 4.740 | 4.580 | 4.700 | 4.580 | 4.480 |
| Proposal 4 | 4.380 | 4.300 | 4.600 | 4.420 | 4.460 |
| Proposal 5 | 4.460 | 4.420 | 4.340 | 4.280 | 4.420 |
| Proposal 6 | 4.260 | 4.220 | 4.160 | 4.060 | 4.060 |
| Proposal 7 | 3.900 | 3.880 | 4.040 | 3.840 | 3.840 |
| Proposal 8 | 4.080 | 3.920 | 3.860 | 3.880 | 3.900 |
| Proposal 9 | 3.860 | 3.840 | 3.960 | 3.940 | 3.920 |
| Proposal 10 | 4.400 | 4.680 | 5.340 | 4.820 | 4.980 |
| Proposal 11 | 4.820 | 4.820 | 4.440 | 4.220 | 4.260 |
| Proposal 12 | 4.900 | 4.780 | 4.800 | 4.440 | 4.460 |
| Proposal 13 | 4.300 | 4.280 | 4.760 | 4.480 | 4.560 |
| Proposal 14 | 4.440 | 4.280 | 4.340 | 4.180 | 4.100 |
| Proposal 15 | 4.840 | 4.640 | 4.620 | 4.400 | 4.280 |
| Proposal 16 | 4.160 | 4.000 | 4.520 | 4.120 | 4.140 |
| Proposal 17 | 4.220 | 4.060 | 3.920 | 3.740 | 3.580 |
| Proposal 18 | 4.360 | 4.120 | 4.360 | 3.860 | 4.040 |
Table 12. Weighted relational degree of each proposal.
| Proposal code | N&S | T&A | V&C | A&R | S&N | γi | Sequence |
|---|---|---|---|---|---|---|---|
| Proposal 1 | 0.031 | 0.066 | 0.299 | 0.259 | 0.207 | 0.862 | 2 |
| Proposal 2 | 0.074 | 0.152 | 0.158 | 0.194 | 0.146 | 0.725 | 3 |
| Proposal 3 | 0.041 | 0.071 | 0.141 | 0.189 | 0.111 | 0.553 | 5 |
| Proposal 4 | 0.031 | 0.058 | 0.131 | 0.158 | 0.109 | 0.487 | 8 |
| Proposal 5 | 0.033 | 0.063 | 0.110 | 0.138 | 0.105 | 0.449 | 10 |
| Proposal 6 | 0.028 | 0.055 | 0.099 | 0.115 | 0.080 | 0.378 | 13 |
| Proposal 7 | 0.023 | 0.046 | 0.093 | 0.099 | 0.070 | 0.331 | 17 |
| Proposal 8 | 0.025 | 0.046 | 0.085 | 0.102 | 0.072 | 0.331 | 16 |
| Proposal 9 | 0.023 | 0.045 | 0.090 | 0.106 | 0.073 | 0.336 | 15 |
| Proposal 10 | 0.031 | 0.077 | 0.281 | 0.268 | 0.207 | 0.863 | 1 |
| Proposal 11 | 0.044 | 0.087 | 0.117 | 0.131 | 0.092 | 0.472 | 9 |
| Proposal 12 | 0.048 | 0.084 | 0.153 | 0.161 | 0.109 | 0.556 | 4 |
| Proposal 13 | 0.029 | 0.057 | 0.148 | 0.168 | 0.120 | 0.523 | 6 |
| Proposal 14 | 0.032 | 0.057 | 0.110 | 0.127 | 0.082 | 0.409 | 11 |
| Proposal 15 | 0.045 | 0.074 | 0.133 | 0.155 | 0.094 | 0.501 | 7 |
| Proposal 16 | 0.027 | 0.048 | 0.124 | 0.121 | 0.085 | 0.404 | 12 |
| Proposal 17 | 0.028 | 0.050 | 0.088 | 0.093 | 0.061 | 0.319 | 18 |
| Proposal 18 | 0.030 | 0.052 | 0.112 | 0.100 | 0.079 | 0.373 | 14 |
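The weighted relational degrees γi in Table 12 are the weighted sums of grey relational coefficients. The sketch below assumes a larger-the-better reference series (column maxima) and the conventional distinguishing coefficient ζ = 0.5; the paper's exact pre-processing of the scores in Table 11 may differ, so the toy numbers here are illustrative only:

```python
import numpy as np

def grey_relational_degree(X, weights, zeta=0.5):
    """Weighted grey relational degree of each alternative (row of X)
    against the larger-the-better reference series (column maxima)."""
    ref = X.max(axis=0)                    # ideal reference sequence
    delta = np.abs(X - ref)                # deviation sequences
    dmin, dmax = delta.min(), delta.max()  # global extreme deviations
    xi = (dmin + zeta * dmax) / (delta + zeta * dmax)  # relational coefficients
    return xi @ weights                    # weighted relational degree

# Toy example: alternative 0 equals the reference on every criterion, so its
# coefficients are all 1 and its degree equals sum(weights) = 1 exactly.
X = np.array([[5.0, 5.0],
              [4.0, 3.0]])
w = np.array([0.4, 0.6])
degrees = grey_relational_degree(X, w)
print(np.round(degrees, 3))
```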
Table 13. The mixed factorial ANOVA results of subjective preference (after correction).
| Source | SS | df | MS | F | p | η2 | Post hoc |
|---|---|---|---|---|---|---|---|
| Main color | 100.287 | 1.704 | 58.847 | 30.710 | <0.001 *** | 0.239 | Blue > Green > Yellow |
| Visual effect | 0.507 | 1.819 | 0.279 | 0.154 | 0.838 | 0.002 | |
| Dial styling | 4.551 | 1.000 | 4.551 | 0.549 | 0.461 | 0.006 | |
| Main color × visual effect | 7.307 | 3.484 | 2.097 | 3.426 | 0.013 * | 0.034 | |
| Main color × dial styling | 5.016 | 1.704 | 2.943 | 1.536 | 0.220 | 0.015 | |
| Visual effect × dial styling | 10.702 | 1.819 | 5.883 | 3.249 | 0.046 * | 0.032 | |
| Main color × visual effect × dial styling | 1.711 | 3.484 | 0.491 | 0.802 | 0.509 | 0.008 | |
* Significantly different at α = 0.05 level (p < 0.05). ** Significantly different at α = 0.01 level (p < 0.01). *** Significantly different at α = 0.001 level (p < 0.001).