Article

Combining Design Neurocognition Technologies and Neural Networks to Evaluate and Predict New Product Designs: A Multimodal Human–Computer Interaction Study

by Jun Wu 1, Xiangyi Lyu 1, Yi Wang 1, Tao Liu 2, Shinan Zhao 1 and Lirui Xue 3,*

1 Department of Industrial Engineering, School of Economics and Management, Jiangsu University of Science and Technology, Zhenjiang 212100, China
2 School of Management, Shanghai University, Shanghai 200444, China
3 School of Business, Jiangnan University, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(6), 1128; https://doi.org/10.3390/electronics14061128
Submission received: 25 January 2025 / Revised: 5 March 2025 / Accepted: 11 March 2025 / Published: 13 March 2025
(This article belongs to the Special Issue Emerging Trends in Multimodal Human-Computer Interaction)

Abstract

Multimodal data collection encompassing physiological and psychological data, combined with artificial-intelligence-based data processing, has become a research trend in human–computer interaction. During the new product design stage, user experience must be considered when evaluating and predicting new products. This paper presents a human–computer interaction study on new product design with user participation. The research combines design neurocognition with genetic-algorithm-based design optimization to evaluate the usability of engineering control interfaces using eye-tracking and facial expression data, and it applies eye-tracking and neural network techniques to predict user evaluations of humanoid robot appearance designs. The paper thus explores the evaluation and prediction of new product designs using multimodal physiological and psychological data. The results indicate that artificial intelligence technologies, represented by neural networks, can fully exploit biometric data such as eye-tracking and facial expressions, improving the effectiveness of new product evaluation and the accuracy of prediction. The findings provide a solution, based on the combination of design neurocognition and artificial intelligence technology, for the evaluation and prediction of new product designs in the future.

1. Introduction

In the context of intense market competition, enterprises must thoroughly consider user needs and design products with greater appeal to their target audience [1]. Effective user involvement during the product development phase has been shown to significantly enhance new product development performance [2], thereby fostering the adoption of open innovation strategies by enterprises [3]. Within this critical innovation framework, obtaining insights into user perceptions of new products is of paramount importance. For instance, in the development of Xiaomi smartphones, Zeng et al. [4] leveraged user feedback data from the Xiaomi MIUI community to construct a model integrating the quality characteristics of user ideas, contributor attributes, and sentiment features to explain user preferences. Consequently, product success is influenced not only by its functional attributes but also by users’ cognitive and emotional responses [5]. During the open innovation process, it is essential to effectively detect and analyze users’ cognitive and emotional experiences with new products, thereby enabling precise evaluation and prediction to inform product design.
Traditional approaches to evaluating and predicting new products typically rely on user survey data. However, the exclusive use of questionnaire data may be subject to measurement biases such as social desirability bias, the halo effect, and unwillingness to answer [6]. Moreover, questionnaire data fail to capture users’ real-time psychological and physiological states during product interaction, nor do they reveal users’ subconscious thoughts [7]. As a result, overreliance on questionnaire data can lead to the streetlight effect, in which excessive dependence on readily available data creates a risk of backward- and inward-looking myopia. This myopia occurs when changes in user needs and desires, driven by evolving trends, cause past purchasing behaviors to become ineffective in predicting future preferences, leading to a disconnect between marketing data growth and company development [8]. An effective solution to this streetlight effect is to incorporate biometric data into the evaluation and prediction of new products. Biometrics refers to the automatic identification of individuals based on their biological and behavioral characteristics [9]. The application of biometric data has now extended beyond the traditional field of information security to the realm of new product development, introducing elements of “design neurocognition” into conventional design processes [10]. Design neurocognition refers to the integration of brain data, physiological data, and behavioral data during the development of new products, aiming to study the cognitive and emotional processes of users and designers over time, predict user behavior, and analyze data derived from biometric sources such as eye-tracking, facial expressions, and electroencephalography (EEG) [11]. Currently, an increasing number of companies are utilizing biometric data for new product evaluation and prediction. For example, Subaru employs eye-tracking and facial feature data to assess the level of driver distraction [12].
When the design neurocognition approach is employed, vast amounts of biometric data are generated during the evaluation and prediction of new products. Analyzing these data and presenting the results pose a significant challenge to the development of new products. With the advancement of artificial intelligence technologies, the efficiency of processing biometric data has been greatly enhanced. This is particularly evident in the evaluation of new products, where pattern recognition and classification of user facial expression data are employed, and in the prediction of new products, where machine learning and prediction based on user eye-tracking data are applied. These AI-driven methods represent a substantial improvement in accuracy compared to traditional statistical approaches [13].
In the field of research on new product evaluation and prediction, the predominant focus has been on the cognitive dimensions of users. Wang et al. [14] conducted both qualitative and quantitative assessments of interface layout on human–computer multi-interfaces, using regression equations for quantitative evaluation and eye-tracking data for qualitative analysis. In the field of transportation systems development and evaluation, Liu et al. [15] combined the K-nearest neighbors (KNN) algorithm, used for identifying physiological feature data, with the corresponding ensemble classifier. They developed an adaptive KNN-ensemble pilot workload detection model to assess pilots’ workload. Li et al. [16] proposed a new four-stage framework that uses neural networks to analyze spatial and temporal gaze patterns in eye-tracking data, assessing the vigilance levels of traffic controllers. He et al. [17] applied various machine learning models to classify eye-tracking and other data, thereby assessing drivers’ cognitive load. The aforementioned studies utilized artificial intelligence techniques to analyze biometric data, but they primarily focused on evaluating and predicting users’ cognitive load. This paper integrates facial expressions and eye-tracking metrics, aiming not only to assess users’ workload from a cognitive perspective but also to explore the user experience of a product from an emotional perspective.
In conclusion, to succeed in market competition, enterprises must ensure active user participation in the new product development process, thus realizing open innovation. A key aspect of implementing this corporate strategy is conducting user experience studies for new products. By identifying users’ cognitive and emotional patterns during the product experience, enterprises can accurately assess and predict new product design proposals. Previous research predominantly relied on subjective questionnaire survey data. However, with advancements in technology, the accessibility and acceptability of biometric data have matured, making it feasible to apply the “design neurocognition” approach—integrating brain data, physiological data, and behavioral data—to new product evaluation and prediction. In the face of vast amounts of biometric data, artificial intelligence technologies effectively address the challenges of data analysis and result presentation. Therefore, this study will adopt the design neurocognition approach to conduct both evaluation and prediction research for new products. The first study, on new product evaluation, focuses on usability assessment. According to the definition provided by the International Organization for Standardization (ISO 9241-11), usability refers to the effectiveness, efficiency, and satisfaction with which specific users achieve specific goals in a particular task environment using a given product [18]. In his classic work on usability, Nielsen set five usability goals: (1) learnability, (2) efficiency of use once the system has been learned, (3) the ability of infrequent users to return to the system without having to learn it all over, (4) the frequency and seriousness of user errors, and (5) subjective user satisfaction [19]. Through a systematic literature study of 790 usability-related documents, Weichbroth identified key usability attributes, including efficiency, satisfaction, effectiveness, learnability, memorability, cognitive load, errors, simplicity, and ease of use [20]. In addition to the traditional NASA-TLX cognitive load questionnaire [21] for the subjective assessment of usability, eye-tracking technology is employed in human–computer interaction research to measure efficiency and effectiveness in usability evaluation [22], while facial expression analysis technology is used to assess user satisfaction [23]. In this study, we take the usability evaluation of engineering control interfaces as an example. Building upon the optimization of the control interface using a genetic algorithm, facial expression and eye-tracking data are collected during users’ interaction with the control interface. These biometric data are then analyzed using machine learning techniques and statistical methods, alongside the cognitive load scale, to assess the usability of the designed control interface. For the second study on new product prediction, we focus on evaluating the appearance design of robots. Eye-tracking data are collected while users view a series of robot appearance designs. Neural networks are employed to explore the relationship between the eye-tracking data and subjective evaluations, enabling the prediction of user assessments of new robot appearance designs. Through these two studies, we aim to reveal the operational mechanism of artificial intelligence technologies in new product evaluation and prediction, driven by biometric data. This will form a working model of new product evaluation and prediction driven by multimodal biometric data, combining facial expression analysis and eye-tracking, with the aim of enabling enterprises to effectively implement open innovation strategies from the perspective of user experience.
This paper is structured as follows: Section 2 presents a theoretical analysis, discussing facial expression analysis and eye-tracking based on artificial intelligence technologies, which lays the theoretical foundation for the subsequent new product evaluation and prediction research using these techniques. Section 3 explores new product evaluation based on facial expression analysis and eye-tracking, taking the usability evaluation of engineering control interfaces as an example. Section 4 focuses on new product prediction based on eye-tracking, using robot appearance design evaluation as an example. Section 5 summarizes the research conclusions and significance, and discusses the limitations of the study and future research directions.

2. Theoretical Background

2.1. Facial Expression Analysis Based on Artificial Intelligence Technology

Facial recognition systems are biometric, data-driven artificial intelligence systems that identify individuals by analyzing patterns in facial texture and shape. The field of computer-based facial recognition originated in the mid-1960s, with Bledsoe manually identifying facial landmarks on photographs, such as mouth width and eye width, and using these landmarks for template matching. Subsequently, Kanade [24] developed a system capable of automatically extracting facial landmarks, creating the first fully automated facial recognition system.
To date, numerous facial expression analysis systems have been developed. This study employs an AI-based facial expression analysis system, FaceReader, developed by VICAR Vision, to analyze users’ facial expressions while interacting with a control interface. FaceReader operates based on Professor Ekman’s facial action coding system (FACS) and integrates the active appearance model (AAM) proposed by Cootes and Taylor [25] with deep neural network (DNN) models, combining these two approaches to identify and interpret emotional valence and arousal extracted from facial expressions. The analysis process consists of two main steps: first, AAM is used to create an accurate 3D model of the face. FaceReader’s AAM model, trained on a large image database, can extract 500 key points of facial texture and shape from images. Second, the extracted key point information is input into an artificial neural network for recognition and analysis. The neural network, pre-trained on a large image database, refines the emotional valence and arousal based on the key points provided by AAM. Furthermore, when the face is partially occluded, the DNN model is activated to directly detect and analyze the face, thereby improving the accuracy of the analysis. The entire facial expression analysis process is illustrated in Figure 1.
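To illustrate the general idea of the second step (facial key points in, affect estimates out), the sketch below shows a minimal landmark-to-affect regressor in PyTorch. This is not FaceReader's proprietary model: the layer sizes, the Tanh output range, and the placeholder landmark tensor are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Minimal sketch of a landmark-to-affect regressor (illustrative only; not
# FaceReader's implementation). Assumes 500 (x, y) facial key points have
# already been extracted by an appearance model for each video frame.
N_KEYPOINTS = 500

affect_net = nn.Sequential(
    nn.Linear(N_KEYPOINTS * 2, 256),  # flattened key-point coordinates
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 2),                 # outputs: [valence, arousal]
    nn.Tanh(),                        # assumed output range of [-1, 1]
)

# Example: one frame's key points -> predicted valence and arousal
frame_keypoints = torch.rand(1, N_KEYPOINTS * 2)   # placeholder landmark data
valence, arousal = affect_net(frame_keypoints).squeeze().tolist()
print(f"valence={valence:.2f}, arousal={arousal:.2f}")
```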
Based on the aforementioned AI-driven facial expression analysis, the system ultimately outputs two key indicators of users’ facial expressions: “emotional valence” and “arousal.” Emotional valence refers to the classification of emotional states as positive (pleasant) or negative (unpleasant), while arousal indicates the level of activation ranging from calm to excited [26]. AI-based facial expression analysis has been applied across various fields in management studies. In usability research on control interfaces, Xu et al. [27] utilized facial expression data to evaluate users’ perceptions of different in-vehicle information systems. In the field of business ethics research, Zhang and Gong et al. [28] utilized the “Anger” index from facial expression data to verify that investors exhibit different emotional responses to the misconduct of companies with varying levels of past corporate social responsibility (CSR). Research using facial expression analysis is most prevalent in the field of food consumption. For instance, facial expression data have been employed to evaluate consumers’ emotional responses to chocolate [29], olive oil [30], and beer [31,32].
In summary, databases constructed from facial expressions, when analyzed using artificial intelligence techniques, can effectively reflect users’ emotional responses to evaluated products or behaviors. This research method has been widely applied across various fields in management studies. In this paper, the first study on new product evaluation will take the usability assessment of engineering control interfaces as an example. Facial expression data will be collected while users interact with different control interfaces. Machine learning techniques will then be employed to analyze this data. Additionally, the study will incorporate eye-tracking and cognitive load scales to comprehensively evaluate the usability of the designed control interfaces.

2.2. Eye-Tracking Based on Artificial Intelligence Technology

Eye-tracking is another biometric technology and has recently become a psychophysiological method for detecting users’ visual attention during new product testing [33]. This technology relies on the eye-mind hypothesis, which posits that the location of eye fixation corresponds to the focus of visual attention, and visual attention reflects psychological attention. Moreover, patterns of visual attention provide insight into individuals’ cognitive strategies [34,35]. Eye-tracking enables researchers to study areas of interest (AOIs) in product displays, where users exhibit longer fixation durations and higher fixation frequencies upon viewing these regions [36].
The integration of biometric technologies into new product development aims primarily to effectively predict user adoption behaviors. A typical study integrated various biometric data with artificial intelligence-based machine learning techniques to develop user choice models, successfully predicting user decision-making behaviors. The findings revealed that eye-tracking technology outperformed facial expression analysis and EEG in explaining and predicting user choices [37]. This study will also employ eye-tracking experiments and utilize artificial neural network (ANN) techniques to predict user behavior regarding new products. ANNs are computational models that simulate the structure of human brain neurons. They consist of a network of interconnected artificial neurons and operate by mimicking the way biological neurons receive, process, and transmit information through sophisticated learning and optimization algorithms. This AI approach is highly efficient in performing tasks such as data perception, recognition, and analysis. In this study, the focus will be on the intrinsic relationship between eye-tracking data and subjective aesthetic evaluation. To this end, a neural network model will be constructed, leveraging the self-learning and self-evolving capabilities of ANNs. This model will enable a deep understanding of how eye-tracking data influence subjective evaluations, thereby facilitating the analysis and prediction of users’ subjective aesthetic judgments.
AI-driven eye-tracking analysis has been applied across various fields. Singh et al. [38] combined fixation data from eye-tracking with artificial intelligence to construct a predictive model for recognizing human intentions, successfully forecasting interpersonal interaction behaviors. Similarly, Smith et al. [39] captured users’ fixation patterns using an eye-tracking system as they performed a series of visual decision-making tasks. By integrating these patterns with self-reported confidence scales and applying deep learning methods, they developed a predictive model demonstrating that users’ decision confidence could be accurately inferred from their eye-tracking behavior. Additionally, Sharma et al. [40] illustrated how integrating eye-tracking, facial expressions, and EEG data with machine learning methods provides optimal predictions of cognitive effort and learning performance for students in adaptive learning environments.
In summary, eye-tracking data collected through eye-tracking technology can be effectively analyzed using artificial intelligence techniques to predict user behaviors toward tested products. This research approach has been widely applied across various fields. In the second study of this paper on new product prediction, the evaluation of robot appearance design will serve as an example. Eye-tracking data will be collected as users observe a series of robot appearance designs. Neural networks will then be employed to explore the relationship between eye-tracking data and subjective evaluations, enabling the prediction of user assessments for newly designed robot appearances.

3. Study 1: Evaluation of New Products Based on Facial Expression Analysis and Eye-Tracking

3.1. Research Context

The operation of modern industrial equipment increasingly relies on multi-touch control panels, which act as crucial intermediaries in human–computer interaction. These interfaces harness users’ perceptual and motor abilities to enable interactions with computer programs, including command retrieval and execution, parameter adjustment, information search, and multitasking [41]. With increasing functional demands and higher user experience expectations, the number of tasks and controls integrated into these interfaces has risen significantly. The configuration and layout of elements within a control interface have become both a challenge in design and a critical factor in evaluating the quality of user experience [42].
The evaluation of product usability involves measuring users’ emotions, with facial expression data serving as an indicator of emotional parameters. The evaluation of facial appearance has long been a key topic in emotion research [43], with the classic circumplex model of affect embedded in FaceReader, the tool used in this study [44]. Facial expressions, as effective biometric indicators of emotion, can objectively reflect consumers’ emotional states during the decision-making process [45]. Although studies have shown that heart rate in physiological measurements is influenced by the parasympathetic nervous system (PNS) and the sympathetic nervous system (SNS) and is used to assess emotions [46], it can only reflect arousal in emotion models but not valence [47]. Additionally, behavioral experiments such as think-aloud protocols can also be used to study users’ emotions [48]. However, in this study, subjects are not required to verbalize while interacting with the control interface. Therefore, facial expressions are used as biometric indicators for emotion measurement.
This study focuses on the control interface for the remote operation of disaster-relief robots. Addressing interface layout optimization, usability experiments will compare two versions of the interface—before and after optimization—from the user’s perspective. Objective data, including facial expression and eye-tracking data, along with subjective data from cognitive load scales, will be used to assess the usability of the product.

3.2. Control Interface Design and Optimization

The design of the control interface for the remote operation of disaster-relief robots should ensure accurate operational precision, high efficiency, ease of learning and use, and low cognitive workload for the operator [49,50]. To meet these design requirements, the interface was developed based on Sanders and McCormick’s [51] principles for arranging interface elements in order of use, as well as Wickens and Carswell’s [52] proximity compatibility principle. Additionally, the design adheres to the human factors standards outlined in ISO 11064-5 for control center interfaces [53].
Due to the necessity of adhering to multiple design principles and international standards, the control interface design requires decision making that balances interrelated guidelines, human factors, and technical objectives. Consequently, the interface was manually coded by developers in software, relying on professional expertise. While this approach—based on various design norms and professional experience—represents the current standard in engineering control interface design, it often fails to comprehensively address all key design considerations. As a result, the designed interface may fall short in terms of functionality, usability, recognizability, user-friendliness, and learnability.
Genetic algorithms (GAs) have been widely applied to interface optimization, thereby enhancing human performance [54,55]. Chou et al. [56] applied a genetic algorithm to improve interface layout and validated its effectiveness in optimization design through a behavioral experiment with 30 participants. Studies have shown that genetic algorithms used in interface layout optimization can enhance operational efficiency and subjective preference. Lu et al. [57] compared the effectiveness of a genetic algorithm and ant colony algorithm in interface layout optimization. The results indicated that the interface optimization based on the genetic algorithm outperformed that based on the ant colony algorithm.
GAs are optimization algorithms that simulate biological evolution to optimize existing structures or systems. The algorithm encodes the solution to the objective function as “chromosomes” within genes, and then iteratively optimizes the solution through operations such as selection, reproduction, crossover, and mutation, inspired by natural evolutionary processes. The basic workflow of the genetic algorithm is illustrated in Figure 2.
To apply a genetic algorithm to interface layout optimization, the first step is encoding the buttons to be arranged. This study employs real number encoding in its optimization model. Each button related to disaster rescue robot functions is assigned a unique numerical code. The buttons are encoded using numbers from 1 to 19, which correspond to the genetic algorithm’s encoding scheme, as shown in Table 1. The decoding process works in reverse: each button code in a chromosome is mapped back to the interface position given by its place in the sequence, thus completing the decoding process.
As shown in Figure 2, the genetic algorithm generates new individuals through crossover and mutation operations, while preserving superior individuals through selection. For the selection method, tournament selection is employed. Specifically, in a population P, four individuals (P1, P2, P3, P4) are randomly chosen. These individuals are evaluated based on their objective functions, and the one with the optimal value is selected.
Random gene crossover operations on parent individuals can potentially produce superior offspring. This optimization model uses the POX (precedence operation crossover) operator for crossover operations. The POX operator randomly divides the button encodings into two parts, J1 and J2 [58]. The offspring C1 and C2 inherit the J1 genes, in their original positions, from their respective parents (Parent 1 and Parent 2), while the remaining gene positions are filled with the J2 genes taken from the other parent, respectively. A representative example illustrating this mechanism can be found in Figure 3.
For mutation, this study uses an exchange mutation operator, which randomly selects and swaps several genes between chromosomes of two individuals from the population to generate new offspring [58]. This operation serves multiple purposes in the genetic algorithm: it enhances local search capabilities, maintains population diversity, and effectively prevents premature convergence during the iteration process. A representative example illustrating this mechanism can be found in Figure 4.
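To make these operators concrete, the following Python sketch (a minimal illustration, not the authors' implementation) places tournament selection, POX crossover, and an exchange-style mutation into the loop of Figure 2. The function names, population size, generation count, and probabilities are illustrative assumptions; the mutation is sketched in its common within-chromosome swap form; and the placeholder objective should be replaced with the weighted layout objective of Equation (3).

```python
import random

BUTTON_CODES = list(range(1, 20))  # real-number encoding of the 19 buttons (Table 1)

def tournament_select(population, fitness, k=4):
    """Tournament selection: draw k individuals at random and keep the best
    (lowest objective value, since the layout objective is minimized)."""
    contestants = random.sample(range(len(population)), k)
    best = min(contestants, key=lambda idx: fitness[idx])
    return population[best][:]

def pox_crossover(parent1, parent2):
    """POX crossover: button codes are randomly split into sets J1 and J2.
    Each child keeps the J1 genes (and their positions) from one parent and
    fills the remaining positions with the J2 genes in the order they appear
    in the other parent."""
    j1 = set(random.sample(BUTTON_CODES, len(BUTTON_CODES) // 2))

    def build_child(keep_parent, fill_parent):
        child = [g if g in j1 else None for g in keep_parent]
        fill_genes = iter(g for g in fill_parent if g not in j1)
        return [g if g is not None else next(fill_genes) for g in child]

    return build_child(parent1, parent2), build_child(parent2, parent1)

def swap_mutation(individual, n_swaps=2):
    """Exchange mutation (sketched as the common within-chromosome variant):
    randomly pick gene positions and swap them."""
    mutant = individual[:]
    for _ in range(n_swaps):
        i, j = random.sample(range(len(mutant)), 2)
        mutant[i], mutant[j] = mutant[j], mutant[i]
    return mutant

def evolve(fitness_fn, pop_size=50, generations=200, p_cross=0.8, p_mut=0.1):
    """Basic GA loop following Figure 2; fitness_fn should implement the
    weighted layout objective of Equation (3)."""
    population = [random.sample(BUTTON_CODES, len(BUTTON_CODES)) for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [fitness_fn(ind) for ind in population]
        next_gen = []
        while len(next_gen) < pop_size:
            p1 = tournament_select(population, fitness)
            p2 = tournament_select(population, fitness)
            c1, c2 = pox_crossover(p1, p2) if random.random() < p_cross else (p1, p2)
            next_gen += [swap_mutation(c) if random.random() < p_mut else c for c in (c1, c2)]
        population = next_gen[:pop_size]
    return min(population, key=fitness_fn)

# Example with a placeholder objective (replace with Equation (3)):
best_layout = evolve(lambda ind: sum(abs(pos - code) for pos, code in enumerate(ind, 1)))
print(best_layout)
```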
In layout design, the arrangement of buttons should minimize the interaction cost between buttons [59]. Therefore, in the genetic algorithm, this cost should be incorporated into the minimization fitness function for optimization, as shown in Equation (1). In this equation, C is defined as the interaction cost, n represents the total number of buttons on the interface, dij denotes the distance between the center points of button i and button j, and cij represents the number of task transitions between button i and button j. The equation implies that when users follow the task sequence to locate relevant buttons on the interface, buttons designed to be placed together will result in the shortest search path to find the corresponding buttons.
C = \min \sum_{i=1}^{n} \sum_{j=1}^{n} d_{ij} c_{ij},  (1)
In the process of optimizing interface design, the search path length alone should not be the sole criterion for evaluating the quality of the design. It is also essential to consider the corresponding design principles, such as the interface structure and implementation principles, functionality principles, and usage sequence principles. The score for interface design principles is defined as P, where Pi represents the score of the i-th principle. The optimization objective is to maximize this score, as shown in Equation (2). In this equation, m represents the number of principles considered.
P = \max \sum_{i=1}^{m} P_i,  (2)
In summary, this issue represents a multi-objective optimization problem, which is inherently complex and has been extensively studied in academia, with various methods available. In this optimization model, the weighted sum method is employed to transform the multi-objective optimization problem into a single-objective optimization problem. The final objective function of the optimization model is expressed in Equation (3), where M1 and M2 are the weight values.
y = \min \left( M_1 \sum_{i=1}^{n} \sum_{j=1}^{n} d_{ij} c_{ij} - M_2 \sum_{i=1}^{m} P_i \right),  (3)
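As an illustration of how Equations (1)–(3) combine into a single objective for the genetic algorithm, the sketch below evaluates one candidate layout. The slot coordinates, transition counts, principle scores, and the weights M1 = 0.7 and M2 = 0.3 are placeholder assumptions rather than values from the study; in real use the principle scores P_i would also be evaluated for the candidate layout rather than fixed.

```python
import numpy as np

def layout_objective(layout, positions, transitions, principle_scores, m1=0.7, m2=0.3):
    """Weighted single-objective value of Equation (3) for one candidate layout.

    layout           -- permutation of button codes; layout[k] is the button at slot k
    positions        -- dict: interface slot index -> (x, y) centre coordinates
    transitions      -- c_ij: transitions[i][j] = task transitions between buttons i and j
    principle_scores -- P_i values for the design principles (Equation (2) term)
    m1, m2           -- weights M1 and M2 (illustrative assumptions)
    """
    slot_of = {button: slot for slot, button in enumerate(layout)}
    n = len(layout)
    interaction_cost = 0.0                      # Equation (1) term
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            d_ij = np.linalg.norm(np.subtract(positions[slot_of[i]], positions[slot_of[j]]))
            interaction_cost += d_ij * transitions[i - 1][j - 1]
    principle_score = sum(principle_scores)     # Equation (2) term
    return m1 * interaction_cost - m2 * principle_score

# Toy usage with placeholder data for a 19-button interface
rng = np.random.default_rng(0)
positions = {k: (k % 5, k // 5) for k in range(19)}          # 5-column grid of slots
transitions = rng.integers(0, 4, size=(19, 19))              # hypothetical c_ij counts
principle_scores = rng.uniform(0, 1, size=5)                 # hypothetical P_i values
print(layout_objective(list(range(1, 20)), positions, transitions, principle_scores))
```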
The genetic algorithm was used to optimize the control interface, resulting in the design of a new control interface for the remote operation of disaster-relief robots. Figure 5a shows the pre-optimization control interface for the remote operation of the disaster-relief robot, while Figure 5b displays the post-optimization control interface.

3.3. Procedure

The experiment was conducted in a soundproof, well-lit, and thermally comfortable human factors laboratory. The subjects were randomly divided into two groups, with 30 subjects in each group, based on the two interface designs. Upon arrival at the laboratory, the experimenter checked the subjects’ visual conditions to exclude those with severe astigmatism that would hinder eye-tracking calibration. After being briefed on the experimental content, subjects signed an informed consent form and were randomly assigned to one of the two groups. The experimenter then provided a detailed explanation of the control interface’s purpose, the functionality of each button, and the operation process, ensuring that the subjects fully understood both the pre-experiment and the formal experimental tasks. Once the subjects comprehended the experimental tasks, they began the eye-tracking experiment.
After entering the laboratory, as shown in Figure 6, the subject sat comfortably in front of the eye-tracker, with a camera positioned above the eye-tracker to capture the subject’s face. The subject was required to complete the eye-tracker calibration (5 points), and only those with an eye position error of less than 0.5° were allowed to proceed with the eye-tracking experiment. The pre-experiment phase was first conducted with the purpose of familiarizing the subjects with the experimental process. No eye-tracking data were recorded during this phase. Afterward, the formal experiment began. During the experiment, it was ensured that the facial image captured by the camera remained clear. The experimental task involved controlling a robot to use its mechanical arm for tasks in a disaster area, and the subject was required to click the operational buttons according to the disaster response task. Prior to the task, there was a 2-s black cross on a gray background to ensure that the subject’s gaze was focused on the center of the screen. After the task was completed, eye-tracking and facial expression sampling stopped, and the subject filled out the NASA-TLX cognitive load questionnaire. Upon completion of the questionnaire, the experiment ended. The experimental procedure is illustrated in Figure 7.

3.4. Subjects

A total of 60 subjects aged between 18 and 25 were recruited for the experiment, including 41 males and 19 females, with an average age of 21.28 years (SD = 1.09). All subjects were right-handed, healthy, with no history of neurological or psychiatric disorders, and had not used sedatives, hypnotics, or psychoactive substances within 24 h before the experiment. All subjects had normal or corrected-to-normal vision and passed the eye-tracking calibration. Subjects who completed the experiment successfully received course credits. The study followed the Declaration of Helsinki, and all subjects signed an informed consent form prior to participation.

3.5. Apparatus

The experiment utilized an SMI-iView RED eye tracker with an accuracy of 0.4° and a sampling rate of 250 Hz. Subjects viewed the experimental stimuli binocularly, but only the right eye’s gaze data were recorded. The stimuli were presented using iView Experiment Center software 3.5 on a 23.8-inch monitor with a refresh rate of 60 Hz and a resolution of 1920 × 1080. Eye-tracking calibration was conducted using a five-point calibration method, maintaining an accuracy error within 0.5°. Gaze data were analyzed using BeGaze 3.5 software. Additionally, a Logitech 1080P camera recorded the subjects’ facial expressions during task execution.

3.6. Results and Discussion

3.6.1. Facial Expression Data

As shown in Figure 8, facial expression analysis was conducted using the AI-based facial expression analysis system (FaceReader, Wageningen, The Netherlands). The process began with extracting facial information from the video, followed by constructing a facial model with 500 data points. Artificial neural networks were then employed for machine learning-based classification, utilizing Ekman’s facial action coding system as the basis. This analysis ultimately provided the emotional valence of users while interacting with the control interface.
Descriptive statistical analysis was conducted on the valence values of the collected facial expression data, as shown in Table 2. The Shapiro–Wilk test was used to assess normality, and the valence values of both groups followed a normal distribution. Therefore, an independent sample t-test was applied to evaluate the differences, as presented in Table 3. Cohen’s d was used to measure the effect size, with values greater than 0.8 indicating a large effect size. Figure 9 illustrates the comparison of emotional valence data between the pre- and post-optimization control interfaces.
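A minimal sketch of this statistical pipeline (normality check, parametric or non-parametric group comparison, and Cohen's d with a pooled standard deviation) is shown below using SciPy; the same procedure applies to the eye-tracking metrics in Section 3.6.2. The valence samples are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def compare_groups(pre, post, alpha=0.05):
    """Shapiro-Wilk normality check, then an independent-samples t-test
    (or Mann-Whitney U if either group is non-normal), plus Cohen's d."""
    _, p_pre = stats.shapiro(pre)
    _, p_post = stats.shapiro(post)
    if p_pre > alpha and p_post > alpha:
        test_name, result = "independent t-test", stats.ttest_ind(pre, post)
    else:
        test_name, result = "Mann-Whitney U", stats.mannwhitneyu(pre, post)
    statistic, p_value = result
    # Cohen's d with a pooled standard deviation
    pooled_sd = np.sqrt(((len(pre) - 1) * np.var(pre, ddof=1) +
                         (len(post) - 1) * np.var(post, ddof=1)) /
                        (len(pre) + len(post) - 2))
    d = (np.mean(pre) - np.mean(post)) / pooled_sd
    return test_name, p_value, d

# Placeholder valence samples for the two interface groups (n = 30 each)
rng = np.random.default_rng(1)
pre_valence = rng.normal(-0.05, 0.10, 30)
post_valence = rng.normal(0.08, 0.10, 30)
print(compare_groups(pre_valence, post_valence))
```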
The results indicated a significant difference in users’ emotional valence, as reflected in their facial expressions, between the use of the pre-optimized and post-optimized control interfaces (p < 0.05). This finding suggests that the optimized interface design elicits more positive emotions from users, thereby providing evidence from an affective psychology perspective that the optimized control interface offers a superior user experience.

3.6.2. Eye-Tracking Data

The heatmaps (Figure 10) and gaze plots (Figure 11) from the eye-tracking analysis demonstrate that compared to the pre-optimized control interface, the optimized control interface exhibits smaller hotspot areas and shorter gaze trajectories. This indicates that users require less visual attention and experience higher cognitive fluency when completing the same tasks with the optimized control interface.
The eye-tracking metrics included fixation duration, fixation count, total scan path length, and average scan path length. Descriptive statistical analysis of the four eye-tracking metrics is shown in Table 4. The Shapiro–Wilk test was performed for normality. The results indicate that only the total scan path length data do not meet the assumption of normality for both pre- and post-optimized groups. Therefore, the Mann–Whitney U test was used for the total scan path length, while an independent samples t-test was applied to the fixation duration, fixation count, and average scan path length. The results of the significance tests are shown in Table 5. Cohen’s d was used to assess the effect size, with Cohen’s d greater than 0.8 for all four metrics, indicating a large effect size. Figure 12 presents a comparison of the fixation duration, fixation count, total scan path length, and average scan path length data between pre- and post-optimization.
The results indicated that all four eye-tracking metrics—fixation duration, fixation count, total scan path length, and average scan path length—showed significant differences. The fixation count, total scan path length, and average scan path length all had p-values less than 0.001, while fixation duration had a p-value of 0.001. The p-value of <0.001 suggests that the relevant eye-tracking metrics differ significantly between the pre- and post-optimization control interfaces. Based on the descriptive statistics, the results demonstrated that the eye-tracking data for the optimized interface are significantly lower than for the pre-optimization interface. This suggests that subjects using the optimized control interface exhibit shorter fixation times and more concentrated visual cognitive processing areas. The eye-tracking data confirm the superiority of the optimized control interface.

3.6.3. Behavioral Data

Behavioral data were assessed using the NASA-TLX cognitive load questionnaire to evaluate subjects’ cognitive workload during the experimental tasks. The behavioral data encompassed the six NASA-TLX dimensions (mental demand, physical demand, temporal demand, effort, performance, and frustration level) together with task completion time, yielding seven measures in total. Descriptive statistical analysis of the data is shown in Table 6. Reliability and validity tests were conducted, yielding a Cronbach’s alpha of 0.913, indicating high reliability, and a KMO coefficient of 0.877. The Shapiro–Wilk test was used to check for normality across the seven measures. None of the measures satisfied the requirement for normal distribution in both groups; therefore, non-parametric tests were employed for the significance tests. The results of the significance tests are shown in Table 7. Figure 13 compares the data across NASA-TLX cognitive load questionnaire dimensions between the pre-optimization and post-optimization interfaces, while Figure 14 provides a comparison of task completion times.
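For reference, Cronbach's alpha can be computed directly from the item-score matrix as sketched below; the NASA-TLX responses used here are synthetic placeholders rather than the collected data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder NASA-TLX responses: 60 subjects x 6 questionnaire dimensions
rng = np.random.default_rng(2)
base = rng.normal(50, 15, size=(60, 1))
tlx = np.clip(base + rng.normal(0, 5, size=(60, 6)), 0, 100)
print(round(cronbach_alpha(tlx), 3))
```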
The results of the behavioral data analysis indicated significant differences across all seven measures between the pre- and post-optimization conditions. For the six measures other than physical demand, p < 0.001; for physical demand, p < 0.01. Additionally, the descriptive statistics show that the NASA-TLX cognitive load questionnaire scores after optimization are significantly lower than those before optimization, suggesting reduced cognitive load when using the optimized control interface. The comparison of task completion times further demonstrates that the optimized control interface significantly improves work efficiency.
In summary, the results of Study 1 demonstrate that for evaluating new products, such as engineering control interfaces, the integration of biometric data from facial expressions and eye-tracking, combined with subjective cognitive load assessments using the NASA-TLX cognitive load questionnaire, can effectively assess product usability when analyzed using machine learning techniques and statistical methods.

4. Study 2: New Product Prediction Based on Eye-Tracking

4.1. Research Context

The user experience of a product can be divided into two value dimensions: pragmatic value and hedonic value. Pragmatic value refers to the cognitive assessment of a product’s functionality, while hedonic value pertains to the emotional experience of using the product [60]. The appearance of a product significantly influences consumers’ emotional experience and serves as a critical aspect of its hedonic value. Consequently, the quality of a new product’s appearance design is a key factor in determining whether it gains customer favor and captures market share [61]. With the rise of artificial intelligence (AI), the embodied application of AI has become a primary focus, particularly in service robots [62]. Anthropomorphism, as one of the key features representing the appearance of robots, is central to the acceptance of AI, especially in the domain of service robots. Anthropomorphic robots are often perceived as possessing certain social capabilities, allowing users to develop empathetic experiences towards them [63]. This study focuses on robots with anthropomorphic characteristics and addresses the issue of product appearance design. Eye-tracking data were collected as subjects viewed a series of robot appearances. A neural network was then employed to explore the relationship between the eye-tracking data and subjective evaluations, enabling predictions of user assessments for newly designed robot appearances.

4.2. Procedure

The experiment was conducted in a human factors laboratory with appropriate soundproofing, lighting, and temperature. Upon entering the lab, subjects were asked about their visual condition to exclude individuals with severe astigmatism that could hinder eye-tracking data collection. The experimenter provided a detailed explanation of the experimental tasks, and subjects signed informed consent forms after fully understanding the procedures. The experimental session then commenced.
After entering the laboratory, as shown in Figure 15, subjects sat comfortably in front of the display screen, with their chin positioned on a rest 65 cm away from the screen. A pre-experiment was conducted before the formal experiment to familiarize subjects with the experimental tasks, during which no eye-tracking data were collected. In the formal experiment, subjects observed 19 images of robot appearances for 8 s each. Before each image, a blank screen was displayed for 3 s to eliminate any residual impression from the previous image, followed by a fixation cross displayed for 2 s to center the subjects’ gaze on the screen. The order of image presentation was randomized. After viewing each image, the subjects rated the robot’s appearance on a 10-point scale, where 1 indicated a very poor design and 10 indicated an excellent design. The robot images were converted to grayscale to minimize the influence of color differences on subjects. The experimental procedure is shown in Figure 16.

4.3. Subjects

A total of 15 subjects aged between 19 and 25 years were recruited into the experiment, including 9 males and 6 females with an average age of 21.07 years (SD = 0.77). All subjects were right-handed, physically healthy, with no history of neurological or psychiatric disorders. All subjects had normal or corrected-to-normal vision and passed the eye-tracking calibration. Additionally, they refrained from taking sedatives, hypnotics, or psychoactive substances within 24 h prior to the experiment. Subjects who successfully completed the experiment received corresponding course credits. This study adhered to the principles of the Declaration of Helsinki, and all subjects signed informed consent forms prior to the experiment.

4.4. Apparatus

The experiment utilized an SMI iView RED eye tracker with an accuracy of 0.4° for data collection, operating at a sampling rate of 500 Hz. Subjects viewed the experimental stimuli with both eyes, but only the eye movement data from the right eye were recorded. The experimental stimuli were programmed using the iView Experiment Center software 3.5 and presented on a 23.8-inch monitor with a refresh rate of 60 Hz and a resolution of 1920 × 1080. A five-point calibration method was employed, ensuring an eye position error within 0.5°. Eye-tracking data were analyzed and processed using BeGaze 3.5 software. To minimize the effects of head movement on eye-tracking trajectories, subjects’ heads were stabilized with a headrest positioned 65 cm away from the monitor.

4.5. Results

The eye-tracking heatmap (Figure 17a) and eye movement trajectories map (Figure 17b) indicate that subjects primarily focused on the head of the robot when viewing its appearance, followed by the torso region. To quantify subjects’ visual attention patterns, each robot image was divided into seven AOIs, as illustrated in Figure 17c: the robot’s head, torso, arms, hands, legs, feet, and the entire robot. Based on the descriptive statistics of fixation duration (Figure 18), subjects directed most of their attention to the robot’s head region when observing its appearance. A meta-analysis by Roesler et al. [64] on human–robot interactions highlights that anthropomorphic design elements in the robot’s head have a positive impact on humans. Therefore, future robot appearance designs should prioritize the design of the robot’s head.
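A simple aggregation of fixation durations by AOI, of the kind underlying descriptive plots such as Figure 18, can be sketched as follows; the fixation log below is a synthetic placeholder, not the recorded data.

```python
import pandas as pd

# Placeholder fixation log: each row is one fixation with its duration (ms)
# and the AOI it fell in (head, torso, arms, hands, legs, feet, whole robot).
fixations = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 3, 3, 3],
    "aoi": ["head", "head", "torso", "head", "arms", "torso", "head", "legs"],
    "duration_ms": [420, 380, 260, 510, 180, 300, 450, 150],
})

# Mean fixation duration per AOI across subjects
print(fixations.groupby("aoi")["duration_ms"].mean().sort_values(ascending=False))
```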

4.6. Subjective Evaluation Prediction Based on Neural Networks

4.6.1. Input Parameters

The application of eye-tracking data for predicting subjective evaluation can be understood as a process of data fitting and generalization [65]. Following data inspection, we deliberately avoided interpolation methods to preserve data integrity, instead removing indicators containing missing values or outliers. The remaining 13 validated eye-tracking metrics were selected as neural network inputs: (1) first fixation duration, (2) total fixation duration, (3) total number of fixations, (4) total fixation ratio, (5) average fixation duration, (6) gaze duration, (7) gaze ratio, (8) net gaze duration, (9) net gaze ratio, (10) normalized gaze duration, (11) saccade duration, (12) number of saccades, and (13) transition time.

4.6.2. Model Construction

The prediction model was established using a BP neural network model with two hidden layers. The input consisted of the 13 types of eye-tracking data mentioned above, and the output was the subjective evaluation data. The structure of the model is shown in Figure 19.
Using eye-tracking data as input and subjective evaluation scores as output, a neural network-based regression model was established. The specific steps were as follows:
(1) Data Preprocessing: All eye-tracking data and scores were normalized using the method shown in Formula (4) to avoid poor fitting and generalization effects caused by inconsistent dimensional units. After normalization, the third image of the 19 robot appearance design images was used as the test sample, and the remaining 18 images were used as training samples to train the neural network.
x' = \frac{x - \min(x)}{\max(x) - \min(x)}  (4)
(2) Selection of Activation Function: The purpose of introducing an activation function is to introduce non-linearity into the model, thereby enhancing the model’s ability to fit non-linear data. The ReLU function is widely applicable and avoids the vanishing-gradient drawbacks of the Sigmoid and Tanh functions. Therefore, the ReLU function was chosen as the activation function for this neural network regression model, as shown in Formula (5).
f(x) = \max(0, x)  (5)
(3) Selection of Hyperparameters: Hyperparameters are crucial for the results of a neural network. Common parameter optimization methods include manual tuning, grid search, random search, and Bayesian optimization [66]. Among them, Bayesian optimization is an efficient hyperparameter optimization method that is widely applied in deep learning and other machine learning fields. Bayesian optimization probabilistically models the objective function, enabling the model to learn from existing hyperparameter configurations and corresponding performance evaluations, and thereby infer new hyperparameter configurations that are expected to perform better. Bayesian optimization was employed to determine the optimal hyperparameters, with 200 iterations. Table 8 shows the search space and the optimal hyperparameters identified through Bayesian optimization. These optimized parameters were then used for model training.
(4) Model Training and Evaluation: The preprocessed training set was input into the neural network for training. During this process, the neural network adjusts the weights and bias values through the backpropagation of error information to minimize the difference between the actual output and the expected output. Once training was completed, the eye-tracking data corresponding to the unseen robot images in the test set were fed into the neural network. The predicted values were then compared with the actual values to test the model’s effectiveness and generalization capability.
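The sketch below combines steps (1), (2), and (4) above: min-max normalization per Formula (4), a two-hidden-layer network with ReLU activations per Formula (5), leave-one-image-out splitting with the third image held out, and backpropagation training in PyTorch. The layer widths, learning rate, and epoch count are illustrative stand-ins for the Bayesian-optimized hyperparameters of Table 8, the data are random placeholders, and fitting the normalization on the training portion is one common choice in such a sketch.

```python
import numpy as np
import torch
import torch.nn as nn

# Placeholder data: 19 robot images x 15 subjects, 13 eye-tracking metrics each
rng = np.random.default_rng(3)
X = rng.random((19 * 15, 13))
y = rng.uniform(1, 10, size=(19 * 15, 1))  # subjective 10-point ratings
image_id = np.repeat(np.arange(1, 20), 15)

# Min-max normalization (Formula (4)) fitted on the training portion
train_mask, test_mask = image_id != 3, image_id == 3
x_min, x_max = X[train_mask].min(axis=0), X[train_mask].max(axis=0)
X_norm = (X - x_min) / (x_max - x_min)
y_norm = (y - y[train_mask].min()) / (y[train_mask].max() - y[train_mask].min())

model = nn.Sequential(                     # two hidden layers with ReLU (Formula (5))
    nn.Linear(13, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X_train = torch.tensor(X_norm[train_mask], dtype=torch.float32)
y_train = torch.tensor(y_norm[train_mask], dtype=torch.float32)
for _ in range(500):                       # backpropagation training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    optimizer.step()

# Predict the held-out third image and map back to the original rating scale
X_test = torch.tensor(X_norm[test_mask], dtype=torch.float32)
pred = model(X_test).detach().numpy() * (y[train_mask].max() - y[train_mask].min()) + y[train_mask].min()
print(pred[:5].ravel())
```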

4.6.3. Results and Discussion

The eye-tracking data of the 15 subjects corresponding to the selected third robot appearance image were used as the test set and input into the trained neural network model to obtain the predicted values. The predicted values were compared with the actual values, and the comparison results are shown in Figure 20.
The mean absolute error (MAE) was calculated to be 0.42. Since the predicted data are continuous, the accuracy of the prediction was further evaluated using Equation (6), which determines the model’s accuracy in predicting subjective evaluations of appearance. In Equation (6), n represents the total sample size, \hat{y}_i denotes the predicted value from the neural network, and y_i represents the actual value.
\eta = \left( 1 - \frac{\sum_{i=1}^{n} \left| \hat{y}_i - y_i \right|}{\sum_{i=1}^{n} y_i} \right) \times 100\%  (6)
By substituting the predicted values and actual values into Equation (6), the model’s relative error was calculated to be 7.23%, with an accuracy of 92.77%. The high accuracy indicates that the model can predict users’ preferences for new product designs reasonably well. Furthermore, to evaluate the error, the mean square error (MSE) was calculated using Equation (7). The resulting MSE value of approximately 0.35 indicates good prediction performance.
MSE = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2  (7)
This result highlights a certain relationship between eye-tracking data and subjective evaluations, providing a valuable tool for product developers to leverage artificial intelligence in predicting user behavior during the new product development process.
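A short sketch of these evaluation metrics (MAE, the accuracy η of Equation (6), and the MSE of Equation (7)) is given below; the ratings for the 15 test-set subjects are placeholders chosen for illustration.

```python
import numpy as np

def evaluate_predictions(y_true, y_pred):
    """MAE, the relative-error-based accuracy of Equation (6), and the MSE of
    Equation (7) for the held-out test image."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mae = np.mean(np.abs(y_pred - y_true))
    accuracy = (1 - np.abs(y_pred - y_true).sum() / y_true.sum()) * 100  # eta, in %
    mse = np.mean((y_true - y_pred) ** 2)
    return mae, accuracy, mse

# Placeholder ratings for the 15 subjects on the held-out robot image
actual = [6, 7, 5, 8, 6, 7, 7, 5, 6, 8, 7, 6, 5, 7, 6]
predicted = [6.3, 6.8, 5.4, 7.6, 6.2, 7.5, 6.7, 5.2, 6.4, 7.7, 6.6, 6.3, 5.5, 6.8, 6.1]
mae, acc, mse = evaluate_predictions(actual, predicted)
print(f"MAE={mae:.2f}, accuracy={acc:.2f}%, MSE={mse:.2f}")
```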
In comparison with the proposed method, existing research predominantly focuses on explicit evaluation approaches such as subjective assessments. Participants are typically asked to verbally describe their product experiences and provide numerical ratings through questionnaires and interviews, representing a direct method for obtaining user feedback [67]. The proposed method extends this methodology by incorporating an implicit approach through physiological data monitoring. This technique eliminates the need for direct user questioning while yielding more objective and precise measurement data. By employing neural networks to capture common patterns between physiological responses and subjective evaluations, this approach enables product developers to gain deeper insights into users’ authentic reactions [68]. Furthermore, it effectively mitigates potential biases inherent in purely subjective evaluation systems [69,70].
In summary, the results of Study 2 indicate that for new products exemplified by robot appearance designs, collecting eye-tracking data during users’ observations of a series of robot designs and using neural networks to explore the relationship between eye-tracking data and subjective evaluations can enable the prediction of evaluations for newly designed robot appearances.

5. Conclusions and Future Directions

5.1. Conclusions

This research combines design neurocognition with artificial intelligence to investigate new product evaluation and prediction driven by biometric data. Study 1 focused on new product evaluation, using the usability assessment of engineering control interfaces as an example. After the control interface was optimized using a genetic algorithm, facial expression and eye-tracking data were collected during user interaction with the control interface. These biometric data were subsequently analyzed using machine learning techniques and statistical methods, alongside the cognitive load scale, to evaluate the usability of the designed control interface. The results of the study indicate the following: (1) With respect to the emotions of users when interacting with the control interface, machine learning techniques can effectively model and classify facial movements, ultimately providing key emotional indicators such as emotional valence, arousal, and overall emotional state. Experimental results demonstrate that users experience higher emotional valence when using the optimized control interface. (2) Regarding the cognitive fluency of users while using the control interface, four eye-tracking metrics consistently show that users interacting with the optimized control interface exhibit shorter fixation times and more concentrated visual cognitive processing areas while completing tasks. (3) Behavioral data from users also support these findings, indicating that users complete tasks faster when using the optimized control interface, with all cognitive load scale indicators significantly lower than those of the pre-optimized control interface. Therefore, Study 1 confirms, based on artificial intelligence techniques and biometric data, the effectiveness of heuristic algorithm optimization for engineering control interfaces.
In Study 2, which focused on new product prediction, the evaluation of robot appearance design was used as an example. Eye-tracking data were collected from users as they observed a series of robot designs. Then, a neural network was employed to explore the relationship between the eye-tracking data and subjective evaluations, enabling the prediction of users’ assessment of newly designed robot appearances. The results of the study indicate that by utilizing an eye-tracking system to analyze users’ gaze patterns, and combining this data with user-completed evaluation scales, a predictive model can be constructed using deep learning techniques. This model allows for the accurate prediction of user choices based on measurements of their eye movement behavior. This provides a solution for future applications where real-time eye-tracking data collected from users through devices like Apple or Google smart glasses could be automatically analyzed to assess preferences for new products.
This research, within the context of enterprises implementing an open innovation strategy, addresses user needs in the evaluation and prediction of new product development. It explores the use of artificial intelligence technology to drive new product evaluation and prediction through biometric data, proposing a novel approach that combines artificial intelligence with design neurocognition for new product development.
In summary, this study utilizes facial expressions and eye-tracking data as biometric indicators for the evaluation and prediction of new products. Neural networks are employed to extract users’ physiological and psychological characteristics from these data. Within the context of AI-driven biometric data analysis, this study systematically measures users’ cognition and emotions during their experience with the new product, capturing cognitive load and emotional valence while predicting the product’s appearance. The findings highlight the potential of artificial intelligence in product evaluation and prediction based on biometric data.

5.2. Implications for Research

The significance of this study is as follows: (1) Traditional approaches to evaluating and predicting user experience in new product development have primarily relied on subjective questionnaire-based methods, which are susceptible to various research biases and often lead to the streetlight effect. By incorporating multimodal biometric data, such as eye-tracking and facial expression analysis, this study addresses these limitations. It establishes a novel framework that integrates objective biometric data with subjective questionnaire data, thereby offering a more comprehensive and reliable approach to user experience research. (2) Conventional studies utilizing biometric data often rely on statistical methods like analysis of variance and regression analysis, which limit the capacity to explore the full potential of large-scale biometric datasets. This study adopts advanced artificial intelligence techniques, including machine learning and neural networks, to tackle two critical aspects of new product development: evaluation and prediction. These methods enable the effective analysis and interpretation of biometric data, providing robust AI-driven technical solutions that pave the way for future innovations in the field of design neurocognition.

5.3. Limitations and Future Directions

This study has several limitations that need to be addressed in future research. First, the field of artificial intelligence is rapidly evolving, with new methods emerging constantly. This study employed a limited range of techniques for the evaluation and prediction of new products. Future research could explore the application of other AI technologies to improve the accuracy of evaluations and the precision of predictions. Second, while there are various types of biometric data available, this study focused solely on eye-tracking and facial expression data for product evaluation and prediction. Future research could incorporate additional data sources, such as EEG and functional near-infrared spectroscopy (fNIRS), to achieve more comprehensive and robust results in user experience studies. Finally, this study was conducted as a laboratory-based experiment. Future studies could consider using external datasets, expanding the sample range and size, and conducting real-world testing to validate scalability.

Author Contributions

Conceptualization, L.X. and J.W.; methodology, L.X. and Y.W.; formal analysis, X.L. and Y.W.; resources, L.X.; writing—original draft preparation, X.L., J.W. and L.X.; writing—review and editing, T.L. and S.Z.; visualization, X.L.; supervision, J.W., T.L. and S.Z.; project administration, J.W.; funding acquisition, J.W., T.L. and S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (72374088, 72471105, 72472094, 72001096), the Key Project of Education Science Planning in Jiangsu Province (B-b/2024/01/162), the Key Project of Higher Education Science Research Planning of the China Association of Higher Education (24XX0205), the Jiangsu Government Scholarship for Overseas Studies (JS-2024-69), and the Humanities and Social Science Fund of the Ministry of Education of China (24YJCZH445).

Institutional Review Board Statement

All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the School of Economics and Management, Jiangsu University of Science and Technology (protocol code human factors_20220601, approved on 1 June 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from Associate Professor Jun Wu (wujunergo@just.edu.cn).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Taraghi, M.; Armellini, F.; Imbeau, D. An Exploratory Investigation of Cognitive Mapping for Analyzing Needs in UX Design. IEEE Trans. Eng. Manag. 2024, 71, 6581–6594. [Google Scholar] [CrossRef]
  2. Wu, Y.; Jiao, Y.; Cao, Q. Unlocking the link between user participation and new product performance: The moderating effect of network capability. J. Bus. Res. 2023, 168, 114241. [Google Scholar] [CrossRef]
  3. Morgan, T.; Anokhin, S. Entrepreneurial orientation and new product performance in SMEs: The mediating role of customer participation. J. Bus. Res. 2023, 164, 113921. [Google Scholar] [CrossRef]
  4. Zeng, Q.; Zhang, L.; Guo, Q.; Zhuang, W.; Fan, W. Factors Influencing User-Idea Selection in Open Innovation Communities. Int. J. Electron. Commer. 2022, 26, 415–440. [Google Scholar] [CrossRef]
  5. Alsharif, A.H.; Isa, S.M. Electroencephalography Studies on Marketing Stimuli: A Literature Review and Future Research Agenda. Int. J. Consum. Stud. 2025, 49, e70015. [Google Scholar] [CrossRef]
  6. Verhulst, N.; De Keyser, A.; Gustafsson, A.; Shams, P.; Van Vaerenbergh, Y. Neuroscience in service research: An overview and discussion of its possibilities. J. Serv. Manag. 2019, 30, 621–649. [Google Scholar] [CrossRef]
  7. Karmarkar, U.R.; Plassmann, H. Consumer Neuroscience: Past, Present, and Future. Organ. Res. Methods 2017, 22, 174–195. [Google Scholar] [CrossRef]
  8. Du, R.Y.; Netzer, O.; Schweidel, D.A.; Mitra, D. Capturing Marketing Information to Fuel Growth. J. Mark. 2020, 85, 163–183. [Google Scholar] [CrossRef]
  9. Dargan, S.; Kumar, M. A comprehensive survey on the biometric recognition systems based on physiological and behavioral modalities. Expert Syst. Appl. 2020, 143, 113114. [Google Scholar] [CrossRef]
  10. Borgianni, Y.; Maccioni, L. Review of the use of neurophysiological and biometric measures in experimental design research. Artif. Intell. Eng. Des. Anal. Manuf. 2020, 34, 248–285. [Google Scholar] [CrossRef]
  11. Balters, S.; Weinstein, T.; Mayseless, N.; Auernhammer, J.; Hawthorne, G.; Steinert, M.; Meinel, C.; Leifer, L.J.; Reiss, A.L. Design science and neuroscience: A systematic review of the emergent field of Design Neurocognition. Des. Stud. 2023, 84, 101148. [Google Scholar] [CrossRef]
  12. De Keyser, A.; Bart, Y.; Gu, X.; Liu, S.Q.; Robinson, S.G.; Kannan, P.K. Opportunities and challenges of using biometrics for business: Developing a research agenda. J. Bus. Res. 2021, 136, 52–62. [Google Scholar] [CrossRef]
  13. Torrico, D.D.; Mehta, A.; Borssato, A.B. New methods to assess sensory responses: A brief review of innovative techniques in sensory evaluation. Curr. Opin. Food Sci. 2023, 49, 100978. [Google Scholar] [CrossRef]
  14. Wang, L.L.; Tang, W.Z.; Montagu, E.; Wu, X.L.; Xue, C.Q. Cognitive evaluation based on regression and eye-tracking for layout on human-computer multi-interface. Behav. Inf. Technol. 2024, 1–24. [Google Scholar] [CrossRef]
  15. Liu, Y.H.; Gao, Y.J.; Yue, L.S.S.; Zhang, H.; Sun, J.H.; Wu, X.R. A Real-Time Detection of Pilot Workload Using Low-Interference Devices. Appl. Sci. 2024, 14, 6521. [Google Scholar] [CrossRef]
  16. Li, F.; Chen, C.H.; Lee, C.H.; Feng, S.S. Artificial intelligence-enabled non-intrusive vigilance assessment approach to reducing traffic controller’s human errors. Knowl.-Based Syst. 2022, 239, 108047. [Google Scholar] [CrossRef]
  17. He, D.B.; Wang, Z.Q.; Khalil, E.B.; Donmez, B.; Qiao, G.K.; Kumar, S. Classification of Driver Cognitive Load: Exploring the Benefits of Fusing Eye-Tracking and Physiological Measures. Transp. Res. Rec. 2022, 2676, 670–681. [Google Scholar] [CrossRef]
  18. ISO 9241-11:2018; Ergonomics of Human-System Interaction-Part 11: Usability: Definitions and Concepts. ISO: Geneva, Switzerland, 2018.
  19. Nielsen, J. The usability engineering life cycle. Computer 1992, 25, 12–22. [Google Scholar] [CrossRef]
  20. Weichbroth, P. Usability of Mobile Applications: A Systematic Literature Study. IEEE Access 2020, 8, 55563–55577. [Google Scholar] [CrossRef]
  21. Hart, S. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar] [CrossRef]
  22. Molina, A.I.; Arroyo, Y.; Lacave, C.; Redondo, M.A.; Bravo, C.; Ortega, M. Eye tracking-based evaluation of accessible and usable interactive systems: Tool set of guidelines and methodological issues. Univers. Access Inf. Soc. 2024, 1–24. [Google Scholar] [CrossRef]
  23. Talen, L.; den Uyl, T.E. Complex Website Tasks Increase the Expression Anger Measured with FaceReader Online. Int. J. Hum.–Comput. Interact. 2022, 38, 282–288. [Google Scholar] [CrossRef]
  24. Jain, A.K.; Nandakumar, K.; Ross, A. 50 years of biometric research: Accomplishments, challenges, and opportunities. Pattern Recognit. Lett. 2016, 79, 80–105. [Google Scholar] [CrossRef]
  25. Cootes, T.F.; Taylor, C.J. Statistical models of appearance for medical image analysis and computer vision. In Proceedings of the Medical Imaging 2001: Image Processing, Bellingham, WA, USA, 31 October 2001; pp. 236–248. [Google Scholar]
  26. Feldman, M.J.; Bliss-Moreau, E.; Lindquist, K.A. The neurobiology of interoception and affect. Trends Cogn. Sci. 2024, 28, 643–661. [Google Scholar] [CrossRef] [PubMed]
  27. Xu, N.; Guo, G.; Lai, H.; Chen, H. Usability study of two in-vehicle information systems using finger tracking and facial expression recognition technology. Int. J. Hum.–Comput. Interact. 2018, 34, 1032–1044. [Google Scholar] [CrossRef]
  28. Zhang, Z.; Gong, M.; Zhang, S.; Jia, M. Buffering or Aggravating Effect? Examining the Effects of Prior Corporate Social Responsibility on Corporate Social Irresponsibility. J. Bus. Ethics 2023, 183, 147–163. [Google Scholar] [CrossRef]
  29. Bartkiene, E.; Mockus, E.; Monstaviciute, E.; Klementaviciute, J.; Mozuriene, E.; Starkute, V.; Zavistanaviciute, P.; Zokaityte, E.; Cernauskas, D.; Klupsaite, D. The evaluation of dark chocolate-elicited emotions and their relation with physico chemical attributes of chocolate. Foods 2021, 10, 642. [Google Scholar] [CrossRef] [PubMed]
  30. Pichierri, M.; Peluso, A.M.; Pino, G.; Guido, G. Health claims’ text clarity, perceived healthiness of extra-virgin olive oil, and arousal: An experiment using facereader. Trends Food Sci. Technol. 2021, 116, 1186–1194. [Google Scholar] [CrossRef]
  31. Wakihira, T.; Morimoto, M.; Higuchi, S.; Nagatomi, Y. Can facial expressions predict beer choices after tasting? A proof of concept study on implicit measurements for a better understanding of choice behavior among beer consumers. Food Qual. Prefer. 2022, 100, 104580. [Google Scholar] [CrossRef]
  32. Viejo, C.G.; Fuentes, S.; Howell, K.; Torrico, D.D.; Dunshea, F.R. Integration of non-invasive biometrics with sensory analysis techniques to assess acceptability of beer by consumers. Physiol. Behav. 2019, 200, 139–147. [Google Scholar] [CrossRef]
  33. Wang, J.; Antonenko, P.; Celepkolu, M.; Jimenez, Y.; Fieldman, E.; Fieldman, A. Exploring relationships between eye tracking and traditional usability testing data. Int. J. Hum.–Comput. Interact. 2019, 35, 483–494. [Google Scholar] [CrossRef]
  34. Just, M.A.; Carpenter, P.A. Eye fixations and cognitive processes. Cogn. Psychol. 1976, 8, 441–480. [Google Scholar] [CrossRef]
  35. Just, M.A.; Carpenter, P.A. A theory of reading: From eye fixations to comprehension. Psychol. Rev. 1980, 87, 329. [Google Scholar] [CrossRef] [PubMed]
  36. Huang, J.; Peng, Y.; Wan, X. The color-flavor incongruency effect in visual search for food labels: An eye-tracking study. Food Qual. Prefer. 2021, 88, 104078. [Google Scholar] [CrossRef]
  37. Huseynov, S.; Kassas, B.; Segovia, M.S.; Palma, M.A. Incorporating biometric data in models of consumer choice. Appl. Econ. 2019, 51, 1514–1531. [Google Scholar] [CrossRef]
  38. Singh, R.; Miller, T.; Newn, J.; Velloso, E.; Vetere, F.; Sonenberg, L. Combining gaze and AI planning for online human intention recognition. Artif. Intell. 2020, 284, 103275. [Google Scholar] [CrossRef]
  39. Smith, J.; Legg, P.; Matovic, M.; Kinsey, K. Predicting user confidence during visual decision making. ACM Trans. Interact. Intell. Syst. (TiiS) 2018, 8, 1–30. [Google Scholar] [CrossRef]
  40. Sharma, K.; Papamitsiou, Z.; Giannakos, M. Building pipelines for educational data using AI and multimodal analytics: A “grey-box” approach. Br. J. Educ. Technol. 2019, 50, 3004–3031. [Google Scholar] [CrossRef]
  41. Gaspar, J.F.; Teixeira, Â.P.; Santos, A.; Soares, C.G.; Golyshev, P.; Kähler, N. Human centered design methodology: Case study of a ship-mooring winch. Int. J. Ind. Ergon. 2019, 74, 102861. [Google Scholar] [CrossRef]
  42. Oulasvirta, A.; Dayama, N.R.; Shiripour, M.; John, M.; Karrenbauer, A. Combinatorial optimization of graphical user interface designs. Proc. IEEE 2020, 108, 434–464. [Google Scholar] [CrossRef]
  43. Todorov, A.; Oh, D.; Uddenberg, S.; Albohn, D.N. Face evaluation: Findings, methods, and challenges. Ann. N. Y. Acad. Sci. 2025, 1–10. [Google Scholar] [CrossRef]
  44. Landmann, E. I can see how you feel-Methodological considerations and handling of Noldus’s FaceReader software for emotion measurement. Technol. Forecast. Soc. Change 2023, 197, 122889. [Google Scholar] [CrossRef]
  45. Zhu, H.; Zhou, Y.; Wu, Y.; Wang, X. To smile or not to smile: The role of facial expression valence on mundane and luxury products premiumness. J. Retail. Consum. Serv. 2022, 65, 102861. [Google Scholar] [CrossRef]
  46. Claverie, D.; Rutka, R.; Verhoef, V.; Canini, F.; Hot, P.; Pellissier, S. Psychophysiological dynamics of emotional reactivity: Interindividual reactivity characterization and prediction by a machine learning approach. Int. J. Psychophysiol. 2021, 169, 34–43. [Google Scholar] [CrossRef] [PubMed]
  47. Campos da Silveira, A.; Lima de Souza, M.; Ghinea, G.; Saibel Santos, C.A. Physiological Data for User Experience and Quality of Experience: A Systematic Review (2018–2022). Int. J. Hum.–Comput. Interact. 2025, 41, 664–693. [Google Scholar] [CrossRef]
  48. Cheng, X.; Zhang, S.; Mou, J. Are you caught in the dilemma of metaverse avatars? The impact of individuals’ congruity perceptions on paradoxical emotions and actual behaviors. Decis. Support Syst. 2025, 189, 114387. [Google Scholar] [CrossRef]
  49. Murphy, R.R. Human-robot interaction in rescue robotics. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2004, 34, 138–153. [Google Scholar] [CrossRef]
  50. Casper, J.; Murphy, R.R. Human-robot interactions during the robot-assisted urban search and rescue response at the world trade center. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2003, 33, 367–385. [Google Scholar] [CrossRef]
  51. Sanders, M.; McCormick, E. Human Factors in Engineering and Design, 7th ed.; McGraw-Hill: New York, NY, USA, 1993. [Google Scholar]
  52. Wickens, C.D.; Carswell, C.M. The proximity compatibility principle: Its psychological foundation and relevance to display design. Hum. Factors 1995, 37, 473–494. [Google Scholar] [CrossRef]
  53. ISO 11064-5; Ergonomic Design of Control Centres: Part 5: Displays and Controls. ISO: Geneva, Switzerland, 2008.
  54. Oulasvirta, A. Optimizing User Interfaces for Human Performance. In Proceedings of the Intelligent Human Computer Interaction, Evry, France, 11–13 December 2017; pp. 3–7. [Google Scholar]
  55. Oulasvirta, A. User Interface Design with Combinatorial Optimization. Computer 2017, 50, 40–47. [Google Scholar] [CrossRef]
  56. Chou, T.C.; Lu, J.M. Automated Usability Improvement of Two-Dimensional Graphical Interfaces through the Simulation of User’s Operations. Int. J. Hum.-Comput. Interact. 2024, 1–16. [Google Scholar] [CrossRef]
  57. Lu, G.Y.; Yu, J.Q.; Zhou, J.K.; Cheng, T.Y.; Zhang, T.; Zhang, S. Interface Layout Optimization for Electrical Devices Using Heuristic Algorithms and Eye Movement. IEEE Access 2023, 11, 106083–106094. [Google Scholar] [CrossRef]
  58. Xue, L.; Zhao, S.; Mahmoudi, A.; Feylizadeh, M.R. Flexible job-shop scheduling problem with parallel batch machines based on an enhanced multi-population genetic algorithm. Complex Intell. Syst. 2024, 10, 4083–4101. [Google Scholar] [CrossRef]
  59. Malińska, M.; Bugajska, J.; Bartuzi, P. Occupational and non-occupational risk factors for neck and lower back pain among computer workers: A cross-sectional study. Int. J. Occup. Saf. Ergon. 2021, 27, 1108–1115. [Google Scholar] [CrossRef]
  60. Hassenzahl, M.; Tractinsky, N. User experience-a research agenda. Behav. Inf. Technol. 2006, 25, 91–97. [Google Scholar] [CrossRef]
  61. Ranscombe, C.; Hicks, B.; Mullineux, G. A method for exploring similarities and visual references to brand in the appearance of mature mass-market products. Des. Stud. 2012, 33, 496–520. [Google Scholar] [CrossRef]
  62. De Keyser, A.; Kunz, W.H. Living and working with service robots: A TCCM analysis and considerations for future research. J. Serv. Manag. 2022, 33, 165–196. [Google Scholar] [CrossRef]
  63. Belanche, D.; Casaló, L.V.; Schepers, J.; Flavián, C. Examining the effects of robots’ physical appearance, warmth, and competence in frontline services: The Humanness-Value-Loyalty model. Psychol. Mark. 2021, 38, 2357–2376. [Google Scholar] [CrossRef]
  64. Roesler, E.; Manzey, D.; Onnasch, L. A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Sci. Robot. 2021, 6, eabj5425. [Google Scholar] [CrossRef]
  65. Wu, Y.; Li, N.; Xia, L.; Zhang, S.; Liu, F.; Wang, M. Visual attention predictive model of built colonial heritage based on visual behaviour and subjective evaluation. Humanit. Soc. Sci. Commun. 2023, 10, 869. [Google Scholar] [CrossRef]
  66. Yang, L.; Shami, A. On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing 2020, 415, 295–316. [Google Scholar] [CrossRef]
  67. Moon, S.E.; Kim, J.H.; Kim, S.W.; Lee, J.S. Prediction of Car Design Perception Using EEG and Gaze Patterns. IEEE Trans. Affect. Comput. 2021, 12, 843–856. [Google Scholar] [CrossRef]
  68. Akdim, K.; Belanche, D.; Flavián, M. Attitudes toward service robots: Analyses of explicit and implicit attitudes based on anthropomorphism and construal level theory. Int. J. Contemp. Hosp. Manag. 2023, 35, 2816–2837. [Google Scholar] [CrossRef]
  69. Xu, D.; Agarwal, M.; Gupta, E.; Fekri, F.; Sivakumar, R. Accelerating Reinforcement Learning using EEG-based implicit human feedback. Neurocomputing 2021, 460, 139–153. [Google Scholar] [CrossRef]
  70. Zhao, Q.; Ye, Z.; Deng, Y.; Chen, J.; Chen, J.; Liu, D.; Ye, X.; Cheng, H. An advance in novel intelligent sensory technologies: From an implicit-tracking perspective of food perception. Compr. Rev. Food Sci. Food Saf. 2024, 23, e13327. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Facial expression analysis process based on AI.
Figure 2. Basic workflow of genetic algorithm.
Figure 3. POX crossover.
Figure 4. Illustration of mutation.
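For readers unfamiliar with the workflow depicted in Figure 2, the following sketch runs a compact genetic algorithm over a permutation of the 19 control keys listed in Table 1. It is illustrative only: the fitness function is a placeholder, and it uses a generic order crossover and swap mutation rather than the POX operator and usability objective used in the study.

```python
# Illustrative selection -> crossover -> mutation loop (Figure 2) over a
# permutation of the 19 control keys from Table 1. Fitness, order crossover (OX),
# and swap mutation are simplified stand-ins, not the study's POX operator or objective.
import random

N_KEYS, POP, GENS = 19, 40, 200

def fitness(layout):
    # Placeholder objective: in the study this would score the predicted usability
    # (e.g., expected scan path / selection time) of the key arrangement.
    return -sum(abs(pos - key) for pos, key in enumerate(layout, start=1))

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(N_KEYS), 2))
    child = [None] * N_KEYS
    child[a:b] = p1[a:b]                       # copy a slice from parent 1
    fill = [k for k in p2 if k not in child]   # remaining keys in parent-2 order
    for i in range(N_KEYS):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def swap_mutation(layout, rate=0.1):
    layout = layout[:]
    if random.random() < rate:
        i, j = random.sample(range(N_KEYS), 2)
        layout[i], layout[j] = layout[j], layout[i]
    return layout

population = [random.sample(range(1, N_KEYS + 1), N_KEYS) for _ in range(POP)]
for _ in range(GENS):
    parents = sorted(population, key=fitness, reverse=True)[:POP // 2]   # truncation selection
    children = [swap_mutation(order_crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("Best layout found:", max(population, key=fitness))
```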
Figure 5. Control interfaces for remote operation of disaster-relief robots. (a) Pre-optimization control interface; (b) post-optimization control interface. Note: The text displayed on the control interface represents (1) the functional descriptions of the remote operation of disaster-relief robots, including: start, stop, path finding, obstacle avoidance, robotic arm grip, movement, robotic arm release, robotic arm positioning, robotic arm retraction and robotic arm control; (2) the rescue environment and system information presented on the control interface, including: temperature, humidity, battery level, risk factor, and video channels.
Figure 6. Experimental scenario for control interface evaluation. Note: The portrait in this image has been authorized by the individual.
Figure 7. Experimental procedure for control interface evaluation. Note: The text displayed on the control interface represents (1) the functional descriptions of the remote operation of disaster-relief robots, including: start, stop, path finding, obstacle avoidance, robotic arm grip, movement, robotic arm release, robotic arm positioning, robotic arm retraction and robotic arm control; (2) the rescue environment and system information presented on the control interface, including: temperature, humidity, battery level, risk factor, and video channels.
Figure 8. Facial expression analysis for control interface evaluation. Note: The portrait in this image has been authorized by the individual.
Figure 9. Mean emotional valence of pre- and post-optimized control interfaces.
Figure 10. Eye-tracking heatmaps of pre- and post-optimized control interfaces. (a) Pre-optimization; (b) post-optimization. Note: The text displayed on the control interface represents (1) the functional descriptions of the remote operation of disaster-relief robots, including: start, stop, path finding, obstacle avoidance, robotic arm grip, movement, robotic arm release, robotic arm positioning, robotic arm retraction and robotic arm control; (2) the rescue environment and system information presented on the control interface, including: temperature, humidity, battery level, risk factor, and video channels.
Figure 11. Eye-tracking gaze plots of pre- and post-optimized control interfaces. (a) Pre-optimization; (b) post-optimization. Note: The text displayed on the control interface represents (1) the functional descriptions of the remote operation of disaster-relief robots, including: start, stop, path finding, obstacle avoidance, robotic arm grip, movement, robotic arm release, robotic arm positioning, robotic arm retraction and robotic arm control; (2) the rescue environment and system information presented on the control interface, including: temperature, humidity, battery level, risk factor, and video channels.
Figure 12. Eye-tracking metrics of pre- and post-optimized control interfaces. (a) Fixation duration; (b) fixation count; (c) total scan path length; (d) average scan path length.
Figure 13. Mean scores of NASA-TLX cognitive load questionnaire of pre- and post-optimized control interfaces.
Figure 14. Mean task completion time of pre- and post-optimized control interfaces.
Figure 15. Experimental scenario for predicting robot appearance. Note: The portrait in this image has been authorized by the individual.
Figure 16. Experimental procedure for predicting robot appearance.
Figure 17. Heatmap, trajectory map, and AOI division of robot appearance. (a) Eye-tracking heatmap; (b) eye movement trajectories map; (c) robot AOI division, where AOI1 represents the robot’s head, AOI2 represents the robot’s torso, AOI3 represents the robot’s arms, AOI4 represents the robot’s hands, AOI5 represents the robot’s legs, AOI6 represents the robot’s feet, and AOI7 represents the robot as a whole.
Figure 18. Fixation duration in different AOIs of the robot appearance.
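The per-AOI fixation durations summarized in Figure 18 can be derived from fixation-level exports by assigning each fixation to an AOI and summing durations. The sketch below illustrates one way to do this; the AOI bounding boxes, column names, and file name are assumptions, and gaze that falls on the robot outside the labeled part boxes is lumped into a whole-body AOI as a simplification of AOI7.

```python
# Sketch (assumed column names and AOI boxes): aggregate fixation durations into
# the AOIs of Figure 17c to produce per-AOI totals comparable to Figure 18.
import pandas as pd

# Hypothetical AOI bounding boxes in screen pixels: (x_min, y_min, x_max, y_max)
AOIS = {
    "AOI1_head":  (400, 50, 600, 200),
    "AOI2_torso": (380, 200, 620, 450),
    "AOI3_arms":  (250, 200, 750, 430),
    "AOI4_hands": (220, 400, 780, 500),
    "AOI5_legs":  (400, 450, 600, 700),
    "AOI6_feet":  (380, 700, 620, 780),
}

def label_aoi(x, y):
    # First matching part box wins; anything else on the robot counts as the whole body.
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "AOI7_whole"

fix = pd.read_csv("fixations.csv")   # assumed columns: participant, x, y, duration_ms
fix["aoi"] = [label_aoi(x, y) for x, y in zip(fix["x"], fix["y"])]
per_aoi = fix.groupby(["participant", "aoi"])["duration_ms"].sum().unstack(fill_value=0)
print(per_aoi.mean().round(1))       # mean fixation duration per AOI across participants
```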
Figure 19. Neural network structure.
Figure 20. Comparison of model predicted values with actual values.
Table 1. Corresponding codes of control keys.

| Keys | Encoding | Keys | Encoding |
|---|---|---|---|
| Directional Button Group | 1 | Auto Obstacle Avoidance Enable | 11 |
| Low Speed Mode | 2 | Auto Obstacle Avoidance Disable | 12 |
| High Speed Mode | 3 | Auto Path Finding Enable | 13 |
| Robotic Arm Positioning | 4 | Auto Path Finding Disable | 14 |
| Robotic Arm Retraction | 5 | Movement Precision Increase | 15 |
| Robotic Arm Grip | 6 | Movement Precision Decrease | 16 |
| Robotic Arm Release | 7 | Clockwise Rotation | 17 |
| Robotic Arm Segment 1 | 8 | Counter-clockwise Rotation | 18 |
| Robotic Arm Segment 2 | 9 | Robotic Arm Control | 19 |
| Robotic Arm Segment 3 | 10 | | |
Table 2. Descriptive statistics of facial expression data.

| Interface Status | Mean | 25th Pctl | Median | 75th Pctl | SD | Min | Max |
|---|---|---|---|---|---|---|---|
| Pre-Optimization | −0.2907 | −0.3815 | −0.3146 | −0.1890 | 0.1081 | −0.4900 | −0.1000 |
| Post-Optimization | −0.2208 | −0.3097 | −0.2045 | −0.1310 | 0.1494 | −0.5200 | 0.0700 |

Note: Pctl, percentile.
Table 3. Significance test of facial expression data.

| Metric | t Value | Sig |
|---|---|---|
| Emotional Valence | −2.076 | 0.042 |
Table 4. Descriptive statistics of eye-tracking metrics.

| Metric | Interface Status | Mean | 25th Pctl | Median | 75th Pctl | SD | Min | Max |
|---|---|---|---|---|---|---|---|---|
| Fixation Duration (ms) | Pre-Optimization | 24,995.14 | 17,771.60 | 25,818.60 | 34,494.28 | 10,514.00 | 3747.00 | 44,884.60 |
| Fixation Duration (ms) | Post-Optimization | 15,840.74 | 10,259.18 | 16,584.00 | 20,320.18 | 8916.89 | 2373.50 | 39,181.90 |
| Fixation Count | Pre-Optimization | 102.83 | 80.00 | 103.00 | 122.75 | 35.11 | 32.00 | 198.00 |
| Fixation Count | Post-Optimization | 70.13 | 46.00 | 65.50 | 88.50 | 30.34 | 29.00 | 151.00 |
| Total Scan Path Length (px) | Pre-Optimization | 16,299.53 | 12,897.75 | 16,162.50 | 19,948.75 | 5384.34 | 3417.00 | 28,275.00 |
| Total Scan Path Length (px) | Post-Optimization | 7829.50 | 5319.50 | 7346.00 | 9237.50 | 3266.50 | 3327.00 | 18,439.00 |
| Average Scan Path Length (px/s) | Pre-Optimization | 440.16 | 388.86 | 453.24 | 499.07 | 97.89 | 132.68 | 613.25 |
| Average Scan Path Length (px/s) | Post-Optimization | 302.33 | 233.24 | 319.23 | 365.46 | 76.12 | 95.41 | 404.61 |

Note: Pctl, percentile.
Table 5. Significance test of eye-tracking metrics.

| Statistic | Fixation Duration | Fixation Count | Total Scan Path Length | Average Scan Path Length |
|---|---|---|---|---|
| z Value | – | – | −5.515 | – |
| t Value | 3.637 | 3.86 | – | 6.088 |
| Sig | 0.001 | <0.001 | <0.001 | <0.001 |
| Cohen's d | 0.939 | 0.997 | 1.902 | 1.572 |
Table 6. Descriptive statistics of behavioral data.

| Metric | Interface Status | Mean | 25th Pctl | Median | 75th Pctl | SD | Min | Max |
|---|---|---|---|---|---|---|---|---|
| Mental Demand | Pre-Optimization | 69.73 | 60.00 | 73.00 | 89.25 | 21.30 | 14.00 | 97.00 |
| Mental Demand | Post-Optimization | 13.10 | 5.00 | 10.00 | 17.00 | 11.12 | 0.00 | 45.00 |
| Physical Demand | Pre-Optimization | 20.73 | 4.50 | 10.00 | 25.75 | 24.96 | 0.00 | 85.00 |
| Physical Demand | Post-Optimization | 6.23 | 1.00 | 5.00 | 5.25 | 8.34 | 0.00 | 36.00 |
| Temporal Demand | Pre-Optimization | 63.97 | 50.00 | 64.00 | 81.25 | 22.73 | 6.00 | 98.00 |
| Temporal Demand | Post-Optimization | 16.53 | 8.75 | 12.50 | 23.75 | 13.05 | 1.00 | 50.00 |
| Effort | Pre-Optimization | 58.03 | 35.75 | 60.00 | 77.50 | 24.21 | 13.00 | 98.00 |
| Effort | Post-Optimization | 12.57 | 5.00 | 10.00 | 18.50 | 10.13 | 0.00 | 40.00 |
| Performance | Pre-Optimization | 50.93 | 10.00 | 61.50 | 81.25 | 33.84 | 0.00 | 100.00 |
| Performance | Post-Optimization | 10.00 | 4.25 | 10.00 | 15.00 | 9.02 | 0.00 | 40.00 |
| Frustration Level | Pre-Optimization | 46.17 | 13.75 | 13.75 | 71.25 | 32.47 | 0.00 | 99.00 |
| Frustration Level | Post-Optimization | 6.03 | 0.75 | 5.00 | 10.00 | 7.16 | 0.00 | 33.00 |
| Task Completion Time (ms) | Pre-Optimization | 37,662.67 | 28,680.00 | 34,095.50 | 44,395.75 | 12,445.68 | 18,041.00 | 66,077.00 |
| Task Completion Time (ms) | Post-Optimization | 26,415.57 | 20,845.75 | 23,384.50 | 29,139.00 | 9640.10 | 14,231.00 | 58,435.00 |

Note: Pctl, percentile.
Table 7. Significance test of behavioral data.

| Statistic | Mental Demand | Physical Demand | Temporal Demand | Effort | Performance | Frustration Level | Task Completion Time |
|---|---|---|---|---|---|---|---|
| z Value | −6.384 | −3.078 | −6.026 | −6.229 | −4.133 | −4.564 | −6.377 |
| Sig | <0.001 | 0.002 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
Table 8. Hyperparameter search space and optimal configuration.

| Hyperparameter | Search Space | Optimal Value |
|---|---|---|
| Learning Rate | [1 × 10⁻⁸, 0.1] | 4.33 × 10⁻⁷ |
| First Hidden Nodes | [20, 2000] | 353 |
| Second Hidden Nodes | [20, 2000] | 1790 |
| Batch Size | [5, 100] | 100 |
| Epochs | [5, 500] | 458 |
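The configuration in Table 8 implies a search over the learning rate, two hidden-layer widths, the batch size, and the number of epochs. The sketch below shows one way such a search could be run with random sampling over those ranges; the scikit-learn estimator and placeholder data are stand-ins, not the study's actual training setup.

```python
# Sketch of a random search over the hyperparameter ranges in Table 8 for a
# two-hidden-layer regression network (scikit-learn stand-in for the study's model).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import loguniform, randint

# X, y: eye-tracking feature matrix and subjective ratings (placeholder data here)
X, y = np.random.rand(200, 7), np.random.rand(200)

param_space = {
    "learning_rate_init": loguniform(1e-8, 0.1),                 # Table 8: [1e-8, 0.1]
    "hidden_layer_sizes": [(h1, h2) for h1 in range(20, 2001, 60)
                                    for h2 in range(20, 2001, 60)],  # [20, 2000] per layer
    "batch_size": randint(5, 101),                               # [5, 100]
    "max_iter": randint(5, 501),                                 # epochs in [5, 500]
}

search = RandomizedSearchCV(MLPRegressor(), param_space, n_iter=30, cv=3,
                            scoring="neg_mean_absolute_error", random_state=0)
search.fit(X, y)
print("Best hyperparameters:", search.best_params_)
```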