Article

How the Degree of Anthropomorphism of Human-like Robots Affects Users’ Perceptual and Emotional Processing: Evidence from an EEG Study

School of Mechanical Engineering, Southeast University, Suyuan Avenue 79, Nanjing 211189, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(15), 4809; https://doi.org/10.3390/s24154809
Submission received: 18 June 2024 / Revised: 16 July 2024 / Accepted: 22 July 2024 / Published: 24 July 2024
(This article belongs to the Section Biomedical Sensors)

Abstract

Anthropomorphized robots are increasingly integrated into human social life, playing vital roles across various fields. This study aimed to elucidate the neural dynamics underlying users’ perceptual and emotional responses to robots with varying levels of anthropomorphism. We investigated event-related potentials (ERPs) and event-related spectral perturbations (ERSPs) elicited while participants viewed, perceived, and affectively rated robots with low (L-AR), medium (M-AR), and high (H-AR) levels of anthropomorphism. EEG data were recorded from 42 participants. Results revealed that H-AR induced a more negative N1 and increased frontal theta power, but a decreased P2, in early time windows. Conversely, M-AR and L-AR elicited a larger P2 compared to H-AR. In later time windows, M-AR generated a greater late positive potential (LPP) and enhanced parietal-occipital theta oscillations compared with H-AR and L-AR. These findings suggest distinct neural processing phases: early feature detection and selective attention allocation, followed by later affective appraisal. Early detection of facial form and animacy, with P2 reflecting higher-order visual processing, appeared to correlate with anthropomorphism levels. This research advances the understanding of emotional processing in anthropomorphic robot design and provides valuable insights for robot designers and manufacturers regarding the emotional and feature design, evaluation, and promotion of anthropomorphic robots.

1. Introduction

Robots are increasingly being used in a variety of human social life situations and play a vital role in numerous fields, including industrial production, healthcare, homecare, education, and entertainment [1,2,3]. Given the diverse range of backgrounds, training, physical capabilities, and cognitive skills among users in the field of Human-Robot Interaction (HRI), robots are expected to possess qualities such as intuitiveness, ease of use, trustworthiness, high user acceptance, and responsiveness to meet the demands and states of their users. To facilitate robot service and overcome challenges to HRI (e.g., trust in and acceptance of robots), robot researchers and engineers have designed anthropomorphic robots, i.e., robots that resemble humans or incorporate human-like design [4,5,6]. The attribution of humanlike characteristics to nonhuman or inanimate entities is referred to as anthropomorphism and has been widely adopted as a supporting design feature in the robotics field [7]. Positive reactions to anthropomorphic robots often include increased trust, empathy, and cooperation. For instance, healthcare robots designed with human-like features have been shown to improve patient comfort and compliance with medical instructions [8]. Similarly, in educational settings, anthropomorphic robots can foster a more engaging and interactive learning environment, leading to improved educational outcomes [9]. However, not all reactions to anthropomorphic robots are positive. Negative responses can arise due to the uncanny valley effect, where robots that appear almost human, but not quite, elicit feelings of eeriness and discomfort. An example of this is seen in customer service scenarios, where overly human-like robots may cause unease and decrease customer satisfaction [10]. Additionally, there are concerns about privacy and security, as anthropomorphic robots equipped with advanced sensors and data processing capabilities may be perceived as intrusive [11]. Thus, it is not always the case that the more anthropomorphic a robot is, the better it is perceived to be [12]. Anthropomorphism, when used properly, can significantly alter users’ attitudes and improve robot marketing [13]. Hence, it is of great importance to investigate how the anthropomorphism of robots affects humans’ perceptions, emotional responses, attitudes, and evaluations.
A number of studies have explored the factors influencing how users perceive, judge, and evaluate anthropomorphic robots, mainly from the aspects of the uncanny valley effect [14,15], robot appearance [13,16,17,18,19], facial expressions of emotion [20,21], robot actions [22,23,24], human-robot interaction [25,26,27], and even gender [28], personality [29], and robot anxiety [30]. However, most of the aforementioned studies have tended to use traditional approaches based on questionnaire surveys and interviews, which may introduce subjective bias and make it difficult to obtain individuals’ true perceptual and cognitive information [13,14,31,32]. Recently, several studies have attempted to investigate users’ preferences, robot actions, and emotional behaviors of humanoid robots using quantitative physiological monitoring techniques, such as functional magnetic resonance imaging (fMRI) [33], eye tracking (ET) [31,34], electroencephalography (EEG) [23,35], and event-related potentials (ERPs) [23,31,33,36,37,38,39]. While these efforts are innovative and meaningful for anthropomorphic robot research, they tend to focus on responses to particular features (e.g., appearance or actions) of the same categorical robots (e.g., humanoid robots). Few have explored how the degree of anthropomorphism of human-like robots affects users’ perceptual processing and emotional responses along the time dimension of neural processing, and there is a lack of studies on robotic anthropomorphism that examine the brain areas associated with emotional processing [40].
Evaluating users’ perceptual processing, emotional processing and attitudes toward anthropomorphic robots could provide guidelines for potential improvements in the design of emotionally anthropomorphic robots. It is, therefore, important for anthropomorphic robot designers to ensure that the robot can accurately elicit users’ positive emotional responses. While Mori’s theory [10] depicted the relationship between human-likeness and affective reactions (see Figure 1), the emotional valence and arousal related to the degree of anthropomorphism of human-like robots and the corresponding underlying neural dynamics have not been fully considered. Some studies have sought to investigate the impact of the uncanny valley effect on affective responses and psychophysiological reactions. For instance, Cheetham et al. [36] examined affective experience in response to categorically ambiguous compared with unambiguous avatars and human faces by using EEG, electromyography (EMG) and self-report indices. The LPP and SAM-based measures of arousal and valence indicated a general rise in negative affective state (i.e., enhanced arousal and negative valence) with greater morph distance from the human end of the dimension of human likeness. However, the stimuli used in the study were drawn from controlled morph continua of human faces. This way of controlling anthropomorphism via the face dimension (changing outward surface features of a face) may not be applicable to real-life robots, which are more diverse and have varying degrees of anthropomorphism (e.g., middle anthropomorphic robots, toy robots, service robots, etc.).
Accordingly, the present study aimed to investigate the time course of the neural dynamics involved in users’ perceptual and emotional processing of robots with different levels of anthropomorphism, to evaluate the relationship between users’ subjective emotional valence, arousal, and electrophysiological data, and to assess users’ attitudes toward anthropomorphic robots, such as perceived likeability and warmth. In this study, we combined subjective evaluation with objective electrophysiological measurements. Forty-two participants were equipped with an EEG device to record electroencephalogram (EEG) data while performing an affective rating task, and ERPs and ERSPs were analyzed to reveal their information processing stream while viewing, perceiving, and evaluating the appearance of anthropomorphic robots. The subjective ratings were utilized to complement the objective data. Attitudes toward anthropomorphic robots were assessed through individuals’ perceived likeability and warmth. The results might advance our understanding of how the degree of anthropomorphism of a robot affects users’ perceptual or cognitive processes and emotional responses at the underlying neural level. Moreover, the findings might contribute to a better understanding of the cognitive underpinnings of the uncanny valley effect. In practice, the findings could provide reference and guidelines for the emotional design, feature design, assessment, marketing, implementation, and promotion of anthropomorphic robots for robotics designers and manufacturers.

2. Related Work

2.1. Anthropomorphism of Robots

Anthropomorphism is the attribution of external humanlike characteristics, such as a face, nose, and mouth, and internal characteristics, such as motivations, intentions, and emotions, to nonhuman or inanimate entities [7,41]. When humans recognize a schema consistent with their own in a nonhuman object, that object is regarded as anthropomorphized [41]. As a marketing method, anthropomorphism is often applied in robot design [42,43], product design [44,45,46], icon design [47], and brand promotion [48,49,50]. Anthropomorphic robots realize anthropomorphism by mimicking the physiological and psychological states of humans [21,51], as well as human-like personality traits [30,52]. Robots’ actions, emotional facial expressions, voice, and the efficiency of human-robot interaction all influence users’ perceptions and evaluations regarding anthropomorphism [21,41,53].
Regarding the measurement of robots’ anthropomorphism, most academics have concentrated primarily on traditional methods such as subjective ratings. MacDorman [54] proposed a questionnaire with a single question to assess human-likeness. Powers and Kiesler [18], in comparison, suggested a different anthropomorphism questionnaire that contained six items and appeared to have higher reliability. Bartneck et al. [42] developed a consistent questionnaire for measuring anthropomorphism using semantic differential scales based on prior research. Roesler et al. [55] used the frequency of choosing robots and the response latency of each selection to measure the preferred degree of anthropomorphism. Subjective measurement of anthropomorphism has its advantages, but it is limited by individuals’ subjective bias and may introduce psychometric noise into self-reported ratings [56]. Thus, devising an objective means to measure the anthropomorphism of human-like replicas is crucial. Some studies have explored users’ unconscious behavior by employing covert measurements of anthropomorphism. For instance, Minato et al. [57] examined participants’ gaze patterns while they viewed either a human or an android. In a follow-up study, Shimada et al. [34] examined the similarity between gaze behavior (i.e., duration and gaze orientation) toward an android robot and a human, and found that an android with a higher physical resemblance to a human evoked a more human-oriented gaze, suggesting that gaze behavior can be utilized to implicitly quantify robot anthropomorphism. Moreover, in other studies, anthropomorphism has been manipulated and measured objectively by altering the outward surface features of a face, converting a real human face into a synthetic one, or manipulating robot personality traits [30,58,59,60].
Although many studies have been conducted to evaluate and measure the anthropomorphism of robots and demonstrated that robots can be anthropomorphized through anthropomorphic appearances and behaviors, few have investigated the neural mechanisms behind how the degree of anthropomorphism of human-like robots affects individuals’ perceptual and emotional processing. Uncovering the neural dynamics underlying users’ perceptions and affective processing of robots varied in their degree of anthropomorphism may help design emotionally anthropomorphic robots.

2.2. Emotional Responses, Likeability, and Warmth on Robots

A growing body of studies has shown that the anthropomorphism of robots shapes people’s emotional responses, likeability, and warmth judgments [13,14,19,58]. Concerning emotional responses, Mori’s uncanny valley theory [10] holds significant importance and should not be disregarded, as it elucidates the relationship between robots’ human likeness and humans’ emotional responses. While Mori’s theory showed how affective responses vary with perceived human likeness, emotional valence and arousal have not been fully considered, particularly the valence dimension [10]. Additionally, most research has relied on conventional methods of emotional evaluation based on questionnaire surveys, which may introduce subjectivity bias or memory bias and constrain the elicitation of true information about individuals’ perceptual and affective processing. Thus, it is crucial to develop an objective way to quantify emotional processing while evaluating robots with different degrees of anthropomorphism. Furthermore, previous research has demonstrated that emotional valence is positively related to emotional experience, while higher emotional arousal corresponds with strongly good or poor affective experiences and lower arousal with moderate emotional experiences [61,62]. Based on the uncanny valley effect, robots with a high level of anthropomorphism often have highly anthropomorphized faces, which may give people a potentially weird, uncomfortable feeling of uncanniness and evoke a poor emotional experience, whereas robots with a low level of anthropomorphism exhibit more adorable non-human characteristics that may induce a good affective experience.
In terms of likeability, it has been argued that early favorable impressions (e.g., likeability or affinity) often result in more positive assessments of an individual [14,42,63]. It has been shown that people are prone to make judgments and form attitudes in the instants immediately after encountering another person. Since robots are regarded as social actors to some extent, it can be assumed that people judge robots in a similar manner [64,65]. In the present study, the degree to which users form favorable perceptions and evaluations of a robot’s appearance is referred to as the users’ likeability for robots. According to Mori’s hypothetical curve, likeability increases with human resemblance up to a point, but as robots become more human-like (such as robots with a high level of anthropomorphism), they begin to be perceived as very unlikeable.
With regard to human warmth, it has been associated with social attributes such as caring, niceness, and sociability, as well as friendliness and trustworthiness [13,66]. Warmth judgments have been revealed to be more relevant than competence judgments to affective and behavioral responses, even though warmth and competence have consistently emerged as distinct dimensions [13,41,67]. People’s positive attitudes have been found to be closely linked to an individual’s level of warmth; the warmer an individual is, the more positive people’s attitudes are [68]. Similar findings have been reported in recent anthropomorphic brand research [48], as well as for virtual agents [65]. The uncanny valley theory [10] suggests that affinity fades when robots become too human-like, and people experience an uncomfortable feeling of uncanniness. Thus, it can be assumed that when people anthropomorphize robots and perceive them as increasingly warm in external appearance, the increase in warmth will be perceived positively at first. However, once robots resemble humans too closely, an uncomfortable feeling of uncanniness should creep in and result in a negative shift in attitude.

2.3. Electrophysiological Components Related to Perceptual Processing, Affective Processing, and Evaluation

The ERP technique is a useful tool for unveiling the physiological aspects that affect users’ behavior and preferences, and it can also help investigate the “common scale” that makes it possible to compare users’ heterogeneous and individualized behaviors [69]. Numerous studies have explored the neural dynamics involved in the processing of affective information. However, most prior ERP studies on affect have typically utilized emotional pictures from the IAPS as stimuli [70], and some have also used other stimuli, such as artworks [71], faces [72], humidifiers [73], webpages [62], and even logos [74]. Results from these studies have identified the P1, N1, P2, N2, P3, and LPP components as sensitive to affective content and the affective processing of the stimuli. Two affective ERP components have often been the focus of emotional studies: one is the early posterior negativity (EPN), a negative potential primarily distributed across the visual cortex in the time window of 230–280 ms; the other is the late positive potential (LPP), a positive potential that typically shows long-lasting increased positivity and exhibits an onset time and cortical distribution comparable to those of the P3. In the current study, the ERP components of interest were the exogenous N1 and P2, and the endogenous LPP.

2.3.1. The N1 Component

The visual N1, which has a wide scalp distribution and peaks earlier in the frontal than in the posterior regions [75], is closely related to the early visual processing of emotional stimuli and affective pictures. For instance, Keil et al. [76] revealed that strengthened N1 can be elicited by both positive and negative stimuli relative to neutral stimuli. It has been identified that the N1 component could index the distribution of attentional resources in response to stimuli and might have an effect on perceptual processing, such as the selection and discrimination of perceptual features (i.e., color, appearance and shape) [73,77,78]. Guo et al. [37] suggested that the preferred humanoid robot appearances elicited a larger N1 amplitude. Convincing evidence also has indicated that the N1 component was firmly associated with the physical properties of stimuli [75,79]. Anthropomorphic robots with a high degree of anthropomorphism frequently possess facial features that are highly anthropomorphized, leading individuals to experience a sense of uncanniness, which is characterized by perceived negativity. On the other hand, robots with a low degree of anthropomorphism often exhibit non-human characteristics that are more adorable, resulting in individuals perceiving a feeling of cuteness, which is associated with perceived positivity. In contrast, robots with a moderate level of anthropomorphism tend to be perceived as neutral, displaying a moderate level of anthropomorphic traits. Therefore, it is postulated that there would be a higher amplitude of N1 for both high and low anthropomorphic robots, whereas a lower amplitude of N1 is expected for middle anthropomorphic robots.

2.3.2. The P2 Component

The P2, a frontal or occipital-parietal positivity, is involved in higher-level perceptual and attentional processing of visual stimuli [79,80], and larger P2 amplitudes have been shown to be elicited by emotional stimuli than by neutral stimuli [70,81,82]. Studies have previously shown that P2 is indicative of the early rapid detection of perceptual features, has been associated with attentional allocation in the early time window post-stimulus onset [83,84], and might be modulated by negative or threatening information [85,86]. In addition, the P2 component has been linked to the allocation of attentional resources for the inherent affective preference in response to stimuli with different perceptual features. For instance, Guo et al. [37] suggested that an increased P2 magnitude can be elicited by the preferred appearance of humanoid robots compared to the non-preferred one. Ma et al. [87] indicated that an enhanced P200 can be evoked by the most-preferred product designs of experience goods compared to the least-preferred designs. Due to their physical properties and emotional salience, robots with high levels of anthropomorphism frequently possess highly anthropomorphized facial features, which may elicit a non-preferred feeling compared to robots with low and middle levels of anthropomorphism. Thus, we expected to see a larger P2 for middle and low anthropomorphic robots than for high anthropomorphic robots, reflecting increased selective attention and feature detection.

2.3.3. The LPP Component

The LPP, a positive potential with wide scalp distribution, has been linked to the stimuli’s arousal and valence level, indicating it can reflect individuals’ subjective affective experience [76,88,89,90,91]. Specifically, increased LPP could be evoked by stimuli with high or low valence, and larger LPP could be elicited by higher-arousal stimuli compared to low-arousal stimuli [70,89,91,92,93]. Both positive and negative stimuli could evoke a stronger LPP response than neutral ones [70,76]. Vaitonytė et al. [40] suggested that LPP was susceptible to facial appearance at the temporal level. Convincing evidence also has revealed that the LPP component is associated with the distribution of sustained attention [89], top-down processing [94], affective evaluation [95], categorization [92], and even affective evaluative categorization in preference [73,87].

2.3.4. ERSP of Theta Band

In addition to ERPs, time-frequency measurements of neuro-oscillatory power, widely used in studies of attention [96,97,98], have been effective in characterizing neural component processes when ERPs cannot be interpreted clearly. Theta-band oscillations elicited by emotional stimuli and affective pictures have been related to the distribution of attentional resources in early perceptual processing [96,97], encoding [99], emotional discrimination [100], evaluation [37], and categorization in the late time window [101,102]. Generally, increased theta power as event-related synchronization (ERS) is elicited by both positive and negative stimuli rather than by neutral stimuli. Convincing evidence has also suggested that theta-band activations are linked with affective preference formation and can reflect the attentional distributions for affective information processing and evaluative categorization in affective preference formation [37,98]. Due to the physical features and emotional salience, we expected to see greater theta oscillations for high and low anthropomorphic robots than for middle anthropomorphic robots.
We have formulated the following hypotheses based on existing literature (see Table 1):
1. Biomarkers:
Hypothesis 1.1. Higher anthropomorphism levels will elicit a more negative N1 and increased frontal theta power in the early time window.
Hypothesis 1.2. Medium and low anthropomorphism levels will induce larger P2 responses compared to high anthropomorphism.
2. Emotional affect:
Hypothesis 2.1. High anthropomorphism will result in greater affective arousal as indicated by enhanced LPP.
Hypothesis 2.2. Medium anthropomorphism will cause more pronounced parietal-occipital theta-band oscillations than low and high anthropomorphism.
3. Level of robot anthropomorphism:
Hypothesis 3. Different levels of anthropomorphism will show distinct neural patterns in both early feature detection and later affective appraisal phases.
Table 1. The summary of hypotheses.
| Variables | Low Anthropomorphic Robots (L-AR) | Middle Anthropomorphic Robots (M-AR) | High Anthropomorphic Robots (H-AR) |
|---|---|---|---|
| Biomarkers | | | |
| N1 | — | — | More negative frontal and central N1 |
| Frontal theta power | — | — | Increased |
| P2 | Larger P2 | Larger P2 | Decreased P2 |
| Emotional affect | | | |
| Affective arousal (LPP) | — | Greater LPP | Greater affective arousal |
| Parietal-occipital theta | — | Enhanced | — |
| Overall neural patterns | Distinct patterns in early detection and later appraisal phases | Distinct patterns in early detection and later appraisal phases | Distinct patterns in early detection and later appraisal phases |

3. Method

3.1. Participants

A total of 42 college students (22 males and 20 females; aged 21 to 25 years; mean age = 22.12 years, SD = 1.24) participated in the study. All were right-handed and reported normal or corrected-to-normal vision and no history of neurological or psychiatric diseases. The participants were recruited from the undergraduate and graduate populations. Participants reported no drug or alcohol addiction and were asked to avoid stimulants (e.g., alcohol and caffeine) for 48 h before the experiment, as these may affect EEG results. All participants provided written informed consent prior to the experiment and received compensation of 80 RMB or course credits for their participation. Approval for the experiment was obtained from the Ethics Committee of Southeast University. EEG data were gathered from all 42 subjects, two of whom were excluded from the analysis due to excessive artifacts resulting from frequent leg or hand movements. Thus, the total number of subjects included in each grand average for further analysis was 40 (21 males and 19 females; aged 21 to 25 years; mean age = 22.32 years, SD = 1.38).

3.2. Stimuli

Most experimental stimuli (Figure 2) were selected from the ABOT (Anthropomorphic roBOT) Database, a collection of 251 images of real-world robots with one or more human-like appearance features [4]. It has been identified that human-like robot appearance can be divided into four distinct dimensions: surface look, body manipulators, facial features, and mechanical locomotion [4]. An overall human-likeness score for each robot can be computed from these four dimensions using the Human-Likeness Estimator proposed by Phillips et al. [4]. According to the human-likeness scores, the initial stimuli were split into three groups: a high anthropomorphic robot group (“H-AR”; human-likeness scores from 80 to 100), a middle anthropomorphic robot group (“M-AR”; scores from 40 to 65), and a low anthropomorphic robot group (“L-AR”; scores from 0 to 20). This yielded sixty-one L-ARs, forty-four M-ARs, and seven H-ARs. Due to the lack of H-ARs, we collected an additional 13 H-ARs from the Internet, whose human-likeness scores varied from 88 to 96, giving 20 high-human-likeness robots in total. We then asked five Ph.D. candidate volunteers, who were proficient in human factors and robot research but did not participate in the subsequent EEG experiment, to classify and rate the anthropomorphic robots in each group based on appearance similarities or shared characteristics on a 5-point Likert scale (1 = not similar at all, 5 = highly similar). We selected 12 stimuli with approximate similarity scores (ML-AR = 4.03; MM-AR = 4.12; MH-AR = 3.38) for each group (see Figure 2). Finally, we used one-way ANOVA to compare the human-likeness scores between the three groups, which revealed a significant difference (ML-AR = 14.340, SDL-AR = 3.968; MM-AR = 47.584, SDM-AR = 5.841; MH-AR = 92.055, SDH-AR = 2.310; p < 0.001).
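For concreteness, this group check can be sketched in MATLAB as follows. The score vectors are synthetic stand-ins drawn around the reported group means, not the study’s actual data, and all variable names are hypothetical.

```matlab
% Minimal sketch of the stimulus-group check (synthetic scores, hypothetical names).
% Draw 12 human-likeness scores per group around the reported group means.
scoresLAR = 14.3 + 4.0 * randn(12, 1);   % low anthropomorphism group
scoresMAR = 47.6 + 5.8 * randn(12, 1);   % middle anthropomorphism group
scoresHAR = 92.1 + 2.3 * randn(12, 1);   % high anthropomorphism group

scores = [scoresLAR; scoresMAR; scoresHAR];
labels = [repmat({'L-AR'}, 12, 1); repmat({'M-AR'}, 12, 1); repmat({'H-AR'}, 12, 1)];

% One-way ANOVA on human-likeness scores across the three groups
[p, ~, stats] = anova1(scores, labels, 'off');
fprintf('One-way ANOVA on human-likeness scores: p = %.4g\n', p);

% Bonferroni-corrected pairwise comparisons between groups
multcompare(stats, 'CType', 'bonferroni', 'Display', 'off');
```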

3.3. Procedures

This study was conducted in the Human Factors Engineering Lab at Southeast University, with soft lighting, a suitable temperature, and strictly controlled noise. The participants were asked to sit comfortably in a chair 650 mm from a computer screen. All stimuli (Figure 2) were presented on a 27-inch LCD monitor with a brightness of 92 cd/m² and a resolution of 1920 × 1080 pixels. The experimental task (as shown in Figure 2) was programmed in E-Prime 2.0 (Psychology Software Tools).
Before the formal experiment, the participants were asked to read the instructions provided. Notably, participants were briefed on the meanings of emotional valence and arousal during the instruction period. They were also asked to maintain their attentional focus from the appearance of the fixation cross until the pictures of the anthropomorphic robots disappeared. Once the participants finished reading the instructions, the formal experimental procedure began. Each trial began with a fixation cross that remained in the center of the screen for 1200–1600 ms to prime individuals for the EEG experiment. This was followed by the presentation of an anthropomorphic robot stimulus for 1500 ms. After viewing and perceiving the experimental stimulus, the participants were presented with two behavioral questions: “What do you think is the level of emotional valence produced by the anthropomorphic robot?” and “What do you think is the level of emotional arousal produced by the anthropomorphic robot?”. The response options were designed by fusing the 5-point Likert scale with the Self-Assessment Manikin (SAM) [103], where ‘1’ represents the lowest level and ‘5’ represents the highest level. Participants answered the two questions using a keyboard, after which the next trial started. A total of 180 trials were presented in five blocks, with the order of trials within each block randomized; the experimental stimuli appeared randomly to eliminate sequence effects. A rest period was given after each block, and the length of the rest interval was self-determined by the participants before continuing the experiment. Each participant spent approximately 40 min completing the experiment. After the experiment, participants were asked to complete a likeability questionnaire [63] (Cronbach’s alpha well above 0.84). To facilitate comprehension, the likeability scale was modified into a more accessible 5-point Likert scale, and each word (讨人喜欢的/likable, 令人感到亲切的/friendly, 待人礼貌的/kind, 令人愉快的/pleasant, 让人放心接近的/approachable) was rated on five levels [42,63]. Subsequently, the human warmth questionnaire [13] (Cronbach’s α = 0.95) was presented with five warmth-related traits (合群的/sociable, 令人感到亲切的/friendly, 充满善意的/kind, 可爱的/likable, 充满温情的/warm) on a 5-point Likert scale, and the participants were asked to respond with how close the descriptions were to their own feelings.

3.4. Electroencephalogram Data Recordings and Preprocessing

The EEG data were continuously recorded (band-pass 0.05–100 Hz, at a 1000 Hz sampling rate) by the Brain Vision actiCHamp EEG system (Brain Products, Munich, Germany) (Figure 3a) with 64 Ag/AgCl electrodes (Figure 3b). The electrodes were mounted on an elastic electrode cap according to the international 10–20 system [104], and all sixty-four channels were utilized for recording the EEG signals. The FCZ electrode served as the reference electrode, and FPZ served as the ground electrode. Inter-electrode impedances were kept below 5 kΩ throughout the experiment.
The EEG data were preprocessed offline in MATLAB 2013b (MathWorks Inc., Natick, MA, USA) using the EEGLAB 13 toolbox (Swartz Center for Computational Neuroscience, UCSD; http://sccn.ucsd.edu/eeglab (accessed on 5 June 2023)). The raw EEG recordings were re-referenced to the average of the TP9 and TP10 channels, down-sampled to 500 Hz, and filtered with a 0.1 Hz high-pass and a 30 Hz low-pass filter. Segments with a low signal-to-noise ratio (SNR) were then excluded, and independent component analysis (ICA) was performed. Artifacts (e.g., electro-oculogram (EOG), electromyogram (EMG), and sweat) were corrected by employing the ADJUST 1.1.1 plugin in the EEGLAB toolbox. The EEG data were epoched from 200 ms prior to the onset of the anthropomorphic robots to 1000 ms after their presentation, with the first 200 ms serving as the baseline. Epochs of each trial type were then categorized, and epochs with amplitudes exceeding a ±80 μV threshold were rejected [105,106]. Z-score normalization was performed on the ERP amplitudes across subjects, converting the amplitudes into standard scores; these steps ensure that the ERP data are comparable across subjects, mitigating potential biases due to scale differences [79]. After rejection, at least 56 trials per participant were available for each type of anthropomorphic robot. The rejection rates for H-AR\M-AR\L-AR were 6.67%, 5%, and 3.33%, respectively, with no significant group differences (F (2, 57) = 0.355, p = 0.703). The grand average waveforms for the different anthropomorphic robots are depicted in Figure 4.
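A condensed EEGLAB sketch of this pipeline is given below. It uses standard EEGLAB and plugin calls (pop_loadbv from the bva-io plugin; artifactual components identified with ADJUST); the file name and the event code 'robot_stim' are hypothetical placeholders, and the study’s actual scripts may differ in detail.

```matlab
% Condensed sketch of the preprocessing pipeline (file/event names hypothetical).
EEG = pop_loadbv('rawdata/', 'sub01.vhdr');            % BrainVision recording (bva-io)
EEG = pop_resample(EEG, 500);                          % down-sample to 500 Hz
EEG = pop_eegfiltnew(EEG, 0.1, 30);                    % 0.1-30 Hz band-pass
refIdx = find(ismember({EEG.chanlocs.labels}, {'TP9', 'TP10'}));
EEG = pop_reref(EEG, refIdx);                          % re-reference to TP9/TP10 average
EEG = pop_runica(EEG, 'extended', 1);                  % ICA decomposition
% ...artifactual components (EOG, EMG, sweat) identified with ADJUST and removed...
EEG = pop_epoch(EEG, {'robot_stim'}, [-0.2 1.0]);      % -200 to 1000 ms epochs
EEG = pop_rmbase(EEG, [-200 0]);                       % pre-stimulus baseline correction
EEG = pop_eegthresh(EEG, 1, 1:EEG.nbchan, -80, 80, ...
                    -0.2, 1.0, 0, 1);                  % reject epochs beyond ±80 uV
```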

3.5. Data Analysis

3.5.1. ERP Analysis

As shown in Figure 4 and Figure 5, the N1 (110–140 ms), P2 (240–310 ms), and LPP (400–800 ms) were elicited at the different anthropomorphic levels of robots. Based on visual inspection, the N1 was more obvious at the anterior sites, while the P2 and LPP components were pronounced at both anterior and posterior sites. Thus, three electrode clusters, including the frontal (F3, FZ, F4), frontal-central (FC3, FCZ, FC4), and central (C3, CZ, C4) regions, were selected for the N1 component analysis. The P2 and LPP component analysis was performed at six electrode clusters, including the frontal (F3, FZ, F4), frontal-central (FC3, FCZ, FC4), central (C3, CZ, C4), central-parietal (CP3, CPZ, CP4), parietal (P3, PZ, P4), and parietal-occipital (PO3, POZ, PO4) locations. The averaged amplitude of N1 in the 110–140 ms time window was analyzed in a 3 (Type of robot: high, middle, and low anthropomorphic robots; H-AR\M-AR\L-AR) × 3 (Location: frontal, frontal-central, central; F, FC, C) two-way ANOVA. The averaged amplitude of each of the P2 and LPP components was analyzed in a 3 (Type of robot: H-AR\M-AR\L-AR) × 6 (Location: frontal, frontal-central, central, central-parietal, parietal, and parietal-occipital; F, FC, C, CP, P, PO) two-way ANOVA. When the data failed the sphericity tests, the Greenhouse–Geisser correction of degrees of freedom was applied, and the Bonferroni correction was employed for post hoc testing as needed.
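For illustration, the sketch below extracts the cluster-averaged mean amplitude in the N1 window from a subject-level ERP. The erp matrix and times vector are hypothetical, synthetic stand-ins for the epoched average and its latency axis; the resulting per-subject values would then be submitted to the ANOVA.

```matlab
% Sketch of mean-amplitude extraction for one cluster/window (names hypothetical).
chanLabels = {'F3', 'FZ', 'F4'};                 % frontal cluster channels
times = -200:2:998;                              % latency axis in ms (500 Hz)
erp   = randn(numel(chanLabels), numel(times));  % synthetic channels x timepoints ERP (uV)

n1Win = times >= 110 & times <= 140;             % N1 window

% Average over the cluster channels, then over the time window
n1Frontal = mean(mean(erp(:, n1Win), 1), 2);

% Repeated per subject, robot type, and cluster, these means form the
% 3 (type) x 3 (location) cells submitted to the repeated-measures ANOVA.
```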

3.5.2. Event-Related Spectral Perturbations (ERSPs) Analysis

We reprocessed the raw EEG data with a 0.15 Hz high-pass filter. The other preprocessing steps were identical to those of the ERP analysis, except that a longer epoch for each trial type, from 500 ms pre-stimulus onset to 1500 ms post-stimulus onset, was segmented for the time-frequency analysis. To gain a comprehensive view of neural oscillations across anterior and posterior sites, time-frequency decomposition was carried out on two electrode channel clusters, a frontal cluster (AF3, AFZ, AF4, F3, FZ, F4, FC3, FCZ, FC4) and a parietal-occipital cluster (CP3, CPZ, CP4, P3, PZ, P4, PO3, POZ, PO4), each of which was an average of the selected channels. These channels were chosen mainly based on visual inspection of topographic maps and prior work. Each epoch was split into 200 time points ranging from −372 ms to 1372 ms. The EEG signals were decomposed by short-time Fourier transformation with Hanning window tapering, as implemented in the EEGLAB function newtimef.m. Using a sliding window of 256 ms with a step size of 10 ms and a filling ratio of 4 (the default value), the spectral power at each time point in each EEG epoch was calculated, yielding 48 linear-spaced frequencies ranging from 3 to 50 Hz [107]. Subsequently, baseline correction was implemented according to the gain model [108], and the spectral power of each time-frequency point was divided by the mean pre-stimulus baseline power of the corresponding frequency. Finally, the spectral power at each time point and frequency was averaged across all segments within each trial type for each cluster. ERSPs were analyzed in the early time window (50–380 ms) and the late time window (400–1000 ms) for the two channel clusters within the theta band (3–8 Hz). The theta band was selected based on previous work suggesting that it is associated with attentional distribution and affective preference [37,96,98]. The theta power was analyzed using a 3 (Type of robot: high, middle, and low anthropomorphic robots; H-AR\M-AR\L-AR) × 2 (Cluster: frontal, parietal-occipital) analysis of variance (ANOVA). The Greenhouse–Geisser and Bonferroni corrections were employed whenever necessary.
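The decomposition described above maps onto a newtimef call roughly as follows. clusterData is a hypothetical, synthetic stand-in for the cluster-averaged epoched signal; the remaining parameters mirror the text.

```matlab
% Sketch of the time-frequency decomposition with EEGLAB's newtimef.
srate = 500; pnts = 1000; ntrials = 56;        % epoch: -500 to 1498 ms at 500 Hz
clusterData = randn(1, pnts, ntrials);          % synthetic cluster-averaged epochs

[ersp, ~, ~, tfTimes, tfFreqs] = newtimef(clusterData, pnts, ...
    [-500 1500], srate, 0, ...                  % cycles = 0 -> short-time FFT, Hanning taper
    'winsize', 128, ...                         % 256 ms sliding window at 500 Hz
    'timesout', 200, ...                        % 200 output time points
    'padratio', 4, ...                          % zero-padding (filling) ratio
    'freqs', [3 50], 'nfreqs', 48, ...          % 48 linear-spaced frequencies, 3-50 Hz
    'baseline', [-500 0], ...                   % divisive pre-stimulus baseline
    'plotersp', 'off', 'plotitc', 'off');

% Mean theta-band (3-8 Hz) ERSP in the late window (400-1000 ms)
thetaIdx  = tfFreqs >= 3 & tfFreqs <= 8;
lateIdx   = tfTimes >= 400 & tfTimes <= 1000;
thetaLate = mean(mean(ersp(thetaIdx, lateIdx), 1), 2);
```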

3.5.3. Statistical Analysis

All statistical analyses of the subjective ratings, ERP, and ERSP data were performed in IBM SPSS Statistics 26.0. The normality of the data was verified using the Kolmogorov–Smirnov test; the results indicated that the data were normally distributed and that the variance was homogeneous (p > 0.05). When necessary, the Greenhouse–Geisser correction of degrees of freedom was applied, and the Bonferroni correction was employed for post hoc testing. The alpha level was set at 0.05 for all statistical tests.
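Although the inferential tests were run in SPSS, the normality check and Bonferroni adjustment can be sketched in MATLAB as follows; the ratings vector and p-values are synthetic, hypothetical stand-ins.

```matlab
% Sketch of the normality check and Bonferroni adjustment (names hypothetical).
ratings = randn(40, 1);                 % synthetic stand-in for 40 per-subject scores

% One-sample Kolmogorov-Smirnov test against a standard normal;
% z-scoring first so the comparison distribution matches.
[h, p] = kstest(zscore(ratings));
fprintf('K-S test: h = %d, p = %.3f (h = 0: normality not rejected)\n', h, p);

% Bonferroni correction: multiply each raw p-value by the number of
% comparisons and cap at 1.
rawP = [0.012 0.030 0.240];             % hypothetical post hoc p-values
pAdj = min(rawP * numel(rawP), 1);
```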

4. Results

4.1. Results for Subjective Rating Data

Emotional valence and arousal ratings: The analysis revealed that the type of anthropomorphic robot had significant effects on the ratings of emotional valence (F (2, 40) = 87.294, p < 0.001, ηp² = 0.599) and emotional arousal (F (2, 40) = 5.849, p = 0.005, ηp² = 0.170). Further multiple comparisons (Figure 6a) revealed that L-AR had higher mean valence scores (M = 3.499, SD = 0.393) than H-AR (M = 2.194, SD = 0.599) (p < 0.001) and M-AR (M = 3.089, SD = 0.314) (p < 0.001), and the averaged valence of M-AR was greater than that of H-AR (p < 0.001). The averaged arousal scores of both L-AR (M = 3.230, SD = 0.707) and H-AR (M = 3.223, SD = 0.707) were significantly larger than that of M-AR (M = 2.510, SD = 0.409) (Figure 6b).
Likeability and warmth ratings: Results on the likeability of the different anthropomorphic robots showed a significant effect (F (2, 40) = 31.829, p < 0.001, ηp² = 0.352). The averaged likeability of L-AR (M = 3.421, SD = 0.385) was significantly larger than that of H-AR (M = 2.429, SD = 0.866) (p < 0.001) and numerically larger than that of M-AR (M = 3.263, SD = 0.406) (p = 0.238), and the averaged likeability of M-AR was significantly larger than that of H-AR (p < 0.001) (see Figure 7a). There was also a significant effect of the type of robot on warmth scores (F (2, 40) = 16.484, p < 0.001, ηp² = 0.220). The mean warmth rating of H-AR (M = 2.404, SD = 0.563) was smaller than that of M-AR (M = 3.641, SD = 0.349) (p < 0.001) and L-AR (M = 3.534, SD = 0.394) (p < 0.001), whereas no significant difference (Figure 7b) between M-AR and L-AR was observed (p = 0.840).

4.2. ERP Results

4.2.1. N1 Component (110–140 ms)

ANOVA results for N1 showed significant main effects of type of robot (F (2, 78) = 7.514, p = 0.001, ηp² = 0.162) and location (F (1.159, 45.191) = 16.927, p < 0.001, ηp² = 0.303). In addition, there was a significant interaction between type of robot and location (F (2.379, 92.774) = 4.699, p = 0.008, ηp² = 0.108). Pairwise comparisons showed significant differences between types of robots, with larger negative amplitudes for H-AR (M = −2.726, SE = 0.367) and L-AR (M = −2.513, SE = 0.362) than for M-AR (M = −1.410, SE = 0.437) (H-AR > M-AR, p = 0.005; L-AR > M-AR, p = 0.025). Bonferroni post hoc multiple comparisons (see Table 2) demonstrated that H-AR and L-AR elicited more negative amplitudes than M-AR at frontal and frontal-central locations, and that the N1 was larger at these sites than at central sites (F > C, p = 0.001; FC > C, p < 0.001). Regarding the latency of the N1 component, there was also a significant main effect of the type of robot (F (2, 78) = 4.858, p = 0.01, ηp² = 0.111); compared to M-AR, the N1 peaked earlier for H-AR.

4.2.2. P2 Component (240–310 ms)

In the time course of 240–310 ms after the stimuli, the main effects of type of robot (F (2, 78) = 53.392, p < 0.001, ηp² = 0.600) and location (F (1.211, 47.217) = 55.613, p < 0.001, ηp² = 0.588) reached significance. There was also a significant interaction between type of robot and location (F (2.571, 100.282) = 20.057, p < 0.001, ηp² = 0.340). Pairwise comparisons revealed significant differences between types of robots, with greater amplitude for M-AR (M = 7.946, SE = 0.695) than for L-AR (M = 6.927, SE = 0.672) and H-AR (M = 3.182, SE = 0.537) (M-AR > H-AR, p < 0.001; M-AR > L-AR, p = 0.021; L-AR > H-AR, p < 0.001). Simple effects of type were significant at frontal (F (2, 78) = 41.62, p < 0.001), frontal-central (F (2, 78) = 61.78, p < 0.001), central (F (2, 78) = 68.39, p < 0.001), central-parietal (F (2, 78) = 69.57, p < 0.001), parietal (F (2, 78) = 45.92, p < 0.001), and parietal-occipital (F (2, 78) = 24.96, p < 0.001) sites.

4.2.3. LPP Component (400–800 ms)

Statistical analysis of the LPP component revealed significant main effects of type of robot (F (2, 78) = 15.816, p < 0.001, ηp² = 0.289) and location (F (1.254, 48.916) = 12.926, p < 0.001, ηp² = 0.249), and their interaction also reached significance (F (2.858, 111.473) = 4.828, p = 0.004, ηp² = 0.110). Pairwise comparisons revealed significant differences between types of robots, with greater amplitude for M-AR (M = 8.402, SE = 0.954) than for L-AR (M = 6.371, SE = 0.967) and H-AR (M = 5.764, SE = 0.749) (M-AR > L-AR, p < 0.001; M-AR > H-AR, p < 0.001). Simple effects of type were significant at frontal (F (2, 78) = 18.65, p < 0.001), frontal-central (F (2, 78) = 17.92, p < 0.001), central (F (2, 78) = 16.91, p < 0.001), central-parietal (F (2, 78) = 14.80, p < 0.001), parietal (F (2, 78) = 10.32, p < 0.001), and parietal-occipital (F (2, 78) = 7.93, p = 0.001) regions.

4.3. ERSP Results

ANOVA results for theta power in the early time window (50–380 ms) revealed that only the main effect of cluster was statistically significant (F (1, 39) = 20.931, p < 0.001, ηp² = 0.349), with power being greater for the parietal-occipital cluster (M = 2.571 dB, SE = 0.111) than for the frontal cluster (M = 2.044 dB, SE = 0.091). No significant main effect of the type of robot was observed (F (2, 78) = 1.213, p = 0.303, ηp² = 0.030). There was a significant interaction between the type of robot and cluster (F (2, 78) = 4.182, p = 0.019, ηp² = 0.097). Post hoc analyses revealed that the theta power of H-AR (M = 2.214 dB, SE = 0.136) and M-AR (M = 2.229 dB, SE = 0.186) was larger than that of L-AR (M = 1.689 dB, SE = 0.151) (p = 0.044) in the frontal cluster.
Analysis of the late time window (400–1000 ms) showed that the main effect of the type of robot was significant (F (2, 78) = 5.256, p = 0.007, ηp² = 0.119), with ERSP being greater for M-AR (M = 1.236 dB, SE = 0.181) than for H-AR (M = 0.573 dB, SE = 0.114) and L-AR (M = 0.893 dB, SE = 0.170). The main effect of cluster did not reach significance (F (1, 39) = 2.438, p = 0.126, ηp² = 0.059). There was a significant interaction between the type of robot and cluster (F (2, 78) = 8.633, p < 0.001, ηp² = 0.181). Simple effect analysis revealed that the perturbations for H-AR (M = 0.453 dB, SE = 0.154) were smaller than those for M-AR (M = 1.353 dB, SE = 0.139; p < 0.001) and L-AR (M = 1.087 dB, SE = 0.188; p = 0.022) in the parietal-occipital cluster (see Table 3 for group means, Figure 8 for spectrograms, and Figure 9 for the interaction effect of theta power).

4.4. Correlations between Emotional Responses, ERPs, and ERSP

The results of correlation analyses after Bonferroni correction are shown in Figure 10. The findings revealed that L-AR valence had significant positive correlations with L-AR arousal. The valence of H-AR correlated negatively with H-AR arousal. P2 and LPP of L-AR both had positive correlations with L-AR arousal. The P2 of H-AR had significant positive correlations with the LPP of H-AR, and the P2 of M-AR correlated significantly positively with the LPP of M-AR. Notably, we discovered that the frontal theta power of H-AR, M-AR, and L-AR all had significant positive correlations with both early and later theta rhythm power in parieto-occipital regions.
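A sketch of such a correlation analysis, assuming the behavioral and EEG measures are collected column-wise in a hypothetical matrix X, could look like this.

```matlab
% Sketch of the correlation analysis (X is a hypothetical 40 subjects x m
% matrix of measures, e.g., valence, arousal, P2, LPP, theta per robot type).
X = randn(40, 6);                            % synthetic stand-in for real measures
[r, pRaw] = corr(X, 'type', 'Pearson');      % pairwise Pearson correlations

% Bonferroni correction over the m*(m-1)/2 unique measure pairs
m     = size(X, 2);
nComp = m * (m - 1) / 2;
pAdj  = min(pRaw * nComp, 1);                % corrected p-values (diagonal ignored)
```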

5. Discussion

Robots are becoming more prevalent in human social life and play a significant role in a range of industries. The emotional experience prompted by the visual appearance of an anthropomorphized robot plays a critical role in affecting users’ behaviors. Moreover, individuals’ attitudes toward robots with varying degrees of anthropomorphism are also important. The purpose of this study was to examine the time course of the neural processing underlying human perceptions of and emotional responses to three types of robots (H-AR\M-AR\L-AR) with varying levels of anthropomorphism, and to evaluate individuals’ subjective emotional valence and arousal, as well as attitudes toward the stimuli, such as likeability and perceived warmth. This research combined subjective ratings with ERP and ERSP measurements to characterize the components of neural cognitive processing. The findings might contribute to a better understanding of users’ perceptual and emotional processing of anthropomorphized robots and facilitate the design of emotionally anthropomorphic robots.

5.1. Behavioral Results Discussion

The subjective rating results indicated that emotional valence is negatively correlated with the robot’s level of anthropomorphism, while emotional arousal follows a U-shaped function of anthropomorphism, with higher arousal at the low anthropomorphic level, a decrease at the middle level, and a re-increase at the high level. Consistent with Bradley and Lang [109], our results suggest that low anthropomorphic levels may induce positive emotional experiences and that extreme emotional valence is often related to high emotional arousal for positive or negative stimuli. Regarding likeability, the results showed that the likeability of M-AR was larger than that of H-AR, which is in line with previous studies [14,15,19]. These studies suggested that likeability increases with human likeness, but that robots are perceived as distinctly unlikeable and induce negative attitudes once they become too human-like [10,14]. However, in the present study, we also found that L-AR had higher likeability than M-AR. A likely account is that the L-AR stimuli included more adorable non-human characteristics, such as eyes, legs, or arms, compared to the M-AR stimuli. In addition, L-AR can be perceived as less threatening to human distinctiveness relative to M-AR. For human warmth, the averaged warmth rating was smallest in the H-AR condition, while the warmth means were relatively close between the M-AR and L-AR conditions. These results were partially consistent with the findings of Kim et al. [13], suggesting that once robots become too human-like, an uncomfortable feeling of uncanniness could appear and lead to less positive attitudes. The slight difference between L-AR and M-AR might be because the experimental stimuli for L-AR were entirely different from the stimulus (Ethon 2) in Kim’s study; the L-AR and M-AR stimuli in the current study all had likable product shapes and distinctive appearance characteristics. Thus, L-AR might have a perceived warmth comparable to that of M-AR. These findings suggest that the degree of anthropomorphism of robots may play an important role in affecting users’ perceptual and emotional processing, as well as their judgments of robots.

5.2. ERPs Related to Anthropomorphic Robots

5.2.1. N1 (110–140 ms): An Early Perceptual Detection of Anthropomorphic Robot Features (Hypotheses 1.1 and 3)

Consistent with prior emotional ERP research showing that affective stimuli can elicit an enhanced N1 component compared to neutral stimuli [70,76,77], in the present study the N1 amplitudes of H-AR (high arousal and low valence) and L-AR (high arousal and high valence) were significantly more negative than those of M-AR (middle arousal and middle valence) at the anterior sites (supporting H1.1 and H3). A possible explanation is that H-AR and L-AR, with their high arousal scores, may be more inclined to draw attention from participants early in the information processing stream. Furthermore, a large number of studies on robot design have confirmed that human likeness [110], appearance [37], actions [22,24], and automation [26] can influence a user’s perceptual process, attentional distribution, preference, and even subsequent behaviors. Prior ERP studies have found that the N1 component is firmly related to the physical properties of events [75,79], and it has also been associated with selective attention and discrimination [71,73,111]. In the present study, both H-AR and L-AR exhibited relatively prominent appearances compared to M-AR; thus, both received enhanced attention allocation. Since roughly 125 ms is a very early time point and information processing at this phase probably occurs subconsciously, the nervous system might only detect a few features of the stimulus pictures. We also observed shorter N1 latencies for H-AR than for L-AR. H-AR was perceived as more human-like compared with L-AR, and H-AR with highly anthropomorphic faces might be more cognitively accessible and more readily anthropomorphized during the face processing stage. Thus, H-AR might attract individuals’ attention more quickly for face form detection relative to L-AR in the early phase of perceptual processing (partially supporting H3).

5.2.2. P2 (240–310 ms): A Selective Attentional Allocation of Anthropomorphic Robot Features (Hypotheses 1.2 and 3)

Previous studies have shown that P2 enhancement is elicited by stimuli with positive or negative valence compared to stimuli with neutral valence [82]. In the present study, however, larger P2 amplitudes were observed for M-AR and L-AR than for H-AR over a wide region across the scalp, and M-AR had greater P2 amplitudes than L-AR in the parietal and occipital regions (partially supporting H1.2 and H3). Prior research has revealed that P2 is sensitive to early stimulus classification and reflects the early attentional bias toward the characteristics of the stimulus itself [85,112]. Several previously reported studies have also shown that natural selective attention can occur and account for variation in amplitudes during middle latency [62,78,91]. These studies suggested that some features of the stimulus itself may be more emotionally stimulating, prompting users to allocate more or earlier attention to the stimulus pictures. We tentatively attribute the above results to differences in physical properties. In the present study, H-ARs (high arousal) were much more human-like and had a high level of anthropomorphism with more prominent human characteristics (half-bodies); thus, H-AR might be more cognitively accessible, more readily sought by perceivers, and identified or distinguished rapidly and automatically by users relative to L-AR and M-AR. Both M-AR and L-AR, with relatively moderate or less prominent human characteristics, required users to devote more effort and attention to acquiring information for the affective evaluation. Accordingly, P2 was more strongly activated by M-AR and L-AR than by H-AR. Consequently, the smaller P2 amplitudes for H-AR are probably indicative of a feature detection process that is responsive to high levels of anthropomorphism. In addition, relative to L-AR and M-AR, H-AR with more prominent humanlike design characteristics was perceived as much more human-like and could give individuals a potentially weird, uncomfortable feeling of uncanniness (participants’ post hoc behavioral responses supported this) [10,13,43,56,113], which might recruit more or earlier attentional resources to respond to stimuli rapidly and automatically [114,115]. Thus, in the current study, face form detection and animacy perception of H-AR might be facilitated and completed in this time course [56,116,117]. Moreover, the P2 peaked earlier for H-AR than for L-AR and M-AR, suggesting that individuals can detect features faster for high anthropomorphic stimuli than for low ones. In congruence with Chammat et al. [20], this study suggests that the appearance of humanoid robots (L-AR and M-AR) can engage more attentional resources in detection and encoding. Noticeably, the P2 of M-AR was greater in posterior areas than that of L-AR, while no difference was found between M-AR and L-AR in the anterior regions. The main reason may be that M-AR recruited enhanced attentional resources for working-memory encoding in posterior areas compared to the L-AR condition, whereas L-AR and M-AR devoted approximately equal attentional resources to rapid feature detection in frontal regions.

5.2.3. LPP (400–800 ms): An Affective Evaluation, Categorization and Motivated Attention of Anthropomorphic Robots (Hypotheses 2.1 and 3)

Prior research has demonstrated that LPP is linked to the stimuli’s arousal and valence level [76,88,90,93,118]. In contrast to neutral stimuli, stimuli with high or low valence can elicit an enhanced LPP, and stimuli with higher arousal scores can evoke a greater LPP than low-arousal stimuli [70,89,92]. However, in the present study, a larger LPP (during the 400–800 ms interval) was elicited by M-AR than by L-AR and H-AR, while M-AR held low arousal and middle valence ratings (participants’ affective evaluation results) relative to L-AR and H-AR (not supporting H2.1, but supporting H3). The difference might be attributed to the experimental stimuli used in the study, which held different levels of anthropomorphism and appeared to induce distinct neural patterns. Moderately human-like appearances appeared not to be fearful or ugly enough to evoke negative emotional experiences [37,119]. LPP has previously been reported to be associated with sustained attention allocation, top-down processing influences, evaluation of emotional stimuli, subjective affective experience, and categorization processes [40,78,79,89,118,120,121,122,123]. In the current study, when participants made an affective evaluation of the picture stimuli and then gave the corresponding emotional valence and arousal scores, more attentional resources and more heavily weighted evaluative judgments might have been motivationally assigned to M-AR, owing to its distinct physical attributes (i.e., relatively and moderately anthropomorphic characteristics, such as eyes, legs, and a face), through top-down modulation, inducing a higher level of arousal. The LPP enhancement for M-AR might reflect the top-down control and motivated attention needed for the affective evaluation. Furthermore, this result is also in agreement with the findings of Jacobsen and Höfel [124], suggesting that aesthetic discrimination of preference induces a sustained LPP. Evaluating M-AR appears to activate the arousal of inherent affect in preference formation for human-like appearances [37] and the association of knowledge in long-term memory, and involves top-down processing. This may have contributed more to positive emotion. Thus, a significantly greater LPP was observed for M-AR than for L-AR and H-AR. Noticeably, we also found that the LPP for M-AR and L-AR had a wider scalp distribution than that for H-AR. This scalp distribution was partially in line with the findings of Wang and Quadflieg [125], who suggested that more cortical regions are active for the cognitive processing of humanoid robots than for human beings (H-AR was perceived as more human-like). In the same vein, it was also partially consistent with the findings of Cheetham et al. [36], who reported a lower scalp topographical distribution of the LPP for human and humanlike faces than for ambiguous ones.

5.3. ERSP Related to Anthropomorphic Robots (Hypotheses 1.1, 2.2 and 3)

Prior studies have shown that early theta-band oscillations are involved in the processing of affective stimuli, the formation of affective preference during the early perceptual phase, and the encoding of stimulus characteristics in working memory [37,73,96,98,126]. In the present study, greater theta-band activation was elicited in the parietal-occipital cluster than in the frontal cluster within the early time window (50–380 ms). A possible explanation is that an enhanced allocation of attentional resources was used for encoding the anthropomorphic stimuli in working memory. Dissociations appear to have occurred between the anterior and posterior theta-band oscillations in the mechanisms of visual feature detection and attentional processing they reflect. As defined by the two-stage concept of affect and attention, the information processing stream consists of two stages, in which further affective evaluation and categorization take place in the late stage based on the information processed in the early stage [91,127]. Moreover, studies have reported that information processing in the late stage is typically modulated by a particular goal and that theta-band oscillations might vary with motivated attention [102,128]. In the present study, participants were required to make an affective evaluation of the picture stimuli and subsequently give the corresponding emotional valence and arousal ratings. With this task target, individuals might further motivationally process the pre-processed information from the early perceptual processing phase according to their own preferences. In the current study, individuals tended to prefer M-AR and L-AR over H-AR (not supporting H1.1). M-AR and L-AR showed larger theta-band ERS than H-AR across the late time window (400–1000 ms) (partially supporting H2.2, and supporting H3). This result is in accordance with prior studies showing that preferred appearances can draw more attentional resources through inherent positive affect and elicit increased theta-band activity [37,129].

5.4. Correlations of EEG and Behavioral Measures, and the Two Stages

The results revealed that valence correlated positively with arousal for L-AR but negatively for H-AR. A possible explanation lies in differences in physical properties: compared with L-AR, H-AR was perceived as far more human-like and tended to give individuals a potentially uncomfortable feeling of uncanniness. The P2 amplitudes for H-AR and M-AR were significantly associated with the LPP, suggesting a link between the early perceptual processing and the later evaluative processing reflected in the ERPs. Regarding theta power across the two stages, the frontal-cluster theta power for H-AR, M-AR, and L-AR all correlated positively with both early and late theta power in the parieto-occipital cluster. The correlations between the frontal and parieto-occipital clusters might indicate internal functional coupling within the brain, and the correlations between the early and late time windows might signify the transfer of perceptual information from the early perceptual processing of physical properties to the later evaluation of preference. This interpretation is supported by the two-stage concept of affect and attention [91,127].
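The rank correlations described here can be computed with a standard Spearman test; below is a minimal, self-contained sketch using SciPy, with randomly generated stand-in arrays in place of the actual per-participant ratings.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)          # stand-in data for illustration only
valence_lar = rng.normal(6.0, 1.0, 42)  # hypothetical L-AR valence ratings
arousal_lar = 0.4 * valence_lar + rng.normal(0.0, 0.8, 42)

rho, p = spearmanr(valence_lar, arousal_lar)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```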

5.5. Limitations and Future Research

This study has several limitations that should be acknowledged. Firstly, this research used static images, whereas real-world situations involve dynamic entities with artificial anthropomorphic speech and robot interaction; future experiments could employ dynamic robots or robots with verbal interaction. Secondly, only three types of anthropomorphic robots were included, excluding industrial robots and uncanny-valley stimuli such as zombies and animated characters; future research should consider these robots as factors in the experimental design. Thirdly, some L-AR stimuli were not controlled for face direction or picture viewpoint, and face direction might influence users’ attention. In addition, the whole-body appearances of L-AR and M-AR look markedly different from the half-body views of H-AR, which may have biased the results; future studies should strictly control these factors. Fourthly, the sample was homogeneous, consisting primarily of university students familiar with technology. This familiarity may have influenced their perception of the robots, potentially leading to more favorable usability and acceptance outcomes. Future research should include a more diverse sample with varying levels of technological exposure to examine how familiarity affects interaction with robotic devices, and it would also be beneficial to investigate the effects of training and exposure in populations less familiar with technology, as suggested by prior research [130,131]. Fifthly, although the current study provided a potential link between the P2 component and anthropomorphism, further electrophysiological work is needed to better understand the cognitive basis of anthropomorphism and, eventually, to define and manipulate anthropomorphism so as to further explore human cognition and the uncanny valley hypothesis. Finally, ERPs and ERSPs were used to analyze the perceptual and emotional processing of H-AR, M-AR, and L-AR in the time and time-frequency domains; future research could add spatial analyses of functional brain regions by combining EEG with fNIRS or fMRI to complement the present results.

6. Conclusions

Robots are increasingly being used in human social life and play a vital role in numerous fields. The current study used electrophysiological techniques combining ERPs and ERSPs to investigate the time course of how the degree of anthropomorphism of human-like robots affects users’ perceptual and emotional processing, as well as to assess individuals’ attitudes toward them. Anthropomorphic robots with three levels of anthropomorphism were used as stimuli in an affective rating task. Forty-two participants viewed, perceived, and gave emotional ratings of the robots while EEG data were recorded.
The behavioral results suggest that emotional valence was negatively correlated with the robot’s level of anthropomorphism, whereas emotional arousal followed a U-shaped function of anthropomorphism. The likeability of L-AR was highest, while that of H-AR was lowest, and perceived warmth was lowest for highly anthropomorphic robots compared with low and middle ones. These findings suggest that the degree of anthropomorphism may play an important role in users’ perceptual and emotional processing, as well as in their judgments of robots.
The EEG results of the present study suggest that, in the early time window, H-AR and L-AR elicited a larger exogenous frontal and central N1 than M-AR, reflecting increased attention during the early perceptual processing stage. However, M-AR and L-AR induced a larger P2 than H-AR, indicating that M-AR and L-AR attracted more selective attention; the smaller P2 with earlier peak latencies for H-AR may point to the cognitive underpinnings of the uncanny valley. At a later stage, M-AR evoked a larger LPP than H-AR and L-AR across a wide scalp area, indicating increased arousal, enhanced directed attention, and affective preference. Theta-band ERS results indicate different neural patterns for H-AR, M-AR, and L-AR in the early and late time windows. Specifically, theta-band oscillations were greater in the parietal-occipital cluster than in the frontal cluster in the early time window, indicating that enhanced attention was devoted to encoding stimulus information in working memory and that anterior and posterior theta-band oscillations dissociated. In the late time window, M-AR and L-AR showed larger theta-band ERS than H-AR, indicating enhanced attentional and affectively preferred responses. These findings suggest that the degree of anthropomorphism elicited differential neural perceptual and emotional processing of H-AR, M-AR, and L-AR, occurring not only in the early feature detection and selective attention allocation phase but also in later affective appraisal processes. Early face-form detection and animacy perception may be completed in the early time phase, and the P2, which is related to higher-order visual processing, may serve as an indicator of the level of anthropomorphism. These findings imply that robot designers can monitor and understand a user’s perceptual and emotional processing of an anthropomorphic robot based on neurophysiological components, which may help them design and evaluate anthropomorphic robots that meet consumers’ affective expectations. This study extends anthropomorphic robot design research on emotional processing by using electrophysiological methods. Both robot designers and manufacturers may benefit from this approach, which provides reference and guidelines for the emotional design, feature design, assessment, and promotion of anthropomorphic robots.

Author Contributions

Conceptualization, J.W.; Methodology, Y.L.; Software, J.W.; Formal analysis, J.W.; Investigation, J.W. and X.D.; Resources, W.T.; Writing—original draft, J.W.; Writing—review & editing, J.W.; Supervision, C.X. All authors have read and agreed to the published version of the manuscript.

Funding

The presented study was jointly sponsored by the Equipment Pre-Research Foundation of China (Grant No. 41412040304), the National Key R&D Program of China (Grant No. 2022YFF0607000), the China Scholarship Council (Grant No. 202306090220), and the National Natural Science Foundation of China (Grant Nos. 72271053 and 71871056).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of IEC for Clinical Research of Zhongda Hospital, Affiliated to Southeast University (protocol code 2021ZDSYLL201-P01).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Broekens, J.; Heerink, M.; Rosendal, H. Assistive Social Robots in Elderly Care: A Review. Gerontechnol. Int. J. Fundam. Asp. Technol. Serve Ageing Soc. 2009, 8, 94–103. [Google Scholar] [CrossRef]
  2. Robertson, J. Robo Sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation; University of California Press: Berkeley, CA, USA, 2019. [Google Scholar]
  3. Van Doorn, J.; Mende, M.; Noble, S.M.; Hulland, J.; Ostrom, A.L.; Grewal, D.; Petersen, J.A. Domo Arigato Mr. Roboto: Emergence of Automated Social Presence in Organizational Frontlines and Customers’ Service Experiences. J. Serv. Res. 2017, 20, 43–58. [Google Scholar] [CrossRef]
  4. Phillips, E.; Zhao, X.; Ullman, D.; Malle, B.F. What Is Human-like?: Decomposing Robots’ Human-like Appearance Using the Anthropomorphic roBOT (ABOT) Database. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; ACM: Chicago, IL, USA, 2018; pp. 105–113. [Google Scholar]
  5. Stenzel, A.; Chinellato, E.; Bou, M.A.T.; Del Pobil, Á.P.; Lappe, M.; Liepelt, R. When Humanoid Robots Become Human-like Interaction Partners: Corepresentation of Robotic Actions. J. Exp. Psychol. Hum. Percept. Perform. 2012, 38, 1073. [Google Scholar] [CrossRef]
  6. Wiese, E.; Wykowska, A.; Zwickel, J.; Müller, H.J. I See What You Mean: How Attentional Selection Is Shaped by Ascribing Intentions to Others. PLoS ONE 2012, 7, e45391. [Google Scholar] [CrossRef] [PubMed]
  7. Epley, N. A Mind like Mine: The Exceptionally Ordinary Underpinnings of Anthropomorphism. J. Assoc. Consum. Res. 2018, 3, 591–598. [Google Scholar] [CrossRef]
  8. Wada, K.; Shibata, T.; Saito, T.; Tanie, K. Effects of Robot-Assisted Activity for Elderly People and Nurses at a Day Service Center. Proc. IEEE 2004, 92, 1780–1788. [Google Scholar] [CrossRef]
  9. Belpaeme, T.; Kennedy, J.; Ramachandran, A.; Scassellati, B.; Tanaka, F. Social Robots for Education: A Review. Sci. Robot. 2018, 3, eaat5954. [Google Scholar] [CrossRef]
  10. Mori, M. Bukimi No Tani [the Uncanny Valley]. Energy 1970, 7, 33–35. [Google Scholar]
  11. Lin, P.; Abney, K.; Bekey, G.A. Robot Ethics: The Ethical and Social Implications of Robotics; MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
  12. Ho, C.-C.; MacDorman, K.F. Revisiting the Uncanny Valley Theory: Developing and Validating an Alternative to the Godspeed Indices. Comput. Hum. Behav. 2010, 26, 1508–1518. [Google Scholar] [CrossRef]
  13. Kim, S.Y.; Schmitt, B.H.; Thalmann, N.M. Eliza in the Uncanny Valley: Anthropomorphizing Consumer Robots Increases Their Perceived Warmth but Decreases Liking. Mark. Lett. 2019, 30, 1–12. [Google Scholar] [CrossRef]
  14. Mathur, M.B.; Reichling, D.B. Navigating a Social World with Robot Partners: A Quantitative Cartography of the Uncanny Valley. Cognition 2016, 146, 22–32. [Google Scholar] [CrossRef] [PubMed]
  15. Mori, M.; MacDorman, K.F.; Kageki, N. The Uncanny Valley [from the Field]. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
  16. DiSalvo, C.F.; Gemperle, F.; Forlizzi, J.; Kiesler, S. All Robots Are Not Created Equal: The Design and Perception of Humanoid Robot Heads. In Proceedings of the 4th conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, Cambridge, MA, USA, 1–4 August 2002; pp. 321–326. [Google Scholar]
  17. Gray, K.; Wegner, D.M. Feeling Robots and Human Zombies: Mind Perception and the Uncanny Valley. Cognition 2012, 125, 125–130. [Google Scholar] [CrossRef] [PubMed]
  18. Powers, A.; Kiesler, S. The Advisor Robot: Tracing People’s Mental Model from a Robot’s Physical Attributes. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 218–225. [Google Scholar]
  19. Rosenthal-Von Der Pütten, A.M.; Krämer, N.C. How Design Characteristics of Robots Determine Evaluation and Uncanny Valley Related Responses. Comput. Hum. Behav. 2014, 36, 422–439. [Google Scholar] [CrossRef]
  20. Chammat, M.; Foucher, A.; Nadel, J.; Dubal, S. Reading Sadness beyond Human Faces. Brain Res. 2010, 1348, 95–104. [Google Scholar] [CrossRef] [PubMed]
  21. Chiang, A.-H.; Trimi, S.; Lo, Y.-J. Emotion and Service Quality of Anthropomorphic Robots. Technol. Forecast. Soc. Change 2022, 177, 121550. [Google Scholar] [CrossRef]
  22. Saygin, A.P.; Chaminade, T.; Ishiguro, H.; Driver, J.; Frith, C. The Thing That Should Not Be: Predictive Coding and the Uncanny Valley in Perceiving Human and Humanoid Robot Actions. Soc. Cogn. Affect. Neurosci. 2012, 7, 413–422. [Google Scholar] [CrossRef]
  23. Urgen, B.A.; Plank, M.; Ishiguro, H.; Poizner, H.; Saygin, A.P. EEG Theta and Mu Oscillations during Perception of Human and Robot Actions. Front. Neurorobot. 2013, 7, 19. [Google Scholar] [CrossRef]
  24. Urgen, B.A.; Kutas, M.; Saygin, A.P. Uncanny Valley as a Window into Predictive Processing in the Social Brain. Neuropsychologia 2018, 114, 181–185. [Google Scholar] [CrossRef]
  25. Bainbridge, W.A.; Hart, J.; Kim, E.S.; Scassellati, B. The Effect of Presence on Human-Robot Interaction. In Proceedings of the RO-MAN 2008-The 17th IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 701–706. [Google Scholar]
  26. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.; De Visser, E.J.; Parasuraman, R. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors 2011, 53, 517–527. [Google Scholar] [CrossRef] [PubMed]
  27. Hu, Y.; Abe, N.; Benallegue, M.; Yamanobe, N.; Venture, G.; Yoshida, E. Toward Active Physical Human–Robot Interaction: Quantifying the Human State during Interactions. IEEE Trans. Hum. Mach. Syst. 2022, 52, 367–378. [Google Scholar] [CrossRef]
  28. Chita-Tegmark, M.; Lohani, M.; Scheutz, M. Gender Effects in Perceptions of Robots and Humans with Varying Emotional Intelligence. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 230–238. [Google Scholar]
  29. Nomura, T.; Shintani, T.; Fujii, K.; Hokabe, K. Experimental Investigation of Relationships between Anxiety, Negative Attitudes, and Allowable Distance of Robots. In Proceedings of the 2nd IASTED International Conference on Human Computer Interaction, Chamonix, France, 14–16 March 2007; ACTA Press: Calgary, AB, Canada, 2007; pp. 13–18. [Google Scholar]
  30. Paetzel-Prüsmann, M.; Perugia, G.; Castellano, G. The Influence of Robot Personality on the Development of Uncanny Feelings. Comput. Hum. Behav. 2021, 120, 106756. [Google Scholar] [CrossRef]
  31. Liu, Y.; Li, F.; Tang, L.H.; Lan, Z.; Cui, J.; Sourina, O.; Chen, C.-H. Detection of Humanoid Robot Design Preferences Using EEG and Eye Tracker. In Proceedings of the 2019 International Conference on Cyberworlds (CW), Kyoto, Japan, 2–4 October 2019; IEEE: Kyoto, Japan, 2019; pp. 219–224. [Google Scholar]
  32. Zhang, J.; Li, S.; Zhang, J.-Y.; Du, F.; Qi, Y.; Liu, X. A Literature Review of the Research on the Uncanny Valley. In Proceedings of the Cross-Cultural Design. User Experience of Products, Services, and Intelligent Environments: 12th International Conference, CCD 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, 19–24 July 2020; Proceedings, Part I 22. pp. 255–268. [Google Scholar]
  33. Chaminade, T.; Zecca, M.; Blakemore, S.-J.; Takanishi, A.; Frith, C.D.; Micera, S.; Dario, P.; Rizzolatti, G.; Gallese, V.; Umiltà, M.A. Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures. PLoS ONE 2010, 5, e11577. [Google Scholar] [CrossRef] [PubMed]
  34. Shimada, M.; Minato, T.; Itakura, S.; Ishiguro, H. Uncanny Valley of Androids and Its Lateral Inhibition Hypothesis. In Proceedings of the RO-MAN 2007-The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju, Republic of Korea, 26–29 August 2007; pp. 374–379. [Google Scholar]
  35. Mustafa, M.; Guthe, S.; Tauscher, J.-P.; Goesele, M.; Magnor, M. How Human Am I? EEG-Based Evaluation of Virtual Characters. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 5098–5108. [Google Scholar]
  36. Cheetham, M.; Wu, L.; Pauli, P.; Jancke, L. Arousal, Valence, and the Uncanny Valley: Psychophysiological and Self-Report Findings. Front. Psychol. 2015, 6, 981. [Google Scholar] [CrossRef] [PubMed]
  37. Guo, F.; Li, M.; Chen, J.; Duffy, V.G. Evaluating Users’ Preference for the Appearance of Humanoid Robots via Event-Related Potentials and Spectral Perturbations. Behav. Inf. Technol. 2022, 41, 1381–1397. [Google Scholar] [CrossRef]
  38. Kim, D.-G.; Kim, H.-Y.; Kim, G.; Jang, P.-S.; Jung, W.H.; Hyun, J.-S. Exploratory Understanding of the Uncanny Valley Phenomena Based on Event-Related Potential Measurement. Sci. Emot. Sensib. 2016, 19, 95–110. [Google Scholar] [CrossRef]
  39. Schindler, S.; Zell, E.; Botsch, M.; Kissler, J. Differential Effects of Face-Realism and Emotion on Event-Related Brain Potentials and Their Implications for the Uncanny Valley Theory. Sci. Rep. 2017, 7, 45003. [Google Scholar] [CrossRef] [PubMed]
  40. Vaitonytė, J.; Alimardani, M.; Louwerse, M.M. Scoping Review of the Neural Evidence on the Uncanny Valley. Comput. Hum. Behav. Rep. 2022, 9, 100263. [Google Scholar] [CrossRef]
  41. Epley, N.; Waytz, A.; Cacioppo, J.T. On Seeing Human: A Three-Factor Theory of Anthropomorphism. Psychol. Rev. 2007, 114, 864. [Google Scholar] [CrossRef]
  42. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef]
  43. Duffy, B.R. Anthropomorphism and the Social Robot. Robot. Auton. Syst. 2003, 42, 177–190. [Google Scholar] [CrossRef]
  44. Aggarwal, P.; McGill, A.L. Is That Car Smiling at Me? Schema Congruity as a Basis for Evaluating Anthropomorphized Products. J. Consum. Res. 2007, 34, 468–479. [Google Scholar] [CrossRef]
  45. Huang, F.; Wong, V.C.; Wan, E.W. The Influence of Product Anthropomorphism on Comparative Judgment. J. Consum. Res. 2020, 46, 936–955. [Google Scholar] [CrossRef]
  46. Landwehr, J.R.; McGill, A.L.; Herrmann, A. It’s Got the Look: The Effect of Friendly and Aggressive “Facial” Expressions on Product Liking and Sales. J. Mark. 2011, 75, 132–146. [Google Scholar] [CrossRef]
  47. Cao, Y.; Zhang, Y.; Ding, Y.; Duffy, V.G.; Zhang, X. Is an Anthropomorphic App Icon More Attractive? Evidence from Neuroergonomics. Appl. Ergon. 2021, 97, 103545. [Google Scholar] [CrossRef] [PubMed]
  48. Kervyn, N.; Fiske, S.T.; Malone, C. Brands as Intentional Agents Framework: How Perceived Intentions and Ability Can Map Brand Perception. J. Consum. Psychol. 2012, 22, 166–176. [Google Scholar] [CrossRef]
  49. MacInnis, D.J.; Folkes, V.S. Humanizing Brands: When Brands Seem to Be like Me, Part of Me, and in a Relationship with Me. J. Consum. Psychol. 2017, 27, 355–374. [Google Scholar] [CrossRef]
  50. Puzakova, M.; Kwak, H.; Rocereto, J.F. When Humanizing Brands Goes Wrong: The Detrimental Effect of Brand Anthropomorphization amid Product Wrongdoings. J. Mark. 2013, 77, 81–100. [Google Scholar] [CrossRef]
  51. Bailenson, J.N.; Yee, N.; Brave, S.; Merget, D.; Koslow, D. Virtual Interpersonal Touch: Expressing and Recognizing Emotions through Haptic Devices. Hum. Comput. Interact. 2007, 22, 325–353. [Google Scholar]
  52. Lee, K.M.; Peng, W.; Jin, S.-A.; Yan, C. Can Robots Manifest Personality?: An Empirical Test of Personality Recognition, Social Responses, and Social Presence in Human–Robot Interaction. J. Commun. 2006, 56, 754–772. [Google Scholar] [CrossRef]
  53. Bates, J. The Role of Emotion in Believable Agents. Commun. ACM 1994, 37, 122–125. [Google Scholar] [CrossRef]
  54. MacDorman, K.F. Subjective Ratings of Robot Video Clips for Human Likeness, Familiarity, and Eeriness: An Exploration of the Uncanny Valley. In Proceedings of the ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science, Vancouver, BC, Canada, 26 July 2006; pp. 26–29. [Google Scholar]
  55. Roesler, E.; Naendrup-Poell, L.; Manzey, D.; Onnasch, L. Why Context Matters: The Influence of Application Domain on Preferred Degree of Anthropomorphism and Gender Attribution in Human–Robot Interaction. Int. J. Soc. Robot. 2022, 14, 1155–1166. [Google Scholar] [CrossRef]
  56. Wang, S.; Lilienfeld, S.O.; Rochat, P. The Uncanny Valley: Existence and Explanations. Rev. Gen. Psychol. 2015, 19, 393–407. [Google Scholar] [CrossRef]
  57. Minato, T.; Shimada, M.; Itakura, S.; Lee, K.; Ishiguro, H. Does Gaze Reveal the Human Likeness of an Android? In Proceedings of the 4th International Conference on Development and Learning, Hong Kong, China, 31 July–3 August 2005; pp. 106–111. [Google Scholar]
  58. Burleigh, T.J.; Schoenherr, J.R.; Lacroix, G.L. Does the Uncanny Valley Exist? An Empirical Test of the Relationship between Eeriness and the Human Likeness of Digitally Created Faces. Comput. Hum. Behav. 2013, 29, 759–771. [Google Scholar] [CrossRef]
  59. MacDorman, K.F.; Green, R.D.; Ho, C.-C.; Koch, C.T. Too Real for Comfort? Uncanny Responses to Computer Generated Faces. Comput. Hum. Behav. 2009, 25, 695–710. [Google Scholar] [CrossRef] [PubMed]
  60. Seyama, J.; Nagayama, R.S. The Uncanny Valley: Effect of Realism on the Impression of Artificial Human Faces. Presence 2007, 16, 337–351. [Google Scholar] [CrossRef]
  61. Bradley, M.M.; Lang, P.J. Emotion and Motivation. Handb. Psychophysiol. 2000, 2, 602–642. [Google Scholar]
  62. Liu, W.; Liang, X.; Wang, X.; Guo, F. The Evaluation of Emotional Experience on Webpages: An Event-Related Potential Study. Cogn. Technol. Work 2019, 21, 317–326. [Google Scholar] [CrossRef]
  63. Bartneck, C.; Kulic, D.; Croft, E. Measuring the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Tech. Rep. 2008, 8, 37–44. [Google Scholar]
  64. Bartneck, C.; Kanda, T.; Ishiguro, H.; Hagita, N. Is the Uncanny Valley an Uncanny Cliff? In Proceedings of the RO-MAN 2007-The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju, Republic of Korea, 26–29 August 2007; pp. 368–373. [Google Scholar]
  65. Sobieraj, S. What Is Virtually Beautiful Is Good: Der Einfluss Physiognomischer Und Nonverbaler Gesichtsmerkmale Auf Die Attribution von Attraktivität, Sozialer Kompetenz Und Dominanz [Elektronische Ressource]; Universität Duisburg-Essen: Duisburg, Germany, 2012. [Google Scholar]
  66. Fiske, S.T.; Cuddy, A.J.; Glick, P. Universal Dimensions of Social Cognition: Warmth and Competence. Trends Cogn. Sci. 2007, 11, 77–83. [Google Scholar] [CrossRef]
  67. Zhou, X.; Kim, S.; Wang, L. Money Helps When Money Feels: Money Anthropomorphism Increases Charitable Giving. J. Consum. Res. 2019, 45, 953–972. [Google Scholar] [CrossRef]
  68. Wortman, J.; Wood, D. The Personality Traits of Liked People. J. Res. Personal. 2011, 45, 519–528. [Google Scholar] [CrossRef]
  69. Camerer, C.; Yoon, C. Introduction to the Journal of Marketing Research Special Issue on Neuroscience and Marketing. J. Mark. Res. 2015, 52, 423–426. [Google Scholar] [CrossRef]
  70. Olofsson, J.K.; Polich, J. Affective Visual Event-Related Potentials: Arousal, Repetition, and Time-on-Task. Biol. Psychol. 2007, 75, 101–108. [Google Scholar] [CrossRef]
  71. Else, J.E.; Ellis, J.; Orme, E. Art Expertise Modulates the Emotional Response to Modern Art, Especially Abstract: An ERP Investigation. Front. Hum. Neurosci. 2015, 9, 525. [Google Scholar] [CrossRef] [PubMed]
  72. Eimer, M.; Holmes, A. An ERP Study on the Time Course of Emotional Face Processing. Neuroreport 2002, 13, 427–431. [Google Scholar] [CrossRef] [PubMed]
  73. Guo, F.; Wang, X.; Liu, W.; Ding, Y. Affective Preference Measurement of Product Appearance Based on Event-Related Potentials. Cogn. Technol. Work 2018, 20, 299–308. [Google Scholar] [CrossRef]
  74. Handy, T.C.; Smilek, D.; Geiger, L.; Liu, C.; Schooler, J.W. ERP Evidence for Rapid Hedonic Evaluation of Logos. J. Cogn. Neurosci. 2010, 22, 124–138. [Google Scholar] [CrossRef]
  75. Luck, S.J.; Woodman, G.F.; Vogel, E.K. Event-Related Potential Studies of Attention. Trends Cogn. Sci. 2000, 4, 432–440. [Google Scholar] [CrossRef]
  76. Keil, A.; Bradley, M.M.; Hauk, O.; Rockstroh, B.; Elbert, T.; Lang, P.J. Large-Scale Neural Correlates of Affective Picture Processing. Psychophysiology 2002, 39, 641–649. [Google Scholar] [CrossRef]
  77. Keil, A.; Müller, M.M.; Gruber, T.; Wienbruch, C.; Stolarova, M.; Elbert, T. Effects of Emotional Arousal in the Cerebral Hemispheres: A Study of Oscillatory Brain Activity and Event-Related Potentials. Clin. Neurophysiol. 2001, 112, 2057–2068. [Google Scholar] [CrossRef] [PubMed]
  78. Bradley, M.M.; Hamby, S.; Löw, A.; Lang, P.J. Brain Potentials in Perception: Picture Complexity and Emotional Arousal. Psychophysiology 2007, 44, 364–373. [Google Scholar] [CrossRef] [PubMed]
  79. Luck, S.J. An Introduction to the Event-Related Potential Technique, 2nd ed.; The MIT Press: Cambridge, MA, USA, 2014; ISBN 978-0-262-52585-5. [Google Scholar]
  80. Hajcak, G.; Weinberg, A.; MacNamara, A.; Foti, D. ERPs and the Study of Emotion. In The Oxford Handbook of Event-Related Potential Components; Oxford University Press: Oxford, UK, 2012. [Google Scholar]
  81. Carretié, L.; Mercado, F.; Tapia, M.; Hinojosa, J.A. Emotion, Attention, and the ‘Negativity Bias’, Studied through Event-Related Potentials. Int. J. Psychophysiol. 2001, 41, 75–85. [Google Scholar] [CrossRef] [PubMed]
  82. Herbert, C.; Kissler, J.; Junghöfer, M.; Peyk, P.; Rockstroh, B. Processing of Emotional Adjectives: Evidence from Startle EMG and ERPs. Psychophysiology 2006, 43, 197–206. [Google Scholar] [CrossRef] [PubMed]
  83. Thorpe, S.; Fize, D.; Marlot, C. Speed of Processing in the Human Visual System. Nature 1996, 381, 520–522. [Google Scholar] [CrossRef]
  84. Yuan, J.; Zhang, Q.; Chen, A.; Li, H.; Wang, Q.; Zhuang, Z.; Jia, S. Are We Sensitive to Valence Differences in Emotionally Negative Stimuli? Electrophysiological Evidence from an ERP Study. Neuropsychologia 2007, 45, 2764–2771. [Google Scholar] [CrossRef]
  85. Huang, Y.-X.; Luo, Y.-J. Temporal Course of Emotional Negativity Bias: An ERP Study. Neurosci. Lett. 2006, 398, 91–96. [Google Scholar] [CrossRef]
  86. Pourtois, G.; Grandjean, D.; Sander, D.; Vuilleumier, P. Electrophysiological Correlates of Rapid Spatial Orienting towards Fearful Faces. Cereb. Cortex 2004, 14, 619–633. [Google Scholar] [CrossRef] [PubMed]
  87. Ma, Y.; Jin, J.; Yu, W.; Zhang, W.; Xu, Z.; Ma, Q. How Is the Neural Response to the Design of Experience Goods Related to Personalized Preference? An Implicit View. Front. Neurosci. 2018, 12, 760. [Google Scholar] [CrossRef]
  88. Schupp, H.T.; Cuthbert, B.N.; Bradley, M.M.; Cacioppo, J.T.; Ito, T.; Lang, P.J. Affective Picture Processing: The Late Positive Potential Is Modulated by Motivational Relevance. Psychophysiology 2000, 37, 257–261. [Google Scholar] [CrossRef]
  89. Cuthbert, B.N.; Schupp, H.T.; Bradley, M.M.; Birbaumer, N.; Lang, P.J. Brain Potentials in Affective Picture Processing: Covariation with Autonomic Arousal and Affective Report. Biol. Psychol. 2000, 52, 95–111. [Google Scholar] [CrossRef]
  90. Pastor, M.C.; Bradley, M.M.; Löw, A.; Versace, F.; Moltó, J.; Lang, P.J. Affective Picture Perception: Emotion, Context, and the Late Positive Potential. Brain Res. 2008, 1189, 145–151. [Google Scholar] [CrossRef] [PubMed]
  91. Schupp, H.T.; Flaisch, T.; Stockburger, J.; Junghöfer, M. Emotion and Attention: Event-Related Brain Potential Studies. Prog. Brain Res. 2006, 156, 31–51. [Google Scholar] [CrossRef] [PubMed]
  92. Luck, S.J. Event-Related Potentials. In APA Handbook of Research Methods in Psychology, Vol. 1. Foundations, Planning, Measures, and Psychometrics; Cooper, H., Camic, P.M., Long, D.L., Panter, A.T., Rindskopf, D., Sher, K.J., Eds.; American Psychological Association: Washington, DC, USA, 2012. [Google Scholar] [CrossRef]
  93. Schupp, H.T.; Schmälzle, R.; Flaisch, T.; Weike, A.I.; Hamm, A.O. Affective Picture Processing as a Function of Preceding Picture Valence: An ERP Analysis. Biol. Psychol. 2012, 91, 81–87. [Google Scholar] [CrossRef] [PubMed]
  94. Krompinger, J.W.; Moser, J.S.; Simons, R.F. Modulations of the Electrophysiological Response to Pleasant Stimuli by Cognitive Reappraisal. Emotion 2008, 8, 132. [Google Scholar] [CrossRef] [PubMed]
  95. Hajcak, G.; Moser, J.S.; Simons, R.F. Attending to Affect: Appraisal Strategies Modulate the Electrocortical Response to Arousing Pictures. Emotion 2006, 6, 517. [Google Scholar] [CrossRef]
  96. Aftanas, L.; Varlamov, A.; Pavlov, S.; Makhnev, V.; Reva, N. Affective Picture Processing: Event-Related Synchronization within Individually Defined Human Theta Band Is Modulated by Valence Dimension. Neurosci. Lett. 2001, 303, 115–118. [Google Scholar] [CrossRef] [PubMed]
  97. Aftanas, L.; Golocheikine, S.A. Human Anterior and Frontal Midline Theta and Lower Alpha Reflect Emotionally Positive State and Internalized Attention: High-Resolution EEG Investigation of Meditation. Neurosci. Lett. 2001, 310, 57–60. [Google Scholar] [CrossRef]
  98. Kawasaki, M.; Yamaguchi, Y. Effects of Subjective Preference of Colors on Attention-Related Occipital Theta Oscillations. NeuroImage 2012, 59, 808–814. [Google Scholar] [CrossRef]
  99. Hasselmo, M.E.; Stern, C.E. Theta Rhythm and the Encoding and Retrieval of Space and Time. Neuroimage 2014, 85, 656–666. [Google Scholar] [CrossRef]
  100. Aftanas, L.I.; Pavlov, S.V.; Reva, N.V.; Varlamov, A.A. Trait Anxiety Impact on the EEG Theta Band Power Changes during Appraisal of Threatening and Pleasant Visual Stimuli. Int. J. Psychophysiol. 2003, 50, 205–212. [Google Scholar] [CrossRef]
  101. Brier, M.R.; Ferree, T.C.; Maguire, M.J.; Moore, P.; Spence, J.; Tillman, G.D.; Hart, J., Jr.; Kraut, M.A. Frontal Theta and Alpha Power and Coherence Changes Are Modulated by Semantic Complexity in Go/NoGo Tasks. Int. J. Psychophysiol. 2010, 78, 215–224. [Google Scholar] [CrossRef] [PubMed]
  102. Womelsdorf, T.; Vinck, M.; Leung, L.S.; Everling, S. Selective Theta-Synchronization of Choice-Relevant Information Subserves Goal-Directed Behavior. Front. Hum. Neurosci. 2010, 4, 210. [Google Scholar] [CrossRef] [PubMed]
  103. Lang, P.J. International Affective Picture System (IAPS): Affective Ratings of Pictures and Instruction Manual. In Handbook of Emotion Elicitation and Assessment; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
  104. Homan, R.W.; Herman, J.; Purdy, P. Cerebral Location of International 10–20 System Electrode Placement. Electroencephalogr. Clin. Neurophysiol. 1987, 66, 376–382. [Google Scholar] [CrossRef]
  105. Wu, J.; Du, X.; Tong, M.; Guo, Q.; Shao, J.; Chabebe, A.; Xue, C. Neural Mechanisms behind Semantic Congruity of Construction Safety Signs: An EEG Investigation on Construction Workers. Hum. Factors Ergon. Manuf. Serv. Ind. 2023, 33, 229–245. [Google Scholar] [CrossRef]
  106. Wu, J.; Chen, X.; Zhao, M.; Xue, C. Cognitive Characteristics in Wayfinding Tasks in Commercial and Residential Districts during Daytime and Nighttime: A Comprehensive Neuroergonomic Study. Adv. Eng. Inform. 2024, 61, 102534. [Google Scholar] [CrossRef]
  107. Lydon, E.A.; Nguyen, L.T.; Shende, S.A.; Chiang, H.-S.; Spence, J.S.; Mudar, R.A. EEG Theta and Alpha Oscillations in Early versus Late Mild Cognitive Impairment during a Semantic Go/NoGo Task. Behav. Brain Res. 2022, 416, 113539. [Google Scholar] [CrossRef] [PubMed]
  108. Blankertz, B.; Lemm, S.; Treder, M.; Haufe, S.; Müller, K.-R. Single-Trial Analysis and Classification of ERP Components—A Tutorial. NeuroImage 2011, 56, 814–825. [Google Scholar] [CrossRef] [PubMed]
  109. Bradley, M.M.; Lang, P.J. Emotion and Motivation. In Handbook of Psychophysiology, 3rd ed.; Cacioppo, J.T., Tassinary, L.G., Berntson, G.G., Eds.; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar] [CrossRef]
  110. Sundar, S.S.; Jung, E.H.; Waddell, T.F.; Kim, K.J. Cheery Companions or Serious Assistants? Role and Demeanor Congruity as Predictors of Robot Attraction and Use Intentions among Senior Citizens. Int. J. Hum. Comput. Stud. 2017, 97, 88–97. [Google Scholar] [CrossRef]
  111. Vogel, E.K.; Luck, S.J. The Visual N1 Component as an Index of a Discrimination Process. Psychophysiology 2000, 37, 190–203. [Google Scholar] [CrossRef]
  112. García-Larrea, L.; Lukaszewicz, A.-C.; Mauguiére, F. Revisiting the Oddball Paradigm. Non-Target vs Neutral Stimuli and the Evaluation of ERP Attentional Effects. Neuropsychologia 1992, 30, 723–741. [Google Scholar] [CrossRef]
  113. Kiesler, S.; Goetz, J. Mental Models and Cooperation with Robotic Assistants. 2002; retrieved 24 November 2004. [Google Scholar]
  114. Hansen, C.H.; Hansen, R.D. Finding the Face in the Crowd: An Anger Superiority Effect. J. Pers. Soc. Psychol. 1988, 54, 917. [Google Scholar] [CrossRef]
  115. Li, X.; Li, X.; Luo, Y.-J. Anxiety and Attentional Bias for Threat: An Event-Related Potential Study. Neuroreport 2005, 16, 1501–1505. [Google Scholar] [CrossRef]
  116. Looser, C.E.; Guntupalli, J.S.; Wheatley, T. Multivoxel Patterns in Face-Sensitive Temporal Regions Reveal an Encoding Schema Based on Detecting Life in a Face. Soc. Cogn. Affect. Neurosci. 2013, 8, 799–805. [Google Scholar] [CrossRef]
  117. Wheatley, T.; Weinberg, A.; Looser, C.; Moran, T.; Hajcak, G. Mind Perception: Real but Not Artificial Faces Sustain Neural Activity beyond the N170/VPP. PLoS ONE 2011, 6, e17960. [Google Scholar] [CrossRef] [PubMed]
  118. Foti, D.; Hajcak, G.; Dien, J. Differentiating Neural Responses to Emotional Pictures: Evidence from Temporal-Spatial PCA. Psychophysiology 2009, 46, 521–530. [Google Scholar] [CrossRef] [PubMed]
  119. Tung, F.-W. Child Perception of Humanoid Robot Appearance and Behavior. Int. J. Hum.-Comput. Interact. 2016, 32, 493–502. [Google Scholar] [CrossRef]
  120. Hajcak, G.; Nieuwenhuis, S. Reappraisal Modulates the Electrocortical Response to Unpleasant Pictures. Cogn. Affect. Behav. Neurosci. 2006, 6, 291–297. [Google Scholar] [CrossRef] [PubMed]
  121. Ito, T.A.; Cacioppo, J.T. Electrophysiological Evidence of Implicit and Explicit Categorization Processes. J. Exp. Soc. Psychol. 2000, 36, 660–676. [Google Scholar] [CrossRef]
  122. Olofsson, J.K.; Nordin, S.; Sequeira, H.; Polich, J. Affective Picture Processing: An Integrative Review of ERP Findings. Biol. Psychol. 2008, 77, 247–265. [Google Scholar] [CrossRef]
  123. Spreckelmeyer, K.N.; Kutas, M.; Urbach, T.P.; Altenmüller, E.; Münte, T.F. Combined Perception of Emotion in Pictures and Musical Sounds. Brain Res. 2006, 1070, 160–170. [Google Scholar] [CrossRef] [PubMed]
  124. Jacobsen, T.; Höfel, L. Descriptive and Evaluative Judgment Processes: Behavioral and Electrophysiological Indices of Processing Symmetry and Aesthetics. Cogn. Affect. Behav. Neurosci. 2003, 3, 289–299. [Google Scholar] [CrossRef] [PubMed]
  125. Wang, Y.; Quadflieg, S. In Our Own Image? Emotional and Neural Processing Differences When Observing Human–Human vs Human–Robot Interactions. Soc. Cogn. Affect. Neurosci. 2015, 10, 1515–1524. [Google Scholar] [CrossRef]
  126. Brenner, C.A.; Rumak, S.P.; Burns, A.M.; Kieffaber, P.D. The Role of Encoding and Attention in Facial Emotion Memory: An EEG Investigation. Int. J. Psychophysiol. 2014, 93, 398–410. [Google Scholar] [CrossRef]
  127. Knyazev, G.; Slobodskoj-Plusnin, J.Y.; Bocharov, A. Event-Related Delta and Theta Synchronization during Explicit and Implicit Emotion Processing. Neuroscience 2009, 164, 1588–1600. [Google Scholar] [CrossRef]
  128. Zhang, W.; Li, X.; Liu, X.; Duan, X.; Wang, D.; Shen, J. Distraction Reduces Theta Synchronization in Emotion Regulation during Adolescence. Neurosci. Lett. 2013, 550, 81–86. [Google Scholar] [CrossRef]
  129. Guillem, K.; Ahmed, S.H. Reorganization of Theta Phase-Locking in the Orbitofrontal Cortex Drives Cocaine Choice under the Influence. Sci. Rep. 2020, 10, 1–12. [Google Scholar] [CrossRef]
  130. Czaja, S.J.; Charness, N.; Fisk, A.D.; Hertzog, C.; Nair, S.N.; Rogers, W.A.; Sharit, J. Factors Predicting the Use of Technology: Findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE). Psychol. Aging 2006, 21, 333. [Google Scholar] [CrossRef] [PubMed]
  131. Mitzner, T.L.; Boron, J.B.; Fausset, C.B.; Adams, A.E.; Charness, N.; Czaja, S.J.; Dijkstra, K.; Fisk, A.D.; Rogers, W.A.; Sharit, J. Older Adults Talk Technology: Technology Usage and Attitudes. Comput. Hum. Behav. 2010, 26, 1710–1721. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The uncanny valley function, as proposed by Mori (1970) [10].
Figure 2. Schematic time course of the experimental procedure and the stimuli used in the experiment.
Figure 3. Experimental equipment setup and electrodes. (a) Brain Vision actiCHamp EEG system; (b) the 64 electrodes (with names and number labels) used in the experiment.
Figure 4. The grand-averaged ERP waveforms in response to high, middle, and low anthropomorphic robots.
Figure 5. Topographic scalp maps of ERP components during the selected time windows for high, middle, and low anthropomorphic robots.
Figure 6. Panels (a) and (b) show the emotional valence and emotional arousal results of the 40 participants toward the three types of anthropomorphic robots, respectively.
Figure 7. Panels (a) and (b) show the subjective likeability and warmth rating results of the 40 participants toward the three types of anthropomorphic robots, respectively.
Figure 8. Spectrograms of theta-band (3–8 Hz) ERS at the frontal cluster and parietal-occipital cluster associated with the H-AR, M-AR, and L-AR conditions: (a) the time–frequency representations of ERD/ERS for the H-AR, M-AR, and L-AR conditions at the frontal and parietal-occipital clusters and (b) the channel electrode clusters of interest. Red denotes the frontal cluster; green denotes the parietal-occipital cluster.
Figure 9. The left and right plots show the interaction effect of theta power between the frontal cluster and parietal-occipital cluster across the early time window (50–380 ms) and the late time window (400–1000 ms), respectively.
Figure 10. (a) Spearman’s r for emotional responses, ERPs, and ERSPs; (b) Spearman’s p for emotional responses, ERPs, and ERSPs. The Spearman correlation coefficient was used; a and p denote the frontal and posterior regions, respectively, and e and l denote the early and late time windows, respectively.
Table 2. The descriptive statistics and results of ERPs of users’ perception of different anthropomorphic robots.

Location            | Pairwise | p (N1 amplitude) | p (P2 amplitude) | p (LPP amplitude)
Frontal             | H–M      | 0.003 **         | <0.001 ***       | <0.001 ***
Frontal             | H–L      | 0.430            | <0.001 ***       | 1.000
Frontal             | M–L      | 0.064            | 0.205            | <0.001 ***
Frontal-central     | H–M      | 0.004 **         | <0.001 ***       | <0.001 ***
Frontal-central     | H–L      | 1.000            | <0.001 ***       | 0.325
Frontal-central     | M–L      | 0.030 *          | 0.727            | <0.001 ***
Central             | H–M      | 0.017 *          | <0.001 ***       | <0.001 ***
Central             | H–L      | 1.000            | <0.001 ***       | 0.271
Central             | M–L      | 0.010 *          | 0.800            | <0.001 **
Centro-parietal     | H–M      | –                | <0.001 ***       | <0.001 ***
Centro-parietal     | H–L      | –                | <0.001 ***       | 0.483
Centro-parietal     | M–L      | –                | 0.369            | 0.002 **
Parietal            | H–M      | –                | <0.001 ***       | 0.001 **
Parietal            | H–L      | –                | <0.001 ***       | 1.000
Parietal            | M–L      | –                | 0.010 *          | 0.003 **
Parietal-occipital  | H–M      | –                | <0.001 ***       | 0.003 **
Parietal-occipital  | H–L      | –                | 0.027 *          | 0.688
Parietal-occipital  | M–L      | –                | <0.001 ***       | 0.013 *

Notes: L, M, and H represent low, middle, and high anthropomorphic robots, respectively; – indicates that the comparison was not performed at that location. * p < 0.05; ** p < 0.01; *** p < 0.001.
Table 3. Group means and SD of theta ERSPs for high, middle, and low anthropomorphic robots.

Regions of Interest (ROIs)  | Time of Interest (TOIs) | H-AR Mean (SD) | M-AR Mean (SD) | L-AR Mean (SD)
Frontal cluster             | 50–380 ms               | 2.214 (0.136)  | 2.229 (0.186)  | 1.689 (0.151)
Parietal-occipital cluster  | 50–380 ms               | 2.357 (0.161)  | 2.759 (0.192)  | 2.596 (0.248)
Frontal cluster             | 400–1000 ms             | 0.693 (0.100)  | 1.119 (0.183)  | 0.699 (0.171)
Parietal-occipital cluster  | 400–1000 ms             | 0.453 (0.154)  | 1.353 (0.199)  | 1.087 (0.188)