Article

Sustainable Impact of Stance Attribution Design Cues for Robots on Human–Robot Relationships—Evidence from the ERSP

School of Business Administration, Huaqiao University, Quanzhou 362021, China
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(17), 7252; https://doi.org/10.3390/su16177252
Submission received: 2 August 2024 / Revised: 17 August 2024 / Accepted: 21 August 2024 / Published: 23 August 2024

Abstract
With the development of large language model technologies, the capability of social robots to interact emotionally with users has been steadily increasing. However, the existing research insufficiently examines how robot stance attribution design cues shape users' mental models and thereby affect human–robot interaction (HRI). This study innovatively combines mental models with the associative–propositional evaluation (APE) model and, through EEG experiments and survey investigations, unveils the impact of stance attribution explanations, as a design cue, on the construction of users' mental models and on the interaction between the two types of mental models. The results showed that under intentional stance explanations (compared to design stance explanations), participants displayed higher error rates, higher θ- and β-band Event-Related Spectral Perturbations (ERSPs), and higher phase-locking values (PLVs). Intentional stance explanations trigger a primarily associative mental model of robots, which conflicts with individuals' propositionally based mental models. After logical analysis, users may adjust or "correct" the immediate reactions caused by stance attribution explanations. This study reveals that stance attribution explanations can significantly affect users' construction of mental models of robots, which provides a new theoretical framework for exploring human interaction with non-human agents and theoretical support for the sustainable development of human–robot relationships. It also provides new ideas for designing robots that are more humane and can interact better with human users.

1. Introduction

The intersection of technology with everyday human life has become an increasingly common aspect of contemporary society, and social robots stand at the forefront of this fusion. In recent years, with the advancement of large model technologies, social robots have become more flexible and natural in mimicking human communication and emotional responses, and have been extensively deployed and designed as robot bartenders [1], conversational partners [2], negotiation mediators [3], and shopping assistants [4]. Social robots are capable of engaging with users in more nuanced and seemingly empathetic interactions [5], and thus significantly affect the quality of long-term relationships between humans and machines, relationships that rely heavily on shared emotional experiences [6]. There is a widespread tendency among people to project emotions onto these machines, reflecting the fundamental mental models they hold about robots, and mental models play a crucial role in the sustainability of human–robot relationships. Mental models refer to the cognitive structures individuals hold that describe, explain, or predict system functionalities, and design in human–robot interaction should match users' cognitive structures [7]. The role of emotion and user modeling in the domain of social robots is particularly worth exploring [8]. Insights into mental models can guide robot developers in designing robots that better fulfill human users' emotional and social needs, thereby enhancing the overall quality of human–robot interaction and promoting the sustainable development of human–machine relationships.
The attribution of stance in the field of human–robot interaction refers to how individuals interpret and predict the behavior of artificial intelligence (AI) or robots [9]. The intentional stance is a predictive and interpretative strategy that assigns "intentional" states, such as beliefs and desires, to living beings and inanimate systems, and then rationalizes the actions of agents based on these states [10]. When users adopt the intentional stance, they assume that the AI entities they interact with possess human-like ways of thinking and emotional experiences, whereas users adopting the design stance perceive artificial intelligence as a system operating according to algorithms and developers' predefined goals [11]. A recent study found that the act of designing a robot through programming can influence the likelihood that people will attribute intentionality to it [12], which informs the use of stance attribution as a design cue. However, current research often treats stance attribution as users' ultimate attitude towards artificial intelligence and robots, with less consideration of whether using stance attribution explanations as metaphorical design cues in human–robot interaction design might influence people's mental models of robots.
In human–robot interaction (HRI) research, there is a surprising phenomenon: people's assessments of their attitudes towards robots rely fundamentally on explicit measures such as subjective ratings and questionnaires [13]. Although this method seems effective, it overlooks the importance of implicit measurement. Subjective ratings and questionnaires often lead participants to think in a way guided by the test questions, encouraging them to reason by human standards and to attribute human characteristics to robots. For instance, the question "Do you think robots can have morals?" essentially imposes human traits onto robots, potentially misleading participants into evaluating an entity inherently governed by programming and algorithms with inappropriate anthropomorphized standards, resulting in certain limitations in research design and paradigms. The associative–propositional evaluation model provides a deeper understanding of this phenomenon. This model distinguishes two types of cognitive processes: associative processes and propositional processes. In attitude formation, associative processes play a critical role involving rapid, spontaneous psychological responses, while propositional processes are based on information analysis and reasoning from prior experience [14]. When forming mental models, users do not operate entirely according to formal logic rules and usually do not use all the available information for comprehensive logical deduction. Instead, they tend to focus on the task at hand and only reason when they believe their objectives are related to system behavior [15]. Therefore, we speculate that the formation of mental models similarly involves an immediate process dependent on associations and a process driven by propositional prior experience. Understanding this dual psychological model is crucial for deepening our knowledge of the nature of human–robot interaction within HRI.
To address these research gaps, this study integrates the mental model and associative–propositional evaluation models, focusing on how users construct their mental models of robots from both propositional-processing and associative-processing perspectives. Additionally, this research examines the impact of stance attribution on individuals' affect-related mental models of robots. Mental models are internal cognitive structures that are challenging to measure directly. Previous studies have employed cognitive mapping techniques to elicit mental models [16]. This study uses electroencephalogram (EEG) experimental methods to detect participants' fine-grained psychological and cognitive processing when encountering robots. This method provides real-time information about brain activity, which is advantageous for an in-depth understanding of the cognitive mechanisms at work during human–robot interaction [17]. Through survey methodologies, the study also measures users' proposition-based mental models of robots, i.e., preconceived ideas and beliefs about robots formed from prior experience and knowledge. By integrating results from EEG experiments and surveys, this research comprehensively investigates the formation and interaction of associative and proposition-based mental models in the individual mind, thereby offering new perspectives on understanding how humans interpret the behavior and performance of robots.
The primary goal of this study is to explore whether stance attribution explanations influence users' emotional interpretation of robots and their construction of mental models, as well as the interaction between users' association-based and proposition-analysis-based mental models, further uncovering the psychological cognitive mechanisms operating in human–robot interaction. We hypothesized that belief suggestion and guidance can be realized through stance attribution explanations, which can then adjust the user's mental model. The association-oriented mental model may be affected by stance attribution explanations, leading to differences in the cognitive conflict elicited by go/nogo tasks, reflected in keystroke responses and EEG indicators. The proposition-oriented mental model is more influenced by inherent experience. Stance attribution explanations can affect the immediately activated association-oriented mental model but not the proposition-oriented mental model based on logical analysis. This study innovatively applies stance attribution explanation as a metaphorical design cue in the study of human–robot interaction, providing a new research path for the design of emotional robots. By integrating the associative–propositional evaluation model with mental model theory, this study proposes that there exist two types of mental models, namely, rapidly activated association-based mental models and experience-based propositional-analysis mental models. It unveils how this dual mental model system influences and interferes with users' attitudes towards robots and their understanding of robots' ability to express emotions. The structure of this paper is as follows: The first part provides the research background, existing gaps, and the objective of the study. The second part reviews the existing literature on stance attribution, mental models, and the associative–propositional evaluation model, and presents the hypotheses of this study. The third part details the research methodology, including the participants, experimental design, data collection, and analysis methods. The fourth part reports the experimental results. The fifth part discusses the results and explores their implications for human–robot interaction design and future research. Finally, the sixth part conveys the main conclusions of this study. This study not only unveils the cognitive dilemmas of users in emotional interactions with robots but also highlights new challenges to be addressed in HRI design: coordinating users' internal mental models to reduce cognitive errors and enhance the naturalness and fluidity of interactive experiences. Moreover, validating the theoretical model through experimental means not only provides a methodological reference for subsequent research but also offers empirical evidence for practical human–robot interaction design. This holds significant importance for understanding how humans interact with increasingly "emotional" machines and for promoting the sustainable development of human–machine relationships.

2. Literature Review and Research Hypotheses

2.1. Associative–Propositional Evaluation Model

The associative–propositional evaluation (APE) model is a theoretical framework for explaining attitudes and attitude change, introduced by Gawronski and Bodenhausen in 2006. This model suggests that an individual’s evaluative judgment about an attitude object is based on two distinct psychological processes: associative processes and propositional processes [14]. Associative processes refer to the rapid and automatic affective reactions that constitute an evaluative attitude primarily through associative processing. In contrast, propositional processes involve the logical reasoning and evaluation of an attitude object, which constitute an evaluative attitude primarily through propositional content. The APE model highlights that the associative and propositional processes interact with each other, jointly determining individuals’ attitudes and changes in attitudes [18]. The APE model represents a significant advancement over the traditional theories of attitudes, such as the dual-process models [19] or models focusing solely on implicit attitudes within a single framework. These views tend to conceive implicit and explicit attitudes as a unitary structure that displays different characteristics under varying conditions. In contrast, the APE model proposes that implicit and explicit attitudes are based on distinct psychological mechanisms, have different ways of formation and change, and serve different functions and effects [20].

2.2. Mental Models and Design Stance and Intentional Stance

Mental models are internal representations that individuals use to understand, predict, and explain their external environment, describing the cognitive structures that depict, explain, or anticipate the functionality of systems [7]. Mental models embody long-term knowledge about systems, stored so that it can be invoked when needed to interact with the corresponding systems. These internally developed models assist in effectively directing limited attention [21]. Mental models are typically activated automatically, guiding individuals' attitudes, judgment processes, decision making, and actions. Previous studies have employed cognitive mapping techniques to elicit mental models [16].
When constructing mental models, users do not necessarily adhere to the rules of formal logic. For instance, people generally do not make all the possible logical inferences from the available information. Instead, users tend to concentrate their attention on the task at hand and make inferences only when they perceive a connection between their goals and the system behavior [15]. Because humans are active processors of information, mental models are dynamically adjusted and modified over time; at the same time, they exhibit strong resistance to change, and the resulting persistent inaccuracies can cause frustration among users [15].
Mental models hold significant importance in the human–robot interaction field, aiding designers and researchers in understanding user needs, preferences, and difficulties, thereby enhancing the usability, learnability, and acceptability of systems or interfaces [22,23]. A study by Revell and Stanton (2018) on the design of domestic heating interfaces showed that designing interfaces to convey how the system operates through users' mental models can have a positive impact on achieving domestic heating goals [24]. Brennen et al. (2020), through structured interviews, revealed the complex multimodal mental models of mobile health application users, emphasizing the importance of integrating interdisciplinary trends and balancing multidimensional, conflicting dimensions in design [25]. Urgen and colleagues reasoned that although participants form initial mental models (and expectations) based solely on appearances, these preconceived models become untenable when a robot's actions reveal its true nature, resulting in increased feelings of incredulity in such instances [26].
Although current studies have found that mental models are automatically activated and continually changing, there still remains a gap in exploring how mental models, similar to attitude formation, could be rapidly and automatically activated and how they follow a logical reasoning and evaluation process based on prior experiences at a subconscious level.
Design stance and intentional stance are two fundamental modes of thinking for explaining and predicting the behavior of complex systems, as introduced by Dennett. The design stance starts from the functions and purposes of the system, assuming the system operates according to certain design principles or rules, and thus infers the system's behavior and state. For example, "computers are designed to execute programs, so they will operate according to the program's instructions". The intentional stance, on the other hand, begins with the system's mental states, assuming the system possesses psychological attributes such as beliefs, desires, and intentions. People tend to attribute autonomy, purposiveness, and even emotional states to the agent, thereby deducing the system's behavior and state [10]. As artificial agents become increasingly common in daily life and their interactions more complex, attributing "intentionality" to their behavior becomes a key strategy for non-specialist users to understand, predict, and learn about them. Furthermore, under the premise of transparency in agent design, such attribution is both necessary and ethically sound [27]. The design stance and intentional stance play significant roles and carry important implications in the field of human–robot interaction; they are widely applied within the domain of social robots [28] and deeply influence people's expectations and evaluations of robot behavior [29]. When humans adopt the intentional stance towards robots, they perceive them as more trustworthy [30]. By adopting the intentional stance, humans can enhance the processing of social signals through the efficient information-processing system of social cognition; therefore, socially aware robots that can induce an intentional orientation in humans might offer a superior user experience in interactions [31]. Studies have shown that in short-term interactions, people are more inclined to attribute intentionality to robots, while long-term exposure to robots' gaze behaviors can weaken such attributions, revealing the influence of gaze behaviors on the cognitive attribution of robotic intentionality and its dynamic change over time [32]. Bossi et al. (2020) found that individuals' resting-state brainwave activity could predict whether they perceive robot behavior as intentional or as a result of mechanical design [9]. Understanding these two stances is crucial not only for analyzing how humans interact with advanced technologies but also serves as an important consideration in designing and promoting new technological products.
However, current research primarily explores the effects of attributing intentionality to robots [31], with less emphasis on how adopting stance attribution in human–machine interaction design to explain such metaphors and implications might influence people’s attitudes towards robots.

2.3. Research Related to Robotics and Emotion

With the advancement of robotics, the field of human–robot interaction (HRI) has increasingly focused on how humans perceive and respond to the emotional expressions displayed by robots, as well as the significance of this interaction process in enhancing human–robot relationships. Picard (1997) first introduced the concept of “affective computing”, emphasizing the central role of sensing, understanding, and expressing emotion in human–robot interaction, marking the beginning of the broad recognition of the importance of emotion in the research field of human–robot interaction [33]. Emotion is vital for improving the naturalness, authenticity, and effectiveness of human–robot interaction, constituting a significant research direction in the fields of human–robot interaction and affective computing. This involves the extraction and analysis of emotional information from various human interactive modes [34]. People react to the emotions of robots and expect these emotional responses to remain consistent throughout multiple interactions [6]. From a technical perspective, Pessoa (2017) advocated for the inclusion of components related to emotion in the information processing architecture of intelligent robots, emphasizing cognitive–emotional integration as a core design principle to achieve the comprehensive integration of emotion in robotic systems [35]. Ficocelli et al. (2016) proposed an emotion–behavior module for assisting HRI, capable of adaptively displaying appropriate emotions based on human well-being and enhancing adaptability to new scenes through online updates, thereby proving the effectiveness of utilizing emotional assistance in behavior within HRI [36]. Shao et al. (2022) demonstrated a new method of inducing and detecting emotions by combining robotic nonverbal behavior with emotional music, showing the effectiveness of a neural network model trained on electroencephalogram signals in detecting emotional valence and arousal levels [37].
Subsequently, with the development of technology and theory, researchers began to explore the impact of robot emotions on the user experience in human–robot interaction. Hieida and Nagai (2022) outlined the research on social emotions in robotic technology, highlighting the transition from basic emotions to advanced social emotions studied in psychology and neuroscience [38]. Becker et al. (2022) explored the research agenda on the role of emotional communication in service robots and its impact on customer experience, aiming to understand how service robots can address labor shortages, enhance customer interaction through emotional communication, and replace service personnel [39]. Spekman et al. (2021) conducted two experiments to investigate how actual emotional coping affects the perceptions of robots, concluding that there is an interaction between emotion and coping potential in robotic perception, and found that actual interactions with robots may offset the anticipated emotional impact [40]. Spatola et al. (2021) showed that people's explicit and implicit attributions of primary and secondary emotions to robots could predict their anthropomorphization of robots, whereas secondary emotional attributions were related to perceiving the robot as having more warmth and capability [41]. Johnson (2022) revealed that users interact with the social chat robot Replika for reasons such as interest, social support, and addressing health issues, discussing a wide range of topics, suggesting that a social chat robot can provide multifaceted support while emphasizing that user interest is an important motivational factor [42]. Andreasson et al. (2018) explored the possibility of conveying positive and negative emotions through touch using the Nao robot in HRI, finding that female participants took longer and used more diverse interactions to convey emotions than male participants. Based on emotional valence, different emotions could be distinguished through the amount and duration of touch, providing new insights into human–robot touch interaction [43]. Zheng et al. (2021) explored the impact of the timing of grip and touch interactions on expressing emotions such as warmth and fear, finding that participants tended to grip before the climax of fearful scenes and after the climax of warm scenes. By modeling the probability distribution of human touch behavior and robot implementation, this research offers new insights into human–robot touch interaction [44]. However, most of these studies focus on the impact of robot emotional expression on users' immediate experience; less attention has been paid to how users interpret the motivation and intention behind robot emotional expression, and how this interpretation affects users' construction of mental models of robots. This study focuses on the stance attribution explanation of robot emotional expression and explores how different types of stance attribution explanations affect users' construction of mental models of robots and human–robot interaction behavior.

2.4. Research Related to EEG in Human–Robot Interaction

Electroencephalography (EEG) is a non-invasive technique for recording the electrical activity of the cerebral cortex. It offers high temporal resolution, portability, and low cost, making it capable of reflecting users' cognitive and emotional states [45]. Due to these advantages, EEG is widely utilized in the field of human–robot interaction, particularly for monitoring brain activity in HRI contexts [46]. Researchers are able to identify attention levels, emotional states, and fatigue in users during interactions with robots or computer systems by analyzing EEG signals [47]. Importantly, using EEG to explore brain activity patterns when users perform specific tasks provides a novel perspective for understanding complex cognitive processes and emotional feedback. Additionally, EEG has been applied in user experience (UX) evaluation methods to achieve more refined adjustments and optimizations in human–machine interface design, directly impacting users' acceptance and satisfaction levels. EEG technology is employed to examine action planning and outcome monitoring in HCI [48], and is extensively used in the field of brain–computer interfaces [49]. Perez-Osorio and colleagues (2021) discovered through EEG that even gaze signals from robots unrelated to the task significantly interfere with people's understanding of others' mental states. This influence is manifested by increased rates of judgment errors, more tortuous eye-movement paths, and the induction of cognitive conflicts in EEG activity [50]. The neurocognitive mechanisms of human–robot interaction will be critical to optimizing social contact between humans and robots [51], and their study must incorporate neuroscience tools such as EEG to explore long-term, embodied human–robot interaction relationships.
The time–frequency analysis of EEG is an analytical method that observes the frequency content of signals without losing information about their temporal evolution [46]. Event-Related Spectral Perturbation (ERSP) is an EEG indicator derived from the time–frequency analysis of EEG signals and is pivotal for revealing brain response mechanisms during conflict-processing tasks such as decision making, error monitoring, and attention control [52]. The theta (θ)-band ERSP plays a critical role in illuminating the brain's response mechanisms during tasks that require conflict processing [53]. The anterior cingulate cortex (ACC) is a crucial brain region for conflict detection, and theta-band activity is a key neurological feature revealing its involvement [54]. Studies have used theta-band ERSP as a cognitive conflict indicator [55]. Functional connectivity is often quantified through the phase-locking value (PLV), a measure for assessing the synchrony of neural oscillations between different brain regions in the theta band. The synchronicity between the medial and lateral prefrontal lobes in the theta band serves as evidence of physiological responses during conflict processing [56]. Research has found that specific activities in both the beta (β) and theta bands are indicators of the basal ganglia's influence on the prefrontal cortex in suppressing impulsive responses during conflicts [57]. Changes in the theta and beta bands are associated with cognitive conflict and error adaptation [58]. Studies have also found that power changes in the beta band in go/nogo tasks are related to inhibitory functions [59].
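For concreteness, the PLV between two channels is the magnitude of the trial-averaged phase-difference phasor, PLV = |(1/N) Σₙ exp(i(φ₁,ₙ − φ₂,ₙ))|. The sketch below is a minimal illustration of this computation in the theta band; it uses the common Hilbert-transform approach to phase extraction (the present study derives time–frequency information via an STFT; see Section 3.4), and all data shapes and parameter values are our assumptions rather than the authors' implementation.

```python
# A minimal sketch of the theta-band phase-locking value (PLV) between two
# channels. Phases are extracted here with a band-pass filter plus Hilbert
# transform, a common equivalent to the paper's STFT-based approach.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs=250.0, band=(4.0, 7.0)):
    """PLV per time point; x, y: arrays of shape (n_trials, n_samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))
    phase_y = np.angle(hilbert(filtfilt(b, a, y, axis=-1), axis=-1))
    # Magnitude of the trial-averaged unit phasor of the phase difference
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y)), axis=0))

# Example: synthetic "FCz" and "F6" epochs, 50 trials x 750 samples (3 s at 250 Hz)
rng = np.random.default_rng(0)
fcz, f6 = rng.standard_normal((2, 50, 750))
print(plv(fcz, f6).shape)  # (750,): one PLV value per sample
```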

2.5. Research Hypotheses

This study employed the go/nogo paradigm to explore users' cognitive and emotional reactions when confronted with different stance attribution explanations. The go/nogo paradigm is a classical task paradigm that effectively measures an individual's response inhibition and conflict control capabilities when faced with different types of stimuli [59]. By requiring participants to respond to emotional words (go stimuli) and not to neutral words (nogo stimuli) and comparing the error rates and event-related perturbation changes under different stance attribution explanation conditions, the impact of stance attribution explanations on users' emotional interpretation and mental model construction can be effectively assessed. The go/nogo paradigm is sensitive in capturing participants' cognitive conflicts [60], reflected in prolonged reaction times and increased error rates when faced with stimuli that contradict expectations. It also reveals the processing of automatic emotional regulation [61], as participants must respond quickly to emotional words without much time for deep consideration.
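As an illustration of how behavioral performance is scored in this paradigm, the sketch below codes misses on go (emotional-word) trials and false alarms on nogo (neutral-word) trials; the data layout and column names are hypothetical stand-ins, not the study's actual data format.

```python
# A minimal sketch of go/nogo error scoring, assuming a trial log with columns
# "word_type" ("emotional" = go, "neutral" = nogo) and a boolean "responded".
# Column names are illustrative placeholders.
import pandas as pd

def gonogo_error_rates(trials: pd.DataFrame) -> pd.Series:
    go = trials[trials["word_type"] == "emotional"]
    nogo = trials[trials["word_type"] == "neutral"]
    return pd.Series({
        "go_errors": 1.0 - go["responded"].mean(),   # misses on emotional words
        "nogo_errors": nogo["responded"].mean(),     # false alarms on neutral words
    })

# Example with four toy trials
log = pd.DataFrame({"word_type": ["emotional", "emotional", "neutral", "neutral"],
                    "responded": [True, False, False, True]})
print(gonogo_error_rates(log))  # go_errors 0.5, nogo_errors 0.5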
The study explored whether stance attribution explanations of robot behavior could influence users' affect-related mental models (see Figure 1). According to the research conducted by Pataranutaporn et al. (2023), the activation of beliefs plays a vital role in the formation of users' mental models, which is crucial for how users understand and react to AI behavior [62]. These mental models serve as guides for users' understanding of and interaction with artificial intelligence systems [63]. In human–robot interaction, we hypothesize that the suggestion and guidance of beliefs can be achieved through stance attribution explanations, thereby adjusting users' mental models. Mental models influence users' expectations of and attitudes towards robots [64]. Metaphors, seen as mental models, simplify the understanding of program operations but often fail to accurately map the actual workings of systems. Users might misunderstand or make errors when applying specific metaphors to other aspects of the system because the expectations formed from the metaphors do not match reality [15].
According to the associative–propositional evaluation model, we speculate that users' mental models of robots can be divided into two kinds: one dominated by associative processes and aided by propositional processes, also known as immediately activated mental models, and the other influenced by users' prior experiences, prioritizing analytical propositional processes over associative processes. Association-dominated mental models can be influenced by stance attribution explanations, whereas proposition-dominated models are more influenced by inherent experience. Under immediate tasks, the quick activation elicited by images with stance attribution explanations could lead users to be more easily influenced by metaphorical content, thereby perceiving robots as capable of emotions. However, in questionnaires, even when stance attribution explanations are present, the logical analysis evoked by questionnaire items is processed propositionally. In this process, prior experience plays a decisive role: once a propositional cognitive assessment is stimulated, the effect of the stance attribution explanation may be overridden by inherent experience. Thus, in proposition-dominated mental models, the impact of stance attribution explanations might not be significant. Figure 1 presents the theoretical model framework of this study, illustrating the impact of stance attribution explanations on users' mental models through a combination of electroencephalogram (EEG) experiments and questionnaire surveys. The diagram elucidates how the two types of mental models interact within the individual cognitive process in relation to users' emotional understanding of robots. We hypothesize that users, influenced by images explaining robot behavior through stance attribution, will activate an associative mental model. In this model, under the intentional stance explanation, individuals perceive robots as having emotions and consciousness. However, in a subsequent task of determining whether words are emotional, users initiate an analytical propositional pathway influenced by prior experience, thereby believing robots lack emotions. This contrasts with the rapidly activated mental model under the intentional stance explanation, leading to discrepancies in error rates and event-related perturbation indices. Conversely, when the mental model activated by stance explanations leads users to align with their prior belief that robots lack emotions, no differences are observed in the subsequent task of identifying emotional vocabulary.
Reflecting on the primary aim of this study, which is to explore whether stance attribution explanations influence users’ emotional interpretations and the mental model construction of robots, based on the aforementioned theoretical analysis, this study proposes the following hypothesis:
Hypothesis H1.
Stance attribution explanations (intentional and design stances) will influence users' association-dominated mental models, thereby causing cognitive conflicts in go/nogo tasks, observable in both keystroke responses and EEG indicators.
Hypothesis H2.
Stance attribution explanations (intentional and design stances) will differ in their influence on users' association-dominated mental models and proposition-dominated mental models: they can affect immediately activated association-dominated mental models but not analytical proposition-dominated mental models.

3. Methodology

3.1. Participants

For electroencephalogram (EEG) experiments, considering that more than 50 repeated stimuli are required for each experimental condition for an individual subject, the most suitable range of participant numbers is 12 to 30 [65]. The sample size estimation was performed using G*Power 3.1, which established that 24 participants were needed to meet the sample requirements of the experiment, assuming a statistical power of 0.8 and an effect size of 0.25. This study recruited a total of 25 participants (12 males and 13 females), with an average age of 23.64 years and a standard deviation of 7.319, through the participant pool of the Behavioral and Decision-Making Laboratory at Huaqiao University and via community recruitment. All the participants had normal or corrected-to-normal vision and were right-handed, with no history of psychiatric disorders. Before the commencement of the experiment, all the participants signed an informed consent form, and they all received a certain amount of compensation after the conclusion of the experiment. Table 1 shows the demographic information of the participants. This study was approved by the Ethics Committee of Huaqiao University.
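The reported estimate can be roughly re-created in code. The sketch below assumes a one-group repeated-measures ANOVA with four within-subject cells (the 2 × 2 design) and a measurement correlation of 0.5, G*Power's default; both are our assumptions, since the paper reports only f = 0.25, α = 0.05, and power = 0.8.

```python
# A rough re-creation of the sample size estimate (G*Power: repeated measures,
# within factors; f = 0.25, alpha = 0.05, power = 0.80). The m = 4 cells and
# correlation rho = 0.5 are assumptions on our part.
from scipy import stats

def rm_anova_power(n, f=0.25, m=4, rho=0.5, alpha=0.05):
    lam = f**2 * n * m / (1 - rho)             # noncentrality (G*Power convention)
    df1, df2 = m - 1, (n - 1) * (m - 1)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, lam)

n = 3
while rm_anova_power(n) < 0.80:                # smallest n reaching 80% power
    n += 1
print(n, round(rm_anova_power(n), 3))          # on the order of the reported 24
```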

3.2. Materials

The stance of the robots was explained using images from the Intentional Stance Test-2 in the study conducted by Spatola et al. (2021) [29]. Each image was accompanied by two interpretations: an intentional stance explanation and a design stance explanation. The set of pictures was expanded and supplemented to meet the number of trials required for the electroencephalogram (EEG) experiment. For the study, 50 emotional and 50 neutral words were selected. The emotional words were drawn from those studied by Zhao (2016) and include both negative and positive emotional words such as sadness, happiness, anger, and surprise [66]. Neutral words, conversely, involve vocabulary not associated with human emotions or sentiments, exemplified by terms like "signal" and "document". In the visual recognition tasks, images were presented, each prominently featuring a robot's face with either an emotional or a neutral word beneath it. Each image displayed only one word and was presented for 2000 ms, ensuring the participants had ample time for recognition and judgment. A total of 100 images were utilized, corresponding to the 100 lexical items (50 emotional and 50 neutral words). The experiment consisted of 200 trials, with emotional and neutral words each appearing in 100 trials and every image being shown twice. All the experimental images and materials were developed by the authors and assessed and approved by four experts.
The questionnaire used in this research drew primarily from the scales for the perception of psychology by Shank and DeSanti (2018) [67] and for perceiving anthropomorphism by Lee, Park, and Chung (2023) [68]. It selected items pertaining to the emotional dimension, with specific items posing statements such as “I think robots have feelings”, “I believe robots have the capacity to experience emotions”, and “I consider robots to have their own emotions”. The structure of the questionnaire began with images for stance attribution explanations, followed by items from the mind perception scale regarding the existence of emotions in robots, and ended with the collection of personal information.

3.3. Procedures

The stimulus procedure of the experiment was presented using the E-Prime program, and the entire experimental process consisted of 200 trials. The experiment followed a go/nogo paradigm, employing a 2 (stance interpretation type: intentional stance explanation vs. design stance explanation) × 2 (lexical type: affective lexicon, i.e., go stimuli vs. neutral lexicon, i.e., nogo stimuli) within-subject factorial design. The participants sat comfortably in an electromagnetically shielded room with dimmed light and reduced noise, facing a computer screen at a distance of 100 cm. The go/nogo paradigm is a widely used method for measuring the ability to inhibit responses, requiring subjects to react or not react to different types of stimuli [69]. In this experiment, images explaining different stance attributions of the robot (intentional explanation and design explanation) were presented first, followed by a robot avatar and a word. Word stimuli were divided into affective and neutral lexicons, and the participants were instructed to press a button in response to affective words and to refrain from responding to neutral words. Their brainwave activity was recorded simultaneously. After all the conditions of the intentional or design explanation had been presented, the participants were asked to complete questionnaires. Each participant took part in all the conditions of the brainwave trials and filled out two questionnaires. The order of the explanation conditions was randomized among the participants, meaning one participant might first undergo the intentional explanation condition and then the design explanation condition, while the next participant could have the order reversed. Each participant completed 10 practice trials before the formal experiment to become familiar with the task. The official experiment was divided into two sessions, each comprising 100 trials, with two intermissions for rest. The experimental procedure is illustrated in Figure 2. In a single trial, the participants first viewed a fixation point for 500 ms, followed by an image presenting an explanation of the robot's stance for 2000 ms, with a 500 ms blank page in between. Lastly, an image displaying the robot's face and a word was presented for 2000 ms, and the participants judged whether the word was related to emotions or feelings. If it was, they pressed the "F" key; otherwise, no key press was required.
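To make the trial timing concrete, the sketch below re-creates one trial in PsychoPy; the study itself used E-Prime, so this is purely illustrative, and the image file names are hypothetical placeholders.

```python
# A minimal PsychoPy sketch of one trial (fixation 500 ms, stance-explanation
# image 2000 ms, blank 500 ms, robot face + word 2000 ms with an "F" response
# window). The study used E-Prime; file names here are hypothetical.
from psychopy import visual, core, event

win = visual.Window(fullscr=True, color="black")

def run_trial(stance_img, word_img):
    visual.TextStim(win, text="+").draw(); win.flip(); core.wait(0.5)           # fixation
    visual.ImageStim(win, image=stance_img).draw(); win.flip(); core.wait(2.0)  # stance explanation
    win.flip(); core.wait(0.5)                                                  # blank screen
    visual.ImageStim(win, image=word_img).draw(); win.flip()                    # robot face + word
    # Press "F" if the word is emotional (go); withhold for neutral (nogo)
    return event.waitKeys(maxWait=2.0, keyList=["f"])

keys = run_trial("intentional_explanation_01.png", "robot_face_word_01.png")
print("responded" if keys else "no response")
```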

3.4. Data Acquisition and Analysis

The EEG data were collected using a Neuroscan SynAmps2 amplifier (Curry 7, Neurosoft Labs, Inc., Virginia, USA) with a sampling rate of 1000 Hz. All the electrode impedances were maintained below 10 kΩ. The FCz electrode was used as the online reference, and the EEG data were processed using the EEGLAB 2023 toolbox. During processing, the data were first re-referenced to both mastoids, and the sampling rate was then reduced to 500 Hz. The data were band-pass filtered from 0.1 to 40 Hz, with a 48–52 Hz band-stop (notch) filter to eliminate power line interference, and components containing artifacts such as blinks, eye movements, and electromyographic signals were removed using independent component analysis. A time–frequency analysis was conducted on the segments from 1000 ms before to 2000 ms after stimulus presentation, and baseline correction was applied from −800 to −200 ms, using a short-time Fourier transform (STFT) with a fixed 400 ms time window. The beta- and theta-band Event-Related Spectral Perturbations (ERSPs) were primarily observed in the frontal and central brain regions [59]; this study therefore analyzed six electrodes (FCz, FC1, FC2, Cz, C1, and C2) in these regions. In the functional connectivity analysis, to reduce computational complexity, the data sampling rate was reduced to 250 Hz, and the electrodes located at the edges (P7, P8, F7, and F8) were removed. The data were then transformed using the STFT, with the −600 to −200 ms time window as the reference for baseline correction. By examining theta-band synchrony between the medial and lateral prefrontal cortex, cognitive conflicts occurring at an unconscious level were captured. This study specifically analyzed the phase-locking value (PLV) between electrode FCz (located over the medial prefrontal cortex) and electrodes F6 and F4 (located over the lateral prefrontal cortex) to assess the functional connectivity between these two brain regions [56].
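A minimal re-creation of this preprocessing pipeline in MNE-Python is sketched below; the original analysis used Curry and EEGLAB, so the file name, channel labels, event extraction, and excluded ICA components are assumptions for illustration only.

```python
# A minimal MNE-Python sketch of the preprocessing steps described above
# (mastoid re-reference, downsample to 500 Hz, 0.1-40 Hz band-pass, 50 Hz notch
# for the 48-52 Hz line-noise band, ICA artifact removal, -1 to 2 s epochs).
import mne

raw = mne.io.read_raw_curry("subject01.cdt", preload=True)  # hypothetical file
raw.set_eeg_reference(["M1", "M2"])        # re-reference to both mastoids
raw.resample(500)                          # 1000 Hz -> 500 Hz
raw.filter(l_freq=0.1, h_freq=40.0)        # 0.1-40 Hz band-pass
raw.notch_filter(freqs=50.0)               # suppress power-line interference

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]                       # blink/eye-movement components (chosen by inspection)
raw = ica.apply(raw)

events = mne.find_events(raw)              # assumes a stimulus trigger channel
epochs = mne.Epochs(raw, events, tmin=-1.0, tmax=2.0, baseline=None, preload=True)
```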

4. Results

4.1. Behavioral Results

The behavioral outcomes for the participant error rates are depicted in Figure 3. There was a significant main effect of the stance interpretation type [F(1, 24) = 51.107, p < 0.001, ηp2 = 0.680]; a significant main effect of the lexical type [F(1, 24) = 61.567, p < 0.001, ηp2 = 0.720]; and a significant interaction between the stance interpretation type and lexical type [F(1, 24) = 63.548, p < 0.001, ηp2 = 0.726]. Pairwise comparisons revealed that under design stance interpretations, the participants' error rates (M = 3.0%, SE = 1.0%) were significantly lower than under intentional stance interpretations (M = 37.9%, SE = 4.2%, p < 0.001). The error rates for the emotional lexical items (M = 38.1%, SE = 2.8%) were significantly higher than those for the neutral lexical items (M = 2.8%, SE = 1.1%, p < 0.001). A simple effects analysis found that when recognizing the emotional lexicon, the participants' error rates under design stance interpretations (M = 3.0%, SE = 1.4%) were significantly lower than under intentional stance interpretations (M = 73.3%, SE = 8.5%, p < 0.001), whereas for the neutral lexical items, there was no significant difference between the two stance interpretations (p = 0.714 > 0.05). The type of stance attribution explanation (intentional vs. design stance) thus had a significant effect on the participants' error rates, supporting H1.
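For reference, an analysis of this form can be reproduced with standard tooling. The paper does not state its statistics software, so the sketch below uses pingouin on synthetic stand-in data with an assumed long-format layout.

```python
# A minimal sketch of the 2 (stance) x 2 (lexical type) repeated-measures ANOVA
# on error rates, with partial eta squared as reported. Synthetic data only.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
df = pd.DataFrame([
    {"subject": s, "stance": st, "lexical": lx, "error_rate": rng.uniform(0, 0.8)}
    for s in range(25)                       # 25 participants
    for st in ("intentional", "design")
    for lx in ("emotional", "neutral")
])

aov = pg.rm_anova(data=df, dv="error_rate", within=["stance", "lexical"],
                  subject="subject", effsize="np2")
print(aov[["Source", "F", "p-unc", "np2"]])

# Pairwise comparisons, mirroring the reported simple effects analysis
posthoc = pg.pairwise_tests(data=df, dv="error_rate",
                            within=["stance", "lexical"], subject="subject")
```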

4.2. Time Frequency Results

Brain electrical spectrum maps were obtained through the short-time Fourier transform (STFT). The time–frequency distribution of the Cz electrode in the 0.1–30 Hz range is shown in Figure 4. A 2 (stance interpretation type) × 2 (lexical type) × 6 (electrode) repeated measures analysis was conducted on the energy values in the theta band (4–7 Hz) from 1500 ms to 1600 ms. The Event-Related Spectral Perturbation (ERSP) results showed that the main effect of the electrode was not significant (F(4, 21) = 2.159, p = 0.109 > 0.05, ηp2 = 0.291); the main effect of the stance interpretation type was significant (F(1, 24) = 4.622, p = 0.042 < 0.05, ηp2 = 0.161); the main effect of the lexical type was significant (F(1, 24) = 4.341, p = 0.048 < 0.05, ηp2 = 0.153); and the interaction between the electrode and lexical type was not significant (F(4, 21) = 1.698, p = 0.188 > 0.05, ηp2 = 0.244). The interaction between the electrode and stance interpretation type was not significant (F(4, 21) = 0.622, p = 0.652 > 0.05, ηp2 = 0.106), while the interaction between the lexical type and stance interpretation type was significant (F(1, 24) = 6.505, p = 0.018 < 0.05, ηp2 = 0.213). Pairwise comparisons revealed that the theta-band (4–7 Hz) energy induced by emotional vocabulary (M = 1.159, SE = 0.369) was significantly higher than that induced by neutral vocabulary (M = 0.301, SE = 0.214, p = 0.048 < 0.05), and that the theta-band energy induced by the intentional stance interpretation (M = 0.908, SE = 0.279) was significantly higher than that induced by the design stance (M = 0.552, SE = 0.181, p = 0.042 < 0.05). A simple effects analysis found that under the emotional vocabulary condition, the theta-band energy induced by the intentional stance interpretation (M = 1.765, SE = 0.561) was significantly higher than that induced by the design stance interpretation (M = 0.553, SE = 0.237, p = 0.012 < 0.05), supporting H1, whereas with neutral vocabulary there was no significant difference between the two stance interpretations (p = 0.094 > 0.05). Because the participants were asked to respond to emotional vocabulary while ignoring neutral vocabulary, which entails additional recognition and suppression work by the brain, our study did not need to analyze the difference in EEG indicators between emotional and neutral vocabulary (i.e., go versus nogo) under the different attribution stance interpretations, as this comparison would be confounded by the cognitive inhibition demands of the experimental paradigm.
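A minimal single-channel re-creation of this ERSP pipeline is sketched below; the 400 ms STFT window, the −800 to −200 ms baseline, and the theta/latency windows follow the text, while the synthetic data and the 20 ms hop size are our assumptions.

```python
# A minimal single-channel ERSP sketch via the short-time Fourier transform:
# fixed 400 ms window, dB baseline correction over -800 to -200 ms, then mean
# theta (4-7 Hz) power at 1500-1600 ms. Synthetic data stand in for real epochs.
import numpy as np
from scipy.signal import stft

def ersp_db(trials, fs=500, win_ms=400, base=(-0.8, -0.2), tmin=-1.0):
    """ERSP in dB; trials: (n_trials, n_samples) for one channel."""
    nper = int(fs * win_ms / 1000)                        # 400 ms -> 200 samples
    f, t, Z = stft(trials, fs=fs, nperseg=nper, noverlap=nper - 10, axis=-1)
    power = (np.abs(Z) ** 2).mean(axis=0)                 # trial-averaged power
    t = t + tmin                                          # align to stimulus onset
    bmask = (t >= base[0]) & (t <= base[1])
    baseline = power[:, bmask].mean(axis=1, keepdims=True)
    return f, t, 10 * np.log10(power / baseline)

rng = np.random.default_rng(0)
trials = rng.standard_normal((100, 1500))                 # 100 epochs, -1 to 2 s at 500 Hz
f, t, ersp = ersp_db(trials)
theta = ersp[(f >= 4) & (f <= 7)][:, (t >= 1.5) & (t <= 1.6)].mean()
print(round(float(theta), 3))                             # theta-band ERSP (dB)
```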
For the beta-band (18–22 Hz) energy values from 900 ms to 1000 ms, a 2 (stance interpretation type) × 2 (lexical type) × 6 (electrode) repeated measures analysis was conducted. The ERSP results showed that the main effect of the electrode was significant (F(5, 20) = 3.719, p = 0.015 < 0.05, ηp2 = 0.482); the main effect of the stance interpretation type was not significant (F(1, 24) = 0.345, p = 0.562 > 0.05, ηp2 = 0.014); the main effect of the lexical type was not significant (F(1, 24) = 2.267, p = 0.145 > 0.05, ηp2 = 0.086); and the interaction between the electrode and lexical type was not significant (F(5, 20) = 2.322, p = 0.081 > 0.05, ηp2 = 0.367). The interaction between the electrode and stance interpretation type was not significant (F(5, 20) = 0.875, p = 0.515 > 0.05, ηp2 = 0.179), while the interaction between the lexical type and stance interpretation type was significant (F(1, 24) = 6.813, p = 0.015 < 0.05, ηp2 = 0.221). In the context of emotional vocabulary, the beta-band energy induced by the intentional stance interpretation (M = 0.115, SE = 0.036) was significantly higher than that induced by the design stance interpretation (M = 0.047, SE = 0.024, p = 0.032 < 0.05), supporting H1, whereas with neutral vocabulary there was no significant difference between the two stance interpretations (p = 0.212 > 0.05).

4.3. Functional Connectivity Analysis Results

To validate the selection of the time–frequency results, a functional connectivity analysis of the brain regions associated with conflict was conducted (see Figure 5). Functional connectivity between the FCz–F6 and FCz–F4 electrode pairs was examined within the 4–7 Hz frequency range during the 1550–1650 ms period. As there was no need to examine the indicators under the nogo condition, the study focused solely on the go condition (emotional word recognition), conducting a 2 (stance interpretation type) × 2 (electrode pair) repeated measures analysis. The phase-locking value (PLV) results revealed a significant main effect of the stance interpretation type (F(1, 24) = 12.536, p = 0.002 < 0.05, ηp2 = 0.343); however, the main effect of the electrode pair was not significant (F(1, 24) = 0.447, p = 0.510 > 0.05, ηp2 = 0.018), nor was the interaction between the electrode pair and stance interpretation type (F(1, 24) = 0.059, p = 0.810 > 0.05, ηp2 = 0.002). Pairwise comparisons revealed that the PLV under the intentional stance interpretation (M = 0.031, SE = 0.009) was significantly higher than that under the design stance interpretation (M = −0.015, SE = 0.011, p = 0.002 < 0.05).

4.4. Questionnaire Results

The participants' attitudes towards whether robots have emotions under the different stance interpretations are as follows (see Figure 6): The main effect of the stance interpretation type was not significant (F(1, 24) = 2.083, p = 0.162 > 0.05, ηp2 = 0.080). Under the intentional stance interpretation, there was no significant difference in the participants' emotional attitudes (M = 2.400, SE = 0.887) compared to the design stance interpretation (M = 2.066, SE = 0.751, p = 0.162 > 0.05). These results, combined with the behavioral results and EEG indicators, showed that H2 was supported.

5. Discussion

Few studies have delved deeply into users' mental models of robots, and there has been a lack of research treating stance attribution as a form of human–robot interaction design that influences users. To understand people's attitudes towards emotional robots, it is imperative to extensively explore human cognitive processes and our interactions with technology. This study examines the impact of stance attribution on users' interpretation of robot emotions and their construction of mental models, and investigates the interaction between association-based and propositional-analysis-based mental models, revealing the mechanisms of psychological cognition in human–robot interaction. Innovatively applying stance attribution to human–robot interaction, this research expands the domains of affective computing and robot design by integrating the associative–propositional evaluation model with mental model theory, and verifies, through EEG experiments and questionnaires, the duality of mental models (based on association and on propositional analysis) and how this duality affects users' attitudes towards and understanding of robotic emotional expressions.
Event-Related Spectral Perturbation (ERSP) is a parameter commonly used in neuroscience research that offers insights into the specific activity changes in the frequency bands of brain responses under various scenarios or task stimuli [52]. Theta-band ERSP, frequently an indicator of cognitive conflict [50], reflects the neural foundation by which the brain mobilizes additional resources to evaluate information, select coping strategies, and integrate contradictory processes. Phase-locking value (PLV) measures the degree of synchronization in neural electrical signals, with a higher PLV indicating more effective inter-region brain communication and information integration. Synchronization between the medial frontal cortex (MFC) and lateral frontal cortex (LFC) in the theta band relates to cognitive conflict, error detection, and other cognitive functions [56]. The medial frontal cortex is responsible for error monitoring (the functionality of the anterior cingulate cortex) and conflict monitoring (related to task changes and social signal processing), whereas the lateral frontal cortex participates in setting and maintaining goals, responsible for more comprehensive cognitive control. Research has shown that changes in both the theta and beta bands are related to cognitive conflict and error adaptation [58]. The frequency-specific activity of both the beta and theta bands plays a role in influencing the thalamocortical nuclei to then affect the frontal cortical areas to inhibit impulsive responses during conflicts [57]. Beta-band activity is considered to be related to cognitive functions, sensorimotor integration, and various cognitive states, especially closely linked to cognitive inhibition [59]. The results combining the time–frequency analysis and functional connectivity analysis found that under intentional stance interpretation (as opposed to design stance interpretation), the participants exhibited higher response inhibition and cognitive conflicts during the emotional word recognition tasks. The behavioral outcomes indicated that under design stance interpretation conditions, the participants exhibited significantly lower error rates than under the intention stance conditions during the emotional word recognition tasks. The questionnaire results found no significant difference in emotional attitudes between the participants under intentional stance interpretation and design stance interpretation, both considering that robots do not have emotions.
Mental models are users' understanding of what a system contains, how it works, and why it operates in that manner [15]. Combining the experimental and questionnaire results, we propose that the intentional stance explanation activates an associative mental model of robots in users (relating robots to emotions), which conflicts with people's ingrained experience (the propositional mental model, i.e., the belief that robots lack emotions), thereby leading to an increased error rate in the task of recognizing emotional words under the intentional stance interpretation. The questionnaire results revealed that in the cognitive process of logical analysis dominated by propositions, stance attribution explanations do not affect users' mental models, and it is a common belief that robots do not possess emotions. However, the electroencephalogram (EEG) experiments found that under the influence of the intentional stance explanation (compared to the design stance explanation), users exhibited higher levels of vigilance and cognitive conflict, and were more prone to make mistakes. Our stance attribution explanation condition, acting as a metaphorical implication, demonstrated that metaphor can influence users' mental models. Users may form expectations based on metaphors that do not correspond to reality, leading to misunderstandings or errors [15]. Other studies have shown that the construction of mental models depends not only on the user's internal cognition but is also influenced by external cognitive structures [70]. We speculate that under immediate conditions, stance attribution explanations affect users' associative, i.e., rapid and uncontrollable, mental models, which differ from the mental models formed by analytical reasoning, thus causing direct conflict. This conflict not only triggers internal cognitive dissonance but also affects overt task performance, specifically reflected in the high error rate in recognizing emotional words under the influence of the intentional stance explanation.
This discovery reflects the inherent complexity exhibited by the human brain when processing ambiguous information. The formation of human attitudes can be divided into two modes: associative processes (fast and intuitive) and propositional processes (slow and logic-based analysis) [14]. In this context, the interpretation of intent seems to activate rapid intuitive associations, proving that mental models can be automatically activated [16]. These results are consistent with previous research findings. For instance, the study by Suzuki et al. (2015) found that humans unconsciously attribute human-like qualities, such as empathy, to robots [71]. Spatola et al. (2022) discovered that cognitive load leads users to attribute mental properties to social robots more quickly, highlighting how humans adjust their attitudes towards robots under different cognitive loads [72]. These studies corroborate that while people can be guided to attribute intelligence or emotional characteristics to robots, such attributions are often influenced by the discrepancies in individuals’ inherent experiences and knowledge structures when performing specific cognitive tasks. During the process of constructing narrative understanding, individuals build meaning by activating the relevant mental models. In the context of human–robot interaction, when external information (such as stance attribution interpretation) conflicts with an individual’s inherent beliefs, the activation and use of mental models become complex and difficult, leading to delays and errors in information processing. Mental models exhibit strong resistance to change, and this persistent inaccuracy causes distress to users [15]. This conclusion indicates that, in the context of human–machine interaction, humans’ interpretation and recognition of emotions are influenced by multiple psychological and cognitive mechanisms. The conflict between expectations and reality may also trigger psychological adaptation and adjustment mechanisms, which may gradually adjust over long-term human–machine interaction practices, thereby affecting people’s attitudes and behaviors towards robots. It also shows that mental models can serve as a framework for guiding user research and system design [70].
After a period of reflection and analysis, even if the initial activation of mental models is association-based, individuals can still adjust their emotional understanding of robots through logical analysis and rational thought. In other words, a propositionally dominated mental model, after deliberate contemplation and analysis, can regulate or "correct" the immediate reaction elicited by the stance attribution explanation. Because humans are active processors of information, mental models are dynamically adjusted and revised over time [15]; this dynamic nature indicates that an iterative design approach is necessary [70]. This adjustment process reflects the flexibility and plasticity of individual mental models, as well as the complex interplay between cognition and emotion.
In technological interactive environments, rational thinking and logical analysis are also important regulatory factors: they help users move beyond initial emotional responses and refine their understanding of robot behavior through deeper analysis. From a cognitive psychology perspective, this process can be seen as a manifestation of metacognition, the ability to monitor and regulate one's own cognitive processes [73]. Metacognition is closely related to conflict control [74]. When individuals realize that their initial emotional responses may be shaped by preconceived notions, metacognitive regulation is triggered, prompting them to re-evaluate and analyze the information that elicited those responses. Although initial mental models may rest on intuitive emotional and cognitive reactions, subsequent processing can optimize or revise these models through rational analysis and review. Even in interactions with non-human actors such as robots, people apply high-level cognitive correction mechanisms to optimize their behavior and decision making. In human–robot interaction research, understanding the specific role and impact of this capability is crucial. It also reveals how human preconceptions or biases (such as the notion that robots do not have genuine emotions) shape the psychological and cognitive processes involved in interacting with robots. If robots with consciousness and emotions were ever to emerge, prior experience would act as a preconception that biases people's attitudes, preventing individuals from accurately and quickly recognizing the robots' consciousness and emotions even when they perceive them during interaction; such a mismatch would be hazardous and would not be conducive to the sustainable development of human–machine relationships.

5.1. Theoretical Implication

The theoretical significance of this study lies in demonstrating that in human–computer interaction scenarios, individuals do not rely on a single type of mental model to process information but dynamically deploy different types of mental models according to the interaction context. This study explored the impact of stance attribution explanations on users' emotional interpretation of robots and their construction of mental models, extending the application of the associative–propositional evaluation (APE) model. The interplay between association-based mental models and propositionally based mental models offers a new theoretical framework for revealing human cognitive processing mechanisms, especially for exploring how humans process complex, multimodal information when interacting with non-human intelligent agents. This is particularly pertinent amid the widespread deployment of robots, where understanding how users construct cognitive models of robots and respond emotionally to them is especially important. Moreover, this study shows that stance attribution explanations significantly affect users' emotional interpretation and mental model construction of robots, underscoring the roles of associative and propositional processes in interpreting robot behaviors and potential emotional expressions. Lastly, this study offers new insights for designing robots that are more humanized and better able to establish interactive relationships with human users.

5.2. Managerial Implication

From a practical standpoint, the findings provide a scientific basis for designing more natural and efficient human–computer interaction interfaces. The results indicate that intentional stance explanations can significantly shape users' associative mental models of robots, highlighting the importance of deliberately guiding users' attributional stances in robot design to enhance user experience and satisfaction. By understanding how users differ when adopting intentional versus design stances, designers can more precisely tune the interaction modes and emotional expressions of artificial intelligence systems, creating robots that better meet users' emotional expectations. Incorporating implicit intentions and emotional communication in design can strengthen users' trust in and sense of belonging towards robots; for example, service robots can display human-like emotional intentions through appropriate language and behavior, reinforcing users' engagement and interaction satisfaction. This strategy not only improves the naturalness and fluidity of human–robot interaction but also supports the sustainable development of human–robot relationships.
This study can also help reduce cognitive conflict in human–computer interaction. When users' expectations do not match a robot's behavior, cognitive conflict arises [51], degrading the user experience. Designers can mitigate such conflict in several ways: increasing transparency about the robot's capabilities and limitations, adopting more natural and understandable interaction methods, and dynamically adjusting the robot's behavior according to users' emotional states and contexts. Interaction modes can likewise be customized to users' mental models: for users who tend to view robots as beings with intentions and emotions, designers can offer more anthropomorphic interaction modes, whereas for users who view robots as tools or programs, more straightforward and direct interaction methods are appropriate.
The findings also apply to real-world scenarios such as customer service, healthcare, and education. In customer service, designers could develop robots capable of recognizing users' emotions and responding accordingly, enhancing customer satisfaction; in healthcare, robots that provide emotional support and companionship could improve patients' treatment experiences. Finally, this study provides guidance for developing AI systems that recognize and adapt to users' emotional states. Understanding how users psychologically interpret robot behavior and output, especially emotional expressions, can help developers design AI systems that better mimic human emotional interaction, achieving smoother human–computer interaction. It underscores the value of interdisciplinary research in advancing human–robot collaboration and symbiosis, laying a foundation for a future society that coexists with artificial intelligence and robots, thereby promoting the sustainable development of human–robot relationships.

6. Conclusions and Future Prospects

This study explores the impact of stance attribution explanations on emotional interpretation and mental model construction during human–robot interaction, revealing the cognitive mechanisms at work in such interactions. The research found that stance attribution explanations significantly influence users' emotional interpretation of robots and their construction of mental models. Specifically, intentional stance explanations, compared with design stance explanations, led to a higher error rate in go/nogo tasks and triggered stronger event-related spectral perturbations (ERSPs) and phase-locking values (PLVs) in the θ and β frequency bands, indicating that users experience greater cognitive conflict when processing intentional stance explanations. The questionnaire results showed that the type of stance attribution explanation did not significantly affect users' explicit attitudes towards whether robots have emotions. The findings suggest that stance attribution explanations act as design cues through metaphorical hints, influencing users' mental models; users may form metaphor-based expectations that do not match reality, leading to misunderstandings or errors. This reveals the interplay between association-based and proposition-based mental models in users' perceptions of robots, expanding our understanding of the psychological mechanisms of human–robot interaction. These results have significant implications for robot design and human–robot interaction interfaces: interface designers can clarify to users the behavioral logic and decision-making basis of robots in various ways, helping them understand robots' behavioral intentions and establish accurate mental models, and by incorporating suggestive intentions and emotional communication in the design, designers can enhance users' trust in and sense of belonging to robots.
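The paper reports θ- and β-band ERSPs and PLVs but does not reproduce its analysis pipeline. As a rough illustration of the connectivity measure, the following is a minimal NumPy/SciPy sketch of the standard across-trials PLV between two electrodes; the sampling rate, band limits, channel pairing, and data are hypothetical placeholders, not the study's recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(epochs_a, epochs_b, fs, band):
    """Across-trials phase-locking value between two channels.

    epochs_a, epochs_b: arrays of shape (n_trials, n_samples) holding
    single-trial EEG from two electrodes; fs: sampling rate in Hz;
    band: (low, high) passband in Hz, e.g. (4, 8) for theta.
    """
    b, a = butter(4, band, btype="bandpass", fs=fs)
    phase_a = np.angle(hilbert(filtfilt(b, a, epochs_a, axis=1), axis=1))
    phase_b = np.angle(hilbert(filtfilt(b, a, epochs_b, axis=1), axis=1))
    # PLV(t) = |(1/N) * sum_n exp(i * (phi_a - phi_b))|: the magnitude of
    # the mean phase-difference vector across trials; 1 = perfect phase
    # locking, 0 = no consistent phase relation.
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b)), axis=0))

# Example with placeholder data: theta-band PLV over 2 s epochs at 500 Hz.
fs = 500
trials_ch1 = np.random.randn(40, fs * 2)  # 40 trials, channel 1
trials_ch2 = np.random.randn(40, fs * 2)  # 40 trials, channel 2
theta_plv = plv(trials_ch1, trials_ch2, fs=fs, band=(4, 8))
```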
This study points to a new path: in addition to designing robots with more "emotional" characteristics to meet human emotional needs, users can be educated and trained to enhance their metacognitive abilities, enabling them to interpret robot behavior more rationally and effectively and reducing over-reliance on initial impressions. Furthermore, this capacity for cognitive and emotional adjustment underscores the importance of human factors in the design of interactive systems, where the cognitive subject is an active participant rather than a passive recipient. Users' mental models and cognitive presets can be shaped and adjusted through interactive experience, further emphasizing the need to consider users' cognitive and emotional characteristics in interactive system design. Ultimately, this ability to adjust emotional cognition through reflection and analysis highlights the significance of emotional experience in human–robot interaction. Understanding how people adjust their emotions and cognition can inform the design of more natural, intimate, and adaptable interaction models, ensuring that these interactions meet human needs not only technically but also emotionally and cognitively, promoting the long-term sustainable development of human–machine relationships.
While this study provides valuable insights into the psychological processes involved in human–robot interaction, it has limitations. The go/nogo paradigm and the specific attribution stances used may not fully reproduce the naturalness and multidimensionality of everyday human–robot interaction, and thus may not fully capture the emotional and psychological state changes users undergo in natural settings. Furthermore, this study examined the short-term effects of stance attribution explanations on users' mental models and emotional responses, providing insights for designing more effective social robots; given the dynamic and prolonged nature of human–robot relationships, however, it did not address how users' mental models change and adapt over long-term interaction. In long-term interaction, users establish emotional bonds with robots, and the impact of stance attribution explanations may shift: users might initially interpret a robot's behavior based on preset beliefs, but as interactive experience accumulates, their interpretation may increasingly rest on experience-based judgments. Likewise, the influence of stance attribution on users' emotional responses may evolve, for instance from initial novelty and surprise to more stable and subtle emotional experiences. Understanding these dynamics is crucial for designing social robots capable of establishing long-term relationships with humans.
Future studies could address these shortcomings by expanding and deepening this line of research. They could explore a wider range of interaction contexts and tasks and how these affect users' emotional interpretations and mental model construction, and follow a more diverse and representative user group over time to observe how mental models evolve across prolonged human–robot interaction. By tracking users' interactions with the same robot over extended periods (e.g., weeks or months), researchers could trace trajectories of change in mental models, emotional responses, and behavioral patterns. Combining quantitative methods (e.g., surveys and physiological measurements) with qualitative methods (e.g., interviews and diary studies) would allow a comprehensive, in-depth exploration of users' experiences during long-term interaction. Conducting research in settings closer to real life, such as robots deployed in homes, workplaces, or nursing homes, would yield a better understanding of how long-term human–robot interaction affects users' daily lives.

Author Contributions

Methodology, D.L.; writing—original draft, D.L.; writing—review and editing, D.L. and Q.Z.; visualization, J.Z. and S.Q.; supervision, R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was supported by the National Social Sciences funded general projects, PRC (Grant No. 22BGL006), and Humanities and Social Sciences Planning Project of the Ministry of Education, PRC (Grant No. 20YJA630054), and National Social Sciences later funded projects, PRC (Grant No. 21FGLB041).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Huaqiao University (M2023009).

Informed Consent Statement

Informed consent was obtained from all the subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

Acronym: Full name
ERSP: Event-Related Spectral Perturbation
PLV: Phase-locking value
HRI: Human–robot interaction
EEG: Electroencephalography
APE: Associative–propositional evaluation

References

  1. Foster, M.E.; Gaschler, A.; Giuliani, M.; Isard, A.; Pateraki, M.; Petrick, R.P.A. Two People Walk into a Bar: Dynamic Multi-Party Social Interaction with a Robot Agent. In Proceedings of the 14th ACM International Conference on Multimodal Interaction, Santa Monica, CA, USA, 22–26 October 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 3–10. [Google Scholar]
  2. Hoffman, G.; Zuckerman, O.; Hirschberger, G.; Luria, M.; Shani Sherman, T. Design and Evaluation of a Peripheral Robotic Conversation Companion. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 3–10. [Google Scholar]
  3. Bevan, C.; Stanton Fraser, D. Shaking hands and cooperation in tele-present human-robot negotiation. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 247–254. [Google Scholar]
  4. Brščić, D.; Kidokoro, H.; Suehiro, Y.; Kanda, T. Escaping from children’s abuse of social robots. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 59–66. [Google Scholar]
  5. Breazeal, C. Toward Sociable Robots. Robot. Auton. Syst. 2003, 42, 167–175. [Google Scholar] [CrossRef]
  6. Kirby, R.; Forlizzi, J.; Simmons, R. Affective Social Robots. Robot. Auton. Syst. 2010, 58, 322–332. [Google Scholar] [CrossRef]
  7. Fuchs-Frothnhofen, P.; Hartmann, E.A.; Brandt, D.; Weydandt, D. Designing Human-Machine Interfaces to Match the User’s Mental Models. Control Eng. Pract. 1996, 4, 13–18. [Google Scholar] [CrossRef]
  8. Foster, M.E. Natural Language Generation for Social Robotics: Opportunities and Challenges. Philos. Trans. R. Soc. B Biol. Sci. 2019, 374, 20180027. [Google Scholar] [CrossRef]
  9. Bossi, F.; Willemse, C.; Cavazza, J.; Marchesi, S.; Murino, V.; Wykowska, A. The Human Brain Reveals Resting State Activity Patterns That Are Predictive of Biases in Attitudes toward Robots. Sci. Robot. 2020, 5, eabb6652. [Google Scholar] [CrossRef] [PubMed]
  10. Dennett, D.C. Précis of The Intentional Stance. Behav. Brain Sci. 1988, 11, 495–505. [Google Scholar] [CrossRef]
  11. Ziemke, T. Understanding Social Robots: Attribution of Intentional Agency to Artificial and Biological Bodies. Artif. Life 2023, 29, 351–366. [Google Scholar] [CrossRef]
  12. Navare, U.P.; Ciardo, F.; Kompatsiari, K.; De Tommaso, D.; Wykowska, A. When Performing Actions with Robots, Attribution of Intentionality Affects the Sense of Joint Agency. Sci. Robot. 2024, 9, eadj3665. [Google Scholar] [CrossRef] [PubMed]
  13. Li, Z.; Terfurth, L.; Woller, J.P.; Wiese, E. Mind the machines: Applying implicit measures of mind perception to social robotics. In Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Sapporo, Japan, 7–10 March 2022; pp. 236–245. [Google Scholar]
  14. Gawronski, B.; Bodenhausen, G.V. Associative and Propositional Processes in Evaluation: An Integrative Review of Implicit and Explicit Attitude Change. Psychol. Bull. 2006, 132, 692–731. [Google Scholar] [CrossRef]
  15. Potesnak, K. Mental Models: Helping Users Understand Software. IEEE Softw. 1989, 6, 85–86. [Google Scholar] [CrossRef]
  16. van den Broek, K.L.; Luomba, J.; van den Broek, J.; Fischer, H. Evaluating the Application of the Mental Model Mapping Tool (M-Tool). Front. Psychol. 2021, 12, 761882. [Google Scholar] [CrossRef] [PubMed]
  17. Roselli, C.; Navare, U.P.; Ciardo, F.; Wykowska, A. Type of Education Affects Individuals’ Adoption of Intentional Stance Towards Robots: An EEG Study. Int. J. Soc. Robot. 2024, 16, 185–196. [Google Scholar] [CrossRef]
  18. McLaren, I.P.L.; Forrest, C.L.D.; McLaren, R.P.; Jones, F.W.; Aitken, M.R.F.; Mackintosh, N.J. Associations and Propositions: The Case for a Dual-Process Account of Learning in Humans. Neurobiol. Learn. Mem. 2014, 108, 185–195. [Google Scholar] [CrossRef]
  19. Evans, J.S.B.T. Dual-Processing Accounts of Reasoning, Judgment, and Social Cognition. Annu. Rev. Psychol. 2008, 59, 255–278. [Google Scholar] [CrossRef]
  20. Gawronski, B.; Bodenhausen, G.V. Unraveling the Processes Underlying Evaluation: Attitudes from the Perspective of the Ape Model. Soc. Cogn. 2007, 25, 687–717. [Google Scholar] [CrossRef]
  21. Naderpour, M.; Lu, J.; Zhang, G. A Human-System Interface Risk Assessment Method Based on Mental Models. Saf. Sci. 2015, 79, 286–297. [Google Scholar] [CrossRef]
  22. Phillips, E.; Ososky, S.; Grove, J.; Jentsch, F. From Tools to Teammates: Toward the Development of Appropriate Mental Models for Intelligent Robots. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2011, 55, 1491–1495. [Google Scholar] [CrossRef]
  23. Storjak, I.; Krzic, A.S.; Jagust, T. Elementary School Pupils’ Mental Models Regarding Robots and Programming. IEEE Trans. Educ. 2022, 65, 297–308. [Google Scholar] [CrossRef]
  24. Revell, K.M.A.; Stanton, N.A. Mental Model Interface Design: Putting Users in Control of Home Heating. Build. Res. Inf. 2018, 46, 251–271. [Google Scholar] [CrossRef]
  25. Brennen, J.S.; Lazard, A.J.; Adams, E.T. Multimodal Mental Models: Understanding Users’ Design Expectations for mHealth Apps. Health Inform. J. 2020, 26, 1493–1506. [Google Scholar] [CrossRef]
  26. Urgen, B.A.; Kutas, M.; Saygin, A.P. Uncanny Valley as a Window into Predictive Processing in the Social Brain. Neuropsychologia 2018, 114, 181–185. [Google Scholar] [CrossRef]
  27. Papagni, G.; Koeszegi, S. A Pragmatic Approach to the Intentional Stance Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents. Minds Mach. 2021, 31, 505–534. [Google Scholar] [CrossRef]
  28. Veit, W.; Browning, H. Social Robots and the Intentional Stance. Behav. Brain Sci. 2023, 46, e47. [Google Scholar] [CrossRef]
  29. Spatola, N.; Marchesi, S.; Wykowska, A. The Intentional Stance Test-2: How to Measure the Tendency to Adopt Intentional Stance Towards Robots. Front. Robot. AI 2021, 8, 666586. [Google Scholar] [CrossRef]
  30. Kiesler, S.; Powers, A.; Fussell, S.R.; Torrey, C. Anthropomorphic Interactions with a Robot and Robot–like Agent. Soc. Cogn. 2008, 26, 169–181. [Google Scholar] [CrossRef]
  31. Schellen, E.; Wykowska, A. Intentional Mindset Toward Robots—Open Questions and Methodological Challenges. Front. Robot. AI 2019, 5, 139. [Google Scholar] [CrossRef]
  32. Abubshait, A.; Wykowska, A. Repetitive Robot Behavior Impacts Perception of Intentionality and Gaze-Related Attentional Orienting. Front. Robot. AI 2020, 7, 565825. [Google Scholar] [CrossRef]
  33. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 2000; ISBN 978-0-262-66115-7. [Google Scholar]
  34. Spezialetti, M.; Placidi, G.; Rossi, S. Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives. Front. Robot. AI 2020, 7, 532279. [Google Scholar] [CrossRef]
  35. Pessoa, L. Do Intelligent Robots Need Emotion? Trends Cogn. Sci. 2017, 21, 817–819. [Google Scholar] [CrossRef]
  36. Ficocelli, M.; Terao, J.; Nejat, G. Promoting Interactions Between Humans and Robots Using Robotic Emotional Behavior. IEEE Trans. Cybern. 2016, 46, 2911–2923. [Google Scholar] [CrossRef]
  37. Shao, M.; Snyder, M.; Nejat, G.; Benhabib, B. User Affect Elicitation with a Socially Emotional Robot. Robotics 2020, 9, 44. [Google Scholar] [CrossRef]
  38. Hieida, C.; Nagai, T. Survey and Perspective on Social Emotions in Robotics. Adv. Robot. 2022, 36, 17–32. [Google Scholar] [CrossRef]
  39. Becker, M.; Efendić, E.; Odekerken-Schröder, G. Emotional Communication by Service Robots: A Research Agenda. J. Serv. Manag. 2022, 33, 675–687. [Google Scholar] [CrossRef]
  40. Spekman, M.L.C.; Konijn, E.A.; Hoorn, J.F. How Physical Presence Overrides Emotional (Coping) Effects in HRI: Testing the Transfer of Emotions and Emotional Coping in Interaction with a Humanoid Social Robot. Int. J. Soc. Robot. 2021, 13, 407–428. [Google Scholar] [CrossRef]
  41. Spatola, N.; Wudarczyk, O.A. Ascribing Emotions to Robots: Explicit and Implicit Attribution of Emotions and Perceived Robot Anthropomorphism. Comput. Hum. Behav. 2021, 124, 106934. [Google Scholar] [CrossRef]
  42. Ta-Johnson, V.P.; Boatfield, C.; Wang, X.; DeCero, E.; Krupica, I.C.; Rasof, S.D.; Motzer, A.; Pedryc, W.M. Assessing the Topics and Motivating Factors Behind Human-Social Chatbot Interactions: Thematic Analysis of User Experiences. JMIR Hum. Factors 2022, 9, e38876. [Google Scholar] [CrossRef]
  43. Andreasson, R.; Alenljung, B.; Billing, E.; Lowe, R. Affective Touch in Human–Robot Interaction: Conveying Emotion to the Nao Robot. Int. J. Soc. Robot. 2018, 10, 473–491. [Google Scholar] [CrossRef]
  44. Zheng, X.; Shiomi, M.; Minato, T.; Ishiguro, H. Modeling the Timing and Duration of Grip Behavior to Express Emotions for a Social Robot. IEEE Robot. Autom. Lett. 2021, 6, 159–166. [Google Scholar] [CrossRef]
  45. Wei, Q.; Lv, D.; Fu, S.; Zhu, D.; Zheng, M.; Chen, S.; Zhen, S. The Influence of Tourist Attraction Type on Product Price Perception and Neural Mechanism in Tourism Consumption: An ERP Study. Psychol. Res. Behav. Manag. 2023, 16, 3787–3803. [Google Scholar] [CrossRef]
  46. Lahane, P.; Jagtap, J.; Inamdar, A.; Karne, N.; Dev, R. A Review of Recent Trends in EEG Based Brain-Computer Interface. In Proceedings of the 2019 International Conference on Computational Intelligence in Data Science (ICCIDS), Las Vegas, NV, USA, 5–7 December 2019; pp. 1–6. [Google Scholar]
  47. Gerjets, P.; Walter, C.; Rosenstiel, W.; Bogdan, M.; Zander, T.O. Cognitive State Monitoring and the Design of Adaptive Instruction in Digital Environments: Lessons Learned from Cognitive Workload Assessment Using a Passive Brain-Computer Interface Approach. Front. Neurosci. 2014, 8, 385. [Google Scholar] [CrossRef]
  48. Hinz, N.-A.; Ciardo, F.; Wykowska, A. ERP Markers of Action Planning and Outcome Monitoring in Human–Robot Interaction. Acta Psychol. 2021, 212, 103216. [Google Scholar] [CrossRef]
  49. Saha, S.; Mamun, K.A.; Ahmed, K.; Mostafa, R.; Naik, G.R.; Darvishi, S.; Khandoker, A.H.; Baumert, M. Progress in Brain Computer Interface: Challenges and Opportunities. Front. Syst. Neurosci. 2021, 15, 578875. [Google Scholar] [CrossRef]
  50. Perez-Osorio, J.; Abubshait, A.; Wykowska, A. Irrelevant Robot Signals in a Categorization Task Induce Cognitive Conflict in Performance, Eye Trajectories, the N2 Component of the EEG Signal, and Frontal Theta Oscillations. J. Cogn. Neurosci. 2021, 34, 108–126. [Google Scholar] [CrossRef]
  51. Henschel, A.; Hortensius, R.; Cross, E.S. Social Cognition in the Age of Human–Robot Interaction. Trends Neurosci. 2020, 43, 373–384. [Google Scholar] [CrossRef]
  52. Vecchio, F.; Nucci, L.; Pappalettera, C.; Miraglia, F.; Iacoviello, D.; Rossini, P.M. Time-Frequency Analysis of Brain Activity in Response to Directional and Non-Directional Visual Stimuli: An Event Related Spectral Perturbations (ERSP) Study. J. Neural Eng. 2022, 19, 66004. [Google Scholar] [CrossRef]
  53. Cavanagh, J.F.; Frank, M.J. Frontal Theta as a Mechanism for Cognitive Control. Trends Cogn. Sci. 2014, 18, 414–421. [Google Scholar] [CrossRef]
  54. Carter, C.S.; Van Veen, V. Anterior Cingulate Cortex and Conflict Detection: An Update of Theory and Data. Cogn. Affect. Behav. Neurosci. 2007, 7, 367–379. [Google Scholar] [CrossRef]
  55. Pscherer, C.; Wendiggensen, P.; Mückschel, M.; Bluschke, A.; Beste, C. Alpha and Theta Band Activity Share Information Relevant to Proactive and Reactive Control during Conflict-Modulated Response Inhibition. Hum. Brain Mapp. 2023, 44, 5936–5952. [Google Scholar] [CrossRef]
  56. Cohen, M.; Cavanagh, J.F. Single-Trial Regression Elucidates the Role of Prefrontal Theta Oscillations in Response Conflict. Front. Psychol. 2011, 2, 30. [Google Scholar] [CrossRef]
  57. Zavala, B.; Damera, S.; Dong, J.W.; Lungu, C.; Brown, P.; Zaghloul, K.A. Human Subthalamic Nucleus Theta and Beta Oscillations Entrain Neuronal Firing During Sensorimotor Conflict. Cereb. Cortex 2017, 27, 496–508. [Google Scholar] [CrossRef]
  58. Zavala, B.; Jang, A.; Trotta, M.; Lungu, C.I.; Brown, P.; Zaghloul, K.A. Cognitive Control Involves Theta Power within Trials and Beta Power across Trials in the Prefrontal-Subthalamic Network. Brain 2018, 141, 3361–3376. [Google Scholar] [CrossRef]
  59. Alegre, M.; Gurtubay, I.G.; Labarga, A.; Iriarte, J.; Valencia, M.; Artieda, J. Frontal and Central Oscillatory Changes Related to Different Aspects of the Motor Process: A Study in Go/No-Go Paradigms. Exp. Brain Res. 2004, 159, 14–22. [Google Scholar] [CrossRef]
  60. Randall, W.M.; Smith, J.L. Conflict and Inhibition in the Cued-Go/NoGo Task. Clin. Neurophysiol. 2011, 122, 2400–2407. [Google Scholar] [CrossRef]
  61. Zinchenko, A.; Chen, S.; Zhou, R. Affective Modulation of Executive Control in Early Childhood: Evidence from ERPs and a Go/Nogo Task. Biol. Psychol. 2019, 144, 54–63. [Google Scholar] [CrossRef]
  62. Pataranutaporn, P.; Liu, R.; Finn, E.; Maes, P. Influencing Human–AI Interaction by Priming Beliefs about AI Can Increase Perceived Trustworthiness, Empathy and Effectiveness. Nat. Mach. Intell. 2023, 5, 1076–1086. [Google Scholar] [CrossRef]
  63. Horstmann, A.C.; Strathmann, C.; Lambrich, L.; Krämer, N.C. Alexa, what’s inside of you: A qualitative study to explore users’ mental models of intelligent voice assistants. In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, Würzburg, Germany, 19–22 September 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–10. [Google Scholar]
  64. Grimes, G.M.; Schuetzler, R.M.; Giboney, J.S. Mental Models and Expectation Violations in Conversational AI Interactions. Decis. Support Syst. 2021, 144, 113515. [Google Scholar] [CrossRef]
  65. Wang, Q.; Meng, L.; Liu, M.; Wang, Q.; Ma, Q. How Do Social-Based Cues Influence Consumers’ Online Purchase Decisions? An Event-Related Potential Study. Electron. Commer. Res. 2016, 16, 1–26. [Google Scholar] [CrossRef]
  66. Zhao, X.; He, X.; Zhang, W. A Heavy Heart: The Association between Weight and Emotional Words. Front. Psychol. 2016, 7, 920. [Google Scholar] [CrossRef]
  67. Shank, D.B.; DeSanti, A. Attributions of Morality and Mind to Artificial Intelligence after Real-World Moral Violations. Comput. Hum. Behav. 2018, 86, 401–411. [Google Scholar] [CrossRef]
  68. Lee, S.; Park, G.; Chung, J. Artificial Emotions for Charity Collection: A Serial Mediation through Perceived Anthropomorphism and Social Presence. Telemat. Inform. 2023, 82, 102009. [Google Scholar] [CrossRef]
  69. Liu, Y.; Zhan, X.; Li, W.; Han, H.; Wang, H.; Hou, J.; Yan, G.; Wang, Y. The Trait Anger Affects Conflict Inhibition: A Go/Nogo ERP Study. Front. Hum. Neurosci. 2015, 8, 1076. [Google Scholar] [CrossRef]
  70. Zhang, Y. The Development of Users’ Mental Models of MedlinePlus in Information Searching. Libr. Inf. Sci. Res. 2013, 35, 159–170. [Google Scholar] [CrossRef]
  71. Suzuki, Y.; Galli, L.; Ikeda, A.; Itakura, S.; Kitazaki, M. Measuring Empathy for Human and Robot Hand Pain Using Electroencephalography. Sci. Rep. 2015, 5, 15924. [Google Scholar] [CrossRef]
  72. Spatola, N.; Marchesi, S.; Wykowska, A. Cognitive Load Affects Early Processes Involved in Mentalizing Robot Behaviour. Sci. Rep. 2022, 12, 14924. [Google Scholar] [CrossRef]
  73. Schraw, G.; Moshman, D. Metacognitive Theories. Educ. Psychol. Rev. 1995, 7, 351–371. [Google Scholar] [CrossRef]
  74. Bürgler, S.; Hennecke, M. Metacognition and Polyregulation in Daily Self-Control Conflicts. Scand. J. Psychol. 2024, 65, 179–194. [Google Scholar] [CrossRef]
Figure 1. The intentional stance explanation initiated a conflict between the user's association-based mental model of robots (robots are associated with emotions) and people's logical, analytic, proposition-based mental model (robots do not have emotions), which heightened the cognitive conflict components observed in participants.
Figure 2. Each trial began with a 500 ms fixation point, followed by a 2000 ms stance attribution explanation picture for the robot, whose content was either an intentional stance explanation or a design stance explanation. Finally, a robot avatar and a word were presented for 2000 ms; words were either emotional (go stimulus) or neutral (nogo stimulus). Participants judged whether the word was related to emotional feelings: if so, they pressed the "F" key (go); if not, they withheld a response (nogo). After the task, participants completed a questionnaire.
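To make this timeline concrete, the following is a hypothetical PsychoPy reconstruction of a single trial. The image file, probe word, and window settings are placeholders for illustration, not the authors' actual materials or experiment code.

```python
from psychopy import visual, core, event

# Placeholder stimuli for one go/nogo trial following the Figure 2 timeline.
win = visual.Window(fullscr=False, color="grey")
fixation = visual.TextStim(win, text="+")
explanation = visual.ImageStim(win, image="stance_explanation.png")  # intentional or design stance
probe = visual.TextStim(win, text="joyful")  # emotional (go) or neutral (nogo) word,
                                             # shown alongside the robot avatar in the study

fixation.draw(); win.flip(); core.wait(0.5)    # 500 ms fixation point
explanation.draw(); win.flip(); core.wait(2.0) # 2000 ms stance attribution explanation
probe.draw(); win.flip()                       # 2000 ms avatar + word with response window
clock = core.Clock()
keys = event.waitKeys(maxWait=2.0, keyList=["f"], timeStamped=clock)
response = "go" if keys else "nogo"            # press "F" only for emotional words
win.close()
```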
Figure 3. Participant error rate in go/nogo tasks.
Figure 4. ERSP spectrograms and topographic maps evoked by the condition stimuli at the Cz electrode.
Figure 5. Functional connectivity diagrams.
Figure 6. Results of the questionnaire on participants' attitudes towards whether robots have emotions.
Table 1. Sample characteristics (n = 25).

Measure             Value      Frequency   %
Gender              Male       12          48%
                    Female     13          52%
Age                 18~25      23          92%
                    26~30      2           8%
                    31~40      0           0%
                    41~50      0           0%
                    51~60      0           0%
                    60+        0           0%
Other information   Normal vision or corrected-to-normal vision, right-handed, and no history of mental illness