Article

Exploring the Effects of Multi-Factors on User Emotions in Scenarios of Interaction Errors in Human–Robot Interaction

1 Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing Forestry University, Nanjing 210037, China
2 College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210037, China
3 College of Automobile and Traffic Engineering, Nanjing Forestry University, Nanjing 210037, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(18), 8164; https://doi.org/10.3390/app14188164
Submission received: 16 August 2024 / Revised: 6 September 2024 / Accepted: 9 September 2024 / Published: 11 September 2024

Abstract

Interaction errors are hard to avoid in the process of human–robot interaction (HRI). User emotions toward interaction errors can further affect users' attitudes to robots and their experiences of HRI. In this regard, the present study explores the effects of different factors on user emotions when interaction errors occur in HRI, a perspective that has received little direct study. Three factors were considered: robot feedback, passive and active contexts, and previous user emotions. Two stages of online surveys with 465 participants were implemented to explore attitudes to robots and the self-reporting of emotions in active and passive HRI. Then, a Yanshee robot was selected as the experimental platform, and 61 participants were recruited for a real human–robot empirical study based on the two surveys. From the statistical analysis, we derive design guidelines for coping with scenarios of interaction errors. For example, feedback and previous emotions have impacts on user emotions after encountering interaction errors, but contexts do not, and there are no interaction effects between the three factors. Approaches to reducing negative emotions in cases of interaction errors in HRI, such as providing irrelevant feedback, are also illustrated in the contributions.

1. Introduction

HRI requires users and robots to understand each other through various interactive cues and to exchange interaction information. With the expansion of robot application scenarios, such as exhibitions [1], classrooms [2], shopping malls [3], homes [4], etc., HRI often happens in unstructured environments with unpredicted noises, uncertain user behaviour, etc., which make interaction errors unavoidable [5]. From an interactive task perspective, HRI is effective when the user's behavior and the robot's feedback meet each other's expectations; if not, the user perceives an interaction error, and the task fails. From an emotional experience perspective, however, the interaction may still be acceptable to the user. The violation of an existing expectancy may lead to humor, and a breach of interaction norms may amuse a user and increase the memorability and fun of HRI [6,7]. Just as in human communication, an uncompleted task does not necessarily mean a bad experience for the user. The context and a person's expression, such as their voice or physical behavior, may result in feelings of surprise, humor, and so on.
Emotions are important for social understanding between humans and robots. The emotional change in users during the process of HRI is not only a kind of subjective experience for users but also an internal reaction to users' social signals. It can enhance or regulate the behavior and decision-making of users. Studies by Mirnig et al. and Tian et al. show the categories of interaction errors in HRI and demonstrate that users respond differently to different situations and types of interaction errors [5,6]. This means that the emotions of users change accordingly. Users' emotions are influenced by their motives and behaviors [8]. For example, when users are angry, they may tend to resist interacting with robots, and when happy, they may show a more active attitude to establish social connections with robots. However, there are relatively few studies on designing handling strategies for interaction errors in HRI from the perspective of emotion.
Emotional experiences are personal and context-specific [9]. Contexts are defined by the Merriam-Webster Dictionary as “the interrelated conditions in which something occurs or exists” [10]. In HRI, some researchers focus on the context of information in conversations [11] and contexts observed by robots, including viewpoints, operating backgrounds, and so on [12], and refer to contexts as the simulated “closer to nature” experience [13]. For example, Kim et al. investigated interaction errors that occurred in contexts with and without prior knowledge [14]. However, there are still few studies that consider the impact of context on interaction errors. Meanwhile, it is worth noting that interaction is bidirectional: both the user and the robot can be the starting point of interactive information transmission. From a user-centered view, the common context in which users operate robots and then the robots give feedback can be seen as an active interaction process for users. When the robot actively expresses information to the user, the starting point of information comes from the robot; this is a passive interaction for users. As robots are gradually deployed in different scenarios, they can adjust their reactions and behaviors according to the requirements of environments and users. This means that, in recent years, passive and active contexts in HRI have become common. However, for these two contexts, there are also relatively few studies on the emotional changes in users when they encounter interaction errors.
As mentioned above, this paper mainly focuses on the following four research questions to explore the effects of multi-factors on user emotions when encountering interaction errors in HRI. The corresponding contributions are helpful in promoting the effective design of interaction error-handling strategies in HRI.
Research question 1 (RQ1): How does the feedback from the robot affect user emotions in the case of interaction errors in HRI?
Research question 2 (RQ2): Do passive and active contexts affect user emotions in the case of interaction errors in HRI?
Research question 3 (RQ3): Do previous emotions affect user emotions after encountering interaction errors in HRI? For convenience in writing, previous emotions and user emotions in the following sections, respectively, represent the emotions before and after users encounter interaction errors.
Research question 4 (RQ4): Are there any interaction effects between feedback, contexts, and previous emotions?
The sections are organized as follows: Section 2 briefly illustrates the background. Section 3 shows the methodology, including definitions, study aims, and the procedures of the surveys and lab experiments. Section 4 describes the results and corresponding analysis. Section 5 illustrates the discussions and limitations. Section 6 details the conclusions.

2. Background

2.1. Interaction Errors in HRI

Interaction errors can lead to failures or dilemmas in HRI, as well as possible consequences such as interaction results deviating from users' goals and robots behaving in unexpected ways. Steinbauer revealed that interaction errors refer to problems that arise from uncertainties in the interaction with the environment, other agents, and humans [15]. The symptoms of errors discussed in HRI include uncompleted tasks [16,17], no action or speech [17,18,19], and repeating statements or body movements [16,19]. Following the work of Honig and Oron-Gilad, interaction failures can be categorized by functional severity and social severity regardless of their sources [20]. Tian et al. classified interaction errors into two types, performance errors and social errors, considering their influence on the user's perception [6]. Performance errors refer to inadequate perception or autonomy of the robot, such as when the robot cannot recognize user commands in an unstructured interaction environment. In studies by Mirnig et al., this is illustrated as a kind of technical failure [5,21]. Recently, a lot of work has been undertaken to improve robot performance in the fields of simultaneous localization and mapping (SLAM), autonomous ability, and so on [22]. For example, optimization algorithms have been proposed to improve robots' localization, mapping, and recognition of speech and motion [23,24,25]. Social errors refer to social norm violations [5,21]. These often lead to problems in social understanding; for example, users may not understand robots' interaction cues if they are irrelevant to the interaction aims. Aligning the robot's expressions with the user's expectations can improve understanding [26], so managing expectations could be a viable error-handling strategy for social errors. For example, apologizing for errors is an expression that conforms to social norms, and in some studies, robots were designed to apologize when errors occurred [27,28,29]. In addition, some researchers have explored the impact of human errors on interactive service robotic scenarios [29]. Among studies of interaction errors in HRI, most focus on robots' technical failures, and only a few focus on robots' social norm violations [20].

2.2. Active and Passive Contexts in HRI

Scenarios play an important role in the decision-making of users. For the purpose of improving the effectiveness of interaction and attracting the users’ attention in HRI, robots can be designed to imitate human behaviors, such as handshaking, gazing, active greeting, etc., and express the appropriate feedback to users according to scenarios. For example, a robot generates a handshake behavior when meeting a user for the first time to reduce the user’s feeling of aversion from interacting with the robot [30], and the robot greets a user with appropriate behavior prediction for each scenario to improve the level of humanness and intelligence [31].
The contexts of active and passive decision-making are common in the scenarios of daily environments. However, the user’s emotional experiences are different for these two decision situations [32]. Yu et al. found that active decision-making is associated with a more positive emotional experience, including more happiness, a greater sense of control and achievement, and less distress [33]. With the development of autonomous robots in HRI, there are scenarios in which robots express themselves or actively guide users using different interaction cues without the users’ commands. In this case, the robot is the starting point of interaction, and the user receives the information and gives corresponding feedback. For example, robots use physical actions, like push-and-pull motions, to play games with users [34,35] and guide users using active interactions, involving programming the robot to wait for users and explain the contents around [36]. This is a passive decision-making situation from the user’s view and an active interaction from the perspective of the robot.

2.3. Emotion in HRI

Emotion is one of the important interaction cues for perceiving robots' behavior and establishing an understandable interactive relationship between users and robots. For example, robots can give positive feedback with verbal emotional expressions to improve the user experience [37]. Chuah et al. found that expressions of surprise and happiness by emotional robots are key to creating positive impacts on potential consumers [38]. Christou et al. revealed that negative feelings during HRI, such as anger, fear, and sadness, should not be overlooked, since they could lead to subsequent negative reactions [39]. There are also some studies exploring emotions to provide planning and reasoning for intelligent agents to change agents' behavior [40,41]. From the view of robots, these explorations of emotion-based behaviors can make external observers believe that robots have emotional preferences for certain behaviors [42,43,44].
In addition, emotion models are used for emotional descriptions in HRI. The Ortony, Clore, and Collins (OCC) model, Ekman's six common emotions, and the Pleasure–Arousal–Dominance (PAD) space are popular emotion models used in robotic systems [45,46,47]. The original OCC model consists of 22 emotions, but it is hard for robots to express all of them [48]. Hence, the original OCC model has been simplified in many studies of HRI. For example, Ahn and Park simplified the OCC model and adopted 10 types of emotion, with 5 emotional pairs, for the robot's emotion appraisal [45]. Ekman's emotion theory and the PAD space model have been used in many robots, such as ROMAN, ERICA, and NAO [45,46,49,50,51].

3. Methodology

3.1. Definitions

From a user-centered view, users determine interaction errors from the robots' feedback when they interact with robots: they find that robots give them irrelevant feedback, such as responses unrelated to their questions, or that the robots express confusion, like “I don't understand what you said.” It is unnecessary for users to identify clearly whether this is a technical failure or a violation of social norms, since the working principles of robots and the types of interaction errors in HRI are specialized matters that are unimportant in their daily lives. According to the interaction errors classified by Mirnig et al. and Tian et al. [5,6,17], when a robot gives irrelevant feedback (IF), the interaction error is a typical social norm violation, and when a robot expresses confusing feedback (CF), it can be considered a technical failure. We explore these two cases and define the two types of feedback from a user-centered perspective as follows:
IF: Robots give feedback that is not relevant to the user's intentions.
CF: Robots do not complete the tasks of recognition or interpretation; they give no feedback or confusing feedback, such as “I don't understand you”, to users.
According to the starting point of interaction information, the definitions of contexts are given to show the active and passive decision-making processes of users in HRI. A context of passive interaction with a robot (CPIR) is one in which the interaction begins with the robot, and the user responds depending on the robot’s behaviors. A context of active interaction with a robot (CAIR) is one in which the interaction begins with the user, which means that the user operates the robot first, and then the robot gives the corresponding feedback.
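To make the factor structure of the study concrete, the sketch below encodes the two feedback types and the two contexts as simple Python types. This is purely illustrative: the type names and the ErrorEvent structure are our own and do not come from the study.

```python
# Illustrative encoding of the manipulated factors; not code from the study.
from dataclasses import dataclass
from enum import Enum

class Feedback(Enum):
    IF = "irrelevant feedback"  # social norm violation: answer unrelated to intent
    CF = "confusing feedback"   # technical failure, e.g., "I don't understand you"

class Context(Enum):
    CPIR = "robot-initiated"  # passive interaction from the user's view
    CAIR = "user-initiated"   # active interaction from the user's view

@dataclass
class ErrorEvent:
    context: Context
    feedback: Feedback
    repetition: int  # the i-th time this error type has occurred (1..n)

# Example: the first confusing-feedback error in a user-initiated interaction.
event = ErrorEvent(Context.CAIR, Feedback.CF, repetition=1)
```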

3.2. Aims and Procedures

This study is divided into three stages: the first survey, the second survey, and experiments in the lab. The first and second surveys were implemented online, and the corresponding results were used to design and implement the experiments. The experiments were implemented based on a human-like robot platform in real environments. The respective purposes and procedures are shown as follows:

3.2.1. Aims

The main aim of the first survey was to select the appropriate interaction cues for the lab experiments. It mainly covers investigations of users' basic experience, emotional capability, and interaction cues. The aim of the second survey was to explore users' emotional changes when encountering IF and CF, respectively, in HRI. Both the CPIR and the CAIR were considered. It is a preliminary study and provides references for the design of the lab experiments. The aim of the lab experiments was to explore the impacts of multi-factors, including different types of interaction errors, contexts, and previous emotions, on user emotions in the case of interaction errors in HRI in real environments.

3.2.2. The Procedure of the First Survey

A total of 201 participants in the age range of 19–35 years were recruited online. This survey covers three main categories: the basic experience, the emotional capability, and the interaction cues. The corresponding questionnaire is shown in Table 1. The questions are ordered from Q1 to Q6. For Q4, participants were asked to select from Ekman’s six emotions, which include happiness, surprise, sadness, anger, fear, and disgust. For Q5 and Q6, participants were asked to select and rank from the interaction cues, which include voice, face, arm action, and sound.

3.2.3. The Procedure of the Second Survey

A total of 215 participants in the age range of 19–35 years were recruited online. In consideration that data from participants with practical experience are much more reliable, the participants were further categorized: the numbers of participants who had experience with CPIRs and CAIRs were 73 and 49, respectively. They were asked about two cases in this survey: encountering a single interaction error and encountering repeated interaction errors. The main flow diagram shown in Figure 1 is for the case of a single interaction error. It includes three steps: the recording of the initial emotion, the process of the CPIR and CAIR, and the recording of the emotion after robot feedback. In the case of repeated interaction errors, CF was taken as an example, and Steps 2 and 3 in Figure 1 were repeated three times. The corresponding questionnaire is shown in Table 2. The questions are ordered from Q7 to Q11. For Q7, Q9, and Q10, participants were asked to select their current emotions from Ekman's six emotions, as in Q4 in Table 1. For Q8 and Q11, user emotion was evaluated on a numbered scale from 1 to 5, where 5 means strong and 1 means weak.

3.2.4. The Procedure of Experiments in Real Environments

A Yanshee robot, shown in Figure 2a, utilizes the open hardware platform architecture of Raspberry Pi + STM32. It supports multiple interaction modalities, such as voice, arm action, movement, sound, etc. It was set up to provide consulting services, one of the most common scenarios of HRI in the fields of education, healthcare, shopping, etc. In these experiments, Yanshee was engaged as a consultant: it interacted with users and provided information about Nanjing Forestry University. Two contexts, the CPIR and the CAIR, were considered. For the CPIR, Yanshee actively asked participants what they would like to know about the university; for the CAIR, participants needed to actively ask Yanshee questions about the university.
The main experimental framework is shown in Figure 2b, where n represents the number of times that IF or CF is repeated. The flow of participants in the experiments is shown in Figure 3a. At the beginning, they were asked for their basic information, including their name, age, and so on, and were required to report their initial emotions, as shown in Step 1 in Figure 2b. The emotions include the emotional type, from Ekman's six common emotions, and the emotional intensity on a 5-point scale, where 1 and 5 represent extremely weak and extremely strong emotions, respectively. Then, the interactions with Yanshee began. Take an experiment in the CPIR as an example to illustrate the process in detail. The interaction followed the CPIR flow of Step 2 shown in Figure 2b, with n = 3 in the experiments. The starting point of the interaction was the Yanshee robot. Participants answered the robot's question and encountered IF or CF from Yanshee for the first time. Since the case in which participants encountered continuously repeated interaction errors was also considered, the above interaction was implemented another five times to ensure that IF and CF each occurred three times.
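The session structure of Figure 2b can be summarized in the hypothetical Python sketch below. It is a schematic simulation, not the actual Yanshee control code; record_emotion() merely stands in for the self-report step, and the interleaving order of IF and CF trials is an assumption, since the paper states only that each error type occurred three times.

```python
# Schematic of one experimental session (n = 3); illustrative only.
import random

def record_emotion() -> tuple[str, int]:
    """Placeholder for the self-report step (Ekman category + 1-5 intensity)."""
    return "happiness", 3

def run_session(context: str, n: int = 3) -> list:
    log = [("initial", *record_emotion())]
    errors = ["IF", "CF"] * n
    random.shuffle(errors)  # six error trials in total, three of each type
    for trial, err in enumerate(errors, 1):
        if context == "CPIR":  # robot starts the exchange
            print("Robot: What would you like to know about the university?")
        else:                  # CAIR: the participant asks the robot first
            print("Participant asks a question about the university.")
        reply = ("an answer unrelated to the topic" if err == "IF"
                 else "'I don't understand you'")
        print(f"Trial {trial}: robot responds with {reply}.")
        log.append((err, *record_emotion()))  # emotion recorded after each error
    return log

session_log = run_session("CPIR")
```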
Undergraduate and graduate students aged 19–35 years from the university were recruited for the CPIR group (M = 24.03, SD = 2.55) and the CAIR group (M = 21.8, SD = 2.51). The number of participants in the CPIR group was 30, and there were 45 participants in the CAIR group. The total number was 61, and 14 of them were involved in both the CPIR and CAIR groups. They participated after being fully informed of the experimental procedures. Voice was used as the main interaction cue according to the results of the first survey, as shown in Section 4.1.1. Example scenarios, including two contexts and two examples of feedback for HRI with interaction errors, are shown in Figure 3b.

4. Results and Analysis

This section briefly illustrates the results of the surveys and experiments, respectively. The data obtained from the surveys and experiments were analyzed with the statistical analysis software JASP 0.18.1.

4.1. Results and Analysis of Surveys

4.1.1. Interaction Cues in HRI

Most participants recruited for the first survey thought that voice was the most important interaction cue, followed by face, arm action, and sound in the rankings, as shown in Figure 4a. The data shown in Figure 4a were verified as random using run tests, where M = 2.36 and p = 0.752.
The participants were also asked to rate the corresponding level of importance on a 5-point scale, with scores from 0 (unimportant) to 4 (great importance). As shown in Figure 4b and the run tests, conversation (voice) is the most important interaction cue, with an average score of 3.29 (p = 0.881), while face, arm action, and sound have average scores of 1.94 (p = 0.880), 1.95 (p = 0.508), and 1.64 (p = 0.571), respectively. This indicates that conversation and sound are the interaction cues of the most and the least concern, respectively, and that face and arm action are almost the same in terms of importance for users. In addition, the correlations between the importance levels of the four interaction cues were analyzed with Spearman correlation coefficients. Only the p-value between conversation and sound is smaller than 0.05 and thus statistically significant, with a Spearman correlation coefficient of −0.155. This reveals that there are no correlations between the four interaction cues, except that conversation has a negative correlation with sound.
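As a rough Python analogue of these checks (the paper used JASP 0.18.1, not Python), the sketch below runs a one-sample runs test and a Spearman correlation. The rating arrays are invented placeholders, not the survey data.

```python
# Hypothetical re-creation of the first-survey checks; data are invented.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.sandbox.stats.runs import runstest_1samp

rng = np.random.default_rng(7)
conversation = rng.integers(0, 5, size=201)  # 0-4 importance ratings
sound = rng.integers(0, 5, size=201)

z, p_runs = runstest_1samp(conversation, cutoff="mean")
print(f"runs test: z = {z:.2f}, p = {p_runs:.3f}")  # p > 0.05 suggests randomness

rho, p_rho = spearmanr(conversation, sound)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.3f}")  # the paper reports rho = -0.155
```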

4.1.2. Effects of Feedback and Contexts in Surveys

According to the data from the second survey, the effects of feedback and contexts in HRI with interaction errors, i.e., RQ1 and RQ2, were analyzed by Fisher's exact test, respectively. The corresponding results are shown in Table 3, where χ2, p, and Cramer's V represent the values of chi-square, significance, and Cramer's V, respectively. For contexts, the difference between the counts and the expected counts is quite small, as χ2 is small. The tests show significant differences for feedback (p < 0.05) and no significant differences for contexts (p > 0.05). The Cramer's V coefficients in Table 3 indicate that the effect size of contexts is quite small, while that of feedback is medium. For feedback, there are also statistical differences between the counts and the expected counts in the cases where the user emotions are happiness, surprise, anger, and disgust, respectively. In the case of repeated CF, the effects of contexts were considered, and the corresponding results are shown in Table 4, where i represents the number of times that CF is repeated. It is obvious that contexts do not show an impact on user emotions, but feedback does.
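For readers who want to reproduce this kind of table analysis outside JASP, a sketch is given below. Note that SciPy's fisher_exact covers only 2×2 tables, so the sketch substitutes the chi-square test together with Cramer's V on a feedback × emotion table; all counts are invented placeholders.

```python
# Chi-square + Cramer's V on a hypothetical feedback x user-emotion table.
import numpy as np
from scipy.stats import chi2_contingency
from scipy.stats.contingency import association  # requires scipy >= 1.7

# Rows: IF, CF; columns: happiness, surprise, sadness, anger, fear, disgust.
observed = np.array([
    [30, 25, 5, 8, 2, 6],    # invented counts for irrelevant feedback
    [12, 14, 15, 22, 3, 20], # invented counts for confusing feedback
])

chi2, p, dof, expected = chi2_contingency(observed)
v = association(observed, method="cramer")
print(f"chi2 = {chi2:.3f}, p = {p:.4g}, dof = {dof}, Cramer's V = {v:.3f}")
```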
The log-linear model analysis method was employed to analyze the corresponding interaction effects, i.e., RQ4. The main variables include contexts, feedback, and emotions before and after users encounter interaction errors. All p-values in the step summary of the model are larger than 0.05, which reveals that there are no interaction effects between contexts, feedback, and previous emotions.
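A log-linear model can be fitted in Python as a Poisson regression on the cell counts; the sketch below, with invented counts and a collapsed positive/negative emotion coding, illustrates the kind of three-way interaction screening reported here. It assumes pandas and statsmodels and is not the authors' JASP workflow.

```python
# Log-linear (Poisson) model of context x feedback x emotion cell counts.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

cells = pd.DataFrame({
    "context":  ["CPIR"] * 4 + ["CAIR"] * 4,
    "feedback": ["IF", "IF", "CF", "CF"] * 2,
    "emotion":  ["positive", "negative"] * 4,
    "count":    [28, 10, 15, 23, 30, 9, 14, 25],  # invented counts
})

model = smf.glm("count ~ context * feedback * emotion", data=cells,
                family=sm.families.Poisson()).fit()
# Non-significant interaction coefficients would mirror the paper's finding
# of no interaction effects between the three factors.
print(model.summary())
```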

4.1.3. Users’ Previous Emotions and Their Effects in Surveys

Considering initial emotions, happiness and surprise make up the vast majority in the two surveys. For the cases when users encounter a single interaction error, the effects of previous emotions, i.e., RQ3, were also examined by Fisher's exact test, and the results are shown in Table 3. They show significant differences (p < 0.05), and the difference between the counts and the expected counts is relatively large (χ2 = 44.149) for previous emotions. The Cramer's V coefficient is 0.248, which means the magnitude of the effect size is small. Furthermore, when the previous emotion is happiness, there is a statistical difference between the count and the expected count in the cases where the user emotion is surprise; when the previous emotion is surprise, there is no statistical difference, but the largest adjusted residual, 1.9, appears in the cases where the user emotion is anger.
For the case of repeated interaction errors, the example of three occurrences of CF was used, and user emotions were labeled by the number of occurrences as Emotion 1, Emotion 2, and Emotion 3, respectively. The percentages of users' emotion types are shown in Figure 5. The percentage of happiness gradually decreases from the initial emotion to Emotion 3, and the percentages of anger and disgust clearly increase. The results of Fisher's exact test are shown in Table 4. The Cramer's V coefficients shown in Table 4 indicate that the magnitude of the effect size is medium and that it increases with the number of occurrences of CF. This reveals that previous emotions affect emotions after users encounter interaction errors.

4.2. Results and Analysis of Experiments

4.2.1. Effects of Feedback and Contexts in Real Environments

Using Fisher's exact test, RQ1 and RQ2 in the real HRI experiments were analyzed, and the results are shown in Table 5. The meaning of n is the same as in Figure 2b. Since all p-values for contexts shown in Table 5 are larger than 0.05, contexts do not show significant differences in user emotions in real environments. In addition, feedback shows significant differences in the cases of the first and second interaction errors but no significant differences in the case of the third interaction error. Comparing the adjusted residuals between IF and CF, statistical differences exist: relatively, IF tends to evoke happiness and surprise, while CF tends to evoke sadness and anger. The effects of feedback on user emotions in the CAIR and the CPIR were analyzed, respectively, and the results are shown in Table 6. They show significant differences when an interaction error occurred for the first time in the CAIR. They also reveal that, for feedback, the magnitude of the effect size in the CAIR and the CPIR decreases with the number of interaction errors.

4.2.2. Users’ Previous Emotions and Their Effects in Real Environments

The initial emotions of participants were happiness and surprise with 75.56% and 22.22% in the CAIR and 73.33% and 23.33% in the CPIR, respectively. The percentages of emotions after they were exposed to IF or CF three times in the CPIR and the CAIR are shown in Figure 6. Clearly, when users encounter CF, the percentages of negative emotions, such as sadness, anger, and disgust, are larger.
Fisher's exact test was employed to analyze the effects of users' previous emotions, i.e., RQ3, in real environments, and the results show significant differences: χ2 = 85.123, p < 0.001, and Cramer's V = 0.357 for the second repeated interaction error, and χ2 = 111.454, p < 0.001, and Cramer's V = 0.452 for the third. The magnitude of the effect size is medium, and it grows with the number of interaction errors. Statistical differences exist for user emotions after they encounter interaction errors. For example, in the case of the second repeated interaction error, when the previous emotion is happiness, the adjusted residuals are 2.7 in happiness and −2.2 in anger, respectively, which means participants tend to be happy after the error. When the previous emotion is surprise, the corresponding adjusted residuals are 3.4 in surprise and −2.7 in disgust, respectively, which means participants tend to be surprised after the error. In addition, when the previous emotions are negative, the adjusted residuals are larger than 2 in the negative emotions after users encounter interaction errors; for example, for the third repeated interaction error, when the previous emotion is disgust, the adjusted residual is 5.6 in disgust. The cases of IF and CF and the cases of the CAIR and the CPIR were considered, respectively, and the corresponding results are shown in Table 7. They indicate that previous emotions in these four cases also show significant differences.
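The adjusted residuals referred to above can be computed directly from a contingency table, as in the following sketch; the previous-emotion × user-emotion counts are invented, and cells with |residual| > 2 are the ones usually flagged as deviating from independence.

```python
# Adjusted standardized residuals for a hypothetical contingency table.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: previous emotion (happiness, surprise, anger);
# columns: user emotion after the error (happiness, surprise, anger, disgust).
observed = np.array([
    [20, 6, 3, 2],
    [5, 15, 6, 4],
    [1, 2, 9, 5],
])
_, _, _, expected = chi2_contingency(observed)

n = observed.sum()
row_p = observed.sum(axis=1, keepdims=True) / n
col_p = observed.sum(axis=0, keepdims=True) / n
adj = (observed - expected) / np.sqrt(expected * (1 - row_p) * (1 - col_p))
print(np.round(adj, 1))  # |value| > 2 indicates a cell driving the association
```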
Then, for RQ4, using log-linear model analysis, no interaction effects between feedback, contexts, and previous emotions are shown, as all p-values are larger than 0.05 in the step summary of the model.

4.2.3. The Effects on the Intensity of User Emotions in Real Environments

The emotional intensity of user emotions after participants encountered interaction errors in real environments was investigated. The emotional categories of happiness, surprise, anger, and disgust were considered. The data for each category show normal distributions and pass the test of homogeneity of variance. One-way analysis of variance (ANOVA), followed by a Least Significant Difference (LSD) test, was used to compare the general intensity of each emotion obtained after the three interaction errors. The results show no significant differences in the general intensity of happiness, surprise, and disgust but significant differences in anger (p < 0.05) when users encounter repeated interaction errors. In addition, all p-values are larger than 0.05 when considering the effects of feedback and contexts. This reveals that the types of feedback and contexts do not affect the general intensity of each emotion.
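A minimal sketch of this intensity analysis follows, assuming SciPy; the 1–5 intensity arrays are invented, and the pairwise t-tests after a significant omnibus F approximate the LSD procedure (which strictly uses the pooled ANOVA error term).

```python
# One-way ANOVA over the three error repetitions, then LSD-style follow-ups.
from itertools import combinations
import numpy as np
from scipy.stats import f_oneway, ttest_ind

# Invented 1-5 anger-intensity reports after the 1st, 2nd, and 3rd error.
groups = [np.array([2, 2, 3, 1, 2, 3, 2]),
          np.array([3, 2, 3, 2, 4, 3, 3]),
          np.array([4, 3, 4, 3, 5, 4, 3])]

F, p = f_oneway(*groups)
print(f"ANOVA: F = {F:.2f}, p = {p:.3f}")

if p < 0.05:  # follow up only after a significant omnibus test
    for (i, a), (j, b) in combinations(enumerate(groups, 1), 2):
        t, p_ij = ttest_ind(a, b)
        print(f"repetition {i} vs {j}: t = {t:.2f}, p = {p_ij:.3f}")
```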

5. Discussions

The four research questions in this paper can be discussed by combining the above results of the surveys and experiments. Overall, for RQ1, RQ2, and RQ3, the type of feedback and the type of previous emotion affect the type of user emotion when users encounter interaction errors in HRI, but the type of context, CPIR or CAIR, does not. Specifically, whether users actively interact with robots or not does not affect their emotions when encountering interaction errors, and users rarely feel fear. When the previous emotion is of one type, this type also has the largest proportion among user emotions after encountering interaction errors; for example, when the previous emotion was happiness, the user's emotion tends to be happiness. Hence, it is better to design robot expressions that can make users feel happier, to reduce the possibility of negative emotions and help users cope when encountering interaction errors. Since the emotional intensity perceived by users does not show an impact on user emotions, the designed robot expressions of happiness should be recognizable by users, but the corresponding intensity is unimportant in handling interaction errors.
Furthermore, regarding the feedback addressed in RQ1, compared with CF, IF tends to evoke happiness and surprise according to the results of Fisher's exact test. As shown in Table 4 and Table 7, in the case of IF, the corresponding magnitude of the effect size decreases with the number of interaction errors, while it increases in the case of CF. Considering the uncontrollability of previous emotions, it is better to reduce the corresponding magnitude of their effect on user emotions in real HRI. Hence, if we hope user emotions will be in a more positive state when users encounter interaction errors, an irrelevant answer to the user is a better choice when handling interaction errors. Further, the overall magnitude of the effect size of feedback on user emotions decreases with the number of interaction errors, as shown in Table 5. This trend is also evident in the CAIR and the CPIR, respectively, and the effect size decreases faster in the CAIR than in the CPIR, as shown in Table 6. This indicates that, although the CAIR and the CPIR do not affect user emotions, different feedback from robots shows different effects on users in the two contexts. Significant differences were only found for the first interaction error in the CAIR, which means that the impact of the different feedback, IF and CF, is manifested in the CAIR. Combined with Figure 6, IF is better than CF at making users express fewer negative emotions in the CAIR. As mentioned above, if interaction errors occur when users operate robots actively, an irrelevant answer given to users may be better for their emotions, even though the irrelevant answer is a social norm violation; the type of previous emotion and the type of feedback are the influencing factors in this case. But when the interaction begins with the robot, the robot's expressions, whether social norm violations or technical failures, do not affect user emotions, and the type of previous emotion is the influencing factor.
For RQ4, there are no interaction effects between the three factors. This means that the selection of contexts, the expression of feedback, and the regulation of previous emotions can be considered separately. Through the appropriate design of these factors, users' negative emotions toward interaction errors can be reduced. The intensity of user emotions is not affected by contexts or feedback, but with the increasing repetition of interaction errors, the general intensity of anger increases. Hence, it is necessary to deal with interaction errors as soon as they are found, to reduce the accumulation of anger. In addition, the effects of gender were also considered. In the experiments, a total of 34 males (M = 22.56, SD = 3.34) and 28 females (M = 22.44, SD = 2.26) were recruited. According to the results of Fisher's exact test and one-way ANOVA, all p-values are larger than 0.05, and there are no significant differences in the types or the intensity of user emotions. This reveals that gender does not affect user emotions after encountering interaction errors.
The findings in this study could be utilized to design interaction from the view of emotion to handle interaction errors in real HRI environments. The limitations are described from three aspects. One is that the experimental platform, the Yanshee robot, could not exhaustively represent robots that are used in real environments. Humanoid robots used in many public areas may show larger sizes and much more interactive expressions. These could affect user emotions toward robots [49,50]. However, considering the results of the online surveys, the data analyzed were derived from participants who had experience with different robots. This means that the outcome of this study can be used as a directional guideline for design in the cases of interaction errors in HRI. Another is that the results were obtained from the data of participants aged 19–35. If the participants were from a wider range of ages, the findings of this study would be more widely generalizable. Finally, the data obtained in this study were from subjective reports by users. Objective methods, such as tests using an eye-tracker system or EEG system, which may provide better comparisons, were not considered. This is a question that we will address in our future research.

6. Conclusions

This paper mainly studies the influence of different factors on user emotions in scenarios of human–robot interaction errors. We analyzed three factors: different types of robot feedback, passive and active contexts, and previous emotions. Online surveys were implemented to explore the influence of the three factors preliminarily and to provide support for designing appropriate experiments in real environments. A humanoid robot named Yanshee was selected as the experimental platform and designed to provide consulting services. The results of our experiments suggest some design guidelines for coping with scenarios of unavoidable or sudden interaction errors in HRI, and the main contributions are as follows:
  • The contexts of active and passive interactions with robots have no impact on user emotions, but we need to concentrate on different affecting factors in the two contexts. When designing active interactions with robots, the type of feedback and previous emotions need to be considered, and when designing passive interactions in HRI, previous emotions are the key element;
  • Different feedback from robots has impacts on user emotions and is mainly significant in the context where users begin interactions with robots actively. The irrelevant feedback given to users is better from the view of user emotions. Although this kind of feedback violates social norms, users are less likely to feel negatively compared with the case of experiencing technical failure in HRI;
  • Previous emotions affect user emotions, and the corresponding type tends to be consistent to some degree. If designers want users to show fewer negative emotions when they encounter interaction errors, it is better to provide users with robot expressions that make them happy;
  • There are no interaction effects between contexts, feedback, and previous emotions. Designers can consider the three factors separately. For the general intensity of user emotions, contexts and feedback have no impact. When the interaction errors are repeated, the general intensity of anger increases. Hence, designers should provide an approach to prevent interaction errors from repeating themselves.
Overall, this study is meaningful in that it provides a feasible design guide for dealing with interaction errors in HRI from the view of user emotions, and the corresponding contributions give direction for user-centered and affective HRI design.

Author Contributions

Conceptualization, formal analysis, methodology, writing—draft preparation, writing—review, funding acquisition, W.G.; investigation, formal analysis, Y.T. and S.S.; investigation, Y.J.; project administration, N.S.; visualization, W.S. and W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 52105262 and U2013602.

Institutional Review Board Statement

Ethical review and approval were waived for this study because the questionnaires and experiments were not invasive examinations, did not violate anyone's ethical standards, and did not ask for any private information.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

The authors would like to thank all of the participants who took the time and effort to participate in the testing.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Iio, T.; Satake, S.; Kanda, T.; Hayashi, K.; Ferreri, F.; Hagita, N. Human-like Guide Robot that Proactively Explains Exhibits. Int. J. Soc. Robot. 2020, 12, 549–566. [Google Scholar] [CrossRef]
  2. Komatsubara, T.; Shiomi, M.; Kaczmarek, T.; Kanda, T.; Ishiguro, H. Estimating Children’s Social Status through Their Interaction Activities in Classrooms with a Social Robot. Int. J. Soc. Robot. 2019, 11, 35–48. [Google Scholar] [CrossRef]
  3. Schneider, S.; Liu, Y.; Tomita, K.; Kanda, T. Stop Ignoring Me! On Fighting the Trivialization of Social Robots in Public Spaces. ACM Trans. Human-Robot Interact. 2022, 11, 11. [Google Scholar] [CrossRef]
  4. Mcginn, C.; Sena, A.; Kelly, K. Controlling Robots in the Home: Factors that Affect the Performance of Novice Robot Operators. Appl. Ergon. 2017, 65, 23–32. [Google Scholar] [CrossRef] [PubMed]
  5. Mirnig, N.; Giuliani, M.; Stollnberger, G.; Stadler, S.; Buchner, R.; Tscheligi, M. Impact of Robot Actions on Social Signals and Reaction Times in HRI Error Situations. In Proceedings of the International Conference on Social Robotics, Paris, France, 26–30 October 2015. [Google Scholar]
  6. Tian, L.; Oviatt, S.L. A Taxonomy of Social Errors in Human-Robot Interaction. ACM Trans. Human-Robot Interact. 2021, 10, 13. [Google Scholar] [CrossRef]
  7. Deckers, L.; Devine, J. Humor by Violating an Existing Expectancy. J. Psychol. 1981, 108, 107–110. [Google Scholar] [CrossRef]
  8. Isen, A.M. A Role for Neuropsychology in Understanding the Facilitating Influence of Positive Affect on Social Behavior and Cognitive Processes. In Oxford Handbook of Positive Psychology, 1st ed.; Lopez, S.J., Snyder, C.R., Eds.; Oxford University Press: Oxford, UK, 2009; pp. 503–518. [Google Scholar]
  9. Jurist, E. Review of How Emotions Are Made: The Secret Life of the Brain. J. Theor. Philos. Psych. 2019, 39, 155–157. [Google Scholar] [CrossRef]
  10. Aldao, A. The Future of Emotion Regulation Research: Capturing Context. Perspect. Psychol. Sci. 2013, 8, 155–172. [Google Scholar] [CrossRef]
  11. Li, W.; Shao, W.; Ji, S.; Cambria, E. BiERU: Bidirectional Emotional Recurrent Unit for Conversational Sentiment Analysis. arXiv 2020. [Google Scholar] [CrossRef]
  12. Qian, Z.; You, M.; Zhou, H.; Xu, X.; He, B. Robot Learning from Human Demonstrations with Inconsistent Contexts. Robot. Auton. Syst. 2023, 166, 104466. [Google Scholar] [CrossRef]
  13. Feng, Y.; Perugia, G.; Yu, S.; Barakova, E.I.; Hu, J.; Rauterberg, G.W.M. Context-Enhanced Human-Robot Interaction: Exploring the Role of System Interactivity and Multimodal Stimuli on the Engagement of People with Dementia. Int. J. Soc. Robot. 2022, 14, 807–826. [Google Scholar] [CrossRef]
  14. Kim, S.K.; Kirchner, E.A.; Schloßmüller, L.; Kirchner, F. Errors in Human-Robot Interactions and Their Effects on Robot Learning. Front. Robot. AI 2020, 7, 558531. [Google Scholar] [CrossRef] [PubMed]
  15. Steinbauer, G. A Survey about Faults of Robots Used in RoboCup. In RoboCup 2012: Robot Soccer World Cup XVI, Lecture Notes in Computer Science; Chen, X., Stone, P., Sucar, L.E., van der Zant, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7500, pp. 344–355. [Google Scholar]
  16. Kwon, M.; Huang, S.H.; Dragan, A.D. Expressing Robot Incapability. In Proceedings of the Thirteenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 23 February 2018. [Google Scholar]
  17. Mirnig, N.; Stollnberger, G.; Miksch, M.; Stadler, S.; Giuliani, M.; Tscheligi, M. To Err Is Robot: How Humans Assess and Act Toward an Erroneous Social Robot. Front. Robot. AI 2017, 4, 21. [Google Scholar] [CrossRef]
  18. Lucas, G.M.; Boberg, J.; Traum, D.; Artstein, R.; Gratch, J.; Gainer, A.; Johnson, E.; Leuski, A.; Nakano, M. Getting to Know Each Other: The Role of Social Dialogue in Recovery from Errors in Social Robots. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 26 February 2018. [Google Scholar]
  19. Lucas, G.M.; Boberg, J.; Traum, D.; Artstein, R.; Gratch, J.; Gainer, A.; Johnson, E.; Leuski, A.; Nakano, M. The Role of Social Dialogue and Errors in Robots. In Proceedings of the 5th International Conference on Human Agent Interaction, New York, NY, USA, 17 October 2017. [Google Scholar]
  20. Honig, S.; Oron-Gilad, T. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development. Front. Psychol. 2018, 9, 861. [Google Scholar] [CrossRef]
  21. Giuliani, M.; Mirnig, N.; Stollnberger, G.; Stadler, S.; Buchner, R.; Tscheligi, M. Systematic Analysis of Video Data from Different Human–Robot Interaction Studies: A Categorization of Social Signals during Error Situations. Front. Psychol. 2015, 6, 931. [Google Scholar] [CrossRef]
  22. Chen, W.; Zhou, C.; Shang, G.; Wang, X.; Li, Z.; Xu, C.; Hu, K. SLAM Overview: From Single Sensor to Heterogeneous Fusion. Remote Sens. 2022, 14, 6033. [Google Scholar] [CrossRef]
  23. Fahn, C.-S.; Chen, S.-C.; Wu, P.-Y.; Chu, T.-L.; Li, C.-H.; Hsu, D.-Q.; Wang, H.-H.; Tsai, H.-M. Image and Speech Recognition Technology in the Development of an Elderly Care Robot: Practical Issues Review and Improvement Strategies. Healthcare 2022, 10, 2252. [Google Scholar] [CrossRef]
  24. Balmik, A.; Jha, M.; Nandy, A. NAO Robot Teleoperation with Human Motion Recognition. Arab. J. Sci. Eng. 2022, 47, 1137–1146. [Google Scholar] [CrossRef]
  25. Kulesza, T.; Stumpf, S.; Burnett, M.; Yang, S.; Kwan, I.; Wong, W.K. Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models. In Proceedings of the IEEE Symposium on Visual Languages and Human-Centric Computing, San Jose, CA, USA, 23–26 September 2013. [Google Scholar]
  26. Fratczak, P.; Goh, Y.; Kinnell, P.; Justham, L.; Soltoggio, A. Robot Apology as a Post-Accident Trust-Recovery Control Strategy in Industrial Human-Robot Interaction. Int. J. Ind. Ergonom. 2021, 82, 103078. [Google Scholar] [CrossRef]
  27. Oono, S.; Ishii, H.; Kihara, R.; Katagami, D.; Shuzo, M.; Maeda, E. Interaction Strategy for Robotic Apology Based on Human Orientation Toward Service. Adv. Robot. 2024, 38, 226–245. [Google Scholar] [CrossRef]
  28. Aymerich-Franch, L.; Kishore, S.; Slater, M. When Your Robot Avatar Misbehaves You Are Likely to Apologize: An Exploration of Guilt During Robot Embodiment. Int. J. Soc. Robot. 2020, 12, 217–226. [Google Scholar] [CrossRef]
  29. Lestingi, L.; Manglaviti, A.; Marinaro, D.; Marinello, L.; Askarpour, M.; Bersani, M.M.; Rossi, M. Analyzing the Impact of Human Errors on Interactive Service Robotic Scenarios Via Formal Verification. Softw. Syst. Model. 2024, 23, 473–502. [Google Scholar] [CrossRef]
  30. Ota, S.; Jindai, M.; Fukuta, T.; Watanabe, T. Small-Sized Handshake Robot System for Generation of Handshake Behavior with Active Approach to Human. J. Adv. Mech. Des. Syst. 2019, 13, JAMDSM0026. [Google Scholar] [CrossRef]
  31. Okuda, M.; Takahashi, Y.; Tsuichihara, S. Human Response to Humanoid Robot That Responds to Social Touch. Appl. Sci. 2022, 12, 9193. [Google Scholar] [CrossRef]
  32. Stock-Homburg, R. Survey of Emotions in Human–Robot Interactions: Perspectives from Robotic Psychology on 20 Years of Research. Int. J. Soc. Robot. 2022, 14, 389–411. [Google Scholar] [CrossRef]
  33. Pan, Y.; Lai, F.; Fang, Z.; Xu, S.; Gao, L.; Robertson, D.C.; Rao, H. Risk Choice and Emotional Experience: A Multi-Level Comparison between Active and Passive Decision-making. J. Risk Res. 2019, 22, 1239–1266. [Google Scholar] [CrossRef]
  34. Hu, Y.; Abe, N.; Benallegue, M.; Yamanobe, N.; Venture, G.; Yoshida, E. Toward Active Physical Human–Robot Interaction: Quantifying the Human State During Interactions. IEEE Trans. Human-Mach. Syst. 2022, 52, 367–378. [Google Scholar] [CrossRef]
  35. Hu, Y.; Benallegue, M.; Venture, G.; Yoshida, E. Interact with Me: An Exploratory Study on Interaction Factors for Active Physical Human-Robot Interaction. IEEE Robot. Autom. Let. 2020, 5, 6764–6771. [Google Scholar] [CrossRef]
  36. Zhang, B.; Nakamura, T.; Kaneko, M.; Lim, H.O. Development of an Autonomous Guide Robot based on Active Interactions with Users. In Proceedings of the 2020 IEEE/SICE International Symposium on System Integration, Honolulu, HI, USA, 12–15 January 2020. [Google Scholar]
  37. Trinh, H.; Asadi, R.; Edge, D.; Bickmore, T. Robocop: A Robotic Coach for Oral Presentations. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 2017, 1, 1–24. [Google Scholar] [CrossRef]
  38. Chuah, S.H.W.; Yu, J. The Future of Service: The Power of Emotion in Human-Robot Interaction. J. Retail. Consum. Serv. 2021, 61, 102551. [Google Scholar] [CrossRef]
  39. Christou, P.; Simillidou, A.; Stylianou, M.C. Tourists’ perceptions regarding the use of anthropomorphic robots in tourism and hospitality. Int. J. Contemp. Hosp. Manag. 2020, 32, 3665–3683. [Google Scholar] [CrossRef]
  40. Kirtay, M.; Vannucci, L.; Albanese, U.; Laschi, C.; Oztop, E.; Falotico, E. Emotion as an Emergent Phenomenon of the Neurocomputational Energy Regulation Mechanism of a Cognitive Agent in a Decision-Making Task. Adapt. Behav. 2021, 29, 55–71. [Google Scholar] [CrossRef]
  41. Gratch, J. Émile: Marshalling Passions in Training and Education. In Proceedings of the Fourth International Conference on Autonomous Agents, Brasília, Brazil, 1 June 2000. [Google Scholar]
  42. Kirtay, M.; Vannucci, L.; Falotico, E.; Oztop, E.; Laschi, C. Sequential Decision Making Based on Emergent Emotion for a Humanoid Robot. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids 2016), Tokyo, Japan, 12 November 2016. [Google Scholar]
  43. Kirtay, M.; Oztop, E. Emergent Emotion Via Neural Computational Energy Conservation on a Humanoid Robot. In Proceedings of the 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Tokyo, Japan, 12–15 November 2013. [Google Scholar]
  44. Maroto-Gómez, M.; Malfaz, M.; Castro-González, L.; Salichs, M. A Motivational Model Based on Artificial Biological Functions for the Intelligent Decision-making of Social Robots. Memet. Comput. 2023, 15, 237–257. [Google Scholar] [CrossRef]
  45. Ahn, H.; Park, S. Contextual Emotion Appraisal Based on a Sentential Cognitive System for Robots. Appl. Sci. 2021, 11, 2027. [Google Scholar] [CrossRef]
  46. Dimitrievska, V.; Ackovska, N. Behavior Models of Emotion-Featured Robots: A Survey. J. Intell. Robot. Syst. 2020, 100, 1031–1053. [Google Scholar] [CrossRef]
  47. Spezialetti, M.; Placidi, G.; Rossi, S. Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives. Front. Robot. AI 2020, 7, 532279. [Google Scholar] [CrossRef]
  48. Hirth, J.; Schmitz, N.; Berns, K. Emotional Architecture for the Humanoid Robot Head ROMAN. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007. [Google Scholar]
  49. Zheng, X.; Shiomi, M.; Minato, T.; Ishiguro, H. What Kinds of Robot’s Touch Will Match Expressed Emotions? IEEE Robot. Autom. Let. 2020, 5, 127–134. [Google Scholar] [CrossRef]
  50. Hwang, J.; Park, T.; Hwang, W. The Effects of Overall Robot Shape on the Emotions Invoked in Users and the Perceived Personalities of Robot. Appl. Ergon. 2013, 44, 459–471. [Google Scholar] [CrossRef]
  51. Cirasa, C.; Høgsdal, H.; Conti, D. “I See What You Feel”: An Exploratory Study to Investigate the Understanding of Robot Emotions in Deaf Children. Appl. Sci. 2024, 14, 1446. [Google Scholar] [CrossRef]
Figure 1. Main flow diagram for the second survey.
Figure 2. (a) Yanshee robot. (b) The experimental framework in real environments.
Figure 3. (a) The flow of participants in experiments. (b) Example scenario for HRI with interaction errors.
Figure 4. (a) The data of concerned interaction cues. (b) The score of different interaction cues considering the level of importance.
Figure 5. The percentages of users’ emotion types when users encounter CF repeatedly.
Figure 6. (a) The percentages of emotions after interaction errors in CPIR. (b) The percentages of emotions after interaction errors in CAIR.
Table 1. Questionnaire of the first survey: example questions given to participants.

| Categories | Topic | Example Question | Type |
| --- | --- | --- | --- |
| Basic experience | Understanding of humanoid robots | Q1: Have you ever seen a humanoid service robot in real environments? | Y/N |
| | | Q2: Have you ever used a humanoid robot actively/passively? | Y/N |
| Emotional capability | Emotion when using humanoid robots | Q3: Are you willing to actively/passively use humanoid robots? | Y/N |
| | | Q4: What is your emotional state during the active/passive use of humanoid robots? | Selected |
| Interaction cues | Perception toward interaction cues of humanoid robots | Q5: What aspects of humanoid robots do you care most about? | Selected |
| | | Q6: Please rank the importance of the following interaction cues when you interact with a robot. | Selected |
Table 2. Questionnaire of the second survey: example questions given to participants.

| Steps | Example Question | Type |
| --- | --- | --- |
| 1 | Q7: You are going to actively/passively use a humanoid service robot. Which type can be used to describe your current emotion? | Selected |
| | Q8: How would you evaluate this emotion? | 1–5 |
| 2 | Q9: When actively/passively using a humanoid service robot, the robot shows confusion about your operation. Which type can be used to describe your current emotion? | Selected |
| | Q10: When actively/passively using a humanoid service robot, the robot gives irrelevant feedback about your operation. Which type can be used to describe your current emotion? | Selected |
| 3 | Q11: How would you evaluate this emotion? | 1–5 |
Table 3. The results in the case of encountering interaction error for the first time.

| Factor | χ2 | p | Cramer’s V |
| --- | --- | --- | --- |
| Feedback | 43.414 | <0.001 | 0.419 |
| Contexts | 1.641 | 0.95 | 0.083 |
| Previous emotions | 44.149 | <0.001 | 0.248 |
Table 4. The results in the case of repeated CF for the effects of contexts and previous emotions, respectively.

| i * | Contexts χ2 | p | Cramer’s V | Previous Emotions χ2 | p | Cramer’s V |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1.676 | 0.815 | 0.119 | 39.585 | <0.001 | 0.323 |
| 2 | 2.288 | 0.838 | 0.126 | 51.192 | <0.001 | 0.357 |
| 3 | 2.044 | 0.901 | 0.128 | 87.742 | <0.001 | 0.424 |

* i represents the number of times that CF is repeated.
Table 5. The results in the case of repeated errors for the effects of feedback and contexts, respectively.

| n * | Feedback χ2 | p | Cramer’s V | Contexts χ2 | p | Cramer’s V |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 14.723 | 0.005 | 0.315 | 2.879 | 0.766 | 0.139 |
| 2 | 11.577 | 0.027 | 0.28 | 4.199 | 0.514 | 0.169 |
| 3 | 3.361 | 0.341 | 0.16 | 5.903 | 0.30 | 0.196 |

* n is the number of times that the error is repeated.
Table 6. The results in the case of repeated errors for the effects of feedback in CAIR and CPIR, respectively.

| n * | CAIR χ2 | p | Cramer’s V | CPIR χ2 | p | Cramer’s V |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 10.649 | 0.033 | 0.346 | 6.535 | 0.163 | 0.338 |
| 2 | 9.131 | 0.081 | 0.321 | 6.431 | 0.168 | 0.33 |
| 3 | 4.098 | 0.547 | 0.214 | 5.924 | 0.315 | 0.325 |

* n is the number of times that the error is repeated.
Table 7. The results in the case of repeated errors for the effects of previous emotions in IF, CF, CAIR, and CPIR, respectively.

| n * | IF χ2 | p | Cramer’s V | CF χ2 | p | Cramer’s V | CAIR χ2 | p | Cramer’s V | CPIR χ2 | p | Cramer’s V |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2 | 38.926 | <0.001 | 0.452 | 44.692 | <0.001 | 0.410 | 56.187 | <0.001 | 0.371 | 28.703 | 0.002 | 0.378 |
| 3 | 55.631 | <0.001 | 0.435 | 62.827 | <0.001 | 0.54 | 78.449 | <0.001 | 0.475 | 43.814 | <0.001 | 0.484 |

* n is the number of times that the error is repeated.

