1. Introduction
Human-robot interaction (HRI) requires users and robots to understand each other through various interactive cues and to exchange interaction information. As robots are deployed in an expanding range of scenarios, such as exhibitions [1], classrooms [2], shopping malls [3], and homes [4], HRI increasingly takes place in unstructured environments with unpredictable noise, uncertain user behavior, and other disturbances that make interaction errors unavoidable [5]. From a task perspective, HRI succeeds when the user's behavior and the robot's feedback meet each other's expectations; if not, the user perceives an interaction error, and the task fails. From an emotional experience perspective, however, such a failure is not necessarily unacceptable or unsuccessful for the user. A violation of expectations can produce humor, and a breach of interaction norms may amuse the user and increase the memorability and fun of the interaction [6,7]. As in human-human communication, an uncompleted task does not necessarily mean a bad experience: depending on the context and the interlocutor's expression, such as voice or physical behavior, it may instead evoke surprise, humor, and so on.
Emotions are important for social understanding between humans and robots. Users' emotional changes during HRI are not only a subjective experience but also an internal reaction underlying their social signals, and they can reinforce or regulate users' behavior and decision-making. Studies by Mirnig et al. and Tian et al. categorize interaction errors in HRI and demonstrate that users respond differently to different situations and types of interaction errors [5,6], which implies that users' emotions change accordingly. In turn, users' emotions influence their motives and behaviors [8]. For example, angry users may tend to resist interacting with robots, whereas happy users may take a more active attitude toward establishing social connections with them. However, relatively few studies have designed handling strategies for interaction errors in HRI from the perspective of emotion.
Emotional experiences are personal and context-specific [9]. Context is defined by the Merriam-Webster Dictionary as “the interrelated conditions in which something occurs or exists” [10]. In HRI, researchers have examined the context of information in conversations [11], the contexts observed by robots, including viewpoints, operating backgrounds, and so on [12], and contexts as simulated, “closer to nature” experiences [13]. For example, Kim et al. investigated interaction errors occurring in contexts with and without prior knowledge [14]. Still, few studies consider the impact of context on interaction errors. Meanwhile, it is worth noting that interaction is bidirectional: both the user and the robot can initiate the transmission of interactive information. From a user-centered view, the common context in which users operate the robot and the robot then gives feedback can be seen as an active interaction for users; when the robot initiates and expresses information to the user, it is a passive interaction for users. As robots are gradually deployed in different scenarios, they can adjust their reactions and behaviors to the requirements of the environment and the users, so passive and active contexts in HRI have become common in recent years. However, for these two contexts, there are also relatively few studies on users' emotional changes when they encounter interaction errors.
As mentioned above, this paper focuses on the following four research questions to explore the effects of multiple factors on user emotions when users encounter interaction errors in HRI. The corresponding contributions help promote the effective design of interaction error-handling strategies in HRI.
Research question 1 (RQ1): How does the feedback from the robot affect user emotions in the case of interaction errors in HRI?
Research question 2 (RQ2): Do passive and active contexts affect user emotions in the case of interaction errors in HRI?
Research question 3 (RQ3): Do previous emotions affect user emotions after encountering interaction errors in HRI? For brevity, “previous emotions” and “user emotions” in the following sections denote the emotions before and after users encounter an interaction error, respectively.
Research question 4 (RQ4): Are there any interaction effects among feedback, contexts, and previous emotions?
The sections are organized as follows:
Section 2 briefly reviews the background.
Section 3 shows the methodology, including definitions, study aims, and the procedures of the surveys and lab experiments.
Section 4 describes the results and corresponding analysis.
Section 5 presents the discussion and limitations.
Section 6 details the conclusions.
5. Discussion
The four research questions in this paper can be discussed by combining the above results of the surveys and experiments. Overall, for RQ1, RQ2, and RQ3, the type of feedback and the type of previous emotion affect the type of user emotion when users encounter interaction errors in HRI, but the type of context, CPIR or CAIR, does not. In other words, whether or not users actively interact with robots does not affect their emotions when encountering interaction errors, and users rarely feel fear. When the previous emotion is of one type, that emotion also accounts for the largest proportion of user emotions after an interaction error; for example, when the previous emotion was happiness, the user's emotion after the error tends to be happiness. Hence, it is advisable to design robot expressions that make users feel happier, reducing the likelihood of negative emotions and helping users cope with interaction errors. Since the emotional intensity perceived by users shows no impact on user emotions, the robot's expressions of happiness need to be recognizable to users, but their intensity is unimportant for handling interaction errors.
Furthermore, regarding the feedback addressed in RQ1, IF tends to evoke happiness and surprise compared with CF, according to the results of Fisher's exact test. As shown in Table 4 and Table 7, the magnitude of the effect size decreases with the number of interaction errors in the case of IF, while it increases in the case of CF. Given that previous emotions cannot be controlled, it is preferable to reduce the magnitude of their effect on user emotions in real HRI. Hence, if we want user emotions to remain in a more positive state when interaction errors occur, an irrelevant answer to the user is the better choice for handling them. Moreover, the overall effect size of feedback on user emotions decreases with the number of interaction errors, as shown in Table 5. This trend also appears in both the CAIR and the CPIR, and the effect size decreases faster in the CAIR than in the CPIR, as shown in Table 6. This indicates that, although the CAIR and the CPIR themselves do not affect user emotions, different robot feedback affects users differently in the two contexts. Significant differences were found only for the first interaction error in the CAIR, meaning that the impact of the different feedback types, IF and CF, manifests in the CAIR. Combined with Figure 6, IF is better than CF at leading users to express fewer negative emotions in the CAIR. As mentioned above, if interaction errors occur while users actively operate robots, an irrelevant answer may be better for their emotions, even though it violates the social norms of interaction; in this case, the type of previous emotion and the type of feedback both influence user emotions. But when the interaction begins with the robot, the expressions given to users, whether social norm violations or technical failures, do not affect their emotions, and the type of previous emotion is the influencing factor.
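As an illustration of how this type of analysis could be reproduced, the sketch below runs a Fisher's exact test and computes Cramér's V as an effect size for a feedback-by-emotion contingency table using Python and SciPy. The counts, the collapse of emotions into negative versus non-negative categories, and the choice of Cramér's V are illustrative assumptions, not the study's actual data or analysis pipeline.
```python
# Minimal sketch (hypothetical counts): testing whether feedback type
# (IF vs. CF) is associated with the valence of user emotions after an
# interaction error, and estimating the effect size as Cramer's V.
import math
from scipy.stats import chi2_contingency, fisher_exact

# Rows: IF, CF; columns: negative vs. non-negative emotions.
# These counts are illustrative, not the counts reported in this study.
table = [[9, 22],   # IF: negative, non-negative
         [18, 13]]  # CF: negative, non-negative

odds_ratio, p_value = fisher_exact(table)  # exact test for the 2x2 table
chi2, _, _, _ = chi2_contingency(table, correction=False)
n = sum(sum(row) for row in table)
k = min(len(table), len(table[0]))         # smaller table dimension
cramers_v = math.sqrt(chi2 / (n * (k - 1)))

print(f"Fisher's exact p = {p_value:.3f}, Cramer's V = {cramers_v:.2f}")
```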
For RQ4, there are no interaction effects among the three factors. This means that the selection of context, the expression of feedback, and the regulation of previous emotions can each be considered separately; through the appropriate design of these factors, users' negative emotions toward interaction errors can be reduced. The intensity of user emotions is not affected by context or feedback, but as interaction errors are repeated, the overall intensity of anger increases. Hence, interaction errors should be dealt with as soon as they are detected to prevent the accumulation of anger. In addition, the effect of gender was also examined. In the experiments, 34 males (age: M = 22.56, SD = 3.34) and 28 females (age: M = 22.44, SD = 2.26) were recruited. According to the results of Fisher's exact test and one-way ANOVA, all the p-values are greater than 0.05, indicating no significant gender differences in either the type or the intensity of user emotions. This reveals that gender does not affect user emotions after encountering interaction errors.
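Similarly, the one-way ANOVA comparing emotion intensity between gender groups could be reproduced as sketched below; the ratings and the implied rating scale are hypothetical stand-ins for the study's data.
```python
# Minimal sketch (hypothetical ratings): one-way ANOVA comparing
# self-reported emotion intensity between male and female participants.
from scipy.stats import f_oneway

# Illustrative intensity ratings (not the study's raw data).
intensity_male = [4, 5, 3, 6, 4, 5, 4]
intensity_female = [5, 4, 4, 5, 3, 6]

f_stat, p_value = f_oneway(intensity_male, intensity_female)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no gender effect
```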
The findings of this study can be used to design interactions, from the perspective of emotion, that handle interaction errors in real HRI environments. The limitations concern three aspects. First, the experimental platform, the Yanshee robot, cannot fully represent the robots used in real environments; humanoid robots deployed in public areas may be larger and offer far richer interactive expressions, which could affect user emotions toward robots [49,50]. However, since the data from the online surveys were derived from participants with experience of different robots, the outcomes of this study can still serve as a directional guideline for design in cases of interaction errors in HRI. Second, the results were obtained from participants aged 19–35; a wider age range would make the findings more generalizable. Finally, the data in this study came from users' subjective reports. Objective methods, such as eye-tracking or EEG measurements, which may provide better comparisons, were not considered; we will address this in future research.