Article

“I See What You Feel”: An Exploratory Study to Investigate the Understanding of Robot Emotions in Deaf Children

Carla Cirasa 1, Helene Høgsdal 2 and Daniela Conti 1
1 Department of Humanities, University of Catania, 95124 Catania, Italy
2 Regional Centre for Child and Youth Mental Health and Child Welfare—North, UiT The Arctic University of Norway, 9019 Tromsø, Norway
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(4), 1446; https://doi.org/10.3390/app14041446
Submission received: 8 January 2024 / Revised: 3 February 2024 / Accepted: 7 February 2024 / Published: 9 February 2024
(This article belongs to the Special Issue Recent Advances in Human-Robot Interactions)

Abstract

Research in the field of human–robot interactions (HRIs) has advanced significantly in recent years. Social humanoid robots have been extensively tested and implemented in a variety of settings, for example, in educational institutions, healthcare facilities, and senior care centers. Humanoid robots have also been assessed across different population groups. However, research on various groups of children is still scarce, especially on deaf children. This feasibility study explores the ability of both hearing and deaf children to interact with and recognize emotions expressed by NAO, the humanoid robot, without relying on sounds or speech. Initially, the children watched three video clips portraying emotions of happiness, sadness, and anger. Depending on the experimental condition, the children then observed the humanoid robot respond to the emotions in the video clips in a congruent or incongruent manner before they were asked to recall which emotion the robot exhibited. The influence of empathy on the ability to recognize emotions was also investigated. The results revealed no difference in the ability to recognize emotions between the two conditions (i.e., congruent and incongruent); that is, NAO responding with congruent emotions to the video clips did not help the children recognize the emotion in NAO. Instead, the ability to recognize emotions in the video clips and sex (being a girl) were identified as significant predictors of identifying emotions in NAO. While no significant difference was identified between hearing and deaf children, this feasibility study aims to establish a foundation for future research on this important topic.

1. Introduction

Human–robot interactions (HRIs) have been extensively explored over the last decade, contributing to a nuanced comprehension of how humans and robots engage in diverse environments (for a comprehensive review, see: [1]). Goodrich and Schultz [2] state that all interactions between humans and robots require some type of communication. Moreover, speech and language are considered to play a vital role in HRIs [3]. As robots increasingly interact with humans in various areas of their lives, understanding how people perceive emotions in robots is crucial for advancing research in the field and for improving interactions between humans and robots across different domains. Understanding robot emotions can be achieved by examining the robot’s body posture, tone of voice, or facial expressions [4]. Over the past decade, studies have placed significant emphasis on ensuring that robots not only comprehend human emotions but also respond to them in a personalized manner [5]. This implies that robots must be advanced and capable of conducting detailed analyses of human expressions. Simultaneously, they should possess the ability to distinguish variations in expression, such as those between sexes [6].
While research into the potential of HRIs has made substantial progress, the capacity of humans to read emotions in robots remains an evolving domain. It is particularly crucial to advance research on the ability of children with special needs to interpret emotions in robots, especially among deaf children. For deaf children, effective interaction with a robot relies on non-verbal communication, as they are unable to hear sounds and voices from the robot. Therefore, it is crucial that the robot can convey its emotional states without sounds. In other words, making the robot’s expressiveness as intelligible and direct as possible, without sound, can enable effective and positive engagement between deaf children and the robot. Thus, including deaf children in HRI research can offer profound advantages: it not only prevents these children from being sidelined in the swiftly evolving domain of robotics, but it can also provide researchers with a more expansive perspective on HRIs when auditory communication is not an option. However, to the best of our knowledge, there are no studies investigating the ability of deaf children to recognize emotions in robots.
This feasibility study aims to establish the groundwork for further research on interactions between robots and deaf children by including both deaf and hearing children. Initially, the credibility of a humanoid robot without sound and speech is investigated. Additionally, the ability to recognize emotions in the robot without sound and speech is examined by omitting these features in a controlled experimental context. Finally, the impact of empathy on the ability to recognize emotions in a robot without sound and speech is investigated.
Specifically, the following research questions are examined:
(RQ1) How do hearing and deaf children assess the credibility of a humanoid robot without sound and speech?
(RQ2) Are children able to recognize emotions in a humanoid robot when they are exhibited through non-verbal communication?
(RQ3) Can empathy affect the ability to recognize emotions, expressed without sound or speech, in a humanoid robot?
(RQ4) Are the children who see a humanoid robot respond to a video in a congruent manner better able to recognize the emotions in the robot compared to children who see a robot respond to a video in an incongruent manner?
The remainder of the paper is organized as follows: Section 2 reviews the scientific literature that contextualizes this work, focusing on social robots, deaf children, and empathy; Section 3 presents the method and procedure of the experiment; Section 4 presents the results of the experiment; and Section 5 discusses the findings.

2. Related Works

2.1. Social Robots

Bartneck and Forlizzi [7] define a social robot as an “autonomous or semi-autonomous robot that interacts and communicates with humans following the behavioral rules expected by the people with whom the robot intends to interact”. The humanoid appearance of social robots facilitates interaction, as demonstrated in a systematic review by Sarrica et al. [8]. Social robots are capable of both verbal and non-verbal communication, with the latter being particularly relevant for interacting with deaf children. A study by Zinina et al. [9] revealed that users prefer social robots that utilize gestures, head movements, eye contact, and mouth animation, as opposed to robots with stationary body parts. Other studies have also shown that expressive robots, capable of displaying human-like emotions, are perceived as friendlier and more human-like, resulting in greater engagement and more pleasant interactions [10,11]. Further, humanoid social robots stimulate oral dialogue; they have been found to facilitate language development better than devices without human-like characteristics, such as tablets or mobile devices [12,13]. The potential advantages of a social robot compared to a tablet have also been demonstrated in relation to performance, involvement, and fun [14]. Their humanoid characteristics enable robots to stimulate suitable learning situations; they have been proven to be effective tutors in various settings, including second language acquisition [14]. Humanoid social robots can also enhance concentration and academic performance in both children and adolescents [15]. Moreover, a study has shown that children’s engagement and motivation in response to robotic narration are positively correlated with expressive behavior in the robot [16], indicating that robots must be able to express themselves in a specific way in order to achieve their full potential. However, more research is needed on these factors among children in general, and especially among deaf children.

2.2. Children with Deafness

From infancy, the sensory system acquires the ability to discern emotional signals transmitted by caregivers through various communication channels. This information can be received through, for example, facial expressions, tone and volume of voice, and body postures [17]. When one of the communication channels is weakened, other channels are used to compensate for the deficiency. Hearing is pivotal in the developmental process of every child [18]; when this sense is weakened, as it is in deaf children, non-verbal communication becomes fundamentally important. Deaf children and children with hearing impairment are twice as vulnerable to developing emotional and general health problems compared to children with normal hearing [19]. Further, some studies have revealed that deaf children exhibit lower levels of empathy and encounter more peer-related challenges than hearing children [20,21,22]. Netten et al. [23] state that because children without the ability to hear interact with others based on non-verbal communication, empathy is important for development, as it can increase the ability to read other people’s states and emotions in interactions.

2.3. Empathy

Empathy can be described as the ability to perceive the inner state of others as well as the ability to feel what others feel [24,25]. Specifically, empathy refers to the ability to react with the right feelings to an event [26] and is important for functioning well in social interactions and refraining from antisocial behavior. Because empathy refers to the ability to feel what others feel, it is also important for emotion recognition. A study conducted by Ramachandra and Longacre [27] found that individuals with higher levels of empathy are better at recognizing emotions from faces and from the eyes alone. A study by Charrier et al. [28] showed that empathy influenced how empathic and intelligent a robot was perceived to be, as well as perceived familiarity with and comfort toward the robot. Beck et al. [29] found that the body and head positions of robots can be used to convey emotions to children in human–robot interactions. However, a scoping review [30] found that most studies investigating elements related to emotions in human–robot interactions rely on mechanisms that include sounds from the robot (i.e., tones, volumes, and speed of speech), and more research is needed to better understand how emotions in robots can be interpreted by young people without sound.

3. Materials and Methods

3.1. Participants and Recruitment

Children aged between 6 and 11 years were involved in the study. The sample was recruited using two different approaches: (1) hearing children were recruited through a primary school in Catania, Italy, while (2) deaf children were recruited from the centers of the Association of Hearing Impaired Families in Catania, Italy. A total of 9 deaf children participated in the study. Children with comorbidities were not included in the sample due to the nature of the study.
All the parents signed consent forms before their children were included in the study. Children were free to leave the experiment at any time and they were always supported by a teacher or educator.
The final sample consisted of N = 27 children, of which n = 11 were males (40.7%) and n = 16 were females (59.3%). The children ranged in age from 6 to 11 years (Mage = 7.59, SD = 1.5). Within the total sample, n = 9 children (33.3%) were deaf and only n = 2 (7.4%) had previous experience with a humanoid robot.

3.2. Stimulus Material

The social humanoid robot NAO was used in the experiment. NAO is a 58 cm tall robot weighing 4.3 kg. The robot can express gestures with 25 degrees of freedom (4 arm joints; 2 for the hands; 5 for each leg; 2 for the head; and one for controlling the hips). In addition to having several human characteristics, NAO has a light and compact design. The robot can obtain information about the environment using sensors and microphones. Further, NAO can detect faces and simulate eye contact by moving its head. The robot’s eyes consist of LED lights, which can change to simulate different emotions. NAO has been shown to be suitable for use in various research settings, including experiments involving children [31,32]. According to Robaczewski et al. [33], as of 2021, more than 51 publications involving over 1895 participants had used NAO in various types of research, making it among the most used humanoid robots in HRI research.
From a technical point of view, the robot was programmed with the software Choregraphe and Python 2.7+ SDK (H25, V4). We used NAOqi 2.5 framework’s SDK to write Python scripts that ran on the robot and to handle its behavior during the administration.
Although NAO is not able to convey emotions using facial expressions, the software provides the possibility to design complex behaviors (e.g., different interfaces for non-verbal communication, such as gestures, sounds, and LEDs). Zinina et al. [9] found that a robot’s appeal is influenced by its active features, with the most important being the hands, followed by the mouth, head, and eyes. NAO’s emotions were programmed in line with previous studies [34,35]. Specifically, happiness was portrayed by NAO raising its hands toward the ceiling and looking up (Figure 1), sadness was portrayed by NAO looking down at the floor with its arms extended down its sides (Figure 2), and anger was portrayed by NAO clenching its fists in front of itself and glaring at them (Figure 3).
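To illustrate how such a silent emotional reaction can be scripted with the NAOqi Python SDK outside Choregraphe, a minimal sketch of a “happiness”-style gesture is shown below. The joint angles, LED color, robot address, and function name are illustrative assumptions and not the exact behavior files used in the study.

```python
# -*- coding: utf-8 -*-
# Minimal sketch of a silent "happiness" reaction on NAO (NAOqi Python SDK).
# Joint angles, LED colour, and connection details are illustrative only.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"   # hypothetical robot address
PORT = 9559

motion = ALProxy("ALMotion", ROBOT_IP, PORT)
posture = ALProxy("ALRobotPosture", ROBOT_IP, PORT)
leds = ALProxy("ALLeds", ROBOT_IP, PORT)

def show_happiness():
    """Raise both arms, tilt the head up, and change the eye LEDs, without sound."""
    motion.setStiffnesses("Body", 1.0)
    posture.goToPosture("StandInit", 0.5)
    joints = ["LShoulderPitch", "RShoulderPitch", "HeadPitch"]
    angles = [-1.3, -1.3, -0.35]               # arms up, head tilted upward (radians)
    motion.angleInterpolationWithSpeed(joints, angles, 0.3)
    leds.fadeRGB("FaceLeds", 0x00FFFF00, 1.0)  # fade eye LEDs to yellow over 1 s

if __name__ == "__main__":
    show_happiness()
```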

3.3. Instruments

3.3.1. Emotions in Video Clips

Inspired by an experiment conducted by Tsiourti et al. [17], the children were presented with a list of 11 emotions (amusement, anger, disgust, despair, embarrassment, fear, happiness, neutral, sadness, shame, and surprise). The children were then asked to watch, in random order, three short video clips from well-known Disney movies that depicted situations portraying the emotions happiness, sadness, or anger. Before the experiment, to check the consistency of the instrument, a pre-test was carried out with 5 children (2 girls and 3 boys, Mage = 8). The videos were shown, and the children were asked, “What feeling do you associate with this video?”. All the children independently agreed that the emotion shown in each video was in line with the emotion that was intended.

3.3.2. Emotion Recognition of NAO Robot

Inspired by the experiment by Tsiourti et al. [17], NAO reacted to the videos by displaying the emotions happiness, sadness, and anger. The children were asked to identify the emotion NAO displayed in line with the list of 11 emotions (amusement, anger, disgust, despair, embarrassment, fear, happiness, neutral, sadness, shame, and surprise).

3.3.3. Empathy

The children answered the Empathy Questionnaire for Children and Adolescents (EmQue-CA; [36]). The questionnaire consists of 14 items measuring empathy (e.g., “If my mother is happy, I also feel happy”; “When a friend is angry, I tend to know why”; and “If a friend is sad, I like to comfort him”). All responses were rated on a 3-point Likert scale (1 = not true, 2 = sometimes true, 3 = often true). The questionnaire has been shown to have good psychometric properties among various study samples from the age of seven [35,36], and the instrument has been translated and validated for use with Italian children [37]. The mean and Cronbach alpha are presented in Table 1.
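The reliability coefficients reported in Table 1 are Cronbach alpha values. For reference, the short sketch below shows how this coefficient can be computed from a respondents-by-items matrix of Likert scores; the example matrix and function name are hypothetical and not study data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                               # number of items
    item_variances = x.var(axis=0, ddof=1)       # variance of each item
    total_variance = x.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical example: 5 children answering 4 items on a 1-3 scale
example = [[3, 2, 3, 3],
           [2, 2, 1, 2],
           [3, 3, 3, 2],
           [1, 2, 2, 1],
           [2, 3, 2, 2]]
print(round(cronbach_alpha(example), 2))
```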

3.3.4. State Empathy

To measure state empathy, we administered the State Empathy Scale [39]. The questionnaire consists of three subscales, each with four items, measuring three different dimensions of state empathy: affective empathy (e.g., “The character’s emotions are genuine”), cognitive empathy (e.g., “I can see the character’s point of view”), and associative empathy (e.g., “I can relate to what the character was going through in the message”). The scale has previously been used with different study samples, including college students [39]. The mean and Cronbach alpha are presented in Table 1.

3.3.5. Credibility toward the Humanoid Robot

The children were asked to fill out the seven questions adapted from Tsiourti et al. [17] to assess the perceived credibility of the robot. The questions asked how reliably the robot reacted to the emotions shown in the video clips (e.g., “The robot perceived the content of the movie clip correctly”; “The behavior expressed by the robot was appropriate for the content of the movie”), how easy it was to recognize the emotions NAO expressed (e.g., “It was easy to understand which emotion was expressed by the robot”), and the anthropomorphic features of the robot (e.g., “The robot has a personality”; “It was easy to understand what the robot was thinking about”).

3.4. Experimental Procedure

The experiment was conducted in two different settings (i.e., a primary school or a center for deaf children). The participants were assigned to conditions through simple alternating randomization (i.e., subject 1 was assigned to the congruent condition, subject 2 to the incongruent condition, and so on). The teacher in the class determined the order in which the students carried out the experiment. The children were blind to the condition they participated in. All parts of the experiment took place in a room with good lighting conditions. Each room contained one computer screen, a NAO robot, and two researchers. The deaf children were also accompanied by a sign language interpreter, who could communicate with the children on behalf of the researchers. Before the experiment, all children interacted with NAO for 20 min (see Figure 4) to minimize a possible novelty effect. The experiment was conducted without any voices or sounds, lasted approximately 30 min, and was divided into three sessions.
In the first session, the participants were asked to sit down in front of a computer screen together with NAO and to answer questions on demographic information. Then, they filled out the EmQue-CA [36].
In the second session, the children watched the three video clips in random order together with NAO (Figure 5). Each clip lasted approximately 1 min and showed one of the emotions happiness, sadness, or anger. The clip that displayed happiness showed the final scene from the movie Pinocchio, where the puppet transforms into a real child and the cartoon characters express happiness and celebrate together. The clip that displayed sadness showed a girl (from the movie Lilo & Stitch) crying while holding her teddy bear. The clip that displayed anger showed a scene from the movie The Incredibles, where members of a family scream and yell at each other.
After each clip, NAO responded to the movie with a congruent or incongruent emotion, depending on the experimental condition (a sketch of this mapping is given below). After watching a video clip and NAO’s reaction, the children were asked to recognize and express the feelings displayed in the video and by the robot (see Figure 6).
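The selection of the robot’s reaction can be summarized as in the sketch below. The paper does not report which non-matching emotion was shown in the incongruent condition, so the random choice of a different emotion used here is an assumption for illustration only.

```python
import random

EMOTIONS = ["happiness", "sadness", "anger"]

def nao_reaction(video_emotion, condition):
    """Pick the emotion NAO displays after a clip.

    In the congruent condition NAO mirrors the clip's emotion; in the
    incongruent condition it shows a different one (the exact incongruent
    mapping used in the study is not reported, so a random non-matching
    emotion is assumed here).
    """
    if condition == "congruent":
        return video_emotion
    return random.choice([e for e in EMOTIONS if e != video_emotion])

# Example: a child in the incongruent group watches the "sadness" clip
print(nao_reaction("sadness", "incongruent"))
```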
In the third session, the children filled out the items adapted from Tsiourti et al. [17] on the perceived credibility of the robot and the State Empathy Scale [39]. When the experiment was finished, the children had a small debriefing session, where they could interact and play with the robot and ask questions about the experiment.

3.5. Statistical Analysis

All statistical analyses were performed in SPSS for Windows (version 29). Basic descriptive statistics, including means, standard deviations, percentages, frequency distributions, and Cronbach alpha for each scale, were calculated. The items assessing cognitive empathy from the State Empathy Scale were removed from further analyses due to their low Cronbach alpha value. The number of correct recognitions of NAO’s emotions and of the emotions displayed in the videos was summed. As the participants were asked to recognize three different emotions, scores ranged from 0 to 3, reflecting how many correct answers each child gave. Initially, a correlation analysis between the different variables was conducted. A linear regression was then conducted to predict emotion recognition in NAO. Differences between groups were examined with independent t-tests. A significance level (p-value) of less than 0.05 was applied for all tests.
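Although the analyses were run in SPSS, the same steps (descriptives, correlations, linear regression, and independent t-tests) can be reproduced with standard Python libraries, as sketched below. The file name and column names are hypothetical placeholders for the study variables, not the authors’ data.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical data file with one row per child
df = pd.read_csv("hri_emotion_study.csv")

# Descriptives and Pearson correlations between the main variables
cols = ["nao_correct", "video_correct", "empathy", "sex", "age"]
print(df[cols].describe())
print(df[cols].corr())

# Linear regression predicting correct recognitions of NAO's emotions (0-3)
predictors = ["video_correct", "congruence", "hearing_level",
              "empathy", "credibility", "age", "sex"]
X = sm.add_constant(df[predictors])
print(sm.OLS(df["nao_correct"], X).fit().summary())

# Independent-samples t-test, e.g., perceived credibility by condition
congruent = df.loc[df["congruence"] == 1, "credibility"]
incongruent = df.loc[df["congruence"] == 2, "credibility"]
print(stats.ttest_ind(congruent, incongruent, equal_var=False))  # Welch's t-test
```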

4. Results

4.1. Descriptives

All means, standard deviations, and correlations between the variables are presented in Table 2. Emotion recognition in NAO and emotion recognition in the videos were correlated (r = 0.63). Further, sex and emotion recognition in NAO were correlated (r = 0.54).

4.2. Credibility toward NAO

Overall, the children experienced NAO as credible (M = 3.84; SD = 0.83). The level of congruence predicted the perceived credibility of the robot in a linear regression analysis (β = −0.451, p = 0.01), indicating that the children who saw NAO react to the emotions in a congruent manner perceived the robot as more credible. The children in the congruent condition found it easier to imagine what the robot was thinking (M = 4.27; SD = 1.22) compared to the incongruent group (M = 3.0; SD = 1.41; t(25) = 2.496, p = 0.02). The children in the congruent condition also found NAO to respond more appropriately to the emotions in the videos compared to the incongruent group (Mcongruent = 4.73, SDcongruent = 0.46; Mincongruent = 2.75, SDincongruent = 1.71; t(12.26) = 3.903, p = 0.002; see Table 3 and Table 4 for the frequency distribution of responses to each item in the congruent and incongruent groups, respectively).

4.3. Recognition of NAO’s Emotions

A large proportion (81.5%) of the children reported that it was easy to guess NAO’s emotions (M = 4.19; SD = 1.15). The children in the congruent group rated it easier to guess NAO’s emotions (M = 4.60, SD = 0.63); however, the difference from the incongruent group was not significant (M = 3.67, SD = 1.44; p > 0.05). The children were able to recognize, on average, just under two of the emotions displayed by NAO (M = 1.96, SD = 1.09). Happiness and sadness were the most frequently correctly chosen emotions (n = 18, 66.7% for both), followed by anger (n = 17, 63.0%).
In the regression analysis conducted to predict correct emotion recognition in NAO, only the ability to recognize emotions in the videos (β = 0.720, p < 0.001) and sex (β = 0.421, p = 0.01) were found to be significant predictors (see Table 5 for a summary of the regression results), indicating that girls were more likely to guess NAO’s emotions correctly. Level of empathy and hearing level did not significantly predict the recognition of NAO’s emotions expressed without sound.

5. Discussion and Conclusions

In this study, an experiment investigating the interaction between hearing and deaf children and a humanoid robot was conducted. In an experimental situation, the children witnessed the humanoid robot NAO responding with congruent or incongruent emotions to video clips, without using any sounds; that is, all emotions were expressed through non-verbal communication (i.e., gestures and head movements). The children’s ability to recognize the emotions expressed by NAO was investigated.
Overall, the children in this study reported medium to high credibility toward the robot. In line with findings from Tsiourti et al. [17], seeing NAO respond with congruent emotions to the video predicted higher perceived credibility of the robot. Further, the children who observed NAO’s congruent emotions found the reactions to have been more appropriate and easier to recognize compared to the group that saw NAO’s incongruent emotions. The findings indicate that it is important for children that the robot responds with emotions that suit a given situation for them to experience higher credibility toward it.
There was a varying ability to recognize emotions in NAO among the children in both groups. The emotions happiness and sadness were the most frequently correctly guessed emotions, which is in line with previous findings on emotion recognition in humans [40]. Even though several of the children reported that it was easy to recognize the emotions NAO exhibited, many also struggled to identify the right emotion expressed by NAO. Although the programming of emotions in NAO was inspired by previous studies [34,35], it is possible that the exhibited emotions were too indistinct for the children to recognize them correctly.
The ability to correctly recognize the emotions displayed in the video clips predicted the ability to recognize the emotions NAO displayed, regardless of the congruence level of the emotion. Previous findings suggest that when NAO exhibits emotions congruent with video clips, the recognition of emotions in the robot is easier [17]. Contrary to these previous findings, the children in this study who observed NAO respond with congruent emotions were not more likely to recognize the correct emotion compared to the group that observed NAO respond with incongruent emotions. Several factors can explain these contradictory results. First, factors such as general self-concept and age have been found to influence emotion recognition in humans [40]. This study’s young sample may therefore be one explanation for why the previous results were not replicated in this experiment. Further, it is also important to mention that the small sample size may have been an influencing factor as to why previous findings were not replicated in this study.
There was no difference between hearing and deaf children’s ability to recognize emotions in NAO. Even though no difference was found in this study, previous findings have suggested that deaf children have a poorer ability to recognize emotions in others compared to hearing children [17,40]. Wiefferink et al. [41] showed that children with cochlear implants recognized fewer emotions than children with normal hearing and that emotion recognition skills are affected by hearing loss, including when using non-verbal communication to express emotions. Finally, even though empathy has been shown to influence experiences in human–robot interactions [27], the results of this study showed, surprisingly, that empathy did not lead to a better ability to recognize emotions in the robot. No difference in empathy was found between the conditions or groups, in contrast to previous findings suggesting that deaf children have lower levels of empathy than hearing children [20,21,22].
This study aims to contribute to future research in the field of human–robot interactions between deaf children and robots, as well as to a broader understanding of how children recognize emotions exhibited by humanoid robots.
However, several limitations need to be considered when interpreting the presented results. First, the sample size is relatively small, which affects how generalizable the findings are to a broader population. In addition, the study focused on only three emotions: happiness, sadness, and anger. This potentially limits the exploration of children’s ability to recognize a wider range of emotions in the robot. In future research, all primary emotions should be included to gain a broader understanding of children’s emotion recognition in robots. Furthermore, the study mainly involved a sample with similar cultural backgrounds, potentially limiting the generalizability of the results to a more diverse population. Equally, to avoid an overly indistinct display of emotions by the robot through non-verbal communication, future research should investigate how to program robots to express the intended emotions more clearly. In summary, this study examined deaf and hearing children’s interactions with a humanoid robot. The children reported, overall, a medium-to-high level of credibility toward the robot.
Further, both hearing and deaf children observed a robot watching video clips that displayed the emotions happiness, sadness, and anger. The robot displayed either a congruent or incongruent emotion in response to the video clips, and the children’s ability to recognize the emotions in the robot was examined. No differences were found between the congruent and incongruent conditions. Notably, the ability to recognize emotions in a video and sex were identified as significant predictors of identifying emotions in the robot. Future research should include larger samples, incorporate diverse cultural perspectives, and encompass a broader range of emotions to enhance the generalizability and depth of understanding in this field.

Author Contributions

Conceptualization, C.C. and D.C.; methodology, C.C. and H.H.; validation, C.C., H.H. and D.C.; investigation, C.C. and H.H.; data curation, H.H.; writing—original draft preparation, C.C.; writing—review and editing, C.C., H.H. and D.C.; supervision, D.C.; project administration, D.C.; funding acquisition, D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been fully supported by the “START to Aid” (209564) project (PIACERI—PIAno di inCEntivi per la RIcerca di Ateneo) of the Department of Humanities, University of Catania (Italy).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Oasi Research Institute and Sheffield Hallam University (protocol code 2017l0lII7ICE-IRCCS-OASV4) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all parents of the participating children involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to them containing information that could compromise research participant privacy.

Acknowledgments

The authors gratefully thank all participants of this study for the generous contribution of their time. Additionally, thanks to “I.C.S. Elio Vittorini” San Pietro Clarenza, CT; “C.D. Teresa di Calcutta, Tremestieri Etneo, CT”; and “AFAE—association of families of the hearing impaired in Etna, CT” for the collaboration, and the psychologist Olga Distefano for the support during the recruitment process.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sheridan, T.B. Human–Robot Interaction: Status and Challenges. Hum. Factors 2016, 58, 525–532. [Google Scholar] [CrossRef] [PubMed]
  2. Goodrich, M.A.; Schultz, A.C. Human-Robot Interaction: A Survey. Found. Trends Hum. Comput. Interact. 2007, 1, 203–275. [Google Scholar] [CrossRef]
  3. Bonarini, A. Communication in Human-Robot Interaction. Curr. Robot. Rep. 2020, 1, 279–285. [Google Scholar] [CrossRef] [PubMed]
  4. Sati, V.; Sánchez, S.M.; Shoeibi, N.; Arora, A.; Corchado, J.M. Face Detection and Recognition, Face Emotion Recognition Through NVIDIA Jetson Nano. Int. Symp. Ambient Intell. 2020, 1239, 177–185. [Google Scholar] [CrossRef]
  5. Jaiswal, S.; Nandi, G.C. Robust real-time emotion detection system using CNN architecture. Neural Comput. Appl. 2020, 32, 11253–11262. [Google Scholar] [CrossRef]
  6. Vesić, A.; Mićović, A.; Ignjatović, V.; Lakićević, S.; Čolović, M.; Zivkovic, M.; Marjanovic, M. Hidden Sadness Detection: Differences between Men and Women. In Proceedings of the 2021 Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad, Serbia, 26–27 May 2021; pp. 237–241. [Google Scholar] [CrossRef]
  7. Bartneck, C.; Forlizzi, J. A design-centered framework for social human-robot interaction. In Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2004), Kurashiki, Japan, 22 September 2004; pp. 591–594. [Google Scholar] [CrossRef]
  8. Sarrica, M.; Brondi, S.; Fortunati, L. How many facets does a “social robot” have? A review of scientific and popular definitions online. Inf. Technol. People 2020, 33, 1–21. [Google Scholar] [CrossRef]
  9. Zinina, A.; Zaidelman, L.; Arinkin, N.; Kotov, A. Non-verbal behavior of the robot companion: A contribution to the likeability. Procedia Comput. Sci. 2020, 169, 800–806. [Google Scholar] [CrossRef]
  10. Hall, J.; Tritton, T.; Rowe, A.; Pipe, A.; Melhuish, C.; Leonards, U. Perception of own and robot engagement in human robot interactions and their dependence on robotics knowledge. Robot. Auton. Syst. 2014, 62, 392–399. [Google Scholar] [CrossRef]
  11. Breazeal, C. Emotion and sociable humanoid robots. Int. J. Hum. Comput. Stud. 2003, 59, 119–155. [Google Scholar] [CrossRef]
  12. Berghe, R.v.D.; Verhagen, J.; Oudgenoeg-Paz, O.; van der Ven, S.; Leseman, P. Social Robots for Language Learning: A Review. Rev. Educ. Res. 2018, 89, 259–295. [Google Scholar] [CrossRef]
  13. Randall, N. A survey of robot-assisted language learning (RALL). ACM Trans. Hum. Robot Interact. 2019, 9, 36. [Google Scholar] [CrossRef]
  14. Konijn, E.A.; Jansen, B.; Bustos, V.M.; Hobbelink, V.L.N.F.; Vanegas, D.P. Social Robots for (Second) Language Learning in (Migrant) Primary School Children. Int. J. Soc. Robot. 2021, 14, 827–843. [Google Scholar] [CrossRef]
  15. Belpaeme, T.; Kennedy, J.; Ramachandran, A.; Scassellati, B.; Tanaka, F. Social robots for education: A review. Sci. Robot. 2018, 3, eaat5954. [Google Scholar] [CrossRef] [PubMed]
  16. Conti, D.; Cirasa, C.; Di Nuovo, S.; Di Nuovo, A. ‘Robot, tell me a tale!’: A Social Robot as tool for Teachers in Kindergarten. Interact. Stud. 2020, 21, 220–242. [Google Scholar] [CrossRef]
  17. Tsiourti, C.; Weiss, A.; Wac, K.; Vincze, M. Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots. Int. J. Soc. Robot. 2019, 11, 555–573. [Google Scholar] [CrossRef]
  18. Signoret, C.; Rudner, M. Hearing impairment and perceived clarity of predictable speech. Ear Hear. 2019, 40, 1140–1148. [Google Scholar] [CrossRef]
  19. Garcia, R.; Turk, J. The applicability of Webster-Stratton parenting programmes to deaf children with emotional and behavioral problems, and autism, and their families: Annotation and case report of a child with autistic spectrum disorder. Clin. Child Psychol. Psychiatry 2007, 12, 125–136. [Google Scholar] [CrossRef] [PubMed]
  20. Ashori, M. Impact of Auditory-Verbal Therapy on executive functions in children with Cochlear Implants. J. Otol. 2022, 17, 130–135. [Google Scholar] [CrossRef]
  21. Terlektsi, E.; Kreppner, J.; Mahon, M.; Worsfold, S.; Kennedy, C.R. Peer relationship experiences of deaf and hard-of-hearing adolescents. J. Deaf. Stud. Deaf. Educ. 2020, 25, 153–166. [Google Scholar] [CrossRef]
  22. Rieffe, C. Awareness, and regulation of emotions in deaf children. Br. J. Dev. Psychol. 2012, 30, 477–492. [Google Scholar] [CrossRef]
  23. Netten, A.P.; Rieffe, C.; Theunissen, S.C.P.M.; Soede, W.; Dirks, E.; Briaire, J.J.; Frijns, J.H.M. Low empathy in deaf and hard of hearing (pre) adolescents compared to normal hearing controls. PLoS ONE 2015, 10, e0124102. [Google Scholar] [CrossRef] [PubMed]
  24. Batson, C.D. These Things Called Empathy: Eight Related but Distinct Phenomena; The MIT Press: Cambridge, MA, USA, 2009. [Google Scholar] [CrossRef]
  25. Batson, C.D.; Early, S.; Salvarani, G. Perspective taking: Imagining how another feels versus imaging how you would feel. Personal. Soc. Psychol. Bull. 1997, 23, 751–758. [Google Scholar] [CrossRef]
  26. O’Connell, G.; Christakou, A.; Haffey, A.T.; Chakrabarti, B. The role of empathy in choosing rewards from another’s perspective. Front. Hum. Neurosci. 2013, 7, 174. [Google Scholar] [CrossRef] [PubMed]
  27. Ramachandra, V.; Longacre, H. Unmasking the psychology of recognizing emotions of people wearing masks: The role of empathizing, systemizing, and autistic traits. Personal. Individ. Differ. 2022, 185, 111249. [Google Scholar] [CrossRef] [PubMed]
  28. Charrier, L.; Galdeano, A.; Cordier, A.; Lefort, M. Empathy display influence on human-robot interactions: A pilot study. In Proceedings of the Workshop on Towards Intelligent Social Robots: From Naive Robots to Robot Sapiens at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; p. 7. [Google Scholar]
  29. Beck, A.; Cañamero, L.; Hiolle, A.; Damiano, L.; Cosi, P.; Tesser, F.; Sommavilla, G. Interpretation of emotional body language displayed by a humanoid robot: A case study with children. Int. J. Soc. Robot. 2013, 5, 325–334. [Google Scholar] [CrossRef]
  30. Gasteiger, N.; Lim, J.; Hellou, M.; MacDonald, B.A.; Ahn, H.S. A Scoping Review of the Literature on Prosodic Elements Related to Emotional Speech in Human-Robot Interaction. Int. J. Soc. Robot. 2022, 1–12. [Google Scholar] [CrossRef]
  31. Di Nuovo, A.; Varrasi, S.; Lucas, A.; Conti, D.; McNamara, J.; Soranzo, A. Assessment of cognitive skills via human-robot interaction and cloud computing. J. Bionic Eng. 2019, 16, 526–539. [Google Scholar] [CrossRef]
  32. Conti, D.; Di Nuovo, A.; Cirasa, C.; Di Nuovo, S. A Comparison of kindergarten Storytelling by Human and Humanoid Robot with Different Social Behavior. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 97–98. [Google Scholar] [CrossRef]
  33. Robaczewski, A.; Bouchard, J.; Bouchard, K.; Gaboury, S. Socially Assistive Robots: The Specific Case of the NAO. Int. J. Soc. Robot. 2021, 13, 795–831. [Google Scholar] [CrossRef]
  34. Beck, A.; Cañamero, L.; Bard, K. Towards an Affect Space for robots to display emotional body language. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 464–469. [Google Scholar] [CrossRef]
  35. Mizumaru, K.; Sakamoto, D.; Ono, T. Perception of Emotional Relationships by Observing Body Expressions between Multiple Robots. In Proceedings of the 10th International Conference on Human-Agent Interaction (HAI ’22), Association for Computing Machinery, New York, NY, USA, 5 December 2022; pp. 203–211. [Google Scholar] [CrossRef]
  36. Overgaauw, S.; Rieffe, C.; Broekhof, E.; Crone, E.A.; Güroğlu, B. Assessing empathy across childhood and adolescence: Validation of the empathy questionnaire for children and adolescents (emque-CA). Front. Psychol. 2017, 8, 870. [Google Scholar] [CrossRef]
  37. Lazdauskas, T.; Nasvytienė, D. Psychometric properties of Lithuanian versions of empathy questionnaires for children. Eur. J. Dev. Psychol. 2021, 18, 144–159. [Google Scholar] [CrossRef]
  38. Liang, Z.; Mazzeschi, C.; Delvecchio, E. Empathy questionnaire for children and adolescents: Italian validation. Eur. J. Dev. Psychol. 2023, 20, 567–579. [Google Scholar] [CrossRef]
  39. Shen, L. On a scale of state empathy during message processing. West. J. Commun. 2010, 74, 504–524. [Google Scholar] [CrossRef]
  40. Cordeiro, T.; Botelho, J.; Mendonça, C. Relationship Between the Self-Concept of Children and Their Ability to Recognize Emotions in Others. Front. Psychol. 2021, 12, 672919. [Google Scholar] [CrossRef]
  41. Wiefferink, C.H.; Rieffe, C.; Ketelaar, L.; De Raeve, L.; Frijns, J.H. Emotion Understanding in Deaf Children with a Cochlear Implant. J. Deaf. Stud. Deaf. Educ. 2013, 18, 175–186. [Google Scholar] [CrossRef]
Figure 1. Choregraphe program executing the emotional reaction “happiness”.
Figure 2. Choregraphe program executing the emotional reaction “sadness”.
Figure 3. Choregraphe program executing the emotional reaction “anger”.
Figure 4. Example of first meeting with NAO.
Figure 5. (a,b) The children watching video clips with NAO. The experimental setting also involved a screen for watching the video clips and a low table that allowed the child to sit near NAO.
Figure 6. Example of the emotions expressed by NAO (happiness, sadness, and anger).
Table 1. Overview of constructs and Cronbach alpha.
Construct | No. of Items | M (SD) | Cronbach Alpha
Empathy | 18 | 2.70 (0.54) | 0.84
State Empathy (total) | 12 | 3.62 (0.89) | 0.86
State Empathy (affective) | 4 | 3.59 (1.09) | 0.79
State Empathy (cognitive) | 4 | 3.99 (0.72) | 0.43
State Empathy (associative) | 4 | 3.27 (1.30) | 0.82
Table 2. Descriptive statistics and correlations.
Variable | M (SD) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
1 Emotion recognition (NAO) | 1.96 (1.09) | --
2 Emotion recognition (video) | 2.00 (0.78) | 0.639 ** | --
3 Level of congruence 1 | 1.44 (0.51) | −0.039 | −0.097 | --
4 Hearing level 2 | 1.33 (0.48) | −0.049 | −0.102 | 0.000 | --
5 Empathy | 2.70 (0.54) | −0.019 | −0.181 | −0.203 | −0.049 | --
6 Affective empathy | 3.59 (1.09) | 0.148 | 0.325 | 0.044 | 0.104 | −0.033 | --
7 Associative empathy | 3.27 (1.30) | 0.102 | 0.265 | 0.148 | −0.072 | 0.131 | 0.815 ** | --
8 State empathy (total) | 3.62 (0.89) | 0.174 | 0.346 | 0.037 | −0.043 | 0.082 | 0.935 ** | 0.915 ** | --
9 Believability | 3.84 (0.84) | 0.168 | 0.151 | −0.451 * | −0.165 | 0.293 | 0.222 | 0.428 * | 0.389 * | --
10 Age | 7.59 (1.50) | −0.033 | 0.196 | 0.045 | 0.569 ** | 0.013 | 0.030 | 0.009 | 0.037 | −0.102 | --
11 Sex 3 | 1.59 (0.50) | 0.534 ** | 0.196 | −0.017 | −0.053 | 0.105 | 0.177 | −0.003 | 0.133 | −0.056 | −0.178 | --
Note. * p < 0.05; ** p < 0.01. 1 Level of congruence: 1 = congruent, 2 = incongruent. 2 Hearing level: 0 = hearing children, 1 = deaf children. 3 Sex: 1 = boy, 2 = girl.
Table 3. Answers from the children in the congruent condition (N = 15) on the questionnaire on perceived credibility of NAO, adapted by Tsiourti [17].
Item | Min | Max | M (SD) | To a Minor Extent | To Some Extent | To a Great Extent
The robot perceived the content of the movie clip correctly | 2 | 5 | 4.53 (0.84) | 6.7% | 0% | 93.3%
It was easy to understand which emotion was expressed by the robot | 3 | 5 | 4.60 (0.63) | 0% | 6.7% | 93.3%
It was easy to understand what the robot was thinking about | 1 | 5 | 4.27 (1.22) | 13.3% | 0% | 86.7%
The robot has a personality | 1 | 5 | 3.80 (1.52) | 20.0% | 0% | 80.0%
The robot’s behavior drew my attention | 1 | 5 | 4.00 (1.46) | 26.7% | 0% | 73.3%
The robot’s behavior was predictable | 1 | 5 | 3.27 (1.75) | 40.0% | 6.7% | 53.3%
The behavior expressed by the robot was appropriate for the content of the movie | 4 | 5 | 4.73 (0.45) | 0% | 0% | 100%
Table 4. Answers from the children in the incongruent condition (N = 12) on the questionnaire on perceived credibility of NAO, adapted by Tsiourti [17].
Item | Min | Max | M (SD) | To a Minor Extent | To Some Extent | To a Great Extent
The robot perceived the content of the movie clip correctly | 1 | 5 | 3.83 (1.40) | 25.0% | 0% | 75.0%
It was easy to understand which emotion was expressed by the robot | 1 | 5 | 3.67 (1.44) | 16.7% | 16.7% | 66.6%
It was easy to understand what the robot was thinking about | 1 | 5 | 3.00 (1.41) | 41.6% | 16.7% | 41.7%
The robot has a personality | 1 | 5 | 3.00 (1.86) | 41.7% | 8.3% | 50.0%
The robot’s behavior drew my attention | 1 | 5 | 4.42 (1.65) | 8.3% | 0% | 91.7%
The robot’s behavior was predictable | 1 | 5 | 3.33 (1.49) | 33.4% | 8.3% | 58.3%
The behavior expressed by the robot was appropriate for the content of the movie | 1 | 5 | 2.75 (1.71) | 58.3% | 0% | 41.7%
Table 5. Summary of regression results for predicting emotion recognition in NAO. All statistically significant values are emboldened.
Predictor | β | t | p
Emotion recognition (video) | 0.720 | 4.341 | <0.001
Congruence level | 0.134 | 0.742 | 0.469
Hearing level | 0.166 | 0.794 | 0.439
Empathy | 0.341 | 1.817 | 0.088
State empathy (affective) | 0.107 | 0.182 | 0.858
State empathy (associative) | −0.073 | −0.179 | 0.860
State empathy (total) | −0.400 | −0.511 | 0.616
Credibility toward the robot | 0.221 | 1.052 | 0.308
Age | −0.118 | −0.579 | 0.571
Sex | 0.421 | 2.757 | 0.014
Note: r2 = 0.838.