
Search Results (30)

Search Parameters:
Keywords = humanoid robot NAO

22 pages, 1669 KB  
Article
Empowering Education with Intelligent Systems: Exploring Large Language Models and the NAO Robot for Information Retrieval
by Nikos Fragakis, Georgios Trichopoulos and George Caridakis
Electronics 2025, 14(6), 1210; https://doi.org/10.3390/electronics14061210 - 19 Mar 2025
Cited by 2 | Viewed by 1016
Abstract
To unlock more aspects of human cognitive structuring, human–AI and human–robot interactions require increasingly advanced communication skills on both the human and robot sides. This paper compares three methods of retrieving cultural heritage information in primary school education: search engines, large language models (LLMs), and the NAO humanoid robot, which serves as a facilitator with programmed answering capabilities for convergent questions. Human–robot interaction has become a critical aspect of modern education, with robots like the NAO providing new opportunities for engaging and personalized learning experiences. The NAO, with its anthropomorphic design and ability to interact with students, presents a unique approach to fostering deeper connections with educational content, particularly in the context of cultural heritage. The paper includes an introduction, extensive literature review, methodology, research results from student questionnaires, and conclusions. The findings highlight the potential of intelligent and embodied technologies for enhancing knowledge retrieval and engagement, demonstrating the NAO’s ability to adapt to student needs and facilitate more dynamic learning interactions.

24 pages, 259 KB  
Article
How Do Older Adults Perceive Technology and Robots? A Participatory Study in a Care Center in Poland
by Paulina Zguda, Zuzanna Radosz-Knawa, Tymon Kukier, Mikołaj Radosz, Alicja Kamińska and Bipin Indurkhya
Electronics 2025, 14(6), 1106; https://doi.org/10.3390/electronics14061106 - 11 Mar 2025
Cited by 2 | Viewed by 1810
Abstract
One of the key areas of application for social robots is healthcare, particularly for the elderly. To better address user needs, a study involving the humanoid robot NAO was conducted at the Municipal Care Center in Krakow, Poland, with the participation of 29 older adults. This participatory design study explored their attitudes toward robots and technology both before and after interacting with the robot. It also identified the most desirable applications of social robots that could simplify everyday life for the elderly.
22 pages, 3579 KB  
Article
Gait-to-Gait Emotional Human–Robot Interaction Utilizing Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer
by Chenghao Li, Kah Phooi Seng and Li-Minn Ang
Sensors 2025, 25(3), 734; https://doi.org/10.3390/s25030734 - 25 Jan 2025
Cited by 1 | Viewed by 1163
Abstract
The emotional response of robotics is crucial for promoting the socially intelligent level of human–robot interaction (HRI). The development of machine learning has extensively stimulated research on emotional recognition for robots. Our research focuses on emotional gaits, a type of simple modality that stores a series of joint coordinates and is easy for humanoid robots to execute. However, a limited amount of research investigates emotional HRI systems based on gaits, indicating an existing gap in human emotion gait recognition and robotic emotional gait response. To address this challenge, we propose a Gait-to-Gait Emotional HRI system, emphasizing the development of an innovative emotion classification model. In our system, the humanoid robot NAO can recognize emotions from human gaits through our Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer (TS-ST) and respond with pre-set emotional gaits that reflect the same emotion as the human presented. Our TS-ST outperforms the current state-of-the-art human-gait emotion recognition model applied to robots on the Emotion-Gait dataset.

35 pages, 5660 KB  
Article
“Warning!” Benefits and Pitfalls of Anthropomorphising Autonomous Vehicle Informational Assistants in the Case of an Accident
by Christopher D. Wallbridge, Qiyuan Zhang, Victoria Marcinkiewicz, Louise Bowen, Theodor Kozlowski, Dylan M. Jones and Phillip L. Morgan
Multimodal Technol. Interact. 2024, 8(12), 110; https://doi.org/10.3390/mti8120110 - 5 Dec 2024
Viewed by 1753
Abstract
Despite the increasing sophistication of autonomous vehicles (AVs) and promises of increased safety, accidents will occur. These will corrode public trust and negatively impact user acceptance, adoption and continued use. It is imperative to explore methods that can potentially reduce this impact. The aim of the current paper is to investigate the efficacy of informational assistants (IAs) varying by anthropomorphism (humanoid robot vs. no robot) and dialogue style (conversational vs. informational) on trust in and blame on a highly autonomous vehicle in the event of an accident. The accident scenario involved a pedestrian violating the Highway Code by stepping out in front of a parked bus and the AV not being able to stop in time during an overtake manoeuvre. The humanoid (Nao) robot IA did not improve trust (across three measures) or reduce blame on the AV in Experiment 1, although communicated intentions and actions were perceived by some as being assertive and risky. Reducing assertiveness in Experiment 2 resulted in higher trust (on one measure) in the robot condition, especially with the conversational dialogue style. However, there were again no effects on blame. In Experiment 3, participants had multiple experiences of the AV negotiating parked buses without negative outcomes. Trust significantly increased across each event, although it plummeted following the accident with no differences due to anthropomorphism or dialogue style. The perceived capabilities of the AV and IA before the critical accident event may have had a counterintuitive effect. Overall, evidence was found for a few benefits and many pitfalls of anthropomorphising an AV with a humanoid robot IA in the event of an accident situation.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving-2nd Edition)

35 pages, 13690 KB  
Article
An Audio-Based SLAM for Indoor Environments: A Robotic Mixed Reality Presentation
by Elfituri S. F. Lahemer and Ahmad Rad
Sensors 2024, 24(9), 2796; https://doi.org/10.3390/s24092796 - 27 Apr 2024
Cited by 2 | Viewed by 3107
Abstract
In this paper, we present a novel approach referred to as the audio-based virtual landmark-based HoloSLAM. This innovative method leverages a single sound source and microphone arrays to estimate the voice-printed speaker’s direction. The system allows an autonomous robot equipped with a single microphone array to navigate within indoor environments, interact with specific sound sources, and simultaneously determine its own location while mapping the environment. The proposed method does not require multiple audio sources in the environment nor sensor fusion to extract pertinent information and make accurate sound source estimations. Furthermore, the approach incorporates Robotic Mixed Reality using Microsoft HoloLens to superimpose landmarks, effectively mitigating the audio landmark-related issues of conventional audio-based landmark SLAM, particularly in situations where audio landmarks cannot be discerned, are limited in number, or are completely missing. The paper also evaluates an active speaker detection method, demonstrating its ability to achieve high accuracy in scenarios where audio data are the sole input. Real-time experiments validate the effectiveness of this method, emphasizing its precision and comprehensive mapping capabilities. The results of these experiments showcase the accuracy and efficiency of the proposed system, surpassing the constraints associated with traditional audio-based SLAM techniques, ultimately leading to a more detailed and precise mapping of the robot’s surroundings.
(This article belongs to the Section Navigation and Positioning)

14 pages, 6079 KB  
Article
“I See What You Feel”: An Exploratory Study to Investigate the Understanding of Robot Emotions in Deaf Children
by Carla Cirasa, Helene Høgsdal and Daniela Conti
Appl. Sci. 2024, 14(4), 1446; https://doi.org/10.3390/app14041446 - 9 Feb 2024
Cited by 5 | Viewed by 1953
Abstract
Research in the field of human–robot interactions (HRIs) has advanced significantly in recent years. Social humanoid robots have undergone extensive testing and have been implemented in a variety of settings, for example, in educational institutions, healthcare facilities, and senior care centers. Humanoid robots have also been assessed across different population groups. However, research on various groups of children is still scarce, especially deaf children. This feasibility study explores the ability of both hearing and deaf children to interact with and recognize emotions expressed by NAO, the humanoid robot, without relying on sounds or speech. Initially, the children watched three video clips portraying emotions of happiness, sadness, and anger. Depending on the experimental condition, the children observed the humanoid robot respond to the emotions in the video clips in a congruent or incongruent manner before they were asked to recall which emotion the robot exhibited. The influence of empathy on the ability to recognize emotions was also investigated. The results revealed that there was no difference in the ability to recognize emotions between the two conditions (i.e., congruent and incongruent). Indeed, NAO responding with congruent emotions to video clips did not help the children recognize the emotion in NAO. Specifically, the ability to predict emotions in the video clips and gender (female) were identified as significant predictors of identifying emotions in NAO. While no significant difference was identified between hearing and deaf children, this feasibility study aims to establish a foundation for future research on this important topic.
(This article belongs to the Special Issue Recent Advances in Human-Robot Interactions)

16 pages, 1395 KB  
Article
Redefining User Expectations: The Impact of Adjustable Social Autonomy in Human–Robot Interaction
by Filippo Cantucci, Rino Falcone and Marco Marini
Electronics 2024, 13(1), 127; https://doi.org/10.3390/electronics13010127 - 28 Dec 2023
Cited by 2 | Viewed by 2086
Abstract
To promote the acceptance of robots in society, it is crucial to design systems exhibiting adaptive behavior. This is particularly needed in various social domains (e.g., cultural heritage, healthcare, education). Despite significant advancements in adaptability within Human-Robot Interaction and Social Robotics, research in these fields has overlooked the essential task of analyzing the robot’s cognitive processes and their implications for intelligent interaction (e.g., adaptive behavior, personalization). This study investigates human users’ satisfaction when interacting with a robot whose decision-making process is guided by a computational cognitive model integrating the principles of adjustable social autonomy. We designed a within-subjects experimental study in the domain of Cultural Heritage, where users (e.g., museum visitors) interacted with the humanoid robot Nao. The robot’s task was to provide the user with a museum exhibition to visit. The robot adopted the delegated task by exerting some degree of discretion, which required different levels of autonomy in the task adoption, relying on its capability to have a theory of mind. The results indicated that as the robot’s level of autonomy in task adoption increased, user satisfaction with the robot decreased, whereas their satisfaction with the tour itself improved. Results highlight the potential of adjustable social autonomy as a paradigm for developing autonomous adaptive social robots that can improve user experiences in multiple real HRI domains.
(This article belongs to the Special Issue Human Computer Interaction in Intelligent System)

14 pages, 9716 KB  
Article
Can You Dance? A Study of Child–Robot Interaction and Emotional Response Using the NAO Robot
by Vid Podpečan
Multimodal Technol. Interact. 2023, 7(9), 85; https://doi.org/10.3390/mti7090085 - 30 Aug 2023
Cited by 13 | Viewed by 4578
Abstract
This retrospective study presents and summarizes our long-term efforts in the popularization of robotics, engineering, and artificial intelligence (STEM) using the NAO humanoid robot. By a conservative estimate, over a span of 8 years, we engaged at least a couple of thousand participants: approximately 70% were preschool children, 15% were elementary school students, and 15% were teenagers and adults. We describe several robot applications that were developed specifically for this task and assess their qualitative performance outside a controlled research setting, catering to various demographics, including those with special needs (ASD, ADHD). Five groups of applications are presented: (1) motor development activities and games, (2) children’s games, (3) theatrical performances, (4) artificial intelligence applications, and (5) data harvesting applications. Different cases of human–robot interactions are considered and evaluated according to our experience, and we discuss their weak points and potential improvements. We examine the response of the audience when confronted with a humanoid robot featuring intelligent behavior, such as conversational intelligence and emotion recognition. We consider the importance of the robot’s physical appearance, the emotional dynamics of human–robot engagement across age groups, the relevance of non-verbal cues, and analyze drawings crafted by preschool children both before and after their interaction with the NAO robot.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction - 2nd Edition)

19 pages, 6545 KB  
Article
Augmenting Mobile App with NAO Robot for Autism Education
by A. M. Mutawa, Hanan Mansour Al Mudhahkah, Aisha Al-Huwais, Norah Al-Khaldi, Rayuof Al-Otaibi and Amna Al-Ansari
Machines 2023, 11(8), 833; https://doi.org/10.3390/machines11080833 - 16 Aug 2023
Cited by 13 | Viewed by 3809
Abstract
This paper aims to investigate the possibility of combining humanoid robots, particularly the NAO robot, with a mobile application to enhance the educational experiences of children with autism spectrum disorder (ASD). The NAO robot, interfaced with a mobile app, serves as a socially assistive robotic (SAR) tool in the classroom. The study involved two groups of children aged three to six years old, exhibiting mild to moderate ASD symptoms. While the experimental group interacted with the NAO robot, the control group followed the standard curriculum. Initial findings showed that students in the experimental group exhibited higher levels of engagement and eye contact. However, certain limitations were identified, including the NAO robot’s limited capacity for concurrent interactions, language difficulties, battery life, and internet access. Despite these limitations, the study highlights the potential of robots and AI in addressing the particular educational requirements of children with ASD. Future research should focus on overcoming these obstacles to maximize the advantages of this technology in ASD education.
(This article belongs to the Special Issue Design and Applications of Service Robots)

28 pages, 8194 KB  
Article
Designing Behaviors of Robots Based on the Artificial Emotion Expression Method in Human–Robot Interactions
by Liming Li and Zeang Zhao
Machines 2023, 11(5), 533; https://doi.org/10.3390/machines11050533 - 6 May 2023
Cited by 6 | Viewed by 3875
Abstract
How to express emotions through the motion behaviors of robots (mainly robotic arms) to achieve human–robot emotion interactions is the focus of this paper. An artificial emotion expression method that accords with human emotion, can deal with external stimuli, and has the capability of emotion decision-making was proposed based on the motion behaviors of the robot. Firstly, a three-dimensional emotion space was established based on the motion indexes (deviation coefficient, acceleration, and interval time). Then, an artificial emotion model, divided into three parts (the detection and processing of external events, the generation and modification of emotion response vectors, and the discretization of emotions), was established in the three-dimensional emotion space. Next, emotion patterns (love, excited, happy, anxiety, hate) and emotion intensity were calculated based on the artificial emotion model in human–robot interaction experiments. Finally, the influence of the motion behaviors of the humanoid robot NAO on the emotion expression of experimenters was studied through human–robot emotion interaction experiments based on the emotion patterns and emotion intensity. The positive emotion patterns (love, excited, happy) and negative emotion patterns (anxiety, hate) of the experimenters were evaluated. The experimental results showed that personalized emotion responses could be generated autonomously for external stimuli, and the change process of human emotions could be simulated effectively according to the established artificial emotion model. Furthermore, the experimenters could recognize the emotion patterns expressed by the robot according to its motion behaviors, and familiarity with robots did not influence the recognition of different emotion patterns.
(This article belongs to the Topic Intelligent Systems and Robotics)

25 pages, 27207 KB  
Article
A Novel Multi-Modal Teleoperation of a Humanoid Assistive Robot with Real-Time Motion Mimic
by Julio C. Cerón, Md Samiul Haque Sunny, Brahim Brahmi, Luis M. Mendez, Raouf Fareh, Helal Uddin Ahmed and Mohammad H. Rahman
Micromachines 2023, 14(2), 461; https://doi.org/10.3390/mi14020461 - 16 Feb 2023
Cited by 7 | Viewed by 3886
Abstract
This research shows the development of a teleoperation system with an assistive robot (NAO) through a Kinect V2 sensor, a set of Meta Quest virtual reality glasses, and Nintendo Switch controllers (Joycons), with the use of the Robot Operating System (ROS) framework to implement the communication between devices. In this paper, two interchangeable operating models are proposed. An exclusive controller is used to control the robot’s movement to perform assignments that require long-distance travel. Another teleoperation protocol uses the skeleton joints information readings by the Kinect sensor, the orientation of the Meta Quest, and the button press and thumbstick movements of the Joycons to control the arm joints and head of the assistive robot, and its movement in a limited area. They give image feedback to the operator in the VR glasses in a first-person perspective and retrieve the user’s voice to be spoken by the assistive robot. Results are promising and can be used for educational and therapeutic purposes.
(This article belongs to the Special Issue Assistive Robots)
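As a rough illustration of the kind of skeleton-to-joint mapping such a Kinect-to-NAO teleoperation pipeline performs, the sketch below converts a tracked shoulder-to-elbow vector into a shoulder-pitch command. The axis convention, function names, and angle mapping are assumptions for illustration, not taken from the paper; only the ±2.0857 rad limit reflects NAO's documented ShoulderPitch range.

```python
import math

def shoulder_pitch_from_skeleton(shoulder, elbow):
    """Map a 3D shoulder->elbow vector to a shoulder-pitch angle (radians).

    Assumed convention: +x points forward, +z points up (hypothetical frame,
    not the Kinect SDK's). 0 rad = arm pointing forward; +pi/2 = arm straight
    down, loosely matching NAO's ShoulderPitch sense.
    """
    dx = elbow[0] - shoulder[0]
    dz = elbow[2] - shoulder[2]
    return math.atan2(-dz, dx)

def clamp_to_nao_range(angle, lo=-2.0857, hi=2.0857):
    """Clamp a command to NAO's ShoulderPitch joint limits before sending."""
    return max(lo, min(hi, angle))

# In a real system this value would be published on a ROS topic or passed to
# NAOqi's motion API; here we just compute it.
pitch = clamp_to_nao_range(
    shoulder_pitch_from_skeleton((0.0, 0.0, 1.4), (0.0, 0.0, 1.1)))
```

An arm hanging straight down (elbow directly below the shoulder) maps to roughly +pi/2, and clamping keeps out-of-range tracking glitches from reaching the joint controller.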

24 pages, 1466 KB  
Article
Collaborative Autonomy: Human–Robot Interaction to the Test of Intelligent Help
by Filippo Cantucci and Rino Falcone
Electronics 2022, 11(19), 3065; https://doi.org/10.3390/electronics11193065 - 26 Sep 2022
Cited by 9 | Viewed by 2741
Abstract
A big challenge in human–robot interaction (HRI) is the design of autonomous robots that collaborate effectively with humans, displaying behaviors similar to those exhibited by humans when they interact with each other. Indeed, robots are part of daily life in multiple environments (i.e., cultural heritage sites, hospitals, offices, touristic scenarios and so on). In these contexts, robots have to coexist and interact with a wide spectrum of users not necessarily able or willing to adapt their interaction level to the kind requested by a machine: the users need to deal with artificial systems whose behaviors must be adapted as much as possible to the goals/needs of the users themselves, or more in general, to their mental states (beliefs, goals, plans and so on). In this paper, we introduce a cognitive architecture for adaptive and transparent human–robot interaction. The architecture allows a social robot to dynamically adjust its level of collaborative autonomy by restricting or expanding a delegated task on the basis of several context factors such as the mental states attributed to the human users involved in the interaction. This collaboration has to be based on different cognitive capabilities of the robot, i.e., the ability to build a user’s profile, to have a Theory of Mind of the user in terms of mental states attribution, and to build a complex model of the context, intended both as a set of physical constraints and constraints due to the presence of other agents, with their own mental states. Based on the defined cognitive architecture and on the model of task delegation theorized by Castelfranchi and Falcone, the robot’s behavior is explainable by considering its ability to attribute specific mental states to the user, the context in which it operates, and its attitudes in adapting the level of autonomy to the user’s mental states and the context itself.
The architecture has been implemented by exploiting the well-known agent-oriented programming framework Jason. We provide the results of an HRI pilot study in which we recruited 26 real participants who interacted with the humanoid robot Nao, widely used in HRI scenarios. The robot played the role of a museum assistant with the main goal of providing the user with the most suitable museum exhibition to visit.
(This article belongs to the Special Issue Human Factors in the Age of Artificial Intelligence (AI))

16 pages, 1630 KB  
Article
The Use of Social Robots in the Diagnosis of Autism in Preschool Children
by Krzysztof Arent, David J. Brown, Joanna Kruk-Lasocka, Tomasz Lukasz Niemiec, Aleksandra Helena Pasieczna, Penny J. Standen and Remigiusz Szczepanowski
Appl. Sci. 2022, 12(17), 8399; https://doi.org/10.3390/app12178399 - 23 Aug 2022
Cited by 13 | Viewed by 3866
Abstract
The present study contributes to the research problem of applying social robots in autism diagnosis. There is a common belief that existing diagnostic methods for autistic spectrum disorder are not effective. Advances in Human–Robot Interactions (HRI) provide potential new diagnostic methods based on interactive robots. We investigated deficits in turn-taking in preschool children by observing their interactions with the NAO robot during two games (Dance with me vs. Touch me). We compared children’s interaction profiles with the robot (five autistic vs. five typically developing young children). Then, to investigate turn-taking deficits, we adopted a rating procedure to indicate differences between both groups of children based on an observational scale. A statistical analysis based on ratings of the children’s interactions with the NAO robot indicated that autistic children presented a deficient level of turn-taking behaviors. Our study provides evidence for the potential of designing and implementing an interactive dyadic game between a child and a social robot that can be used to detect turn-taking deficits based on objective measures. We also discuss our results in the context of existing studies and propose guidelines for a robotic-enabled autism diagnosis system.
(This article belongs to the Special Issue Automation Control and Robotics in Human-Machine Cooperation)

16 pages, 936 KB  
Article
Autonomous Critical Help by a Robotic Assistant in the Field of Cultural Heritage: A New Challenge for Evolving Human-Robot Interaction
by Filippo Cantucci and Rino Falcone
Multimodal Technol. Interact. 2022, 6(8), 69; https://doi.org/10.3390/mti6080069 - 17 Aug 2022
Cited by 10 | Viewed by 2496
Abstract
Over the years, the purpose of cultural heritage (CH) sites (e.g., museums) has focused on providing personalized services to different users, with the main goal of adapting those services to the visitors’ personal traits, goals, and interests. In this work, we propose a computational cognitive model that provides an artificial agent (e.g., robot, virtual assistant) with the capability to personalize a museum visit to the goals and interests of the user that intends to visit the museum, while taking into account the goals and interests of the museum curators that have designed the exhibition. In particular, we introduce and analyze a special type of help (critical help) that leads to a substantial change in the user’s request, with the objective of taking into account needs that the user cannot assess or has not been able to assess. The computational model has been implemented by exploiting the multi-agent oriented programming (MAOP) framework JaCaMo, which integrates three different multi-agent programming levels. We provide the results of a pilot study that we conducted in order to test the potential of the computational model. The experiment was conducted with 26 real participants who interacted with the humanoid robot Nao, widely used in Human-Robot interaction (HRI) scenarios.
(This article belongs to the Special Issue Digital Cultural Heritage (Volume II))

14 pages, 1161 KB  
Article
Efficacy of a Robot-Assisted Intervention in Improving Learning Performance of Elementary School Children with Specific Learning Disorders
by Maria T. Papadopoulou, Elpida Karageorgiou, Petros Kechayas, Nikoleta Geronikola, Chris Lytridis, Christos Bazinas, Efi Kourampa, Eleftheria Avramidou, Vassilis G. Kaburlasos and Athanasios E. Evangeliou
Children 2022, 9(8), 1155; https://doi.org/10.3390/children9081155 - 31 Jul 2022
Cited by 9 | Viewed by 3457
Abstract
(1) Background: There has been significant recent interest in the potential role of social robots (SRs) in special education. Specific Learning Disorders (SpLDs) have a high prevalence in the student population, and early intervention with personalized special educational programs is crucial for optimal academic achievement. (2) Methods: We designed an intense special education intervention for children in the third and fourth years of elementary school with a diagnosis of a SpLD. Following confirmation of eligibility and informed consent, the participants were prospectively and randomly allocated to two groups: (a) the SR group, for which the intervention was delivered by the humanoid robot NAO with the assistance of a special education teacher and (b) the control group, for which the intervention was delivered by the special educator. All participants underwent pre- and post-intervention evaluation for outcome measures. (3) Results: 40 children (NAO = 19, control = 21, similar baseline characteristics) were included. Pre- and post-intervention evaluation showed comparable improvements in both groups in cognition skills (decoding, phonological awareness and reading comprehension), while between-group changes favored the NAO group only for some phonological awareness exercises. In total, no significant changes were found in any of the groups regarding the emotional/behavioral secondary outcomes. (4) Conclusion: NAO was efficient as a tutor for a human-supported intervention when compared to the gold-standard intervention for elementary school students with SpLDs.
