Exploring the Potential of Social Robots for Speech and Language Therapy: A Review and Analysis of Interactive Scenarios
Abstract
1. Introduction
2. Methodology
- Sample—Children and adolescents with disabilities: neurodevelopmental disorders, language disorders, hearing impairments, cerebral palsy, learning disabilities, fluency disorder/stuttering, autism spectrum disorders, intellectual disability.
- Intervention—The study reports speech and language therapy or rehabilitation targeting social skills, language, and communication disorders.
3. Related Works on Social Robots for the Therapy of Communication Disorders
3.1. Social Robots as ATs in the Rehabilitation of Communication Disorders
3.2. Technical, Methodological, and Ethical Limitations and Challenges of Using SARs in Speech and Language Therapy
- Limited adaptability and personalization: Most SARs are pre-programmed with a fixed set of responses and behaviors, which may not be tailored to the individual needs and preferences of each patient.
- Limited physical capabilities: SARs may have limited physical capabilities, such as the ability to manipulate objects or to move around in the environment, which may limit their effectiveness in certain therapy contexts.
- Limited speech recognition and natural language processing capabilities: SARs may have difficulty accurately recognizing and understanding speech, especially in noisy environments or when dealing with non-standard dialects or accents, or in cases of speech and/or language disorders.
- Limited emotional and social intelligence: Although SARs are designed to interact with humans, they may lack the emotional and social intelligence needed to provide appropriate responses to patients who are experiencing strong emotions or who have complex social communication needs.
- Technical failures and maintenance issues: Like any technology, SARs may experience technical failures or require maintenance and updates, which can disrupt therapy sessions and create additional stress for patients and therapists.
- Cost: The cost of SAR technology and maintenance may be prohibitively high for some healthcare organizations, limiting their ability to offer this type of therapy to patients who could benefit from it.
- Reliability and Validity: One of the main challenges is ensuring the reliability and validity of the results when using SARs in speech and language therapy. This requires careful control of study design and data collection methods to minimize sources of bias and error.
- Usability and User Acceptance: SARs must be usable and acceptable to the target population, including children with communication disorders, to be effective. This may require significant efforts to design and refine the user interface and user experience of the robot.
- Standardization: There is a lack of standardized protocols and assessment methods for using SARs in speech and language therapy, which can make it difficult to compare results across studies and determine the effectiveness of different approaches.
- Evaluation: Assessing the effectiveness of SARs in speech and language therapy often requires multiple raters to evaluate the therapy sessions, so ensuring inter-rater reliability (consistent scoring across raters) is essential.
- Long-Term Effectiveness: Another challenge is demonstrating the long-term effectiveness of SARs in speech and language therapy. Many studies have measured only short-term outcomes, so longer-term studies are needed to determine the sustainability of the benefits of using SARs in therapy.
- Number of Participants: The samples in most published research on children/adolescents with communication disorders interacting with SARs are small. The study groups comprise heterogeneous neurodevelopmental disorders and often lack control groups; therefore, it is difficult to apply statistical analysis.
- Privacy and Confidentiality: SARs collect and store sensitive information about the users, such as their speech and language patterns, which can raise concerns about privacy and confidentiality. This requires appropriate data protection measures, such as encryption and secure storage, to prevent unauthorized access to the data.
- Bias and Discrimination: SARs are designed and programmed by humans, which raises the possibility of unintended bias and discrimination in their behavior and interactions with users. This requires careful consideration of the design and programming of SARs to ensure that they do not perpetuate or amplify existing biases and discrimination.
- Responsibility and Liability: SARs are increasingly being used in healthcare settings, which raises questions about who is responsible and liable for any harm caused by their use. This requires clear and well-defined policies and procedures for the use of SARs in healthcare and speech and language therapy, as well as appropriate insurance coverage and risk management strategies.
- Interpersonal Relationships: SARs may have the potential to affect interpersonal relationships and human interactions, including the relationships between patients, therapists, and caregivers. This requires careful consideration of the design and use of SARs to ensure that they enhance, rather than undermine, existing relationships and interactions.
- Dependence and Over-Reliance: There is a risk that users may become overly dependent on SARs and cease to engage in important interpersonal relationships and activities, which can have negative impacts on their health and well-being. This requires careful monitoring and evaluation of the use of SARs in speech and language therapy to ensure that they are not creating negative consequences for users.
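The inter-rater reliability challenge above is commonly quantified with chance-corrected agreement statistics such as Cohen's kappa. A minimal Python sketch (the rating data are hypothetical, not drawn from any cited study) illustrates the computation for two raters scoring the same sessions on a nominal scale:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement for independent raters with these marginals.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical session scores (1 = target behavior observed, 0 = not observed).
kappa = cohen_kappa([1, 1, 0, 1, 0], [1, 0, 0, 1, 0])  # ≈ 0.62
```

Reporting a chance-corrected statistic alongside raw percent agreement would also ease comparison of results across studies, which speaks to the standardization concern listed above.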
4. Related Works on Interactive Scenarios with Social Robots for the Therapy of Communication Disorders
4.1. Components of the Play Scenarios (in Tables)
4.1.1. Description of Interactive Scenarios with SARs (Pilot Studies)
Reference: [17], 2022 | Name of Scenario: Farm Animals—Voices and Names |
---|---|
Objectives | Remote speech and language therapy; Enrich the child’s vocabulary. |
Treatment domain, Type of CD | Language domain, Farm animals’ voices and names; children with neurodevelopmental disorders. |
Treatment technique | Identification of farm animal voices; identification and pronunciation of words for farm animals.
Play type (social∣cognitive) | Cognitive play. |
Interaction technique | Child–robot interaction. |
Age | Four years old. |
Participants’ role and behavior | There are five participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), a parent (co-therapist), and a child with neurodevelopmental disorders (playmate).
Activity description | [17], page 123 (https://youtu.be/KpeQcIXG6cA, accessed on 16 April 2023). |
Robot configuration and mission | A social robot NAO, a social robot EmoSan, pictures of farm animals, a tablet and a laptop, BigBlueButton platform for telepresence. |
Used software | NAOqi software v.2.8.6.23, Python v.2.7, Node-RED v.2.1.3. |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The activity can also include more participants. |
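The scenario above amounts to a therapist-controlled trial loop: the robot plays a farm animal's voice, the child answers, and the robot gives feedback. A minimal Python 3 sketch of that loop (the animal set, feedback phrasing, and function names are illustrative assumptions, not taken from [17]):

```python
import random

# Illustrative closed set of farm animals and their voices (assumed, not from [17]).
ANIMALS = {"cow": "moo", "sheep": "baa", "horse": "neigh", "dog": "woof"}

def feedback(target, child_answer):
    """Feedback the robot would speak after the therapist enters the child's answer."""
    if child_answer.strip().lower() == target:
        return f"Well done! It was the {target}."
    return f"Let's listen again. It was the {target}."

def session_order(rng=None):
    """Order of trials for one session; a seeded RNG keeps a session reproducible."""
    order = list(ANIMALS)
    (rng or random).shuffle(order)
    return order
```

In the actual setup, the therapist triggers each trial remotely over the BigBlueButton link, NAO speaks the prompts and feedback, and EmoSan reacts as a playmate, with the listed Node-RED flows plausibly routing the therapist's commands to the robots.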
Reference: [17], 2022 | Name of Scenario: Storytime |
Objectives | Following a story and representing it as a sequence of scenes in time.
Treatment domain, Type of CD | Language domain, children with neurodevelopmental disorders. |
Treatment technique | Story as a sequence of scenes in time. |
Play type (social∣cognitive) | Cognitive play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age | 3–10 years old (15 children) |
Activity description | [17], page 123 (https://youtu.be/AZhih7KlaPc, accessed on 16 April 2023) |
Robot configuration and mode of operation | A social robot NAO and a social robot EmoSan were used with 3 pictures of story scenes and a whisk.
Used software | NAOqi software v.2.8.6.23, Python v.2.7, Node-RED v.2.1.3.
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The activity can also include more participants to promote cooperative play. |
Reference: [46], 2021 | Name of Scenario: Different interactive activities with a tablet; robots are expected to be used. |
Objectives | To propose a conceptual framework for designing linguistic activities (for assessment and training), based on advances in psycholinguistics. |
Treatment domain, Type of CD | Speech and language impairments—developmental language disorder, autism spectrum disorder. |
Treatment technique | Interactive therapeutic activities. |
Play type (social∣cognitive) | Social and cognitive. |
Interaction technique | The child performs activities on a tablet. |
Age | 4–12 years old. |
Participants’ role and behavior | The participants in this scenario are 30 children, performing activities on a tablet.
Activity description | [46], pages 2–6.
Robot configuration and mission | Socially assistive robots/tablets with different modules for training and assessing linguistic capabilities of children with structural language impairments. |
Used software | Socially assistive robot and/or mobile device. |
Setting and time | This scenario was carried out in clinical settings over multiple sessions; two groups were included: a target group and a control group.
Variation | There are different linguistic tasks which evaluate different linguistic skills. Activities can include more than one participant. |
Reference: [47], 2021 | Name of Scenario: Serious games conducted by a social robot via embedded mini-video projector |
Objectives | To show the application of a robot, called MARIA T21 as a therapeutic tool. |
Treatment domain, Type of CD | Autism spectrum disorder, Down syndrome. |
Treatment technique | Interactive serious games. |
Play type (social∣cognitive) | Social and cognitive. |
Interaction technique | Robot–child interaction. |
Age | 4–9 years old. |
Participants’ role and behavior | The participants in this scenario are the social robot and eight children, supervised by the therapist and a group of researchers. |
Activity description | [47], pages 6–14 (see Section 5, Methodology).
Robot configuration and mission | A new socially assistive robot termed MARIA T21 which uses an innovative embedded mini-video projector able to project Serious Games on the floor or tables. |
Used software | A set of libraries (PyGame), written in Python 2.7; an open-source robot operating system.
Setting and time | The tests were carried out partly in a countryside region and partly in a metropolitan area, in order to expand socioeconomic diversity. |
Variation | The games were created with all their possible events, characters, awards, and stories and have included different types of serious games. |
Reference: [52], 2021 | Name of Scenario: Questions and Answering with NAO Robot |
Objectives | Initiation of conversation. |
Treatment domain, Type of CD | Language domain, Language disorder due to ASD. |
Treatment technique | Asking and answering simple questions. |
Play type (social∣cognitive) | Social play. |
Interaction technique | Child–robot interaction. |
Age | 5–24 years old (4 children). |
Participants’ role and behavior | The participants in this scenario are two teachers, two researchers, a social robot, and the child.
Activity description | [52], page 0357 |
Robot configuration and mission | A social robot NAO is talking with a child. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a classroom of a special school, in 4 sessions.
Variation | - |
Reference: [52], 2021 | Name of Scenario: Physical Activities with NAO Robot. |
Objectives | Initiation of physical movements. |
Treatment domain, Type of CD | Basic communication domain, Social and communication interaction due to ASD. |
Treatment technique | Provocation of imitation of physical movements. |
Play type (social∣cognitive) | Social play. |
Interaction technique | Child–robot interaction. |
Age | 5–24 years old (4 children) |
Participants’ role and behavior | The participants in this scenario are two teachers, two researchers, a social robot, and the child.
Activity description | [52], page 0357 |
Robot configuration and mission | A social robot NAO interacts with a child.
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a classroom of a special school, in 4 sessions.
Variation | - |
Reference: [54], 2021 | Name of Scenario: I like to eat popcorn |
Objectives | Learning Bulgarian Sign Language. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Demonstration of signs, video and pronunciation of words from Sign Language. |
Play type (social∣cognitive) | Social play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are two participants in this scenario: a social robot (instructor) and a typically developing child.
Age | 5 years |
Activity description | [54], pages 72–73.
Robot configuration and mode of operation | A social robot Pepper. |
Used software | NAOqi v.2.8.6.23 |
Setting and time | This scenario was carried out in a lab setting, in one session.
Variation | The activity can also include more participants to promote cooperative play. |
Reference: [49], 2016 | Name of Scenario: Different activities between a robot and children |
Objectives | To present a robotic assistant which can provide support during therapy and can manage the information. |
Treatment domain, Type of CD | Communication disorders. |
Treatment technique | Tasks and exercises for language, pragmatics, phonetics, oral-motor, phonological, morphosyntactic, and semantic interventions. |
Play type (social∣cognitive) | Social and cognitive. |
Interaction technique | Robot–child interaction. |
Age | - |
Participants’ role and behavior | The participants in this scenario are the robot and 32 children from regular schools.
Activity description | [49], pages 4–6.
Robot configuration and mission | The robot was designed via 3D technology and has a humanoid form with the possibility to wear any costume representing animals (dogs, cats, etc.), children (boys or girls), or any other characters; a main controller serves as the robot’s “brain”.
Used software | A Raspberry Pi 2 Model B+ board running the Raspbian operating system.
Setting and time | The pilot experiment consists of two stages—lab tests to determine robot’s performance (over multiple activities) and analyses of patients’ responses to the robot’s appearance. |
Variation | The robot offers different activities (playing, dancing, talking, walking, acting, singing, jumping, moving, and receiving voice commands). The system automates report generation, monitoring of activities, patients’ data management, and others. The robot’s appearance can be customized according to the preferences of the patients.
Reference: [36], 2016 | Name of Scenario: Therapy mode |
Objectives | Development of phonological, morphological, and semantic areas. |
Treatment domain, Type of CD | Language and speech domain; Children with Cerebral Palsy. |
Treatment technique | The robot displays on its screen some activities related to speech therapy such as phonological, semantic, and morphosyntactic exercises. |
Play type (social∣cognitive) | Cognitive play. |
Interaction technique | Child–robot interaction. |
Age | 7 years |
Participants’ role and behavior | There are three participants in this scenario, a speech and language therapist, social robot, and the child. |
Activity description | [36], page 4 |
Robot configuration and mission | SPELTRA (Speech and Language Therapy Robotic Assistant) with a display.
Used software | A Raspberry Pi 2 Model B+ (2015); a mobile application (Android).
Setting and time | This scenario was carried out in a school setting, in three sessions.
Variation | The robot generates a complete report of the activities and language areas on which the child has worked; it could be used by parents and their children at home.
Reference: [55], 2016 | Name of Scenario: Fruit Salad |
Objectives | Assessment of nonverbal communication behavior and verbal utterances, transferring skills in life. |
Treatment domain, Type of CD | Nonverbal behavior and Language domain, Children with ASD. |
Treatment technique | The robot had the role of presenting each trial by following the same repetitive pattern of behaviors: calling the child’s name, looking at each fruit, expressing the pre-established facial expression, and providing an answer at the end after the child placed a fruit in the salad bowl. |
Play type (social∣cognitive) | Social play. |
Interaction technique | Child–robot interaction. |
Age | 5–7 years |
Participants’ role and behavior | There are three participants in this scenario, an adult, social robot, and the child. |
Activity description | [55], page 118 |
Robot configuration and mission | Social robot Probo and plastic fruit toys. |
Used software | Elan—Linguistic Annotator, version 4.5 |
Setting and time | This scenario was carried out in the therapy rooms of three schools, in two sessions.
Variation | The game is played in child–adult condition or in child–robot condition. |
Reference: [56], 2016 | Name of Scenario: Shapes |
Objectives | Assessment of decoding/understanding words. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Identification; listening and following spoken instructions; Sign Language interpreter helps with the instructions if the child needs it. |
Play type (social∣cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are three participants in this scenario, a speech and language therapist (mediator), social robot (instructor), and the child with hearing impairment. |
Age | 5–15 years old |
Activity description | [56], page 257 |
Robot configuration and mode of operation | A social robot NAO was used with pictures of different shapes and colors. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a school setting, in one session. |
Variation | The activity can also include more participants to promote cooperative play. |
Reference: [56], 2016 | Name of Scenario: Emotions |
Objectives | Understanding emotion sounds and naming the emotion, transferring skills in life. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Identification of emotion sounds; Sign Language interpreter helps with the instructions if the child needs it. |
Play type (social∣cognitive) | Cognitive play. |
Interaction technique | Peer interaction. |
Participants’ role and behavior | There are three participants in this scenario, a speech and language therapist (mediator), social robot (instructor), and the child with hearing impairment. |
Age | 5–15 years |
Activity description | [56], page 257 |
Robot configuration and mode of operation | A social robot NAO was used with pictures of emotions. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a school setting, in one session. |
Variation | The activity can also include more participants to promote cooperative play. |
Reference: [56], 2016 | Name of Scenario: Shopping_1 |
Objectives | Identification of environment sounds and words pronunciation, transferring skills in life. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Identification of environmental sounds; Demonstration of body movements; Sign Language interpreter helps with the instructions if the child needs it. |
Play type (social∣cognitive) | Cognitive play. |
Interaction technique | Peer interaction. |
Participants’ role and behavior | There are three participants in this scenario, a speech and language therapist (mediator), social robot (instructor), and the child with hearing impairment. |
Age | 5–15 years |
Activity description | [56], page 257 |
Robot configuration and mode of operation | A social robot NAO and hygienic products (soap, shampoo, sponge, toothpaste, etc.).
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a school setting, in one session.
Variation | The activity can also include more participants to promote cooperative play. |
Reference: [56], 2016 | Name of Scenario: Shopping_2 |
Objectives | Identification of sentence and words pronunciation, transferring skills in life. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Identification of sentences; categorization of words according to a certain criterion; Sign Language interpreter helps with the instructions if the child needs it.
Play type (social∣cognitive) | Cognitive play. |
Interaction technique | Peer interaction. |
Participants’ role and behavior | There are three participants in this scenario, a speech and language therapist (mediator), social robot (instructor), and the child with hearing impairment. |
Age | 5–15 years |
Activity description | [56], page 258 |
Robot configuration and mode of operation | A social robot NAO and toys. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a school setting, in one session. |
Variation | The activity can also include more participants to promote cooperative play. |
Reference: [57], 2016 | Name of Scenario: Order a doughnut |
Objectives | How to order a doughnut from a menu in a doughnut shop, transferring skills in life. |
Treatment domain, Type of CD | Language domain, ASD. |
Treatment technique | Imitation of actions and words. |
Play type (social∣cognitive) | Social play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | The child’s family, the robot programmer, the special education teacher, social robot NAO, and the child. |
Age | 6 years old |
Activity description | [57], pages 132–133.
Robot configuration and mode of operation | A social robot NAO and a menu |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out at the subject’s home, in two sessions.
Variation | - |
Reference: [57], 2016 | Name of Scenario: Joint Attention |
Objectives | Joint attention skills |
Treatment domain, Type of CD | Joint attention; Developmental Delay and Speech-Language Impairments. |
Treatment technique | Understanding instructions. |
Play type (social∣cognitive) | Social play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | The robot programmer, the speech and language pathologist, social robot NAO, and two children. |
Age | 7 and 9 years old |
Activity description | [57], page 135 |
Robot configuration and mode of operation | A social robot NAO and objects in speech and language pathologist’s office. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out at the speech and language pathologist’s office, in five sessions.
Variation | After each session, the robot’s behaviors were modified according to the child’s needs.
Reference: [57], 2016 | Name of Scenario: Joint Attention, Turn-Taking, Initiative |
Objectives | Joint attention, introduction of turn-taking and initiative skills |
Treatment domain, Type of CD | Language domain, Speech-Language Impairment. |
Treatment technique | Imitation of actions and sentences. |
Play type (social∣cognitive) | Social play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | The robot operator, the speech and language pathologist, social robot NAO, and a child |
Age | 7 years |
Activity description | [57], pages 136–137.
Robot configuration and mode of operation | A social robot NAO and cue cards. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in the school’s playroom over eight months, in twice-weekly sessions.
Variation | Playing the game without the cue cards.
Reference: [48], 2015 | Name of Scenario: Auditory Memory Stimulation, Comprehensive Reading, Visual Stimulation, Stimulation of Motor Skills |
Objectives | To offer a robotic assistant able to provide support for Speech Language Practitioners. |
Treatment domain, Type of CD | Autism spectrum disorder, Down syndrome, Cerebral Palsy, Mild and Moderate Intellectual Disability, Epilepsy, Unspecified intellectual disabilities, other disabilities. |
Treatment technique | Interactive therapy exercises, assessment tasks. |
Play type | Social and cognitive. |
Interaction technique | Therapist–patient interaction via an intelligent integrative environment. |
Age | - |
Participants’ role and behavior | The participants in this scenario are the therapist, the children, the robotic assistant (the model can be used by relatives and students, too). |
Activity description | [48], page 75 |
Robot configuration and mission | RAMSES (v.2)—an intelligent environment that uses mobile devices, embedded electronic systems, and a robotic assistant. The robotic assistant consists of a central processor (an Android smartphone or tablet, or an embedded electronic system) and a displacement system.
Used software | Electronic platform. |
Setting and time | This is a pilot study, conducted in clinical settings over multiple activities. |
Variation | The proposed model relies on different ICT tools, knowledge structures, and functionalities. |
Reference: [58], 2014 | Name of Scenario: The impact of humanoid robots in teaching sign languages |
Objectives | Teaching Sign Language |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Demonstration of sign language and special flashcards illustrating the signs. |
Play type (social∣cognitive) | Cognitive play. |
Interaction technique | Child–robot interaction. |
Age | 9–16 years (10 children with hearing impairment).
Participants’ role and behavior | Individual and group sessions involving a sign language therapist, a social robot, and a child/children.
Activity description | [58], pages 1124–1125.
Robot configuration and mission | A social robot Robovie R3 and pictures of signs.
Used software | Robovie Maker 2 software (v.1.4). |
Setting and time | This scenario was carried out in a computer laboratory, in one session. |
Variation | Individual or group sessions. |
Reference: [59], 2014 | Name of Scenario: Sign Language Game for Beginners |
Objectives | Learning signs from Turkish Sign Language |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Identification of words in Turkish Sign Language for beginners’ level (children of early age group), most frequently used daily signs. |
Play type (social∣cognitive) | Cognitive play. |
Interaction technique | Child–robot interaction. |
Age | Average age of 10:6 (years:months) |
Participants’ role and behavior | There are two participants in this scenario: a typically developing child and a humanoid social robot (instructor).
Activity description | [59], pages 523 and 525.
Robot configuration and mission | A social robot NAO H25 and a modified Robovie R3 robot. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a university setting, in one session.
Variation | The game can also be played with children with hearing impairment. |
4.1.2. Description of Interactive Scenarios with SARs (Empirical Use Studies)
Reference: [60], 2022 | Name of Scenario: Ling Six-Sound Test |
---|---|
Objectives | Assessment of auditory skills/identification. |
Treatment domain, Type of CD | Speech sounds across the frequency range, children with neurodevelopmental disorders.
Treatment technique | Discrimination and identification of speech sounds. |
Play type (social∣cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age | 3–10 years old |
Activity description | [60], page 491 |
Robot configuration and mode of operation | A social robot NAO; a social robot EmoSan was used with pictures of different speech sounds. |
Used software | NAOqi software v.2.8.6.23 and Python 2.7. |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The instructions are played in random order. The activity can also include more participants to promote cooperative play.
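Randomized presentation, as in the variation row above, keeps the child from anticipating the next prompt across repetitions. A minimal Python sketch (the six Ling sounds are the standard test set; the playlist function is an illustrative assumption, not taken from [60]):

```python
import random

# The six Ling sounds, spanning the low-to-high speech-frequency range.
LING_SOUNDS = ["m", "ah", "oo", "ee", "sh", "s"]

def randomized_playlist(sounds, seed=None):
    """Return a fresh random ordering of the prompts for one session.

    A fixed seed reproduces the same order, which helps when a session
    has to be replayed or compared across children."""
    playlist = list(sounds)
    random.Random(seed).shuffle(playlist)
    return playlist

playlist = randomized_playlist(LING_SOUNDS, seed=42)
```

In the deployed scenario, each prompt in the shuffled playlist would be spoken by NAO via its text-to-speech, with the therapist recording whether the child detected and identified the sound.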
Reference: [60], 2022 | Name of Scenario: Warming up |
Objectives | Identification of speech. |
Treatment domain, Type of CD | Common greetings and introducing someone, children with neurodevelopmental disorders.
Treatment technique | Identification of speech. |
Play type (social∣cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age | 3–10 years old |
Activity description | [60], page 491 |
Robot configuration and mode of operation | A social robot NAO, a social robot EmoSan. |
Used software | NAOqi software v.2.8.6.23 and Python 2.7. |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The activity can also include more participants to promote cooperative play. |
Reference: [60], 2022 | Name of Scenario: Farm animals—receptive vocabulary |
Objectives | Receptive vocabulary of children for this particular closed set of words. |
Treatment domain, Type of CD | Receptive vocabulary of closed set of words, children with neurodevelopmental disorders. |
Treatment technique | Identification of vocabulary of closed set of words. |
Play type (social∣cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction.
Participants’ role and behavior | There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age | 3–10 years old |
Activity description | [60], page 492 |
Robot configuration and mode of operation | A social robot NAO; a social robot EmoSan was used with pictures of different farm animals.
Used software | NAOqi software v.2.8.6.23 and Python 2.7. |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions.
Variation | The instructions are played in random order. The activity can also include more participants to promote cooperative play. |
Reference: [60], 2022 | Name of Scenario: Colors. |
Objectives | Receptive vocabulary of children for this particular closed set of words. |
Treatment domain, Type of CD | Receptive vocabulary of closed set of words, children with neurodevelopmental disorders. |
Treatment technique | Identification of vocabulary of closed set of words. |
Play type (social∣cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate). |
Age | 3–10 years old |
Activity description | [60], page 492 |
Robot configuration and mode of operation | A social robot NAO and a social robot EmoSan were used with pictures of different colors. |
Used software | NAOqi software v.2.8.6.23 and Python 2.7. |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The instructions are played in random order. The activity can also include more participants to promote cooperative play. |
Reference: [60], 2022 | Name of Scenario: Shopping game |
Objectives | Identification of environmental sounds and expressive vocabulary of closed set of words, transferring skills in life. |
Treatment domain, Type of CD | Identification of sounds and expressive vocabulary of closed set of words, children with neurodevelopmental disorders. |
Treatment technique | Identification of sounds and words. |
Play type (social∣cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate). |
Age | 3–10 years old |
Activity description | [60], page 492 |
Robot configuration and mode of operation | A social robot NAO and a social robot EmoSan were used with pictures of different colors. |
Used software | NAOqi software v.2.8.6.23 and Python 2.7. |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The instructions are played in random order. The activity can also include more participants to promote cooperative play. |
Reference: [61], 2022 | Name of Scenario: Imitation games and speech therapy sessions |
Objectives | To compare the children’s engagement while playing a mimic game with the affective robot and the therapist; to assess the efficacy of the robot’s presence in the speech therapy sessions alongside the therapist. |
Treatment domain, Type of CD | Language disorders. |
Treatment technique | Mimic game; speech therapy sessions. |
Play type | Social and cognitive play. |
Interaction technique | Robot–child–therapist interaction. |
Age | Average age of 6.4 years. |
Participants’ role and behavior | The participants in the scenarios are the social robot (RASA), six children in the intervention group, six children in the control group, and the therapist. |
Activity description | [61], pages 10–11 |
Robot configuration and mission | A humanoid Robot Assistant for Social Aims (RASA). Designed to be utilized primarily for teaching Persian Sign Language to children with hearing disabilities. |
Used software | The robot is controlled by a central PC performing high-level control and by two local controllers. |
Setting and time | Scenarios were carried out in a clinical setting over ten therapy sessions (one per week). |
Variation | The robot uses an external graphics processing unit for facial expression recognition due to the limited power of its onboard computer. |
Reference: [12], 2022 | Name of Scenario: Reading skills |
Objectives | The social robot is used as a tutor with the assistance of a special educator. |
Treatment domain, Type of CD | Special Learning Disorder (dyslexia, dysgraphia, dysorthography). |
Treatment technique | Teaching cognitive and metacognitive strategies. |
Play type | Cognitive play. |
Interaction technique | Robot–child interaction enhanced by the special education teacher. |
Age | Mean age 8.58 years. |
Participants’ role and behavior | All scenarios were similar in content, structure, and succession for both the NAO and the control group, with the only difference that the welcoming, instructions, support, and feedback for the activities were delivered by the special educator for the control group. |
Activity description | [12], pages 4–5 |
Robot configuration and mission | A humanoid robot Nao. |
Used software | NAOqi software v.2.8.6.23. |
Setting and time | Interventions took place in a specially designed room in a center; 24 sessions with a frequency of two sessions per week. |
Variation | - |
Reference: [14], 2021 | Name of Scenario: Therapy session with EBA. |
Objectives | Formulation of questions and answers; comprehension and construction of sentences; articulation and pronunciation; voice volume; dictations; literacy; reading comprehension. |
Treatment domain, Type of CD | Treatment domain—nasality, vocalization, language, attention, motivation, memory, calculation, visual perception; children with language disorders—cleft palate and cleft lip, ADHD, dyslexia, language development delay. |
Treatment technique | Storytelling; making dictations to check spelling; asking questions about a text that has been read or listened to; asking the child for words starting with a given letter, or to identify how many syllables a spoken word contains; asking the child to repeat more clearly anything not said properly; giving the child instructions for all the defined activities. |
Play type (social∣cognitive) | Social and cognitive play. |
Interaction technique | Robot–child–therapist interaction. |
Participants’ role and behavior | There are three participants in this scenario: a speech and language therapist (controls the game), a social robot NAO, and the child with a language disorder. |
Age | 9–12 years old (five children) |
Activity description | [14], pages 8–9 |
Robot configuration and mode of operation | A social robot NAO has been used, preprogrammed with the modules: reading comprehension; dictations, stories and vocabulary, improvement of oral comprehension; articulation and phonetic-phonological pronunciation; phonological awareness and phonetic segmentation; literacy skills. |
Used software | NAOqi software v.2.8.6.23 and Python 2.7. |
Setting and time | Thirty-minute sessions with children were conducted once a week for 30 weeks. The intervention was conducted during ordinary therapy sessions in a room at the speech therapist centre. |
Variation | Possible software modifications for different behaviors and scenarios. |
Reference: [44], 2020 | Name of Scenario: Different scenarios for child–robot interaction |
Objectives | To achieve significant changes in social interaction and communication. |
Treatment domain, Type of CD | Different speech and language impairments—specific language impairment, ADHD, dyslexia, oppositional defiant disorder, misuse of oral language, dyslalia, ADD, problems with oral language, nasality, vocalization. |
Treatment technique | Logopedic and pedagogical therapy. |
Play type | Social and cognitive play. |
Interaction technique | Robot–child–therapist interaction. |
Age | 9–12 years old (ages 9, 10, and 12) |
Participants’ role and behavior | The participants in this scenario are the social robot (instructor), five children, the therapist, and a researcher-programmer. |
Activity description | [44], pages 564–565 |
Robot configuration and mission | A social robot NAO was used, preprogrammed with the modules: reading comprehension; dictations, stories and vocabulary, improvement of oral comprehension; articulation and phonetic-phonological pronunciation; phonological awareness and phonetic segmentation; literacy skills. |
Used software | NAOqi v.2.8.6.23 |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions—once a week for 30 weeks. |
Variation | Possible software modifications for different behaviors, faster modules, and adaptation to unpredictable scenarios. |
Reference: [35], 2020 | Name of Scenario: Physically explore the robot |
Objectives | Joint attention, identification of emotional expressions. |
Treatment domain, Type of CD | Language disorders in children with complex social and communication conditions. |
Treatment technique | Cause and effect game. |
Play type | Social and cognitive play. |
Interaction technique | Robot–child–therapist interaction. |
Age | From 2 to 6 years. |
Participants’ role and behavior | The participants in the scenarios are the social robot Kaspar, staff at the nursery, teachers and volunteers, children with complex social and communication conditions. |
Activity description | [35], pages 306–307 |
Robot configuration and mission | A social robot Kaspar. |
Used software | The robot is controlled by dedicated Kaspar software developed to facilitate semi-autonomous behavior and to make the robot more user-friendly for non-technical users. |
Setting and time | Scenarios were carried out in a nursery, and the children interacted with the robot for as many sessions as were deemed meaningful within the day-to-day running of the nursery. The mean number of interactions with the robot per child was 27.37 (standard deviation 18.62). |
Variation | The robot Kaspar can be used in different play scenarios. |
Reference: [15], 2019 | Name of Scenario: Ling sounds story |
Objectives | Acquisition of hearing skills. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Ling sounds, auditory-verbal therapy method. |
Play type (social/cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are two participants in this scenario, a social robot (instructor) and the individual who has hearing impairments (learner). |
Age | 3–4 years old |
Activity description | [15], page 442 |
Robot configuration and mode of operation | A social robot NAO was used with toys correlated with the Ling sounds. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The level of difficulty can be adjusted. The activity can also include more participants to promote cooperative play. |
Reference: [15], 2019 | Name of Scenario: Music density |
Objectives | Acquisition of hearing skills. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Listening of environmental sounds; discrimination and identification; sound intensity, auditory-verbal therapy method. |
Play type (social∣cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are two participants in this scenario, a social robot (instructor) and the individual who has hearing impairments (learner). |
Age | 3–4 years old |
Activity description | [15], page 443 |
Robot configuration and mode of operation | A social robot NAO was used with toys correlated with musical instruments. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The level of difficulty can be adjusted. The activity can also include more participants to promote cooperative play. |
Reference: [15], 2019 | Name of Scenario: Farm animals—discrimination and identification of animal sounds |
Objectives | Acquisition of hearing skills. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Discrimination and identification of animal sounds which are with different frequency (e.g., low frequency—cow sound, high frequency—cat sound); auditory-verbal therapy method. |
Play type (social∣cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are two participants in this scenario, a social robot (instructor) and the individual who has hearing impairments (learner). |
Age | 3–4 years old |
Activity description | [15], page 443 |
Robot configuration and mode of operation | A social robot NAO was used with toys correlated with farm animals. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The instructions are played in random order. The activity can also include more participants to promote cooperative play. |
Reference: [15], 2019 | Name of Scenario: Vegetables |
Objectives | Acquisition of decoding of words/understanding. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Discrimination and identification of words; auditory-verbal therapy method. |
Play type (social∣cognitive) | Cooperative and practice play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are two participants in this scenario, a social robot (instructor) and the individual who has hearing impairments (learner). |
Age | 3–4 years old |
Activity description | [15], page 443 |
Robot configuration and mode of operation | A social robot NAO was used with vegetable toys and a basket. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The instructions are played in random order. The activity can also include more participants to promote cooperative play. |
Reference: [33], 2019 | Name of Scenario: Responding to directives |
Objectives | Language expansion. |
Treatment domain, Type of CD | Language domain, autism spectrum. |
Treatment technique | The robot tells the student what to do and initiates social engagement. |
Play type | Cooperative and practice play. |
Interaction technique | Teacher–robot–student. |
Age | Eight-year-old student. |
Participants’ role and behavior | There are three participants in this scenario, a social robot (instructor), a speech-language pathologist (a teacher) and the individual who has a communication disorder (learner). |
Activity description | [33], pages 5–6 |
Robot configuration and mission | A social robot NAO was used with the learner’s favorite toys. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions. |
Variation | The level of difficulty can be adjusted. The activity can also include more participants to promote cooperative play. |
Reference: [42], 2016 | Name of Scenario: Teaching fundamentals of music |
Objectives | Facilitation of multisystem development in children with autism. |
Treatment domain, Type of CD | Autism, fine movements, communication skills. |
Treatment technique | Robot–Child or Robot–Child–Therapist/Parent imitation turn taking games and playing a Kinect based virtual xylophone on the screen. |
Play type | Cooperative and practice play. |
Interaction technique | Interaction between a robot, a child, and a therapist/parent. |
Age | 6-year-old children. |
Participants’ role and behavior | The participants in this scenario are a social robot (instructor), the individual who has autism (learner), and a therapist/parent. |
Activity description | [42], page 543 |
Robot configuration and mission | A social robot NAO was used with a drum and a xylophone. |
Used software | NAOqi v.2.8.6.23 |
Setting and time | This scenario was carried out in a clinical setting over 11 sessions. |
Variation | The design study contains a Baseline, pre-test, post-test, and a follow-up test. Each participant’s skill is compared with his previous skill based on assessment tools. |
Reference: [43], 2016 | Name of Scenario: Football game |
Objectives | To achieve significant changes in social interaction and communication. |
Treatment domain, Type of CD | ASD, communication, and social behavior. |
Treatment technique | Play therapy. |
Play type | Collaborative physical play. |
Interaction technique | Interaction between a robot, a child, a therapist, and a parent. |
Age | 3–10 years old (ages 5, 7, and 3.5) |
Participants’ role and behavior | The participants in this scenario are the social robot (instructor), the individual who has autism spectrum disorder, his parent, and trainer (teacher at the elementary school). |
Activity description | [43], pages 564–565 |
Robot configuration and mission | A social robot NAO uses a ball and participates in an interactive football game with the child. |
Used software | NAOqi v.2.8.6.23 |
Setting and time | This scenario was carried out in clinical settings over four sessions. |
Variation | There are various specific autonomous behaviors that may lead to cross-platform utility of socially assistive robots. |
Reference: [62], 2016 | Name of Scenario: Interactive play with a song |
Objectives | Promoting foundational communication and socialization skills. |
Treatment domain, Type of CD | To elicit child communication and socialization, Language disorder due to ASD. |
Treatment technique | Playing a song and performing appropriate hand/arm motions. |
Play type (social∣cognitive) | Social play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | The participants in this scenario are a social robot, two researchers (a computer scientist and a clinical instructor), and a child. |
Age | 3–6 years (8 children) |
Activity description | [62], pages 643 and 645 |
Robot configuration and mode of operation | A robot Probo, named CHARLIE. |
Used software | New software was designed to promote two fundamental skills linked to communication—turn-taking and imitation. |
Setting and time | This scenario was carried out in a university setting 2 times a week for 6 weeks. |
Variation | The game could be played by one or more participants (the child with ASD + sibling/caregiver). |
Reference: [62], 2016 | Name of Scenario: The hat game |
Objectives | Verbal utterances. |
Treatment domain, Type of CD | To encourage eye contact, directed attention, speech, and social interaction by providing a positive sensory response to reinforce each child’s efforts to communicate. Language disorder due to ASD. |
Treatment technique | Ask and answer to a simple question. |
Play type (social∣cognitive) | Social play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | The participants in this scenario are a social robot, two researchers (a computer scientist and a clinical instructor), and the child. |
Age | 3–6 years (8 children) |
Activity description | [62], pages 645–646 |
Robot configuration and mode of operation | A robot Probo, named CHARLIE. |
Used software | New software was designed to promote two fundamental skills known to be closely linked to communication—turn-taking and imitation. |
Setting and time | This scenario was carried out in a university setting 2 times a week for 6 weeks. |
Variation | The game could be played by one or more participants (the child with ASD + sibling/caregiver). |
Reference: [41], 2015 | Name of Scenario: Giving color responses |
Objectives | Training joint attention skills. |
Treatment domain, Type of CD | Joint attention skills, autism. |
Treatment technique | The student is instructed to follow the robot’s head movement, to name aloud the color of the target at which the robot is looking, and to press the corresponding button. |
Play type | Cooperative and practice play. |
Interaction technique | Robot–student (and a teacher in the pre- and post-tests). |
Age | Mean age 4.6 years (range 4–5). |
Participants’ role and behavior | There are three participants in this scenario: a pet robot (instructor), a speech-language pathologist (teacher), and the individual who has autism (learner). |
Activity description | [41], pages 3–5 |
Robot configuration and mission | A social robot, CuDDler (A*STAR), was used with 10 colorful line drawings of various objects in 4 colors. |
Used software | Two Android phones execute software modules. |
Setting and time | This scenario was carried out in a clinical setting over 3 sessions. |
Variation | The level of difficulty can be adjusted. The activity can also include more participants. |
Reference: [16], 2015 | Name of Scenario: “Special Friend, iRobiQ” |
Objectives | To promote language interaction for children with speech-language disorders. |
Treatment domain, Type of CD | Speech and language disorders, emotional expression. |
Treatment technique | Scripts for practical language goals by robot as an interlocutor friend (turn-taking; functional communication) |
Play type | Entertaining educational play through conversations initiated with a robot. |
Interaction technique | Robot–child–therapist interaction. |
Age | Four children with autism/MR (mental retardation), aged 4–5 years. |
Participants’ role and behavior | The participants in this scenario are the social robot (instructor), four children, and the therapist. |
Activity description | [16] Greetings and a birthday celebration script (cake, gift, song), with the theme of the practical language goals. |
Robot configuration and mission | A humanoid robot iRobiQ with touch screen. |
Used software | iRobiQ software v.2.8.6.23 |
Setting and time | This scenario was carried out in a clinical setting over eight sessions. |
Variation | Variation of facial expressions (happy, surprised, neutral, disappointed and shy) |
Reference: [45], 2014 | Name of Scenario: Different interactive plays |
Objectives | To achieve a positive impact on the communication skills of nonverbal children via robot-based augmentative and alternative communication. |
Treatment domain, Type of CD | Communication disorders, language impairments—pervasive development disorder. |
Treatment technique | Play therapy. |
Play type | Social and cognitive play |
Interaction technique | Robot–child–therapist interaction. |
Age | From 2 years, 9 months to 5 years, 4 months old. |
Participants’ role and behavior | The participants in this scenario are the social robot (instructor), four children, the therapist, and a researcher-programmer. |
Activity description | [45], pages 855–857 |
Robot configuration and mission | A humanoid robot iRobi with robot-based augmentative and alternative communication programs. |
Used software | The robot can be controlled by a smartphone via Wi-Fi. |
Setting and time | This scenario was carried out in a clinical setting over multiple sessions in 3 phases for 6 months—three times a week. |
Variation | The robot has multifunctional sensors that can motivate children to initiate social communication. |
Reference: [59], 2014 | Name of Scenario: Sign Language Game for Advanced Users |
Objectives | Recognition of signs from Turkish Sign Language. |
Treatment domain, Type of CD | Language domain, Language disorder due to hearing impairment. |
Treatment technique | Recognition of words in Turkish Sign Language for advanced level. |
Play type (social∣cognitive) | Cognitive play. |
Interaction technique | Child–robot interaction. |
Participants’ role and behavior | There are two participants in this scenario: a humanoid social robot (instructor) and the child with a hearing impairment. |
Age | 7–11 years (21 children) and 9–16 years (10 children). |
Activity description | [59], page 525 |
Robot configuration and mode of operation | A social robot NAO H25 and a modified Robovie R3 robot. |
Used software | NAOqi software v.2.8.6.23 |
Setting and time | This scenario was carried out in a university setting for 6–9 games with each robot. |
Variation | The participants can randomly select a robot to play with. |
5. Discussion and Future Directions
5.1. How Social Robots Can Assist in the Intervention of Communication Disorders
- Vocabulary and language development (verbal and sign language): Social robots can assist children in practicing and improving their language skills through playful and engaging activities, offering real-time feedback and encouragement. SARs are able to initiate and support communication and enrich a child’s vocabulary. They also help therapists train and assess the linguistic capabilities of children and adolescents with language impairments [6,7,8,13,15,16,17,23,32,35,38,39,41,42,43,44,45,47,48,49,53,54,55,56,57,58,59,62].
- Articulation therapy: Social robots can help children with speech disorders practice pronunciation and articulation exercises. The youngsters are observed to show increased verbal production and participation via SARs, which contribute to improvements in articulation and in phonological, morphosyntactical, and semantic communication [13,33,35,36,37,43,44,48,49,57].
- Auditory skills: Children learn and develop language through listening. Some SARs are used to develop auditory skills as well as verbal speech. Robots are able to offer sounds at different frequencies, can repeat words, and can provide support when necessary. In addition, robots can give visual and auditory feedback, which is essential for therapists [15,48,60].
- Storytelling: Social robots can assist children in practicing storytelling and engaging in conversation. Stories told by robots are found to be more engaging and more fun for children. SARs encourage verbal communication and enhance cognitive abilities in youngsters. Robots can also monitor, gather, and analyze data from the child’s behavior [16,33,35,64].
- Social skills: Social robots can help children improve social skills, such as turn-taking, joint attention, emotion regulation, and eye contact, through playful and engaging activities. Different participants can take part in these activities together with the robots: peers, therapists, or parents. Children are provided support and guidance during play. Youngsters learn to interact and cooperate with others, and robot-based therapies enhance their cognitive and emotional abilities [6,8,15,17,39,42,43,46,55,56,62].
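Many of the scenarios catalogued in the table above reduce to the same interaction loop: the robot names a target, the child responds, and the robot gives spoken feedback. The sketch below illustrates that loop in Python; the `say` and `get_choice` callbacks are hypothetical stand-ins for a robot's text-to-speech and the child's card or touch selection, not the API of any reviewed system.

```python
import random

def vocabulary_drill(words, get_choice, say, rounds=3, seed=None):
    """Run one receptive-vocabulary drill: name a target word, let the
    child pick a card, and give spoken feedback on each pick."""
    rng = random.Random(seed)
    score = 0
    for _ in range(rounds):
        target = rng.choice(words)           # instructions in random order
        say(f"Can you show me the {target}?")
        choice = get_choice(words, target)   # e.g., the card the child touched
        if choice == target:
            say("Well done!")
            score += 1
        else:
            say(f"Almost! This one is the {choice}. Let's try again later.")
    return score
```

In a real deployment, `get_choice` would be driven by the robot's touch sensors or a therapist's tablet, and the word list swapped per scenario (farm animals, colors, vegetables).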
5.2. Future Directions on How to Optimize the Role and Importance of Social Robots in Speech and Language Therapy
- Natural Language Processing (NLP): Advanced NLP techniques can enable SARs to better understand and interpret speech and language patterns via speech recognition models tailored for specific speech and language therapy tasks, such as articulation exercises or language comprehension activities.
- Adaptive Dialogue Systems: Implementation of adaptive dialogue systems that allow SARs to adapt their conversational style, pacing, and prompts based on the individual’s progress and needs.
- Multimodal Interaction: Incorporating visual, auditory, and tactile modalities enables SARs to engage in multimodal interaction. SARs can use visual aids, such as interactive displays or gesture recognition, to supplement verbal instructions and support visual learning, and can incorporate tactile aids, such as interactive touchscreens, to facilitate hands-on activities.
- Virtual (VR), Augmented (AR), and Mixed Reality (MR): These technologies can help SARs support intervention sessions with more immersive and interactive therapy environments. Using AR, SARs can overlay virtual objects or visual cues on the real world to support language or articulation exercises. VR can help simulate real-life scenarios for social communication training, providing children and adolescents with a safe and controlled environment to practice their skills. Three-dimensional modeling or avatars can be applied so that both children and robots can immerse themselves in a shared virtual environment. Furthermore, VR can support gradual adaptation within a protected environment.
- Affective Computing for improving SARs’ emotion recognition and facial expression capabilities: Incorporating emotion recognition algorithms (visual, voice, speech, gesture, physiologically based) can enhance SARs’ ability to detect and respond to individuals’ emotional states during therapy sessions. By tailoring their responses and interventions accordingly, SARs can create a more personalized and empathetic therapeutic environment. Furthermore, developing expressive capabilities for SARs enables them to display appropriate emotional responses and gestures, further enhancing their ability to provide sympathetic and supportive communications.
- Integrating SARs with Conversational AI can create a more engaging and interactive speech and language experience. This can be achieved by providing personalized intervention and real-time feedback to children with CD, as well as encouragement and guidance in tasks or play. Currently, robots such as Furhat [66], iRobi [16], and QTrobot [18] have access to cloud-based services and are used for the rehabilitation of children with ASD. For instance, researchers have successfully integrated OpenAI’s text completion services, including GPT-3 and ChatGPT, into the Furhat robot, so that real-time, non-scripted, automated interactions can be provided [67]. Furthermore, different frameworks based on cloud computing and clustering are employed to enhance the capabilities of social robots and address the limitations of existing embedded platforms [68,69,70]. These papers present approaches that enhance the capabilities of social robots via NLP cloud services. The authors also provide detailed descriptions of the architecture and components of the proposed frameworks and discuss the challenges and opportunities of using cloud computing for social robots, such as security and privacy concerns, network latency, and scalability.
- Integrating SARs with Adaptive Dialogue Systems can contribute to effective dialogue management. In [71], the authors present the design and implementation of a cloud system for knowledge-based autonomous interaction, created for social robots and other conversational agents (CAIR, Cloud-based Autonomous Interaction with Robots). The system is particularly convenient for low-cost robots and devices: it can be used as a stand-alone dialogue system or as an integration that provides “background” dialogue capabilities to any preexisting natural language interaction.
- Automatic Speech Recognition and Text Generation technologies can aid children in language learning through storytelling; children also practice social skills and behaviors [72]. The robot Furhat can tell stories to children through interactive conversation, natural language processing, expressive facial expressions and gestures, voice and speech synthesis, and personalization. It engages children in dialogue-like interactions, understands their speech, and adapts the story based on their preferences for a personalized experience [73]. In [74], the authors propose a way to recommend books in a child–robot interactive environment based on speech input and the child’s profile. Experimental results show that the proposed recommender system has improved performance and can operate on embedded consumer devices with limited computational resources. Speech recognition software can also help provide real-time feedback on the accuracy of a child’s speech production.
- Graphics, Touch, Sound, and Barcode user interface design can enhance multimodal interaction with SARs by enriching the visual, auditory, and tactile modalities. Graphical interfaces strongly support robot interaction by listing complex information, allowing text input, or showing a game interface. Many SARs provide a touch GUI via their own tablet [54] or an external touchscreen [16,18,75,76]. Usually, the GUI is used to acquire and rehearse knowledge through pictures displayed on a touch screen connected to the SAR. An additional tool for therapists and children to interact with, either in clinics or at home, is the QR-code-scanning capability of SARs [16,17,18,77]. An auditory interface can also be integrated into SARs to enable users to interact with robots via spoken commands, voice recognition, altered auditory feedback [23], etc. As future work in [25], a voice analyzer could determine the quality of the patient’s voice (configuration of the vocal tract, anatomy of the larynx, and a scientific component). The AI methods used for automatic classification of autistic children include Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs): SVMs use acoustic feature sets from multiple Interspeech ComParE challenges, while CNNs extract deep spectrum features from the spectrogram of autistic speech instances [78].
- VR/AR/MR technologies can provide significant support for both children and robots to immerse themselves in a shared virtual environment. Through the use of a 3D model, SARs can engage with children and adolescents in a virtual setting, allowing them to explore and interact within the virtual environment together [79]. Virtual reality-based therapy can reduce anxiety and improve speech fluency, as well as social and emotional skills [80]. In terms of application categories, simulations make up approximately 60% of the content, while games account for the remaining 40%. The majority of simulation content focuses on specific scenes and aims to imitate various real-life events and situations, such as classroom, bus stop, and pedestrian-crossing scenes. Many studies report on virtual reality training with children with CD and have explored these scenes and their potential therapeutic benefits [81,82,83]. For example, the therapy in [84] involves different stages, including auto-navigation from outside the school to the classroom, welcome and task introduction, and completion of tasks. These stages are accompanied by verbal and nonverbal cues, such as hand-waving animations. Examples of how AR can overlay virtual objects or visual cues on the real world to support language or articulation exercises can be found in [85].
- AI-based visual and audio algorithms for affective computing enable enhanced human–robot interaction in therapy sessions by improving social robots’ emotion recognition and facial expression capabilities. Such algorithms can detect the emotional state of the individual, allowing the SAR to tailor its responses and interventions during therapy sessions. The reviewed papers featuring SARs with integrated AI-based emotion recognition and facial expression technologies for showing real-time animation of facial emotions on a display in the head are [36,45,47,50,61]; the last of these also demonstrates lip-syncing. Visual-based algorithms for affective computing analyze visual cues, such as facial expressions, body language, and gestures, to recognize and interpret emotions. These algorithms use computer vision techniques to extract relevant features from visual data and apply machine learning or deep learning models to classify and understand emotional states [86,87,88]. Audio-based algorithms for affective computing analyze audio signals, such as speech and vocal intonations, to detect and classify emotions. These algorithms utilize signal processing, feature extraction, and machine learning to analyze acoustic properties and patterns related to emotional states. Acoustic features, including voice quality features, are extracted to capture the valence and arousal dimensions [89,90,91,92]. Such AI-based visual and audio algorithms are integrated into the Furhat robot, allowing it to display emotions and expressions on its face through animations that correspond to the emotional content and tone of the speech being delivered.
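A minimal sketch of the audio branch of this pipeline, assuming synthetic waveforms: it extracts two simple acoustic features and maps them to a coarse arousal label, with a hand-set threshold rule standing in for the trained models cited above.

```python
# Toy illustration of audio-based affect detection: extract simple acoustic
# features (RMS energy, zero-crossing rate) from a waveform, then map them to
# a coarse arousal label. Real systems in [89,90,91,92] replace the threshold
# rule with a trained classifier over much richer feature sets.
import numpy as np

def acoustic_features(signal):
    """Return (RMS energy, zero-crossing rate) for a mono waveform."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    return rms, zcr

def arousal_label(signal, rms_thresh=0.3, zcr_thresh=0.1):
    """Heuristic stand-in for a learned model: loud, rapidly varying speech -> 'high'."""
    rms, zcr = acoustic_features(signal)
    return "high" if rms > rms_thresh and zcr > zcr_thresh else "low"

t = np.linspace(0, 1, 8000, endpoint=False)
calm    = 0.1 * np.sin(2 * np.pi * 120 * t)   # quiet, low-pitch tone
excited = 0.8 * np.sin(2 * np.pi * 600 * t)   # loud, high-pitch tone

print(arousal_label(calm), arousal_label(excited))
```

In a SAR, such an arousal estimate, fused with the visual channel, is what lets the robot choose a matching facial animation or soften its next prompt.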
- The principle of Biofeedback algorithms for affective computing involves monitoring and analyzing physiological signals to infer and understand emotional states. These algorithms use techniques such as signal processing and machine learning to identify patterns and correlations associated with specific emotions, interpreting physiological responses such as heart rate, muscle tension, pupil dilation and eye movements, skin conductance, or sweating. Children and adolescents with CD, especially ASD, frequently have difficulties with social skills, such as communicating wants and needs and maintaining eye contact. Biofeedback is a technique often recommended for those struggling with anxiety, anger, or stress. Some authors have explored various machine learning methods and types of data collected through wearable sensors worn by children diagnosed with ASD [93,94]. Others have designed a prototype to reinforce the mental skills of children with ASD through neurofeedback, using EEG data and a small humanoid robot to stimulate attention towards a joint activity [95]. The results encourage the development of more precise measures of attention that combine EEG data and behavioral information. In addition, scientists have worked on EEG measures suitable for robotic neurofeedback systems that can detect and intervene in cases of attention breakdown [96]. In [97], the children’s gaze towards the robot and the other individuals present was observed; the analysis revealed that the children’s attention and engagement towards their parents increased. Eye tracking can help in understanding and quantifying where children direct their visual attention during therapy sessions, their engagement levels, and their emotional responses to different stimuli, such as the robot, other humans, and logopedic materials. Via this feedback, SARs can assess the child’s affect during SLT in real time and personalize the interactive scenarios.
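The biofeedback loop described above can be sketched as follows, with simulated heart-rate data: a sliding window flags sustained elevation above a personal baseline, which is the kind of signal a SAR could use to adapt the scenario. The thresholds and window length are illustrative only, not clinically validated.

```python
# Minimal sketch of a biofeedback monitor: flag sliding windows in which the
# mean heart rate (BPM) stays well above the child's baseline, so the robot
# could slow down or switch activities. All numbers are illustrative.
import numpy as np

def stress_flags(heart_rate, baseline, window=5, delta=15.0):
    """Flag each window whose mean heart rate exceeds baseline by `delta` BPM."""
    hr = np.asarray(heart_rate, dtype=float)
    flags = []
    for start in range(len(hr) - window + 1):
        flags.append(bool(hr[start:start + window].mean() > baseline + delta))
    return flags

# Simulated session: calm start, a sustained spike during a hard task, recovery.
session = [82, 84, 83, 85, 84, 101, 104, 106, 103, 105, 86, 84]
print(stress_flags(session, baseline=84))
```

Windowed averaging is deliberate here: it ignores single-beat spikes and reacts only to sustained arousal, which is what should trigger a change in the interactive scenario.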
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
AAC | Augmentative and Alternative Communication |
ABA | Applied Behavior Analysis |
ADD | Attention Deficit Disorder |
ADHD | Attention deficit and hyperactivity disorder |
AI | Artificial Intelligence |
AR | Augmented Reality |
ASD | Autism spectrum disorder |
AT | Assistive Technologies |
CD | Communication Disorders |
DAF | Delayed auditory feedback |
DMDD | Disruptive mood dysregulation disorder |
DSM-V | Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition |
IEEE | Institute of Electrical and Electronics Engineers |
ICT | Information and Communications Technology |
HRI | Human–Robot Interaction |
MDPI | Multidisciplinary Digital Publishing Institute |
MR | Mixed reality |
ODD | Oppositional defiant disorder |
RAAT | Robot-Assisted Autism Therapy |
RJA | Responding to Joint Attention |
SAR | Socially Assistive Robots |
SLI | Specific Language Impairment |
SLT | Speech and Language Therapy |
VR | Virtual reality |
References
- Fogle, P.T. Essentials of Communication Sciences & Disorders; Jones & Bartlett Learning: Burlington, MA, USA, 2022; p. 8. [Google Scholar]
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th ed.; American Psychiatric Association: Arlington, VA, USA, 2013; Available online: https://dsm.psychiatryonline.org/doi/book/10.1176/appi.books (accessed on 20 April 2023).
- Besio, S.; Bulgarelli, D.; Stancheva-Popkostadinova, V. (Eds.) Play Development in Children with Disabilities; De Gruyter: Berlin, Germany, 2017. [Google Scholar]
- United Nations. Convention on the Rights of Persons with Disabilities; United Nations: New York, NY, USA, 2007; Available online: https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-withdisabilities.html (accessed on 16 April 2023).
- World Health Organisation. The Global Strategy for Women’s, Children’s and Adolescents’ Health, 2016–2030; World Health Organization: Geneva, Switzerland, 2015; Available online: https://www.who.int/life-course/partners/global-strategy/globalstrategy-2016-2030/en/ (accessed on 16 April 2023).
- Gibson, J.L.; Pritchard, E.; de Lemos, C. Play-based interventions to support social and communication development in autistic children aged 2–8 years: A scoping review. Autism Dev. Lang. Impair. 2021, 6, 1–30. [Google Scholar] [CrossRef] [PubMed]
- Baker, F.S. Engaging in play through assistive technology: Closing gaps in research and practice for infants and toddlers with disabilities. In Assistive Technology Research, Practice, and Theory; IGI Global: Hershey, PA, USA, 2014; pp. 207–221. [Google Scholar] [CrossRef] [Green Version]
- Francis, G.; Deniz, E.; Torgerson, C.; Toseeb, U. Play-based interventions for mental health: A systematic review and meta-analysis focused on children and adolescents with autism spectrum disorder and developmental language disorder. Autism Dev. Lang. Impair. 2022, 7, 1–44. [Google Scholar] [CrossRef] [PubMed]
- Papakostas, G.A.; Sidiropoulos, G.K.; Papadopoulou, C.I.; Vrochidou, E.; Kaburlasos, V.G.; Papadopoulou, M.T.; Holeva, V.; Nikopoulou, V.-A.; Dalivigkas, N. Social Robots in Special Education: A Systematic Review. Electronics 2021, 10, 1398. [Google Scholar] [CrossRef]
- Mahdi, H.; Akgun, S.A.; Salen, S.; Dautenhahn, K. A survey on the design and evolution of social robots—Past, present and future. Robot. Auton. Syst. 2022, 156, 104193. [Google Scholar] [CrossRef]
- World Health Organization; United Nations Children’s Fund (UNICEF). Global Report on Assistive Technology; World Health Organization: Geneva, Switzerland, 2022; Available online: https://www.unicef.org/reports/global-report-assistive-technology (accessed on 16 April 2023).
- Papadopoulou, M.T.; Karageorgiou, E.; Kechayas, P.; Geronikola, N.; Lytridis, C.; Bazinas, C.; Kourampa, E.; Avramidou, E.; Kaburlasos, V.G.; Evangeliou, A.E. Efficacy of a Robot-Assisted Intervention in Improving Learning Performance of Elementary School Children with Specific Learning Disorders. Children 2022, 9, 1155. [Google Scholar] [CrossRef]
- Robins, B.; Dautenhahn, K.; Ferrari, E.; Kronreif, G.; Prazak-Aram, B.; Marti, P.; Laudanna, E. Scenarios of robot-assisted play for children with cognitive and physical disabilities. Interact. Stud. 2012, 13, 189–234. [Google Scholar] [CrossRef] [Green Version]
- Estévez, D.; Terrón-López, M.-J.; Velasco-Quintana, P.J.; Rodríguez-Jiménez, R.-M.; Álvarez-Manzano, V. A Case Study of a Robot-Assisted Speech Therapy for Children with Language Disorders. Sustainability 2021, 13, 2771. [Google Scholar] [CrossRef]
- Ioannou, A.; Andreva, A. Play and Learn with an Intelligent Robot: Enhancing the Therapy of Hearing-Impaired Children. In Proceedings of the IFIP Conference on Human-Computer Interaction—INTERACT 2019. INTERACT 2019. Lecture Notes in Computer Science, Paphos, Cyprus, 2–6 September 2019; Springer: Cham, Switzerland, 2019; Volume 11747. [Google Scholar] [CrossRef]
- Hawon, L.; Hyun, E. The Intelligent Robot Contents for Children with Speech-Language Disorder. J. Educ. Technol. Soc. 2015, 18, 100–113. Available online: http://www.jstor.org/stable/jeductechsoci.18.3.100 (accessed on 16 April 2023).
- Lekova, A.; Andreeva, A.; Simonska, M.; Tanev, T.; Kostova, S. A system for speech and language therapy with a potential to work in the IoT. In Proceedings of the CompSysTech ‘22: International Conference on Computer Systems and Technologies 2022, Ruse, Bulgaria, 17–18 June 2022; pp. 119–124. [Google Scholar] [CrossRef]
- QTrobot for Education of Children with Autism and Other Special Needs. Available online: https://luxai.com/assistive-tech-robot-for-special-needs-education/ (accessed on 16 April 2023).
- Vukliš, D.; Krasnik, R.; Mikov, A.; Zvekić Svorcan, J.; Janković, T.; Kovačević, M. Parental Attitudes Towards The Use Of Humanoid Robots In Pediatric (Re)Habilitation. Med. Pregl. 2019, 72, 302–306. [Google Scholar] [CrossRef] [Green Version]
- Szymona, B.; Maciejewski, M.; Karpiński, R.; Jonak, K.; Radzikowska-Büchner, E.; Niderla, K.; Prokopiak, A. Robot-Assisted Autism Therapy (RAAT). Criteria and Types of Experiments Using Anthropomorphic and Zoomorphic Robots. Review of the Research. Sensors 2021, 21, 3720. [Google Scholar] [CrossRef]
- Nicolae, G.; Vlădeanu, G.; Saru, L.M.; Burileanu, C.; Grozăvescu, R.; Craciun, G.; Drugă, S.; Hățiș, M. Programming the Nao Humanoid Robot for Behavioral Therapy in Romania. Rom. J. Child Adolesc. Psychiatry 2019, 7, 23–30. [Google Scholar]
- Gupta, G.; Chandra, S.; Dautenhahn, K.; Loucks, T. Stuttering Treatment Approaches from the Past Two Decades: Comprehensive Survey and Review. J. Stud. Res. 2022, 11, 1–24. [Google Scholar] [CrossRef]
- Chandra, S.; Gupta, G.; Loucks, T.; Dautenhahn, K. Opportunities for social robots in the stuttering clinic: A review and proposed scenarios. Paladyn J. Behav. Robot. 2022, 13, 23–44. [Google Scholar] [CrossRef]
- Bonarini, A.; Clasadonte, F.; Garzotto, F.; Gelsomini, M.; Romero, M. Playful interaction with Teo, a Mobile Robot for Children with Neurodevelopmental Disorders. DSAI 2016. In Proceedings of the 7th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion, Portugal, 1–3 December 2016; pp. 223–231. [Google Scholar] [CrossRef] [Green Version]
- Kose, H.; Yorganci, R. Tale of a robot: Humanoid Robot Assisted Sign Language Tutoring. In Proceedings of the 2011 11th IEEE-RAS International Conference on Humanoid Robots, Bled, Slovenia, 26–28 October 2011; pp. 105–111. [Google Scholar]
- Robles-Bykbaev, V.; López-Nores, M.; Pazos-Arias, J.; Quisi-Peralta, D.; García-Duque, J. An Ecosystem of Intelligent ICT Tools for Speech-Language Therapy Based on a Formal Knowledge Model. Stud. Health Technol. Inform. 2015, 216, 50–54. [Google Scholar]
- Fosch-Villaronga, E.; Millard, C. Cloud Robotics Law and Regulation, Challenges in the Governance of Complex and Dynamic Cyber-Physical Ecosystems. Robot. Auton. Syst. 2019, 119, 77–91. [Google Scholar] [CrossRef]
- Samaddar, S.; Desideri, L.; Encarnação, P.; Gollasch, D.; Petrie, H.; Weber, G. Robotic and Virtual Reality Technologies for Children with Disabilities and Older Adults. In Computers Helping People with Special Needs. ICCHP-AAATE 2022. Lecture Notes in Computer Science; Miesenberger, K., Kouroupetroglou, G., Mavrou, K., Manduchi, R., Covarrubias Rodriguez, M., Penáz, P., Eds.; Springer: Cham, Switzerland, 2022; Volume 13342. [Google Scholar] [CrossRef]
- da Silva, C.A.; Fernandes, A.R.; Grohmann, A.P. STAR: Speech Therapy with Augmented Reality for Children with Autism Spectrum Disorders. In Enterprise Information Systems. ICEIS 2014. Lecture Notes in Business Information Processing; Cordeiro, J., Hammoudi, S., Maciaszek, L., Camp, O., Filipe, J., Eds.; Springer: Cham, Switzerland, 2015; Volume 227. [Google Scholar] [CrossRef]
- Lorenzo, G.; Lledó, A.; Pomares, J.; Roig, R. Design and application of an immersive virtual reality system to enhance emotional skills for children with autism spectrum disorders. Comput. Educ. 2016, 98, 192–205. [Google Scholar] [CrossRef] [Green Version]
- Kotsopoulos, K.I.; Katsounas, M.G.; Sofios, A.; Skaloumbakas, C.; Papadopoulos, A.; Kanelopoulos, A. VRESS: Designing a platform for the development of personalized Virtual Reality scenarios to support individuals with Autism. In Proceedings of the 2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA), Chania, Greece, 12–14 July 2021; pp. 1–4. [Google Scholar] [CrossRef]
- Furhat and Social Robots in Rehabilitation. Available online: https://furhatrobotics.com/habilitation-concept/ (accessed on 16 April 2023).
- Charron, N.; Lindley-Soucy, E.D.K.; Lewis, L.; Craig, M. Robot therapy: Promoting Communication Skills for Students with Autism Spectrum Disorders. New Hampshire J. Edu. 2019, 21, 10983. [Google Scholar]
- Silvera-Tawil, D.; Bradford, D.; Roberts-Yates, C. Talk to me: The role of human–robot interaction in improving verbal communication skills in students with autism or intellectual disability. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 1–6. [Google Scholar] [CrossRef]
- Syrdal, D.S.; Dautenhahn, K.; Robins, B.; Karakosta, E.; Jones, N.C. Kaspar in the wild: Experiences from deploying a small humanoid robot in a nursery school for children with autism. Paladyn J. Behav. Robot. 2020, 11, 301–326. [Google Scholar] [CrossRef]
- Robles-Bykbaev, V.; Ochoa-Guaraca, M.; Carpio-Moreta, M.; Pulla-Sánchez, D.; Serpa-Andrade, L.; López-Nores, M.; García-Duque, J. Robotic assistant for support in speech therapy for children with cerebral palsy. In Proceedings of the 2016 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), Ixtapa, Mexico, 9–11 November 2016; pp. 1–6. [Google Scholar] [CrossRef]
- Pereira, J.; de Melo, M.; Franco, N.; Rodrigues, F.; Coelho, A.; Fidalgo, R. Using assistive robotics for aphasia rehabilitation. In Proceedings of the 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE), Rio Grande, Brazil, 23–25 October 2019; pp. 387–392. [Google Scholar] [CrossRef]
- Castillo, J.C.; Alvarez-Fernandez, D.; Alonso-Martin, F.; Marques-Villarroya, S.; Salichs, M.A. Social robotics in therapy of apraxia of speech. J. Healthcare Eng. 2018, 2018, 11. [Google Scholar] [CrossRef] [Green Version]
- Kwaśniewicz, Ł.; Kuniszyk-Jóźkowiak, W.; Wójcik, G.M.; Masiak, J. Adaptation of the humanoid robot to speech disfluency therapy. Bio-Algorithms Med-Syst. 2016, 12, 169–177. [Google Scholar] [CrossRef]
- Charron, N.; Lewis, L.; Craig, M. A Robotic Therapy Case Study: Developing Joint Attention Skills With a Student on the Autism Spectrum. J. Educ. Technol. Syst. 2017, 46, 137–148. [Google Scholar] [CrossRef]
- Kajopoulos, J.; Wong, A.H.Y.; Yuen, A.W.C.; Dung, T.A.; Kee, T.Y.; Wykowska, A. Robot-Assisted Training of Joint Attention Skills in Children Diagnosed with Autism. In Social Robotics. ICSR 2015. Lecture Notes in Computer Science; Tapus, A., André, E., Martin, J.C., Ferland, F., Ammi, M., Eds.; Springer: Cham, Switzerland, 2015; Volume 9388. [Google Scholar] [CrossRef]
- Taheri, A.; Meghdari, A.; Alemi, M.; Pouretemad, H.; Poorgoldooz, P.; Roohbakhsh, M. Social Robots and Teaching Music to Autistic Children: Myth or Reality? In Social Robotics. ICSR 2016. Lecture Notes in Computer Science; Agah, A., Cabibihan, J.J., Howard, A., Salichs, M., He, H., Eds.; Springer: Cham, Switzerland, 2016; Volume 9979. [Google Scholar] [CrossRef]
- Tariq, S.; Baber, S.; Ashfaq, A.; Ayaz, Y.; Naveed, M.; Mohsin, S. Interactive Therapy Approach Through Collaborative Physical Play Between a Socially Assistive Humanoid Robot and Children with Autism Spectrum Disorder. In Social Robotics. ICSR 2016. Lecture Notes in Computer Science; Agah, A., Cabibihan, J.J., Howard, A., Salichs, M., He, H., Eds.; Springer: Cham, Switzerland, 2016; Volume 9979. [Google Scholar] [CrossRef]
- Egido-García, V.; Estévez, D.; Corrales-Paredes, A.; Terrón-López, M.-J.; Velasco-Quintana, P.-J. Integration of a Social Robot in a Pedagogical and Logopedic Intervention with Children: A Case Study. Sensors 2020, 20, 6483. [Google Scholar] [CrossRef]
- Jeon, K.H.; Yeon, S.J.; Kim, Y.T.; Song, S.; Kim, J. Robot-based augmentative and alternative communication for nonverbal children with communication disorders. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ‘14), Seattle, Washington, USA, 13–17 September 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 853–859. [Google Scholar] [CrossRef]
- Spitale, M.; Silleresi, S.; Leonardi, G.; Arosio, F.; Giustolisi, B.; Guasti, M.T.; Garzotto, F. Design Patterns of Technology-based Therapeutic Activities for Children with Language Impairments: A Psycholinguistic-Driven Approach. In Proceedings of the CHI EA ‘21: Extended Abstracts of the 2021 CHI Virtual Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–7. [Google Scholar] [CrossRef]
- Panceri, J.A.C.; Freitas, É.; de Souza, J.C.; da Luz Schreider, S.; Caldeira, E.; Bastos, T.F. A New Socially Assistive Robot with Integrated Serious Games for Therapies with Children with Autism Spectrum Disorder and Down Syndrome: A Pilot Study. Sensors 2021, 21, 8414. [Google Scholar] [CrossRef]
- Robles-Bykbaev, V.E.; Lopez-Nores, M.; Pazos-Arias, J.J.; Garcia-Duque, J. RAMSES: A robotic assistant and a mobile support environment for speech and language therapy. In Proceedings of the Fifth International Conference on the Innovative Computing Technology (INTECH 2015), Galcia, Spain, 20–22 May 2015; pp. 1–4. [Google Scholar] [CrossRef]
- Ochoa-Guaraca, M.; Carpio-Moreta, M.; Serpa-Andrade, L.; Robles-Bykbaev, V.; Lopez-Nores, M.; Duque, J.G. A robotic assistant to support the development of communication skills of children with disabilities. In Proceedings of the 2016 IEEE 11th Colombian Computing Conference (CCC), Popayan, Colombia, 27–30 September 2016; pp. 1–8. [Google Scholar] [CrossRef]
- Velásquez-Angamarca, V.; Mosquera-Cordero, K.; Robles-Bykbaev, V.; León-Pesántez, A.; Krupke, D.; Knox, J.; Torres-Segarra, V.; Chicaiza-Juela, P. An Educational Robotic Assistant for Supporting Therapy Sessions of Children with Communication Disorders. In Proceedings of the 2019 7th International Engineering, Sciences and Technology Conference (IESTEC), Panama, Panama, 9–11 October 2019; pp. 586–591. [Google Scholar] [CrossRef]
- Horstmann, A.C.; Mühl, L.; Köppen, L.; Lindhaus, M.; Storch, D.; Bühren, M.; Röttgers, H.R.; Krajewski, J. Important Preliminary Insights for Designing Successful Communication between a Robotic Learning Assistant and Children with Autism Spectrum Disorder in Germany. Robotics 2022, 11, 141. [Google Scholar] [CrossRef]
- Farhan, S.A.; Rahman Khan, M.N.; Swaron, M.R.; Saha Shukhon, R.N.; Islam, M.M.; Razzak, M.A. Improvement of Verbal and Non-Verbal Communication Skills of Children with Autism Spectrum Disorder using Human Robot Interaction. In Proceedings of the 2021 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 10–13 May 2021; pp. 356–359. [Google Scholar] [CrossRef]
- van den Berk-Smeekens, I.; van Dongen-Boomsma, M.; De Korte, M.W.P.; Boer, J.C.D.; Oosterling, I.J.; Peters-Scheffer, N.C.; Buitelaar, J.K.; Barakova, E.I.; Lourens, T.; Staal, W.G.; et al. Adherence and acceptability of a robot-assisted Pivotal Response Treatment protocol for children with autism spectrum disorder. Sci. Rep. 2020, 10, 8110. [Google Scholar] [CrossRef] [PubMed]
- Lekova, A.; Kostadinova, A.; Tsvetkova, P.; Tanev, T. Robot-assisted psychosocial techniques for language learning by hearing-impaired children. Int. J. Inf. Technol. Secur. 2021, 13, 63–76. [Google Scholar]
- Simut, R.E.; Vanderfaeillie, J.; Peca, A.; Van de Perre, G.; Vanderborght, B. Children with Autism Spectrum Disorders Make a Fruit Salad with Probo, the Social Robot: An Interaction Study. J. Autism. Dev. Disord. 2016, 46, 113–126. [Google Scholar] [CrossRef] [PubMed]
- Polycarpou, P.; Andreeva, A.; Ioannou, A.; Zaphiris, P. Don’t Read My Lips: Assessing Listening and Speaking Skills Through Play with a Humanoid Robot. In HCI International 2016—Posters’ Extended Abstracts. HCI 2016. Communications in Computer and Information Science; Stephanidis, C., Ed.; Springer: Cham, Switzerland, 2016; Volume 618. [Google Scholar] [CrossRef]
- Lewis, L.; Charron, N.; Clamp, C.; Craig, M. Co-robot therapy to foster social skills in special need learners: Three pilot studies. In Methodologies and Intelligent Systems for Technology Enhanced Learning: 6th International Conference; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 131–139. [Google Scholar]
- Akalin, N.; Uluer, P.; Kose, H. Non-verbal communication with a social robot peer: Towards robot assisted interactive sign language tutoring. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 1122–1127. [Google Scholar] [CrossRef]
- Özkul, A.; Köse, H.; Yorganci, R.; Ince, G. Robostar: An interaction game with humanoid robots for learning sign language. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014), Bali, Indonesia, 5–10 December 2014; pp. 522–527. [Google Scholar] [CrossRef]
- Andreeva, A.; Lekova, A.; Simonska, M.; Tanev, T. Parents’ Evaluation of Interaction Between Robots and Children with Neurodevelopmental Disorders. In Smart Education and e-Learning—Smart Pedagogy. SEEL-22 2022. Smart Innovation, Systems and Technologies; Uskov, V.L., Howlett, R.J., Jain, L.C., Eds.; Springer: Singapore, 2022; Volume 305. [Google Scholar] [CrossRef]
- Esfandbod, A.; Rokhi, Z.; Meghdari, A.F.; Taheri, A.; Alemi, M.; Karimi, M. Utilizing an Emotional Robot Capable of Lip-Syncing in Robot-Assisted Speech Therapy Sessions for Children with Language Disorders. Int. J. Soc. Robot. 2023, 15, 165–183. [Google Scholar] [CrossRef]
- Boccanfuso, L.; Scarborough, S.; Abramson, R.K.; Hall, A.V.; Wright, H.H.; O’kane, J.M. A low-cost socially assistive robot and robot-assisted intervention for children with autism spectrum disorder: Field trials and lessons learned. Auton Robot 2016, 41, 637–655. [Google Scholar] [CrossRef]
- Alabdulkareem, A.; Alhakbani, N.; Al-Nafjan, A. A Systematic Review of Research on Robot-Assisted Therapy for Children with Autism. Sensors 2022, 22, 944. [Google Scholar] [CrossRef]
- Fisicaro, D.; Pozzi, F.; Gelsomini, M.; Garzotto, F. Engaging Persons with Neuro-Developmental Disorder with a Plush Social Robot. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 610–611. [Google Scholar] [CrossRef]
- Cifuentes, C.; Pinto, M.J.; Céspedes, N.; Múnera, M. Social Robots in Therapy and Care. Curr. Robot. Rep. 2020, 1, 59–74. [Google Scholar] [CrossRef]
- Available online: https://furhatrobotics.com/furhat-robot/ (accessed on 16 April 2023).
- Integrating Furhat with OpenAI. Available online: https://docs.furhat.io/tutorials/openai/ (accessed on 16 April 2023).
- Elfaki, A.O.; Abduljabbar, M.; Ali, L.; Alnajjar, F.; Mehiar, D.; Marei, A.M.; Alhmiedat, T.; Al-Jumaily, A. Revolutionizing Social Robotics: A Cloud-Based Framework for Enhancing the Intelligence and Autonomy of Social Robots. Robotics 2023, 12, 48. [Google Scholar] [CrossRef]
- Lekova, A.; Tsvetkova, P.; Andreeva, A. System software architecture for enhancing human-robot interaction by Conversational AI. In Proceedings of the 2023 International Conference on Information Technologies (InfoTech-2023), Bulgaria, 20–21 September 2023; in press. [Google Scholar]
- Dino, F.; Zandie, R.; Abdollahi, H.; Schoeder, S.; Mahoor, M.H. Delivering Cognitive Behavioral Therapy Using A Conversational Social Robot. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2089–2095. [Google Scholar] [CrossRef] [Green Version]
- Grassi, L.; Recchiuto, C.T.; Sgorbissa, A. Sustainable Cloud Services for Verbal Interaction with Embodied Agents. Intell. Serv. Robot. 2023; in press. [Google Scholar]
- Available online: https://furhatrobotics.com/blog/5-ways-social-robots-are-innovating-education/ (accessed on 16 April 2023).
- Elgarf, M.; Skantze, G.; Peters, C. Once upon a story: Can a creative storyteller robot stimulate creativity in children? In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, Fukuchiyama, Japan, 14–17 September 2021; pp. 60–67. [Google Scholar]
- Liu, Y.; Gao, T.; Song, B.; Huang, C. Personalized Recommender System for Children’s Book Recommendation with A Realtime Interactive Robot. J. Data Sci. Intell. Syst. 2017. [Google Scholar] [CrossRef]
- Available online: furhatrobotics.com/Furhat-robot (accessed on 16 April 2023).
- AskNAO Tablet. Available online: https://www.asknao-tablet.com/en/home/ (accessed on 16 April 2023).
- Available online: https://furhatrobotics.com (accessed on 16 April 2023).
- Baird, A.; Amiriparian, S.; Cummins, N.; Alcorn, A.M.; Batliner, A.; Pugachevskiy, S.; Freitag, M.; Gerczuk, M.; Schuller, B. Automatic classification of autistic child vocalisations: A novel database and results. Proc. Interspeech 2017, 849–853. [Google Scholar] [CrossRef]
- Shahab, M.; Taheri, A.; Hosseini, S.R.; Mokhtari, M.; Meghdari, A.; Alemi, M.; Pouretemad, H.; Shariati, A.; Pour, A.G. Social Virtual Reality Robot (V2R): A Novel Concept for Education and Rehabilitation of Children with Autism. In Proceedings of the 2017 5th RSI International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 25–27 October 2017; pp. 82–87. [Google Scholar] [CrossRef]
- Marušić, P.; Krhen, A.L. Virtual reality as a therapy for stuttering. Croat. Rev. Rehabil. Res. 2022, 58. [Google Scholar] [CrossRef]
- Jingying, C.; Hu, J.; Zhang, K.; Zeng, X.; Ma, Y.; Lu, W.; Zhang, K.; Wang, G. Virtual reality enhances the social skills of children with autism spectrum disorder: A review. Interact. Learn. Environ. 2022, 1–22. [Google Scholar] [CrossRef]
- Lee, S.A.S. Virtual Speech-Language Therapy for Individuals with Communication Disorders: Current Evidence, Limitations, and Benefits. Curr. Dev. Disord. Rep. 2019, 6, 119–125. [Google Scholar] [CrossRef]
- Bailey, B.; Bryant, L.; Hemsley, B. Virtual Reality and Augmented Reality for Children, Adolescents, and Adults with Communication Disability and Neurodevelopmental Disorders: A Systematic Review. Rev. J. Autism. Dev. Disord. 2022, 9, 160–183. [Google Scholar] [CrossRef]
- Halabi, O.; Abou El-Seoud, S.; Alja’am, J.; Alpona, H.; Al-Hemadi, M.; Al-Hassan, D. Design of Immersive Virtual Reality System to Improve Communication Skills in Individuals with Autism. Int. J. Emerg. Technol. Learn. (iJET) 2017, 12, 50–64. [Google Scholar] [CrossRef] [Green Version]
- Almurashi, H.; Bouaziz, R.; Alharthi, W.; Al-Sarem, M.; Hadwan, M.; Kammoun, S. Augmented Reality, Serious Games and Picture Exchange Communication System for People with ASD: Systematic Literature Review and Future Directions. Sensors 2022, 22, 1250. [Google Scholar] [CrossRef] [PubMed]
- Chai, J.; Zeng, H.; Li, A.; Ngai, E.W. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl. 2021, 6, 100134. [Google Scholar] [CrossRef]
- O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. In Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC); Springer International Publishing: Berlin/Heidelberg, Germany, 2020; Volume 1, pp. 128–144. [Google Scholar]
- Debnath, B.; O’brien, M.; Yamaguchi, M.; Behera, A. A review of computer vision-based approaches for physical rehabilitation and assessment. Multimed. Syst. 2022, 28, 209–239. [Google Scholar] [CrossRef]
- Aouani, H.; Ayed, Y.B. Speech emotion recognition with deep learning. Procedia Comput. Sci. 2020, 176, 251–260. [Google Scholar] [CrossRef]
- Sekkate, S.; Khalil, M.; Adib, A. A statistical feature extraction for deep speech emotion recognition in a bi-lingual scenario. Multimed. Tools Appl. 2023, 82, 11443–11460. [Google Scholar] [CrossRef]
- Samyak, S.; Gupta, A.; Raj, T.; Karnam, A.; Mamatha, H.R. Speech Emotion Analyzer. In Innovative Data Communication Technologies and Application: Proceedings of ICIDCA 2021; Springer Nature: Singapore, 2022; pp. 113–124. [Google Scholar]
- Zou, C.; Huang, C.; Han, D.; Zhao, L. Detecting Practical Speech Emotion in a Cognitive Task. In Proceedings of the 20th International Conference on Computer Communications and Networks (ICCCN), Lahaina, HI, USA, 31 July–4 August 2011; pp. 1–5. [Google Scholar] [CrossRef]
- Fioriello, F.; Maugeri, A.; D’alvia, L.; Pittella, E.; Piuzzi, E.; Rizzuto, E.; Del Prete, Z.; Manti, F.; Sogos, C. A wearable heart rate measurement device for children with autism spectrum disorder. Sci Rep. 2020, 10, 18659. [Google Scholar] [CrossRef]
- Alban, A.Q.; Alhaddad, A.Y.; Al-Ali, A.; So, W.-C.; Connor, O.; Ayesh, M.; Qidwai, U.A.; Cabibihan, J.-J. Heart Rate as a Predictor of Challenging Behaviours among Children with Autism from Wearable Sensors in Social Robot Interactions. Robotics 2023, 12, 55. [Google Scholar] [CrossRef]
- Anzalone, S.M.; Tanet, A.; Pallanca, O.; Cohen, D.; Chetouani, M. A Humanoid Robot Controlled by Neurofeedback to Reinforce Attention in Autism Spectrum Disorder. In Proceedings of the 3rd Italian Workshop on Artificial Intelligence and Robotics, Genova, Italy, 28 November 2016. [Google Scholar]
- Nahaltahmasebi, P.; Chetouani, M.; Cohen, D.; Anzalone, S.M. Detecting Attention Breakdowns in Robotic Neurofeedback Systems. In Proceedings of the 4th Italian Workshop on Artificial Intelligence and Robotics, Bari, Italy, 14–15 November 2017. [Google Scholar]
- Van Otterdijk, M.T.H.; de Korte, M.W.P.; van den Berk-Smeekens, I.; Hendrix, J.; van Dongen-Boomsma, M.; den Boer, J.C.; Buitelaar, J.K.; Lourens, T.; Glennon, J.C.; Staal, W.G.; et al. The effects of long-term child–robot interaction on the attention and the engagement of children with autism. Robotics 2020, 9, 79. [Google Scholar] [CrossRef]
| Communication Objectives | References | Number of Articles |
|---|---|---|
| Joint attention | [35,41,45,47,57,62] | 6 |
| Turn-taking | [14,16,35,41,43,45,57,62] | 8 |
| Imitation/repetition | [35,41,43,47,48,52,57,61,62] | 9 |
| Sign recognition/understanding/receptive vocabulary | [54,56,58,59] | 4 |
| Sign production/expressive vocabulary | [58] | 1 |
| Sound and speech sound recognition | [14,15,17,42,47,48,56,60,61] | 9 |
| Speech recognition/receptive vocabulary | [14,15,33,35,44,45,47,48,55,56,57,61,62] | 13 |
| Speech production/expressive vocabulary | [14,17,35,36,41,44,45,48,49,54,56,60,61] | 13 |
| Morphosyntax skills | [12,14,33,36,44,46,48,49,52,61] | 10 |
| Functional communication/maintain conversation/pragmatic skills | [14,16,36,44,48,49,52,57,60,61] | 10 |
| Socially Assistive Robot | Articles |
|---|---|
| NAO | 27 |
| Custom made 3D-printed/Arduino-based robot (low cost) | 8 |
| Probo (low cost) | 3 |
| Robovie R3 (high cost) | 2 |
| iRobi (high cost) | 2 |
| Cozmo (low cost) | 1 |
| SPELTRA (low cost) | 1 |
| CuDDler (low cost) | 1 |
| CASPER | 1 |
| RASA | 1 |
| MARIA T21 | 1 |
| QTrobot (high cost) | 1 |
| Pepper (high cost) | 1 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Georgieva-Tsaneva, G.; Andreeva, A.; Tsvetkova, P.; Lekova, A.; Simonska, M.; Stancheva-Popkostadinova, V.; Dimitrov, G.; Rasheva-Yordanova, K.; Kostadinova, I. Exploring the Potential of Social Robots for Speech and Language Therapy: A Review and Analysis of Interactive Scenarios. Machines 2023, 11, 693. https://doi.org/10.3390/machines11070693