Review

A Systematic Review of Adaptivity in Human-Robot Interaction

1 THE MARCS Institute, Western Sydney University, Sydney, NSW 2751, Australia
2 School of Computing Engineering and Mathematics, Western Sydney University, Sydney, NSW 2751, Australia
3 School of Education, Western Sydney University, Sydney, NSW 2751, Australia
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2017, 1(3), 14; https://doi.org/10.3390/mti1030014
Submission received: 21 April 2017 / Revised: 14 July 2017 / Accepted: 14 July 2017 / Published: 20 July 2017

Abstract

As the field of social robotics grows, a consensus has emerged on the design and implementation of robotic systems that are capable of adapting based on user actions. These adaptations may be based on the user's emotions, personality or memory of past interactions. We therefore believe it is important to review past research on the use of adaptive robots in various social environments. In this paper, we present a systematic review of the adaptive interactions reported across a number of domain areas in Human-Robot Interaction and also give future directions that can guide the design of future adaptive social robots. We conjecture that this will help towards achieving long-term applicability of robots in various social domains.

1. Introduction

The significance of designing adaptive user interfaces is well established in the field of Human-Computer Interaction (HCI) [1,2]. An Adaptive User Interface (AUI) can be defined as a program that monitors user interactions, analyses them to determine usage patterns, stores these patterns and, based on them, presents a personalised interface to the user [1]. Applications of AUIs can be found in HCI where interfaces adapt themselves based on context [3], personalised learning [1], and user activity within the interface [4].
Human-Robot Interaction (HRI) and Social Robotics (SR) are sub-branches of HCI that revolve around designing, implementing and evaluating robotic systems in both controlled environments and real-world social settings. The need to implement Adaptive Robot Interfaces (ARIs) or Adaptive Social Robots (ASRs) has also been emphasised in the HRI literature [5,6]. Before an ASR or an ARI is defined, it is important to distinguish between an autonomous robot and an adaptive robot. A robot can be categorised as autonomous based on its level of automation as presented by [7]. An ASR, in contrast, is an autonomous or semi-autonomous robot (its speech may even be controlled by a human operator through a Wizard of Oz (WoZ) setup) that contains a decision-making engine capable of perceiving user information from the environment. The user information may include the user's profile, emotions, personality and past interactions, and based on this information the robot makes decisions [8]. An ASR has also been described as having one or more of the following adaptation capabilities in HRI: understanding and showing emotions, communicating with high-level dialogue, learning/adapting according to user responses, establishing a social relationship, reacting according to different social situations, and having varying social characteristics and roles [9].
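As a minimal illustration of the decision-making engine described above, the following Python sketch maps perceived user information to a behaviour. All field names and behaviours are hypothetical, not drawn from any cited system:

```python
# Hypothetical sketch of an ASR decision-making engine: it perceives user
# information (profile, emotion, memory of past interactions) and maps it
# to a behaviour. Names and rules are illustrative only.

def decide_behaviour(user):
    """Pick a robot behaviour from perceived user information."""
    if user.get("emotion") == "frustrated":
        return "encourage"
    if user.get("history"):                  # memory of past interactions
        return "personalised_greeting"
    return "neutral_greeting"

print(decide_behaviour({"emotion": "frustrated"}))  # encourage
```

A real ASR would replace the hand-written rules with perception modules (emotion recognition, user identification) feeding a learned or scripted policy.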
According to Tapus and colleagues [6], developing an adaptive robotic system in real-world settings is one of the biggest challenges in the field of social robotics. In the last decade, researchers have proposed various applications of adaptive social robotic systems that were capable of displaying one or more of the aforementioned abilities. However, implementing such robots remains an open issue due to technical challenges such as mapping the user's emotions and personality, and keeping track of the memory of previous interactions in a real-time environment [6].
To overcome the aforementioned challenges, a considerable amount of research is currently being conducted on applications of ASRs in various social domains. These domains include education [10], public places [5], domestic and work environments [5], and health care and therapy [11]. We also find a number of research studies on understanding how a user interacts with a robot, and on the effect of a robot's social behaviour [12], role [13], anthropomorphism, animacy, likeability, perceived intelligence and perceived safety on the user's perception [14] in various environments. Although the application of robots in various social domains is an evolving phenomenon [15] and recent work has produced positive findings, most of these interactions are one-off, and only a handful of applications have been integrated in a long-term or longitudinal setup. Researchers have placed weight on the implementation of adaptive social robots as they foresee such robots being applicable to long-term interaction with humans in various social settings [5,10]. Unfortunately, we also find less research on implementations of ASRs that can be utilised in real-time settings. Managing the long-term applicability of robots in various social scenarios therefore remains an open issue in HRI and SR [5,6].
Our research aims are to design, implement and evaluate an Adaptive Social Robot (ASR) in a real-life setting to address the issue of saturation during HRI. We believe that an adaptive robot will help overcome the problem of maintaining long-term social engagement and establishing a social relationship with humans [16,17]. Therefore, in order to understand the field of adaptive robots and what can be considered an adaptive feature or interaction, we took on the task of conducting a systematic review of the various types of adaptive robot interactions researched in the field of HRI. As a community, we find a range of studies where robots have been used in various interactions for various purposes; however, it is unclear what can be classified as adaptive behaviour in robots. Researchers have previously reported systematic reviews on the applicability of robots in education [10,18], on the long-term utilisation of robots in work environments and public places [5], in domestic settings [5], and in healthcare [11]. To the best of our knowledge, however, no review based on adaptive interactions in HRI exists.
Therefore, in this paper, we present a survey of the adaptive interactions displayed by different robots reported in the literature. We have limited the scope of our survey to autonomous or semi-autonomous social robots that display various adaptive characteristics, either individually or in combination, and that have been involved in field trials and user-based studies conducted in both controlled and real-world settings. In addition, we do not consider industrial robots as part of our sampling strategy. The contribution of our survey is two-fold: firstly, we discuss the different adaptation techniques and models employed by different social robots in different application environments; secondly, we identify the existing challenges, provide future directions and discuss open issues.

2. Methodology

We conducted an in-depth literature review of the applications of social robots reported in past research in two phases. Firstly, we performed an electronic search on digital platforms such as Google Scholar and Microsoft Academic. Prior work has established that the coverage and reach provided by Google Scholar is more extensive than that of other similar academic repositories [19]. We searched keywords that included 'Adaptive social robots', 'Autonomous Robots', 'Adaptation and Robot', 'Applications of Adaptive Robots' and 'Adaptive Robotics Systems'. In the second phase, we manually searched the archives of the top journals and premium conferences in Human-Robot Interaction and Social Robotics as ranked by Google Scholar [20] from the year 2006 to 2015. Our search retrieved articles from a number of venues, such as the International Conference on Human-Robot Interaction, the International Conference on Social Robotics, the International Conference on Humanoid Robots, and Ro-Man. We also found articles published in the International Journal of Human-Robot Interaction and the International Journal of Social Robotics.

2.1. Inclusion and Exclusion Criteria

Our criteria for selecting a research article were based on the definition of an adaptive robot discussed in the introduction. We did not consider articles reporting robots that did not possess any of the aforementioned adaptation capabilities [9]. We read each paper to understand the robot's capabilities before excluding it. We also excluded articles that did not incorporate a user study or a field trial, and we did not review studies that reported only qualitative assessments. Lastly, we did not review research on industrial and commercial robots. After applying all these criteria, we found 37 articles, which are reported in this survey.
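The screening step can be expressed as a simple filter; the field names and example records below are invented for illustration of the criteria, not part of our actual tooling:

```python
# Illustrative sketch of the screening criteria: keep only articles that
# report at least one adaptation capability, include a user study or field
# trial, are not qualitative-only, and do not concern industrial robots.
# Field names are hypothetical.

def passes_criteria(article):
    return (bool(article["adaptation_capabilities"])
            and article["has_user_study"]
            and not article["qualitative_only"]
            and not article["industrial"])

corpus = [
    {"adaptation_capabilities": ["emotion"], "has_user_study": True,
     "qualitative_only": False, "industrial": False},   # kept
    {"adaptation_capabilities": [], "has_user_study": True,
     "qualitative_only": False, "industrial": False},   # excluded: no adaptation
]
selected = [a for a in corpus if passes_criteria(a)]
print(len(selected))  # 1
```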

2.2. Coding Scheme

We decided to follow the organisation reported by [5] to review these research papers. We divided the papers based on their social application domain, reporting four different domains: (1) Healthcare and Therapy, (2) Education, (3) Public Domains and Work Environments, and (4) Homes. We also find an overlap between the healthcare and in-home social domains for some of the reported articles on adaptive social robots, for instance, where a robot has been used with the elderly for assistance at home or at a therapy centre. In such cases, we selected the domain where the study was conducted.
We reviewed the articles on robots applied in the various domains on the basis of the following criteria: the adaptation features (dialogue, emotion, personality, or memory) and the capabilities of the robot. We present our discussion of the existing adaptive strategies and the types of robots used in the research, along with a summary of the studies conducted with these robots. We later discuss future directions for the design and implementation of future ASRs and also discuss open issues.

3. State of the Art Social Adaptive Robots

The need for robots that can create a social relationship and maintain social engagement with humans during long-term interaction is a common issue in the field of SR [16,17]. An ASR is conjectured to be one of the solutions to this problem; however, we understand that before new approaches and mechanisms for implementing adaptive social robots are developed, it is pertinent to review the existing research on adaptive interactions in the fields of HRI and SR.

3.1. Health Care and Therapy Domain

Socially Assistive Robot (SAR) is a commonly used term, introduced by [6], in the fields of HRI and SR. It covers all types of robots that can be used to assist people with special needs. In the last decade, we find a number of applications of SARs where they have been used with children with autism, elderly people with Alzheimer's disease or dementia, and people with several other impairments. Here, we are particularly interested in reviewing applications where these SARs have been designed with a user-specific adaptation mechanism. In this section, we present our results on the existing user studies with these robots.
Francois et al., 2008 [21] presented a conceptual model for a robot capable of adapting its behaviour in real time based on the detected playing styles of an autistic individual. Interactions with the robot were classified into two classes, "gentle" and "strong", depending on the amount of force with which a participant touched the robot. An experimental study was conducted with 5 children to assess the effectiveness of the interaction styles by checking the criteria for both gentle and strong touches and measuring the number of interactions correctly recognised by an Aibo robot. Experiments with the conceptual model showed that the Aibo robot was able to classify the interaction in real time, and that it was able to adapt to the interaction by changing its own behaviour, thereby changing the interaction with the subject.
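The gentle/strong classification can be sketched as a force-threshold rule; the threshold value and the robot's responses below are our illustration of the idea, not the authors' implementation:

```python
# Minimal sketch (our reading, not the authors' code): classify a touch as
# "gentle" or "strong" from sensed force, and have the robot adapt its
# behaviour to the detected style. The threshold is invented.

FORCE_THRESHOLD = 0.5  # hypothetical normalised force units

def classify_touch(force):
    return "strong" if force >= FORCE_THRESHOLD else "gentle"

def adapt(style):
    # e.g. withdraw from strong play, approach during gentle play
    return "retreat" if style == "strong" else "approach"

style = classify_touch(0.8)
print(style, adapt(style))  # strong retreat
```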
Tapus et al., 2008 [22] designed a socially assistive adaptive robot capable of engaging post-stroke users in rehabilitation exercises. A behaviour adaptation system was designed for the ActivMedia Pioneer 2-DX mobile robot that enabled it to select a behaviour after taking in information about the user's personality and sensory data. The sensory data involved user navigation, user detection, and speech recognition: the robot detected speech from a microphone, and the human user's movement was captured through a motion capture device. An experimental study was conducted with 12 adults to test the system. It was one of the first experimental studies to show that personality adaptation by a social robot can have a positive impact on the user's task performance.
Robins et al., 2008 [23] tested the temporal behaviour-matching hypothesis, which predicts that a child will adapt to and match a robot's behaviour during child-robot interaction. Robins and colleagues used the KASPAR robot, controlled through Wizard of Oz, to play "Drumming Call and Response" and "Gesture Imitation" games with 18 children. During both game-playing sessions, they measured the effect of the robot's response to the child's behaviour, by controlling its timings and gestures, on the child's interaction with the robot. The results supported the temporal behaviour-matching hypothesis, as the child adapted to the robot's behaviour in both tested conditions: introducing timing delays between the child's and the robot's turn-taking activity, and having the robot display various non-verbal behaviours.
Tapus et al., 2009 [24] proposed an adaptive SAR able to provide assistance to people suffering from Alzheimer's disease. A novel adaptation mechanism was presented that enabled the robot to maximise the user's performance on a cognitive task. The robot was capable of praising or motivating the user based on the user's performance, which included reaction time and the total number of correct answers. It was also able to adapt its dialogue based on updates to the game difficulty levels. A within-subject experimental study was conducted with 9 participants over 6 months, in which each participant played the Song Discovery or Name That Tune game in the presence of the Torso robot and a music therapist. The results showed that the adaptation mechanism based on task performance was successful, as the participants recognised the songs with the same probability in both conditions. In addition, the robot was also able to encourage task performance and attention training.
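This kind of performance-based adaptation might be sketched as follows; the thresholds and the praise/motivate split are our invented illustration, not the study's actual policy:

```python
# Hedged sketch of performance-based adaptation: praise the user and raise
# the game difficulty when performance is good (fast and accurate), and
# motivate at the same level otherwise. Thresholds are illustrative only.

def adapt_dialogue(reaction_time_s, correct, total, difficulty):
    accuracy = correct / total
    if accuracy > 0.8 and reaction_time_s < 3.0:
        return "praise", difficulty + 1   # user is doing well: step up
    return "motivate", difficulty         # keep encouraging at same level

utterance, level = adapt_dialogue(2.0, 9, 10, difficulty=1)
print(utterance, level)  # praise 2
```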
Boccanfuso et al., 2011 [25] contributed a low-cost social robot, CHARLIE, capable of playing a turn-taking imitation game and of performing face and hand tracking after adapting to a child's non-verbal actions during game play. The authors trained hand and face classifiers on data collected from children aged 4 to 11. A proof-of-concept experiment was performed to check the face and hand tracking during gameplay. The results showed that the hand detector averaged 86% and the face detector averaged 92% across all sessions and users.
McColl et al., 2013 [26] programmed the Brian robot to autonomously provide feedback on a user's meal-eating behaviours. The robot was capable of selecting an appropriate behaviour based on three sensory inputs: meal tray tracking, utensil tracking, and user state. Meal tray and utensil tracking provided information about meal consumption and the position or movement of the utensil on the tray, while the user state module performed user recognition. Based on these inputs, the robot adapted its dialogue according to the user's meal consumption and the other inputs. An experimental study was conducted with eight individuals to investigate user engagement during meal time. The results showed that participants enjoyed interacting with the robot during the meal.
Wainer et al., 2013 [27] programmed an autonomous KASPAR robot capable of playing a video game with children diagnosed with autism. The KASPAR-bot was capable of sensing events within the game, planning different responses based on the sensory data, and acting through gestures, facial expressions, and speech. The authors conducted an evaluation with six autistic children in order to measure the children's enjoyment and collaboration during human-child and robot-child gameplay. The results showed that children were happily willing to play the game with the robot; however, they enjoyed and collaborated more when playing with a human. The authors noted that the results might have been influenced by the novelty factor.
Stafford et al., 2014 [28] programmed an autonomous Charlie robot capable of responding to touch, recognising faces, generating speech, and navigating from one room to another in order to manage the health care of elderly people. An adaptation mechanism was implemented that enabled the robot to adapt according to the user's profile. The robot scheduled visits, gave medication reminders, and also measured blood pressure. The authors conducted a technology-acceptance study with 25 elderly people. The results showed that participants reacted positively towards the use of an assistive robot.
Coninx et al., 2016 [29] presented an adaptive robot capable of switching between multiple activities during a single interaction. The adaptive NAO played a turn-taking quiz, a creative dance game, and a collaborative sorting game on a tablet with each child. Three children diagnosed with type 1 diabetes participated in the user-based evaluation. The objective of the evaluation was to assess the effect of an adaptive NAO towards a richer and more personalised user experience, and the potential consequences of such an interaction for children's self-management skills. The results showed that children actively used the activity-switching mechanism to customise their interaction with the robot. However, due to the limited number of subjects, the quantitative findings can be regarded as preliminary in nature.
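An activity-switching mechanism of this kind might be sketched as below; the class is our illustration, under the assumption that each activity's progress is saved and resumed across switches:

```python
# Hypothetical sketch of activity switching: the child can request a switch,
# and the robot resumes the chosen activity's saved progress. Activity names
# follow the study; the logic is our invention.

class ActivityManager:
    def __init__(self):
        self.progress_by_activity = {"quiz": 0, "dance": 0, "sorting": 0}
        self.current = "quiz"

    def switch(self, name):
        if name in self.progress_by_activity:
            self.current = name        # resume that activity's saved state
        return self.current            # unknown names leave activity unchanged

    def step(self):
        self.progress_by_activity[self.current] += 1

manager = ActivityManager()
manager.step()
print(manager.switch("dance"))  # dance
```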

Discussion

All of the reported studies produced positive results on their measures, such as technology acceptance, user experience, social engagement and task performance. However, the diversity of user groups, from children to adults to the elderly, across these studies points towards the need for more research with these robots to consolidate the results. In addition, only a limited number of users participated in these studies. The reasons for the limited numbers are understandable, as it may be difficult to find participants with special needs. Still, we see that adaptive robotic interactions have yielded results that point towards implementing more interactions of this kind (Table 1).
Another important aspect is that all of these studies except one used anthropomorphic robots to implement adaptive interactions in this domain. This may be related to the types of adaptation presented in the articles. As most of these adaptations revolve around the user's game or task performance, gestures and personality, we conjecture that anthropomorphic robots might have been considered the best possible choice for such adaptations due to their social dimensions. In addition, such robots have full capabilities to interact with human users in a holistic way. All of these adaptations have had a positive effect on users' perception; however, it remains to be seen how these adaptations will fare when tested with larger numbers of users and during long-term interactions once the novelty factor wears off. We must also acknowledge that high levels of humanlike appearance can create expectations on behalf of the user that remain unfulfilled. A review of healthcare robots [30] confirms that the embodiment of an assistive robot is a delicate issue and that a humanoid appearance may not necessarily be the most appropriate one.
Another critical aspect is that most of these studies used video analysis to measure user experience during the interaction. We therefore need to consider general protocols for coding such interactions so that future studies can be compared with each other. The nature of these empirical studies was such that, due to the low number of participants, a control group (such as interaction with humans or proxies) was not used as a comparison for the interaction with the agent.
The significance of the sensory data used by the robot to adapt is another important issue. In the studies listed above, the user-sensing data was captured by the robots' built-in cameras or by cameras located elsewhere in the experimental setup. The data was mainly used for (1) user emotions, (2) facial recognition, and (3) hand movements. We argue that, in the case of user emotions, researchers need to address the loss of data that occurs when the user moves during the interaction, as well as other factors that may cause the loss of important data. Similarly, the data used to understand user personality was taken from pre-questionnaires administered before the user studies; research needs to focus on more dynamic ways of collecting information of this kind.

3.2. Education

Educational robots have been utilised successfully as a tool to teach programming in schools in the developed world [31]. Researchers envision that, in the future, robots will not just be used as a tool to help students understand certain concepts, but will also help teachers in different ways [10]. The use of robots that help teachers perform repetitive tasks and also act in various social roles in an educational setting is a growing phenomenon. We find a series of studies being conducted to understand the views of teachers on the use of robots at schools [32] and also on what sorts of adaptations a robot should display in a learning environment [33]. The results of these studies have also focused on implementing various kinds of adaptation for social robots; we are therefore interested in reviewing the existing adaptive robots that have been utilised in education.
Salter et al., 2007 [34] conducted a trial using Roball, a spherical mobile toy robot capable of recognising sensory data patterns and then adapting to an individual's behaviour patterns during human-robot interaction. The robot was initially programmed to display two behaviours: wandering and obstacle avoidance. Two studies were conducted in the laboratory without children before taking Roball into a real-life setting. Following these studies, three adaptive behaviours were added to the robot, which was then tested in a real-time environment with children. The results showed that human interaction perceived through proprioceptive sensors has the ability to inform behavioural adaptation. However, in order to devise well-informed adaptive robotic systems, several other adaptation strategies need to be considered.
Gonzalez et al., 2011 [35] presented an autonomous Maggie robot that was programmed to play different games with children in order to promote edutainment. The Maggie robot comprises a voice system (ASR, TTS), a vision system (object identification), Radio Frequency Identification (RFID), touch sensors, and a built-in tablet screen, along with interaction via smartphones and engagement gestures. The robot was capable of adapting its interaction based on the game scenario presented to it. A series of experiments was conducted in which children played various games, namely Peekaboo, Guessing the Character, Tic-tac-toe, Hangman, and Animal Quiz, with the Maggie robot. Preliminary findings showed that children became more involved and comfortable with the robot as it displayed more interaction capabilities.
Janssen et al., 2011 [36] presented a study in which children played an adaptive game to learn arithmetic with the NAO robot. The robot was capable of adapting its behaviour based on the mistakes made by the user during the game. In this between-subject study design, NAO changed its behaviour in two different conditions: in condition 1, when the user made a mistake, the game complexity was maintained, while in condition 2 it was reduced. Children played the game three times with the robot and their intrinsic motivation was measured. The results showed that participants in condition 2 showed a higher level of motivation.
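The two conditions can be captured in a simple update rule; the sketch below is our paraphrase of the design, not the study's code:

```python
# Sketch of the two between-subject conditions: on a user mistake,
# condition 1 keeps the game complexity, condition 2 reduces it.

def next_complexity(complexity, made_mistake, condition):
    if made_mistake and condition == 2:
        return max(1, complexity - 1)   # ease off after a mistake
    return complexity                   # condition 1: hold complexity

print(next_complexity(3, made_mistake=True, condition=1))  # 3
print(next_complexity(3, made_mistake=True, condition=2))  # 2
```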
Szafir et al., 2012 [37] presented the design of an adaptive robot that was capable of monitoring and improving user involvement during the interaction. The robot acted as an educational assistant, using eye gaze, head nodding and gestures as its behavioural features; voice modulation was also used to gain attention during the interaction. The task of the robot was to narrate a story to the user. The robot was capable of adapting based on the user's engagement level, which was obtained from an EEG device. A between-subject experimental study was performed with 30 participants, in which three groups of 10 participants each interacted with a robot that behaved with low immediacy, random immediacy, or adaptive immediacy calculated from the engagement level. The results showed that the adaptive agent significantly improved attention and performance in the narrative task. In addition, a gender difference was found in terms of motivation during interaction with the adaptive version of the robot: female participants' motivation was significantly higher than that of males.
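A minimal sketch of such engagement-driven behaviour selection, assuming a normalised engagement score with an invented threshold and cue set:

```python
# Illustrative sketch of the adaptive condition: when a (hypothetical)
# EEG-derived engagement score drops below a threshold, the robot raises
# its immediacy cues. Threshold and cue names are our invention.

ENGAGEMENT_THRESHOLD = 0.4

def select_behaviour(engagement):
    if engagement < ENGAGEMENT_THRESHOLD:
        # raise immediacy to regain the listener's attention
        return ["fix_gaze", "nod", "gesture", "raise_volume"]
    return []   # stay with low-immediacy narration

print(select_behaviour(0.2))  # ['fix_gaze', 'nod', 'gesture', 'raise_volume']
print(select_behaviour(0.9))  # []
```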
Kuehnlenz et al., 2013 [38] developed an adaptation mechanism in which an EDDIE robot adapts according to the user's mood and then portrays a similar emotional state. The mechanism involves two different ways of expressing an emotion: implicitly or explicitly. In the explicit scenario, the robot asks the user questions and responds "me too", whereas in the implicit scenario, the robot generates facial or verbal emotions based on the user's mood as measured through a questionnaire before the interaction. A 5-step experimental evaluation (pre-questionnaire, social sub-dialogue, bonding game, picture labelling and post-questionnaires) was later conducted with 84 participants, in which the robot displayed emotional adaptation in four different conditions (Full Emotion Adaptation (FEA), Implicit Emotion Adaptation (IEA), Explicit Emotion Adaptation (EEA), or no adaptation) to measure helpfulness. The results showed that participants ranked FEA highest, followed by EEA, IEA and no adaptation, in that order.
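The implicit and explicit modes might be sketched as follows; the mood values, expressions and utterances are invented for illustration only:

```python
# Hedged sketch of the two emotion-expression modes described above.
# "explicit": the robot mirrors the user's stated feeling in dialogue.
# "implicit": the robot generates an expression matching the pre-measured mood.

def express_emotion(mode, user_mood, user_statement=None):
    if mode == "explicit":
        return "me too" if user_statement else "how do you feel?"
    # implicit: map measured mood to a facial/verbal expression
    return {"happy": "smile", "sad": "frown"}.get(user_mood, "neutral_face")

print(express_emotion("implicit", "happy"))              # smile
print(express_emotion("explicit", "happy", "I'm glad"))  # me too
```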
Brown et al., 2013 [39] conducted a study with the DARWIN robot, which was able to adapt its behaviour during a mathematics test conducted on a tablet device. The robot was capable of adapting both verbal (positive and supportive feedback) and non-verbal (gesture) behaviours during the interaction based on the user's game performance. A total of 24 students participated in the study, which tested whether the use of an adaptive educational robot can increase test performance through the aforementioned adaptations. The results showed that test completion time was shorter when the robot provided supportive feedback and gestures than when no robot interacted with the user during the study.
Ros et al., 2014 [40] implemented a mechanism for the NAO robot that enabled it to adapt to children's dance moves in a long-term interaction study. The NAO robot was capable of updating both its verbal and non-verbal behaviour based on the current and previous user state during the interaction. The user state comprised the history of the user's dance movements, their profile (ID, name, age, gender) and their current body configuration. The authors conducted a long-term study involving 18 sessions with 12 children, in which the robot taught the children different dance movements. The results reported a high level of engagement during the interaction and emphasised the need to implement new kinds of adaptation to influence long-term social engagement.
Leite et al., 2014 [41] addressed the problem of sustaining engagement during long-term child-robot interaction. She presented an empathic model for an iCat social robot capable of playing chess. The iCat robot was capable of empathising with the children by providing positive reinforcement through facial expressions and dialogue. They conducted an experimental study with 16 children who played chess with the iCat five times over five weeks. The results showed that the iCat was able to sustain engagement throughout all five sessions.
Uluer et al., 2015 [42] presented a semi-autonomous Robovie R3 tutor capable of teaching Turkish sign language. R3 was controlled through WoZ, while the vision module functioned autonomously to recognise different signs. The authors conducted an evaluation across three groups (18 graduate students, 6 children with typical hearing, and 18 hearing-impaired children) to measure the robot's recognition ability and its effect on the users' learning performance. The recognition rates for each sign shown by Robovie were consistent (higher than 90%) for all three groups. In addition, the robot had a positive influence on users' performance across all groups.
Greeff et al., 2015 [43] presented a human-teacher and robot social-learning scenario in which a robot learns different words by playing a word-meaning association game on a surface table. The human teacher begins by choosing a topic and uttering a word; in response, the robot finds the category most strongly associated with the uttered word and communicates this information back to the teacher. The teacher then provides feedback, and the robot adjusts the word's category association accordingly. The robot's social learning was evaluated through a user study in which 41 subjects played the role of a human teacher while the robot displayed two different conditions. In the social condition, the robot showed non-verbal social behaviours (fixating its gaze) to signal its learning preference and followed the script, whereas in the non-social condition the robot only followed the script. The purpose of the study was to measure the robot's learning performance, the participant's choice of topic to teach, the participant's gaze behaviour, and the overall user experience. The robot showed better learning performance in the social condition. A gender difference was observed in the robot's learning performance across the two conditions: male participants did not differ in learning performance between conditions, while female participants were more engaged in the social condition.
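The word-category association loop can be sketched with a simple strength table; the update rule and weights below are our invention for illustration, not the study's learning model:

```python
# Minimal sketch of the social word-learning loop: the robot keeps
# association strengths between (word, category) pairs, answers with its
# strongest association, and updates strengths from teacher feedback.

from collections import defaultdict

strengths = defaultdict(float)          # (word, category) -> strength

def guess(word, categories):
    return max(categories, key=lambda c: strengths[(word, c)])

def feedback(word, category, correct):
    # invented update rule: reinforce on correct, penalise on incorrect
    strengths[(word, category)] += 1.0 if correct else -1.0

categories = ["animal", "shape"]
feedback("circle", "shape", correct=True)
print(guess("circle", categories))  # shape
```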

Discussion

Table 2 summarises the types of adaptation implemented and evaluated in various educational settings. Our review shows that most of the studies conducted in the education domain are based on one-off interactions and, in addition, involve small numbers of users. We acknowledge that, in the case of long-term interactions, it is difficult to recruit a large number of participants; however, studies with more participants are needed to consolidate these results. Another important finding, reported in two of the presented studies [37,43], was the effect of gender during human-robot interaction. As female participants were reported to be more social than male participants, more research needs to be performed on how the genders perceive adaptive robots.
We also find an overlap in the types of adaptation implemented in these studies. Games have been used in most of them as a medium of communication on different devices, and most of the adaptation revolves around adapting the robot’s dialogue based on game events, such as the user’s performance or the game outcome. In addition, a few studies have also utilised non-verbal adaptation based on sensory input received through a facial scan. We believe that more research needs to be conducted on implementing these adaptations grounded in a theoretical framework. Furthermore, adaptive robot interactions have rarely been based on aspects such as memory or the user’s personality; most of the adaptations revolved around interaction events or understanding user emotions.
Another important aspect revolves around anthropomorphism, as all of these studies have utilised social robots. We conjecture this may be due to the type of domain, as a robotic tutor is naturally envisioned in the shape of a living being. This was also shown in a study where children were asked to design robots to be used in an educational setting [44]. However, more research needs to be conducted to address the question of the appearance of an ASR in education.

3.3. Work Environments and Public Spaces

The penetration of social robots into work environments and public spaces is growing. We find robots being used for advertisement in shopping malls [45], as waiters in hotels [46], and also in banks in Japan. In addition, we find commercial robots being applied in various ways to public spaces in the developed world. As we witness the applications of robots in such settings, it is certainly important to implement novel means of adaptation for these robots. Therefore, it is also significant to review the applications of various adaptive interactions in these scenarios.
Hoffman et al., 2007 [47] presented an adaptive action selection mechanism for a robotic teammate during a collaborative task. They proposed the use of educated anticipatory action selection in an agent (robot) based on expectations of each other’s behaviour. The action sequence, however, relied on the assumption that the human collaborator would follow a roughly consistent set of actions. In order to compare the performance of the selection process, a reactive agent was also implemented. Both the robot with the anticipatory adaptation and the one with the reactive mechanism were tested with a human teammate in a game-based task in which the robot and the human worked collaboratively to build 10 carts. The human’s role was to bring parts (“a floor, a body, two kinds of axles, and two kinds of wheels”) to the workspace, and the robot’s role was to attach the cart parts using the tools (“the welder, the rivet gun and two wrenches”). Results showed that participants performed significantly better in the adaptive anticipatory case compared to the reactive case.
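To illustrate the distinction between anticipatory and reactive selection, the following minimal Python sketch (our own illustrative assumption, not the implementation of [47]) learns the human’s action transitions from observation and predicts the next part, so that the robot could prepare the matching tool in advance rather than merely reacting:

```python
# Hypothetical sketch of anticipatory action selection, loosely
# inspired by the mechanism described in Hoffman et al. [47].
# Action names are illustrative assumptions.
from collections import defaultdict

class AnticipatoryAgent:
    def __init__(self):
        # Counts of observed human action transitions (prev -> next).
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe(self, prev_action, next_action):
        """Update the model of the collaborator's action sequence."""
        self.transitions[prev_action][next_action] += 1

    def anticipate(self, last_action):
        """Predict the human's most likely next action."""
        counts = self.transitions[last_action]
        if not counts:
            return None  # no data yet: fall back to reactive behaviour
        return max(counts, key=counts.get)

agent = AnticipatoryAgent()
# The human repeatedly brings a floor, then a body, then wheels.
for _ in range(3):
    agent.observe("bring_floor", "bring_body")
    agent.observe("bring_body", "bring_wheels")

# After seeing the floor arrive, the robot can already prepare for
# the body part instead of waiting (reactively) for it to appear.
predicted = agent.anticipate("bring_floor")
```

A reactive agent, by contrast, would only select its action after each human action is observed, which is the baseline behaviour the study compared against.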
Svenstrup et al., 2008 [48] conducted a field trial by placing a FESTO robotic platform in a shopping mall, capable of identifying, tracking and following individuals in a natural way. The robot randomly roamed around the mall until an individual was identified. Once a person was detected, the robot started smiling, played the jingle-bells song, and tried to follow the person at a safe distance until they were lost. It then updated its expressions and started roaming again. Results of the trial, conducted with 48 participants, showed that people in general had a positive attitude towards the use of a social robot in this environment. However, some reservations were reported about the distance between the human and the robot.
Lee et al., 2009 [49] presented the Snackbot robot, capable of autonomously navigating the hall and delivering snacks to people. The authors addressed the challenge of maintaining long-term engagement with the robot. They implemented an adaptation mechanism that enabled the robot to adapt according to user preferences, including snack choice and snack usage patterns. An experimental study was later conducted at a workplace with 21 participants, where the robot acted as a delivery person for a span of 4 months. Results of this long-term service robot interaction revealed that participants attached social roles to the robot beyond that of a delivery person. In addition, the interaction triggered new behaviours among employees, such as drawing social comparisons and even jealousy [50].
Kanda et al., 2010 [51] conducted a field trial with a Robovie-IIF robot capable of detecting people and guiding them by providing directions in a shopping mall. The robot was programmed to identify users based on RFID information and to provide information based on previous interactions. A total of 235 participants interacted with the robot over 25 days, resulting in a total of 2642 interactions. These interactions were later coded to measure user experience, along with questionnaires that were used to measure user perception of the robot. Results showed that the participants encouraged the use of robots in the mall for the aforementioned purposes.
Shiomi et al., 2013 [52] presented semi-autonomous Robovie-II and Robovie-miniR2 robots, whose speech recognition was controlled by a human operator, capable of acting as advertising agents in a mall. Both Robovie robots were also capable of displaying speech utterances, gestures, and non-verbal behaviours according to a person’s actions. The authors conducted a field trial in order to measure the effect of the robot’s presence on people’s participation and the overall advertising process. The trial involved 256 individuals who interacted with three robotic conditions: a GUI-based robot, Robovie-miniR2, and Robovie-II. In order to measure the effect on users’ participation and the overall effect on advertising, three observations (total interacting users, total printed coupons, and interaction initiation) were coded through video analysis for all three conditions. Results showed that the most people interacted, printed coupons, and initiated interaction first with Robovie-miniR2, followed by Robovie-II and then the GUI-based robot.
Sekmen et al., 2013 [53] presented an autonomous mobile robot capable of learning by adapting to the behaviour and preferences of the interacting user. The authors contributed a learning model that enabled the robot to update itself every time a user interacted with it. In order to update the model according to the user’s state and preferences, a Bayesian learning mechanism was implemented. The learning model utilised various robot capabilities such as face detection and recognition, speech recognition and localisation, natural language understanding, Internet information filtering, and navigation. Using this learning model, the robot acted as a tour-mate on a university campus, and 25 students were recruited to evaluate the robotic tour-mate in two different conditions (adaptive and non-adaptive). Results showed that participants preferred the adaptive tour-mate robot to the non-adaptive one.
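The flavour of such a Bayesian update can be conveyed with a minimal Beta-Bernoulli sketch; the preference variable and the observations below are our own illustrative assumptions, not the published model of [53]:

```python
# Illustrative Beta-Bernoulli update for learning a binary user
# preference (e.g. whether the user enjoys a certain kind of tour).
# The prior and observations are assumed examples.

class PreferenceModel:
    def __init__(self, alpha=1.0, beta=1.0):
        # Uniform Beta(1, 1) prior over the preference probability.
        self.alpha = alpha
        self.beta = beta

    def update(self, liked: bool):
        """Bayesian update after each observed interaction."""
        if liked:
            self.alpha += 1
        else:
            self.beta += 1

    def estimate(self) -> float:
        """Posterior mean probability that the user holds the preference."""
        return self.alpha / (self.alpha + self.beta)

model = PreferenceModel()
for liked in [True, True, False, True]:  # four observed interactions
    model.update(liked)
# Posterior mean: (1 + 3) / ((1 + 3) + (1 + 1)) = 4/6
```

Each interaction refines the posterior, which is how such a robot can become more personalised the more often a given user interacts with it.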
Rousseau et al., 2013 [54] reported IRL-O, an interactive omnidirectional robot platform that combines verbal and non-verbal modalities in order to perform engaging interactions with people in controlled and real-world settings. IRL-O was programmed to autonomously engage with people in different interaction scenarios by displaying verbal (Voice (V)) and non-verbal (Facial Expressions (FE), Arm Gestures (AG), Head Movements (HM)) modalities. The robot first detects the legs of a user; once detected, it moves towards the user and maintains a socially acceptable distance. It then offers the user assistance and validates the response. If the user has been previously detected and has refused the robot’s offer of assistance, the robot tries to convince them; if unsuccessful, the robot says goodbye and returns to its original position. IRL-O’s interaction modalities were tested in a within-subject experiment with 35 participants comprising four conditions: condition 1 used V and FE; condition 2 used V, FE, and HM; condition 3 used V, FE, and AG; and condition 4 used all four modalities. The participants were asked to give their preferences on these modalities based on their use. Participants found V, AG, FE, and HM to be useful at rates of 100%, 77%, 31%, and 50%, respectively. A field study was later conducted by placing IRL-O in a work environment at a museum for two weeks, where 381 users stopped and interacted with one of the robot’s modalities. User preferences for each modality were measured by coding which modality made the user stop and interact with the robot. Results showed that voice and facial expressions were the key reasons for users to stop and interact with the robot.
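The engagement flow described above can be summarised as a small finite-state machine; the state and event names below are our own simplification, as [54] does not specify the implementation:

```python
# A small finite-state sketch of the engagement flow described for
# the IRL-O robot (detect, approach, offer assistance, convince,
# withdraw). State and event names are illustrative assumptions.

def next_state(state, event):
    """Transition table for the engagement behaviour."""
    table = {
        ("roaming", "legs_detected"): "approaching",
        ("approaching", "at_social_distance"): "offering_assistance",
        ("offering_assistance", "accepted"): "assisting",
        ("offering_assistance", "refused_previously_seen"): "convincing",
        ("convincing", "accepted"): "assisting",
        ("convincing", "refused"): "saying_goodbye",
        ("saying_goodbye", "done"): "roaming",
    }
    return table.get((state, event), state)  # unknown events keep the state

# A previously seen user refuses once, is then convinced, and accepts.
state = "roaming"
for event in ["legs_detected", "at_social_distance",
              "refused_previously_seen", "accepted"]:
    state = next_state(state, event)
```

Casting the behaviour as an explicit transition table makes it easy to see which paths (e.g. convince then withdraw) the robot can take before returning to its roaming state.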
Aly et al., 2013 [55] described the architecture of a NAO humanoid robot capable of autonomously adapting to the user’s personality and then displaying combined verbal and non-verbal behaviours. An evaluation study was conducted with 35 participants through the following steps: (1) the NAO robot autonomously identified the participant’s personality through dialogue; (2) the robot asked the participant to choose a restaurant from a provided list; (3) during response delays, the robot generated appropriate speech and gestures. During the interaction, participants could ask for details about different restaurants. The goal of the study was to determine whether a robot that matched the user’s personality and displayed appropriate gestures would be perceived as more expressive. Results showed that personality played an important role, as user perception and preference were influenced by the personality portrayed by the robot.
Keizer et al., 2014 [56] programmed an iCAT robot that played the role of a socially aware bartender. The robot was capable of detecting customers, tracking multiple customers, and taking their orders. To enable these capabilities in the iCAT, the authors presented two implementations of a Social Skill Recognizer (SSR). The input parameters for the SSR included the location, facial expressions, gaze behaviour, and body language of all the users in the environment. An experimental evaluation was conducted with 37 adults to compare the two implementations of the SSR: one rule-based, with rules hard-coded into the system, and one trained. The purpose of the evaluation was to measure the detection rate, initial detection time, drink serving time, and the number of engagement changes during the interaction. Results showed that the trained SSR was more responsive in terms of the number of engagement changes; however, no significant differences were observed.
Shiomi et al., 2015 [57] presented an autonomous wheelchair robot, NEO-PR45, capable of adapting its speed and speech using preferred-speed information and pre-defined small talk based on the user’s position and registered map information. Twenty-eight elderly people participated in a within-subject experimental study with three conditions. In condition 1, the wheelchair robot moved automatically at a fixed speed. In condition 2, the robot performed both speech- and speed-based adaptations. Lastly, in the third condition, a caregiver wheeled the participant. The purpose of the study was to measure the degree of comfort, enjoyment, and the ease of making requests to the robot. Results based on quantitative questionnaires and interviews showed that the adaptive wheelchair robot was rated higher than both the simple wheelchair robot and the caregiver.
Kato et al., 2015 [58] conducted a field trial to compare three different adaptive behaviours of a social robot at a shopping mall: (1) the robot autonomously decides whether to approach the user based on intention estimation; (2) simply proactive, where the robot aims to approach everyone in the experimental field; and (3) passive, where the robot waits until a visitor makes an inquiry. The authors measured the number of the robot’s successful, failed, and missed attempts to initiate interaction with people who intended to interact with it. Results showed that the success ratio in the proposed condition was significantly higher than in the simply proactive and passive conditions.
Dang et al., 2015 [59] presented an autonomous NAO robot capable of playing the “Operation” board game while measuring the player’s stress level by collecting the player’s heart rate and in-game performance. The game generated true and false alarms, categorised by user action as normal and stressful alarms. In addition, the robot adapted its coaching style according to the player’s personality. The authors focused on the role of NAO’s personality adaptation through verbal reactions in motivating people and helping them improve their performance. A within-subject experimental study was conducted with 17 participants through five steps (introduction to the game, identification of the player’s personality, recording of a heart-rate baseline, game play, and rating the robot’s different coaching styles) in four conditions (with or without the robot under the normal alarm system, and with or without the robot under the stressful alarm system). Results showed that players performed better when the robot coached them. There was a correlation between the participants’ personality and their preference regarding the robot’s personality. Heart rates increased in the case of false alarms, and it was also reported that introverted players had higher heart rates than extroverted ones.
Liu et al., 2016 [60] presented a model for autonomously generating socially acceptable pointing behaviours during an interaction with a robot. In order to understand these behaviours, human-human interactions were monitored and interviews were conducted, which resulted in three categories of pointing behaviour: gaze only, casual pointing, and precise pointing. From these observations, a behaviour selection model was implemented that enabled the Robovie robot to automatically select appropriate deictic behaviours while interacting with the user, supported by speech recognition and a user tracking system. An evaluation of the behaviour selection model was later conducted at a shopping mall, in which a total of 33 participants were recruited to measure the naturalness, understandability, perceived politeness, and overall goodness of the robot’s deictic behaviours. Results show that participants perceived the behaviours as polite and natural, while understandability was rated less well.

Discussion

Our discussion of the summary of adaptive interactions in public spaces and work environments, as shown in Table 3, addresses the following issues. Anthropomorphism is one of the issues that needs attention during adaptive interactions in public spaces. One of the studies showed that a medium-sized robot was considered the best choice for interaction in a shopping mall; however, more studies are needed to validate the right size for a robot. In addition, all of the studies utilised wheeled, anthropomorphic robots; legged robots, and user preferences regarding them, remain to be explored.
Navigation was found to be one of the critical issues during HRI in public spaces and work environments. Deciding when a robot should navigate towards an interacting user, and how it should address the user, is a challenge. Similarly, although the distance between the robot and the user has been guided by [61], studies have still reported intimidation on the user’s side. Therefore, we need to conduct studies involving more participants to consolidate the existing findings.
The robot’s adaptation has mainly focused on user preferences in most of these studies. All of these studies have reported positive findings; however, this also draws attention to other kinds of adaptation that could be utilised during HRI in these settings. For instance, a human advertising a certain product may forget a frequent customer, but a robot that adapts on such parameters may produce positive results. Therefore, we believe more research needs to be conducted on implementing various forms of adaptation to establish their benefits. One of the reported studies showed that engagement can be improved when the robot adapts based on the social situation, and another reported the effect of an adaptive version of a robot in comparison with a non-adaptive version.
Another issue concerns the behaviours a robot should display after adapting to a given type of input. We need to conduct more studies to compare the effect of various verbal and non-verbal behaviours in long-term settings in work and public environments.

3.4. At Home

“Smart home” is a commonly used term these days. Researchers believe that in the future we will have robots at home helping us in various ways, acting as chefs, caregivers and cleaners. We find an example in the commercial robot Roomba, a vacuum-cleaning robot [62]; however, we do not regard it as an adaptive social robot, as it does not fall under the definition given by [9,63]. For a robot to be integrated into homes, it needs a social mechanism, and we speculate that an ASR is an excellent candidate for this role. Therefore, before such robots are designed, it is important to research their effect on users’ perception and overall experience.
Torrey et al., 2006 [64] studied the effect of adaptive dialogue during human-robot interaction. The robot was programmed to act autonomously as a chef able to adapt to the individual’s cooking expertise (novice or expert). Two conditions were designed, one appropriate for each level of expertise. In the condition suitable for experts, the robot asked the participants to identify cooking tools by name; in the condition suitable for novices, the robot not only named the tools but also described them in a couple of sentences. Two experiments were conducted to measure information exchange and social relations for the two conditions during the user’s interaction with the robot. Results showed that appropriately adaptive dialogue improved information exchange for novices but did not show an effect for experts. Adaptive dialogue did not affect social relations; however, when time pressure to finish the task was introduced, the adaptive dialogue improved social relations for both novices and experts.
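The expertise-adapted dialogue can be sketched in a few lines; the tool names and descriptions below are our own assumed examples, not the study’s materials:

```python
# Illustrative sketch of expertise-adapted dialogue in the spirit of
# Torrey et al. [64]: novices receive a short description in addition
# to the tool name. Tool descriptions are assumed examples.

DESCRIPTIONS = {
    "whisk": "a looped-wire tool used to beat eggs or cream",
    "zester": "a small grater used to remove citrus peel",
}

def tool_utterance(tool: str, expertise: str) -> str:
    """Adapt the dialogue to the user's cooking expertise."""
    if expertise == "expert":
        return f"Please hand me the {tool}."
    # Novices receive the name plus a short description.
    return f"Please hand me the {tool}, {DESCRIPTIONS[tool]}."

expert_line = tool_utterance("whisk", "expert")
novice_line = tool_utterance("whisk", "novice")
```

The study’s finding that the extra description helps novices but not experts corresponds to the two branches of this function.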
Torrey et al., 2007 [65] presented a robot chef capable of adapting its dialogue based on awareness of the human’s gaze and task progress. A trial was later conducted with two groups (experts and novices in cooking) to measure the effect of the adaptation on task performance and communication between the user and the robot under four conditions: (1) “Question Only”, in which the robot responded to individual questions through a text-based interface; (2) “Gaze Added”, in which the robot made decisions on the basis of the participant’s gaze activity; (3) “Delay Added”, in which the robot made decisions based on task progress; and (4) “Immediate Added”, in which the robot provided information immediately without considering the user’s task progress. The task was similar to Torrey et al., 2006 [64]. Task performance results showed a main effect of expertise: experts took less time to finish the task than novices. The effect of condition was not significant; however, participants made fewer mistakes in the “Immediate Added” condition than in the other three. Communication measures showed significant effects of condition and expertise: participants asked a significantly higher number of questions in the “Question Only” condition than in the “Delay Added” condition. In general, the results did not find any benefit from adding gaze awareness to human-robot dialogue, and the researchers suggested the need for more research on gaze awareness.
Gross et al., 2011 [66] presented the CompanionAble robot to help the elderly in home environments. It was an anthropomorphic robot comprising a touch-based graphical interface, two OLED displays serving as eyes to express emotions, a tray to carry objects around the house, and a docking battery for recharging. The robot was capable of recognising, detecting and tracking the user at various places in the house [67]. The work continued for three years, after which an adaptive version of the robot was programmed and evaluated. This version comprised a robot-based health assistant that measured parameters such as pulse rate and oxygen level and suggested physical workouts to the user when required. In addition, an improved eye display was developed that enabled the robot to express its internal states (boredom, surprise, listening or sleeping) in response to the user’s movement. Moreover, the robot had a “stroke sensor” on its head that could distinguish various user actions (a slap on the head, a stroke, or a tickle). A user trial of this version was conducted with 9 participants living independently in their private apartments to measure the acceptance of the robot as a social companion. In general, participants felt safe around the robot but had reservations about leaving it alone in the apartment. Overall, they appreciated the idea of integrating the robot’s related technology into homes.
Cooney et al., 2014 [68] presented a sponge robot (a small humanoid) capable of providing enjoyment to individuals who play with it by hugging, shaking and moving it in various ways. The authors addressed the challenge of understanding how a robot can recognise what people do during play and how this information can enhance enjoyment. To solve the problem, typical full-body gestures collected through an observational study were mapped onto the robot. The robot then suggested different enjoyable ways of playing based on its understanding of the interaction. The sponge robot was evaluated with 20 Japanese participants under different conditions (rewards vs. suggestions, and a naïve design based on intuitive knowledge vs. the proposed design with meaningful motions, rewards, and suggestions) in a within-subject design, measuring knowledge of how to play, perceived variety, control, intention, and enjoyment. Results showed that rewards contributed significantly to perceived variety and enjoyment, whereas suggestions contributed significantly to knowing how to play. No significant difference was observed for control or intention. In addition, the proposed system provided more enjoyable and interactive play than the naïve design.
Youssef et al., 2015 [69] addressed the research question of how knocking-based communication protocols could be developed between a human and a robot on a sociable dining table. In order to construct a communication protocol for the robot, interaction behaviours between a human and a Wizard-of-Oz-controlled dish robot were observed on a dining table. Based on this observational study, an actor-critic algorithm was developed that enabled the robotic dish to adapt its movement on the dining table to the human’s knocking patterns. Twenty participants in a between-subject study evaluated two different knocking-behaviour adaptations of the dish robot. Results showed that the participants succeeded in establishing a communication protocol with the robot; however, a significant difference was found between the number of agreements and disagreements on the knocking-behaviour adaptation between robot and human.
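To give a sense of the actor-critic approach in this setting, the following minimal tabular sketch maps assumed knocking patterns to assumed dish movements; the states, actions, and reward are our own illustrative choices, not the published protocol of [69]:

```python
# Minimal tabular actor-critic sketch for adapting dish movements to
# knocking patterns, in the spirit of [69]. States, actions, and the
# reward function are illustrative assumptions.
import math
import random

states = ["single_knock", "double_knock"]
actions = ["move_closer", "move_away"]
V = {s: 0.0 for s in states}                         # critic: state values
H = {(s, a): 0.0 for s in states for a in actions}   # actor: preferences

def policy(s):
    """Sample an action from a softmax over actor preferences."""
    exps = {a: math.exp(H[(s, a)]) for a in actions}
    r = random.random() * sum(exps.values())
    for a, e in exps.items():
        r -= e
        if r <= 0:
            return a
    return actions[-1]

def reward(s, a):
    # Assumed: the human uses a double knock to call the dish closer.
    return 1.0 if (s, a) == ("double_knock", "move_closer") else -1.0

alpha = 0.1
random.seed(0)
for _ in range(2000):
    s = random.choice(states)
    a = policy(s)
    r = reward(s, a)
    td_error = r - V[s]        # one-step episodes: no bootstrapped value
    V[s] += alpha * td_error   # critic evaluates the outcome
    H[(s, a)] += alpha * td_error  # actor shifts toward rewarded actions

best = max(actions, key=lambda a: H[("double_knock", a)])
```

Over repeated interactions the critic’s value estimates steer the actor’s preferences, so the dish learns to approach on a double knock, which mirrors how the published system adapted to each user’s knocking conventions.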

Discussion

The summary of adaptive interaction studies at home, as shown in Table 4, indicates that research on the applications of ASRs is at a preliminary stage, and more research is needed on different implementations of ASRs that can be utilised in various ways at home. Anthropomorphism is an important issue, as the reported studies used animate robots of different sizes. We speculate that the task to be accomplished will define the look of the robot: a robot acting as a caregiver can be envisioned as larger, since it would be required to carry objects around the home, while children might imagine a small robot that they can use in different playful interactions.
In general, adaptation during HRI at home is beneficial, as it has resulted in positive findings. However, we need to be careful when choosing the type of adaptation for different tasks and types of users; as identified in one of the studies, gaze-based adaptation did not yield benefits. More research is needed to consolidate these results. In addition, the type of adaptation can also depend on the area of the house in which it is used; for instance, an adaptation mechanism for the dining table would differ from one for the living room, and a robotic chef in the kitchen could likewise adapt in various ways. All these aspects are missing and need to be researched.
Most of the studies have reported results based on one-off interactions, and we believe these results might change in the case of long-term interaction. The reported benefits of personality and user-mood adaptation need to be validated during long-term interaction, for different kinds of tasks, and with users of varied expertise. The task should inform the type of adaptation, as the results suggest that a non-verbal behaviour adaptation may or may not work depending on the type of task. Similarly, the age of the participants should also be reflected in the type of adaptation.

4. Future Directions and Challenges

Based on the review results and the definition of adaptive social robot capabilities given by [9], in this section we provide future directions on the types of adaptations.

4.1. User and Adaptations

The characteristics of a user (level of expertise on a given task, age, or gender) need to be taken into consideration when designing adaptive robots. Most current adaptation mechanisms have focused on user performance on a certain task and on the user profile. Our review found limited research on adaptive systems designed to adapt according to user characteristics.
Some studies have reported effects of the user’s gender, age and skill level during the interaction. For instance, female users have been reported to be more social than male users [43]. In addition, the gender, age, and skill of the user have also affected the user’s social engagement and interest during the interaction [64,70]. Therefore, we need to design and evaluate robots that can adapt based on user characteristics in real-time, and study their effect on the user’s perception, engagement and task performance.

4.2. User Emotions and Adaptation

Emotions are one of the basic principles of social interaction [71], and the significance of understanding emotions and adapting to them during robotic interactions has been consistently reported in various studies based on short-term interactions [72,73]. Most of these studies have presented and utilised algorithms that infer the user’s affective state from a facial scan in real-time. While the results of these studies have been positive, we direct researchers to develop principles that allow a greater depth of interaction on an emotional level.
The recognition of emotions is one of the key technical challenges in state-of-the-art HRI. Researchers need to develop principles that allow a greater depth of understanding of the interaction on an emotional level. This may be achieved by measuring the varying pitch of the voice during an interaction, or by recognising common patterns in the interaction. For instance, [74] presented a model to recognise user emotions by analysing keyboard and mouse movements in relation to their interactions with robots. The selection of the robot’s behaviour based on the affective state of the user is another important aspect. Research indicated that the inclusion of empathic dialogue (use of praise, empathy) can help create social engagement during child-robot interaction [41]. However, we also need to create emotional expressions for the robot based on facial expressions, gestures or voice animations; this may also bear on the appearance of the robot during various tasks.

4.3. Robot’s Memory and Adaptation

The process of creating a robot’s memory is under-studied in HRI and needs attention. Most recently, researchers have emphasised implementing robots that can simulate having memory. It has been predicted that the future of social human-robot interaction resides in the past [75], as memory can help mitigate many existing HRI challenges [6].
Recent studies have utilised data based on user profiles and performance [41,73] to implement memory-based adaptations. We believe that researchers need to find more sophisticated means of implementing robots that modify their actions based on the user’s past interactions. As such adaptation systems require storing user data, we need ethical guidelines for researchers on the use of this data, and we also need to understand users’ views on the acceptability of its use.
Researchers in HRI need to investigate adequate mechanisms for an ASR to adapt based on the memory of the interaction. These mechanisms may include creating the robot’s memory by keeping a history of the user’s emotions and building adaptation mechanisms accordingly. We conjecture that different methods of adapting from such a memory may help solve key issues such as maintaining social engagement and creating social relationships during long-term interactions.
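One such mechanism can be sketched in a few lines: the robot keeps a per-user history of detected emotions and adapts its opening behaviour for returning users. All names, emotions, and behaviours below are our own illustrative assumptions:

```python
# Hypothetical sketch of memory-based adaptation: a per-user emotion
# history drives the choice of greeting behaviour. User IDs, emotions
# and behaviour names are illustrative assumptions.
from collections import Counter, defaultdict

class InteractionMemory:
    def __init__(self):
        self.history = defaultdict(list)  # user_id -> list of emotions

    def record(self, user_id, emotion):
        self.history[user_id].append(emotion)

    def dominant_emotion(self, user_id):
        counts = Counter(self.history[user_id])
        return counts.most_common(1)[0][0] if counts else None

def choose_greeting(memory, user_id):
    """Adapt the opening behaviour based on past interactions."""
    emotion = memory.dominant_emotion(user_id)
    if emotion is None:
        return "neutral_introduction"    # first-time user
    if emotion == "happy":
        return "enthusiastic_greeting"
    return "gentle_check_in"             # e.g. previously sad or frustrated

memory = InteractionMemory()
memory.record("user42", "happy")
memory.record("user42", "happy")
memory.record("user42", "sad")
greeting = choose_greeting(memory, "user42")
```

Even this simple history already raises the data-storage and ethics questions noted above, since the memory persists identifiable information about each user across sessions.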

4.4. Personality and Adaptation

In general, the personality of a user is categorised as extroverted or introverted. It is also known that the mood of a user can influence their personality depending on various events that may happen during the day [76]. Hence, the personality of the user does not remain static and may vary depending on their mood.
Our review shows that little work has been reported on ASRs that can modify their behaviour based on the user’s personality in real-time. Most work on such ASRs relies on adaptation mechanisms in which the user’s personality is determined through a standard pre-test questionnaire. In other words, most prior work has reported a limited set of personalities or adaptive behaviours. We envision that future research should focus on understanding the user’s personality in real-time; one possible method is a joint machine learning approach that enables the robot to modify its behaviour in real-time.

4.5. Robot’s Voice Adaptation

Another aspect that has not been widely considered is the robot’s voice adaptation and its effect on user preferences in various environments. In 2016, Lubold [77] conducted a study with a group of undergraduate students to measure the effect of voice adaptation and social dialogue by a robotic learning companion on the user’s perception of the robot’s social presence. Their results showed that a social voice-adaptive dialogue has a significant effect on social presence compared to a simple social dialogue. Therefore, we also need to implement various forms of robot voice adaptation and evaluate their effect on children’s learning and engagement.

4.6. Culture and Adaptation

In the prior HRI literature, we did not find examples of robots adapting based on the user’s culture or demographic background. The literature shows that children from one culture were found to be more expressive during HRI than children from another culture [78]. In addition, every social environment has its own culture, which can affect the user during a social interaction [79]. For instance, robots used in homes would be required to adapt their behaviour differently based on the culture followed in that particular home. These factors emphasise the need to integrate culture when implementing ASRs.

5. Open Issues

Social robotics, and implementing adaptivity within it, is a relatively new field, and a range of research questions still need to be answered. In this section, we present general issues that revolve around designing, implementing and evaluating Adaptive Social Robots.

5.1. Understanding the Context of Adaptation

It is important to understand which individual adaptive behaviours positively influence users and which result in intimidation or confusion in a given context or environment. We believe that the set of robot adaptations to implement should depend on the user's characteristics and the social environment.
The appropriate set of adaptations may depend on the environment in which a robot operates; for instance, user-personality-based adaptation may not be needed in a public space. Most recently, Ahmad et al. (2017) [80] compared the effects of adaptations based on user emotions, memory and game events on maintaining social engagement during a long-term interaction with children at a school. Emotion-based adaptations were found to be the most effective, followed by memory-based adaptations, whereas game-based adaptations did not sustain long-term social engagement. Therefore, we believe more research is needed to understand the impact of a given adaptation on users' engagement or task performance in a given scenario.
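The question of which adaptation works best in a given scenario can also be framed as an online selection problem. As a purely illustrative sketch (this is not the method used in any of the reviewed studies, and the engagement scores are simulated), an epsilon-greedy bandit could choose among emotion-, memory- and game-based adaptations per session and learn from a measured engagement score:

```python
import random

# Illustrative epsilon-greedy selection among adaptation strategies.
# The simulated mean engagement values loosely echo the ordering reported
# in [80] but are invented numbers, not study data.

class AdaptationSelector:
    def __init__(self, strategies, epsilon=0.1):
        self.epsilon = epsilon
        self.totals = {s: 0.0 for s in strategies}  # summed engagement
        self.counts = {s: 0 for s in strategies}    # times each was tried

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.totals))  # explore
        # Exploit: highest mean engagement so far (untried -> mean 0).
        return max(self.totals,
                   key=lambda s: (self.totals[s] / self.counts[s]
                                  if self.counts[s] else 0.0))

    def record(self, strategy, engagement):
        self.totals[strategy] += engagement
        self.counts[strategy] += 1

random.seed(1)
selector = AdaptationSelector(["emotion", "memory", "game"])
true_means = {"emotion": 0.8, "memory": 0.6, "game": 0.4}  # simulated
for _ in range(200):
    s = selector.choose()
    selector.record(s, random.gauss(true_means[s], 0.1))
best = max(selector.counts, key=selector.counts.get)
print(best, selector.counts)
```

In a real deployment the "reward" would be an engagement measure from the evaluation protocol, and per-session selection would have to respect ethical constraints on experimenting with children.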

5.2. Evaluation Metrics

The evaluation metrics needed to assess adaptive systems should be investigated in depth. Unfortunately, we did not find a common protocol for evaluating ASRs across different domains. Such a protocol is needed, as it would help researchers compare their results with previous findings in a systematic manner. Because adaptive systems change their behaviour based on user behaviour, they should also be evaluated over long-term interactions to confirm their potential.
Most of the results reported in our review are based on video analysis of the interaction sessions. Unfortunately, there is also no protocol for analysing these videos against a common set of measurements across domains. In summary, a set of guidelines is required that defines evaluation methodologies for ASRs as well as the analysis of data emerging from evaluation sessions.

5.3. Ethical Concerns

An adaptive robot needs to store information about its patterns of interaction with the user. Therefore, data privacy is one of the issues that must be taken into consideration. We need to define guidelines that maintain ethical standards and give directions on what kind of data should be stored and how it may be used, especially when the user group is children.
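As a hypothetical illustration of such a guideline (the chosen fields and the salting scheme are our own assumptions, not an established standard), an adaptive robot could store only pseudonymised identifiers and the derived interaction features its adaptation mechanism actually needs, discarding raw audio and video after processing:

```python
import hashlib

# Hypothetical data-minimisation sketch for an adaptive robot's memory.
# Field names and the salting scheme are illustrative assumptions.

SALT = b"per-deployment-secret"  # placeholder; a real system would manage this securely

def pseudonymise(user_name):
    """Replace the user's name with a salted one-way hash."""
    return hashlib.sha256(SALT + user_name.encode("utf-8")).hexdigest()[:16]

def minimise(raw_record):
    """Reduce a raw interaction record to derived features only;
    raw media present at processing time are never written to storage."""
    return {
        "user_id": pseudonymise(raw_record["name"]),
        "engagement": raw_record["engagement_score"],     # derived, not raw video
        "task_performance": raw_record["task_performance"],
    }

record = minimise({
    "name": "Alice",
    "engagement_score": 0.72,
    "task_performance": 0.9,
    "raw_audio": b"...",  # available during processing, dropped from storage
})
print(sorted(record))
```

Keeping only derived features is one concrete answer to "what kind of data needs to be stored"; what counts as a necessary feature would still have to be settled per domain, particularly for child users.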
Another issue is understanding which adaptations are acceptable, as certain groups of users might be intimidated by the robot's adaptation, or develop a sense of discomfort with its unpredictability. Therefore, we also need to research user fears during interaction with the robot.

6. Conclusions

In this paper, we presented an in-depth literature review on the adaptive interactions reported in the field of HRI. Based on our review, we also presented future directions and discussed open issues in this field.
The studies analysed show that a significant amount of research is being conducted on adaptive social robots and that they have been utilised in a variety of social domains. Overall, these studies report positive findings. However, most are based on short-term interactions; longitudinal research is therefore needed to consolidate these findings.
Adaptation is highly desired in HRI, but we still need to implement and evaluate ASRs across domains based on user characteristics such as emotions and personality. We also need to implement adaptation mechanisms that build the robot's memory through more sophisticated means. In addition, culture is one of the missing links and needs attention when designing ASRs for the future.

Author Contributions

The first author wrote the paper and analysed all of the data. The second author assisted with the framing, structure, rationale and writing. The third author guided the work and assisted with the editing and writing.

Conflicts of Interest

All authors included in this manuscript certify that they had no conflicts of interest.

References

  1. Wu, W.Y. Adaptive User Interface. U.S. Patent 20,100,097,331, 22 April 2010. [Google Scholar]
  2. Liu, J.; Wong, C.K.; Hui, K.K. An adaptive user interface based on personalized learning. IEEE Intell. Syst. 2003, 18, 52–57. [Google Scholar]
  3. Walsh, D.; Lin, L.C.; Dils, P.B. Context-Adaptive User Interface for a Portion of a Display Screen. U.S. Patent D678,898, 26 March 2013. [Google Scholar]
  4. Macbeth, S.W.; Fernandez, R.L.; Meyers, B.R.; Tan, D.S.; Robertson, G.G.; Oliver, N.M.; Murillo, O.E.; Pedersen, E.R.; Czerwinski, M.P.; Pinckney, M.D.; et al. Activity-Centric Adaptive User Interface. U.S. Patent App. 11/426,804, 27 June 2006. [Google Scholar]
  5. Leite, I.; Martinho, C.; Paiva, A. Social robots for long-term interaction: A survey. Int. J. Soc. Robot. 2013, 5, 291–308. [Google Scholar] [CrossRef]
  6. Tapus, A.; Mataric, M.J.; Scassellati, B. Socially assistive robotics [grand challenges of robotics]. IEEE Robot. Autom. Mag. 2007, 14, 35–42. [Google Scholar] [CrossRef]
  7. Endsley, M.R. Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics 1999, 42, 462–492. [Google Scholar] [CrossRef] [PubMed]
  8. Beer, J.; Fisk, A.D.; Rogers, W.A. Toward a framework for levels of robot autonomy in human-robot interaction. J. Hum.-Robot Interact. 2014, 3, 74. [Google Scholar] [CrossRef]
  9. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef]
  10. Mubin, O.; Stevens, C.J.; Shahid, S.; Al Mahmud, A.; Dong, J.J. A review of the applicability of robots in education. J. Technol. Educ. Learn. 2013, 1, 209–0015. [Google Scholar] [CrossRef]
  11. Robinson, H.; MacDonald, B.; Broadbent, E. The role of healthcare robots for older people at home: A review. Int. J. Soc. Robot. 2014, 6, 575–591. [Google Scholar] [CrossRef]
  12. Saerbeck, M.; Schut, T.; Bartneck, C.; Janse, M.D. Expressive robots in education: Varying the degree of social supportive behavior of a robotic tutor. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM 2010), Atlanta, GA, USA, 10–15 April 2010; pp. 1613–1622. [Google Scholar]
  13. Bruce, A.; Nourbakhsh, I.; Simmons, R. The role of expressiveness and attention in human-robot interaction. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation, New York, NY, USA, 28 April–2 May 2002; Volume 4, pp. 4138–4142. [Google Scholar]
  14. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef]
  15. Fortunati, L.; Esposito, A.; Lugano, G. Introduction to the special issue “Beyond industrial robotics: Social robots entering public and domestic spheres”. Inf. Soc. Beyond Ind. Robot. Soc. Robot. Enter. 2015, 31, 229–236. [Google Scholar] [CrossRef]
  16. Komatsubara, T.; Shiomi, M.; Kanda, T.; Ishiguro, H.; Hagita, N. Can a social robot help children’s understanding of science in classrooms? In Proceedings of the Second International Conference on Human-Agent Interaction (ACM 2014), Tsukuba, Japan, 28–31 October 2014; pp. 83–90. [Google Scholar]
  17. Jimenez, F.; Yoshikawa, T.; Furuhashi, T.; Kanoh, M. An emotional expression model for educational-support robots. J. Artif. Intell. Soft Comput. Res. 2015, 5, 51–57. [Google Scholar] [CrossRef]
  18. Benitti, F.B.V. Exploring the educational potential of robotics in schools: A systematic review. Comput. Educ. 2012, 58, 978–988. [Google Scholar] [CrossRef]
  19. Meho, L.I.; Yang, K. Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. J. Assoc. Inf. Sci. Technol. 2007, 58, 2105–2125. [Google Scholar] [CrossRef]
  20. Google Scholar. Top Publications: Robotics. Available online: https://scholar.google.es/citations (accessed on 21 April 2017).
  21. François, D.; Polani, D.; Dautenhahn, K. Towards socially adaptive robots: A novel method for real time recognition of human-robot interaction styles. In Proceedings of the 8th IEEE IEEE-RAS International Conference on Humanoid Robots, Daejeon, Korea, 1–3 December 2008; pp. 353–359. [Google Scholar]
  22. Tapus, A.; Ţăpuş, C.; Matarić, M.J. User—Robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy. Intell. Serv. Robot. 2008, 1, 169. [Google Scholar] [CrossRef]
  23. Robins, B.; Dautenhahn, K.; Te Boekhorst, R.; Nehaniv, C.L. Behaviour delay and robot expressiveness in child-robot interactions: A user study on interaction kinesics. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction (ACM 2008), Amsterdam, The Netherlands, 12–15 March 2008; pp. 17–24. [Google Scholar]
  24. Tapus, A. Improving the quality of life of people with dementia through the use of socially assistive robots. In Proceedings of the Advanced Technologies for Enhanced Quality of Life, Iasi, Romania, 22–26 July 2009; pp. 81–86. [Google Scholar]
  25. Boccanfuso, L.; O’Kane, J.M. CHARLIE: An adaptive robot design with hand and face tracking for use in autism therapy. Int. J. Soc. Robot. 2011, 3, 337–347. [Google Scholar] [CrossRef]
  26. McColl, D.; Nejat, G. Meal-time with a socially assistive robot and older adults at a long-term care facility. J. Hum.-Robot Interact. 2013, 2, 152–171. [Google Scholar] [CrossRef]
  27. Wainer, J.; Dautenhahn, K.; Robins, B.; Amirabdollahian, F. A pilot study with a novel setup for collaborative play of the humanoid robot KASPAR with children with autism. Int. J. Soc. Robot. 2014, 6, 45–65. [Google Scholar] [CrossRef] [Green Version]
  28. Stafford, R.Q.; MacDonald, B.A.; Jayawardena, C.; Wegner, D.M.; Broadbent, E. Does the robot have a mind? Mind perception and attitudes towards robots predict use of an eldercare robot. Int. J. Soc. Robot. 2014, 6, 17–32. [Google Scholar] [CrossRef]
  29. Coninx, A.; Baxter, P.; Oleari, E.; Bellini, S.; Bierman, B.; Henkemans, O.B.; Cañamero, L.; Cosi, P.; Enescu, V.; Espinoza, R.R.; et al. Towards long-term social child-robot interaction: Using multi-activity switching to engage young users. J. Hum.-Robot Interact. 2016, 5, 32–67. [Google Scholar] [CrossRef] [Green Version]
  30. Broadbent, E.; Stafford, R.; MacDonald, B. Acceptance of healthcare robots for the older population: Review and future directions. Int. J. Soc. Robot. 2009, 1, 319–330. [Google Scholar] [CrossRef]
  31. Williams, A.B. The qualitative impact of using LEGO MINDSTORMS robots to teach computer engineering. IEEE Trans. Educ. 2003, 46, 206. [Google Scholar] [CrossRef]
  32. Serholt, S.; Barendregt, W.; Leite, I.; Hastie, H.; Jones, A.; Paiva, A.; Vasalou, A.; Castellano, G. Teachers’ views on the use of empathic robotic tutors in the classroom. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 955–960. [Google Scholar]
  33. Ahmad, M.I.; Mubin, O.; Orlando, J. Understanding behaviours and roles for social and adaptive robots in education: teacher’s perspective. In Proceedings of the Fourth International Conference on Human Agent Interaction, Singapore, 4–7 October 2016; pp. 297–304. [Google Scholar]
  34. Salter, T.; Michaud, F.; Létourneau, D.; Lee, D.; Werry, I.P. Using proprioceptive sensors for categorizing human-robot interactions. In Proceedings of the 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Washington, DC, USA, 9–11 March 2007; pp. 105–112. [Google Scholar]
  35. Gonzalez-Pacheco, V.; Ramey, A.; Alonso-Martín, F.; Castro-Gonzalez, A.; Salichs, M.A. Maggie: A social robot as a gaming platform. Int. J. Soc. Robot. 2011, 3, 371–381. [Google Scholar] [CrossRef] [Green Version]
  36. Janssen, J.B.; van der Wal, C.C.; Neerincx, M.A.; Looije, R. Motivating children to learn arithmetic with an adaptive robot game. In Proceedings of the International Conference on Social Robotics, Amsterdam, The Netherlands, 24–25 November 2011; pp. 153–162. [Google Scholar]
  37. Szafir, D.; Mutlu, B. Pay attention!: Designing adaptive agents that monitor and improve user engagement. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM 2012), Austin, TX, USA, 5–10 May 2012; pp. 11–20. [Google Scholar]
  38. Kühnlenz, B.; Sosnowski, S.; Buß, M.; Wollherr, D.; Kühnlenz, K.; Buss, M. Increasing helpfulness towards a robot by emotional adaption to the user. Int. J. Soc. Robot. 2013, 5, 457–476. [Google Scholar] [CrossRef]
  39. Brown, L.; Kerwin, R.; Howard, A.M. Applying behavioral strategies for student engagement using a robotic educational agent. In Proceedings of the 2013 IEEE international conference on Systems, Man, and Cybernetics (SMC 2013), Manchester, UK, 13–16 October 2013; pp. 4360–4365. [Google Scholar]
  40. Ros, R.; Baroni, I.; Demiris, Y. Adaptive human-robot interaction in sensorimotor task instruction: From human to robot dance tutors. Robot. Auton. Syst. 2014, 62, 707–720. [Google Scholar] [CrossRef]
  41. Leite, I.; Castellano, G.; Pereira, A.; Martinho, C.; Paiva, A. Empathic robots for long-term interaction. Int. J. Soc. Robot. 2014, 6, 329–341. [Google Scholar] [CrossRef]
  42. Uluer, P.; Akalın, N.; Köse, H. A new robotic platform for sign language tutoring. Int. J. Soc. Robot. 2015, 7, 571–585. [Google Scholar] [CrossRef]
  43. De Greeff, J.; Belpaeme, T. Why robots should be social: Enhancing machine learning through social human-robot interaction. PLoS ONE 2015, 10, e0138061. [Google Scholar] [CrossRef] [PubMed]
  44. Obaid, M.; Barendregt, W.; Alves-Oliveira, P.; Paiva, A.; Fjeld, M. Designing Robotic Teaching Assistants: Interaction Design Students’ and Children’s Views. In Proceedings of the International Conference on Social Robotics, Paris, France, 26–30 October 2015; pp. 502–511. [Google Scholar]
  45. Kanda, T.; Shiomi, M.; Miyashita, Z.; Ishiguro, H.; Hagita, N. An affective guide robot in a shopping mall. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), San Diego, CA, USA, 9–13 March 2009; pp. 173–180. [Google Scholar]
  46. Asif, M.; Sabeel, M.; Mujeeb-ur Rahman, K.Z. Waiter robot-solution to restaurant automation. In Proceedings of the 1st student multi disciplinary research conference (MDSRC), At Wah, Pakistan, 14–15 November 2015. [Google Scholar]
  47. Hoffman, G.; Breazeal, C. Effects of anticipatory action on human-robot teamwork efficiency, fluency, and perception of team. In Proceedings of the ACM/IEEE international conference on Human-robot interaction, Washington, DC, USA, 9–11 March 2007; pp. 1–8. [Google Scholar]
  48. Svenstrup, M.; Bak, T.; Maler, O.; Andersen, H.J.; Jensen, O.B. Pilot study of person robot interaction in a public transit space. In Proceedings of the International Conference on Research and Education in Robotics, Heidelberg, Germany, 22–24 May 2008; pp. 96–106. [Google Scholar]
  49. Lee, M.K.; Forlizzi, J. Designing adaptive robotic services. In Proceedings of the International Association of Societies of Design Research 2009, Seoul, Korea, 18–22 October 2009. [Google Scholar]
  50. Lee, M.K.; Kiesler, S.; Forlizzi, J.; Rybski, P. Ripple effects of an embedded social agent: A field study of a social robot in the workplace. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM 2012), Austin, TX, USA, 5–10 May 2012; pp. 695–704. [Google Scholar]
  51. Kanda, T.; Shiomi, M.; Miyashita, Z.; Ishiguro, H.; Hagita, N. A communication robot in a shopping mall. IEEE Trans. Robot. 2010, 26, 897–913. [Google Scholar] [CrossRef]
  52. Shiomi, M.; Shinozawa, K.; Nakagawa, Y.; Miyashita, T.; Sakamoto, T.; Terakubo, T.; Ishiguro, H.; Hagita, N. Recommendation effects of a social robot for advertisement-use context in a shopping mall. Int. J. Soc. Robot. 2013, 5, 251–262. [Google Scholar] [CrossRef]
  53. Sekmen, A.; Challa, P. Assessment of adaptive human-robot interactions. Knowl.-Based Syst. 2013, 42, 49–59. [Google Scholar] [CrossRef]
  54. Rousseau, V.; Ferland, F.; Létourneau, D.; Michaud, F. Sorry to interrupt, but may I have your attention? Preliminary design and evaluation of autonomous engagement in HRI. J. Hum.-Robot Interact. 2013, 2, 41–61. [Google Scholar] [CrossRef]
  55. Aly, A.; Tapus, A. A model for synthesizing a combined verbal and nonverbal behavior based on personality traits in human-robot interaction. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, Tokyo, Japan, 3–6 March 2013; IEEE Press: Piscataway, NJ, USA, 2013; pp. 325–332. [Google Scholar]
  56. Keizer, S.; Ellen Foster, M.; Wang, Z.; Lemon, O. Machine Learning for Social Multiparty Human-Robot Interaction. ACM Trans. Interact. Intell. Syst. 2014, 4, 14. [Google Scholar] [CrossRef]
  57. Shiomi, M.; Iio, T.; Kamei, K.; Sharma, C.; Hagita, N. Effectiveness of social behaviors for autonomous wheelchair robot to support elderly people in Japan. PLoS ONE 2015, 10, e0128031. [Google Scholar] [CrossRef] [PubMed]
  58. Kato, Y.; Kanda, T.; Ishiguro, H. May i help you?: Design of human-like polite approaching behavior. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (ACM 2015), Portland, OR, USA, 2–5 March 2015; pp. 35–42. [Google Scholar]
  59. Dang, T.H.H.; Tapus, A. Stress Game: The Role of Motivational Robotic Assistance in Reducing User’s Task Stress. Int. J. Soc. Robot. 2015, 7, 227–240. [Google Scholar] [CrossRef]
  60. Liu, P.; Glas, D.F.; Kanda, T.; Ishiguro, H.; Hagita, N. A Model for Generating Socially-Appropriate Deictic Behaviors Towards People. Int. J. Soc. Robot. 2017, 9, 33–49. [Google Scholar] [CrossRef]
  61. Sisbot, E.A.; Alami, R.; Siméon, T.; Dautenhahn, K.; Walters, M.; Woods, S. Navigation in the presence of humans. In Proceedings of the 2005 5th IEEE-RAS International Conference on Humanoid Robots, Tsukuba, Japan, 5–7 December 2005; pp. 181–188. [Google Scholar] [Green Version]
  62. Forlizzi, J.; DiSalvo, C. Service robots in the domestic environment: A study of the roomba vacuum in the home. In Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction (ACM 2006), Salt Lake City, UT, USA, 2–3 March 2006; pp. 258–265. [Google Scholar]
  63. Fink, J.; Mubin, O.; Kaplan, F.; Dillenbourg, P. Roomba is not a Robot; AIBO is still Alive! Anthropomorphic Language in Online Forums. In Proceedings of the 3rd International Conference on Social Robotics (ICSR 2011), Amsterdam, The Netherlands, 23–25 March 2011. [Google Scholar]
  64. Torrey, C.; Powers, A.; Marge, M.; Fussell, S.R.; Kiesler, S. Effects of adaptive robot dialogue on information exchange and social relations. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (ACM 2006), Salt Lake City, UT, USA, 2–3 March 2006; pp. 126–133. [Google Scholar]
  65. Torrey, C.; Powers, A.; Fussell, S.R.; Kiesler, S. Exploring adaptive dialogue based on a robot’s awareness of human gaze and task progress. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (ACM 2007), Washington, DC, USA, 9–11 March 2007; pp. 247–254. [Google Scholar]
  66. Gross, H.M.; Schroeter, C.; Mueller, S.; Volkhardt, M.; Einhorn, E.; Bley, A.; Martin, C.; Langner, T.; Merten, M. Progress in developing a socially assistive mobile home robot companion for the elderly with mild cognitive impairment. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 2430–2437. [Google Scholar]
  67. Gross, H.M.; Mueller, S.; Schroeter, C.; Volkhardt, M.; Scheidig, A.; Debes, K.; Richter, K.; Doering, N. Robot companion for domestic health assistance: Implementation, test and case study under everyday conditions in private apartments. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 5992–5999. [Google Scholar]
  68. Cooney, M.; Kanda, T.; Alissandrakis, A.; Ishiguro, H. Designing enjoyable motion-based play interactions with a small humanoid robot. Int. J. Soc. Robot. 2014, 6, 173–193. [Google Scholar] [CrossRef]
  69. Youssef, K.; Asano, T.; De Silva, P.R.S.; Okada, M. Sociable Dining Table: Meaning Acquisition Exploration in Knock-Based Proto-Communication. Int. J. Soc. Robot. 2016, 8, 67–84. [Google Scholar] [CrossRef]
  70. Cameron, D.; Fernando, S.; Collins, E.; Millings, A.; Moore, R.; Sharkey, A.; Evers, V.; Prescott, T. Presence of life-like robot expressions influences children’s enjoyment of human-robot interactions in the field. In Proceedings of the AISB Convention 2015: The Society for the Study of Artificial Intelligence and Simulation of Behaviour, Canterbury, UK, 20–22 April 2015. [Google Scholar]
  71. Andersen, P.A.; Guerrero, L.K. Principles of communication and emotion in social interaction. In Handbook of Communication and Emotion: Research, Theory, Applications, and Contexts; Academic Press: Cambridge, MA, USA, 1998; pp. 49–96. [Google Scholar]
  72. Belpaeme, T.; Baxter, P.E.; Read, R.; Wood, R.; Cuayáhuitl, H.; Kiefer, B.; Racioppa, S.; Kruijff-Korbayová, I.; Athanasopoulos, G.; Enescu, V.; et al. Multimodal child-robot interaction: Building social bonds. J. Hum.-Robot. Interact. 2012, 1, 33–53. [Google Scholar] [CrossRef] [Green Version]
  73. Ahmad, M.I.; Mubin, O.; Orlando, J. Adaptive Social Robot for Sustaining Social Engagement during Long-Term Children-Robot Interaction. Int. J. Hum.-Comput. Interact. 20. [CrossRef]
  74. Cuadrado, L.E.I.; Riesco, Á.M.; López, F.D.L.P. ARTIE: An Integrated Environment for the Development of Affective Robot Tutors. Front. Comput. Neurosci. 2016, 10, 77. [Google Scholar]
  75. Baxter, P. Memory-Centred Cognitive Architectures for Robots Interacting Socially with Humans. arXiv 2016, arXiv:1602.05638. [Google Scholar]
  76. Smith, S.M.; Petty, R.E. Personality moderators of mood congruency effects on cognition: The role of self-esteem and negative mood regulation. J. Pers. Soc. Psychol. 1995, 68, 1092. [Google Scholar] [CrossRef] [PubMed]
  77. Lubold, N.; Walker, E.; Pon-Barry, H. Effects of voice-adaptation and social dialogue on perceptions of a robotic learning companion. In Proceedings of the Eleventh ACM/IEEE International Conference on Human Robot Interaction, Christchurch, New Zealand, 7–10 March 2016. [Google Scholar]
  78. Shahid, S.; Krahmer, E.; Swerts, M. Alone or together: Exploring the effect of physical co-presence on the emotional expressions of game playing children across cultures. Fun Games 2008, 5294, 94–105. [Google Scholar]
  79. Rau, P.P.; Li, Y.; Li, D. Effects of communication style and culture on ability to accept recommendations from robots. Comput. Hum. Behav. 2009, 25, 587–595. [Google Scholar] [CrossRef]
  80. Ahmad, M.I.; Mubin, O.; Orlando, J. Effect of Different Adaptations by a Robot on Children’s Long-term Engagement: An Exploratory Study. In Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology (ACM 2016), Osaka, Japan, 9–12 November 2016; p. 31. [Google Scholar]
Table 1. Summary of Adaptive Interaction studies in the health care and therapy domain.

| References | Robot | Study Design | Robot Capabilities | Adaptive Features |
| --- | --- | --- | --- | --- |
| Francois et al., 2008 | Aibo (anthropomorphic: Yes) | Subjects: 5 autistic children; No. of Interactions: 1; Interaction Type: autonomous; Measures: interaction-style recognition; Method: video analysis | Provides feedback after detecting the interaction style | User-interaction-style-based dialogue adaptation |
| Tapus et al., 2008 | ActiveMedia Pioneer 2-DX mobile robot (anthropomorphic: No) | Subjects: 19 adults; No. of Interactions: one-off; Interaction Type: autonomous; Conditions: robot vs. human; Measures: task performance; Method: video analysis | Updates dialogue and monitors the user's movement | User-task-performance-based adaptations |
| Robins et al., 2008 | KASPAR (anthropomorphic: Yes) | Subjects: 18 children; No. of Interactions: one-off; Interaction Type: WoZ-controlled robot; Measures: (1) duration of turn-taking pauses between child and robot during both games; (2) duration of the child's imitating reactions; Method: video analysis | Plays the game, displays gestures and facial expressions | Game-based adaptations |
| Tapus et al., 2009 | Torso (anthropomorphic: Yes) | Subjects: 9 adults; No. of Interactions: once every week for six months; Interaction Type: autonomous; Conditions: robot vs. human; Measures: task performance; Method: video analysis | Displays gestures and facial expressions, utters speech | User-personality-based adaptations |
| Boccanfuso et al., 2011 | CHARLIE (anthropomorphic: Yes) | Subjects: 3 children with type I diabetes; No. of Interactions: one-off; Interaction Type: autonomous; Measures: speed and accuracy of the robot's hand and face detection; Method: video analysis | Plays an imitation game and performs face tracking | Face detection and hand-movement imitation |
| McColl et al., 2013 | Brian (anthropomorphic: Yes) | Subjects: 8 elderly people; No. of Interactions: one-off; Interaction Type: autonomous; Measures: user engagement; Method: questionnaires | Updates dialogue and gestures based on the user's eating behaviour | Dialogue- and gesture-based adaptations |
| Wainer et al., 2013 | KASPAR (anthropomorphic: Yes) | Subjects: 6 autistic children; No. of Interactions: one-off; Interaction Type: autonomous; Conditions: robot vs. human; Measures: enjoyment, collaboration; Method: video analysis | Displays gestures and facial expressions, utters speech | Game-event-based adaptations |
| Stafford et al., 2013 | CHARLIE (anthropomorphic: Yes) | Subjects: 25 elderly people; No. of Interactions: 3 interactions per participant over one to two months; Interaction Type: autonomous; Measures: user experience; Method: questionnaires | Speech generation, face recognition, touch-sensor understanding, navigation to the user's room | User profiling (scheduling visits, medication reminders, blood-pressure measurements) |
| Coninx et al., 2016 | NAO (anthropomorphic: Yes) | Subjects: 3 children with type I diabetes; No. of Interactions: 3 interactions per child over one to two months; Interaction Type: autonomous with WoZ-controlled speech; Measures: user experience; Method: questionnaires and logged data | Switches between activities, displays gestures, dances, utters speech | User profiling (name, age, performance, preferences); user-emotion detection; memory adaptations |
Table 2. Summary of Adaptive Interaction studies in the Education domain.

| References | Robot | Study Design | Robot Capabilities | Adaptive Features |
| --- | --- | --- | --- | --- |
| Salter et al., 2007 | ROBALL (anthropomorphic: No) | Subjects: 12 children; No. of Interactions: one-off; Interaction Type: autonomous; Measures: accelerometers, tilt sensors; Method: video analysis | Moves around and avoids obstacles | User-playing-pattern-based adaptation |
| Janssen et al., 2011 | NAO (anthropomorphic: Yes) | Subjects: 20 children; Conditions: between-subjects (personalised vs. group-level versions); No. of Interactions: 3; Interaction Type: semi-autonomous; Measures: motivation; Method: questionnaires | Generates context-aware dialogue during the game | Game-event-based adaptation |
| Szafir et al., 2012 | Virtual Agent | Subjects: 30 children (10 per group); Conditions: between-subjects (low immediacy vs. random immediacy vs. adaptive behaviours); No. of Interactions: one-off; Interaction Type: autonomous; Measures: user attention and task performance; Method: questionnaires | Updates dialogue, controls voice and displays gestures | User-engagement-based adaptation |
| Brown et al., 2013 | DARWIN (anthropomorphic: Yes) | Subjects: 24 children; Conditions: between-subjects (verbally interactive robot vs. non-verbally interactive robot vs. verbally and non-verbally interactive robot vs. no robot); No. of Interactions: one-off; Interaction Type: semi-autonomous; Measures: user engagement; Method: questionnaires | Displays supportive dialogue and gestures during the game | Game-event-based adaptation |
| Kuehnlenz et al., 2013 | EDDIE (anthropomorphic: Yes) | Subjects: 84 adults; Conditions: full emotion adaption vs. implicit adaption vs. explicit adaption vs. no adaption; No. of Interactions: one-off; Interaction Type: autonomous; Measures: robot's helpfulness; Method: questionnaires | Displays emotional verbal and facial expressions | Emotion-based adaptations |
| Ros et al., 2014 | NAO (anthropomorphic: Yes) | Subjects: 12 children; No. of Interactions: 18; Interaction Type: autonomous; Measures: social engagement; Method: video analysis, questionnaires | Updates both verbal (text-to-speech) and non-verbal behaviours (LEDs, head poses and dance moves) | User-profiling and memory-based adaptation |
| Leite et al., 2014 | iCat (anthropomorphic: Yes) | Subjects: 16 children; No. of Interactions: 5; Interaction Type: autonomous; Measures: engagement, perceived support and social co-presence; Method: questionnaires | Empathic dialogue, facial expressions | User profiling (name, performance); user-emotion-based adaptation |
| Uluer et al., 2015 | ROBOVIE (anthropomorphic: Yes) | Subjects: 3 groups (18 graduate students, 6 children with typical hearing, 18 hearing-impaired children); No. of Interactions: one-off; Interaction Type: semi-autonomous; Measures: learning performance; Method: video analysis | Plays the game, displays gestures and LED lights, utters speech | Gesture-, LED- and speech/sound-specific adaptations |
| Greeff et al., 2015 | LightHead (anthropomorphic: Yes) | Subjects: 41 adults; Conditions: social vs. non-social; No. of Interactions: one-off; Interaction Type: semi-autonomous; Measures: robot's learning performance and gaze behaviour; Method: questionnaires and video analysis | Plays a turn-taking language game with a human teacher in order to learn words | Gaze-based adaptation; user-performance-based facial and verbal expressions |
Table 3. Summary of Adaptive Interaction studies in work environment and public spaces.
Table 3. Summary of Adaptive Interaction studies in work environment and public spaces.
Hoffman et al., 2007
Robot: Virtual Agent
Study Design: Subjects: 32 adults; No. of Interactions: one-off; Interaction Type: autonomous with WoZ-controlled speech; Measures: time taken to complete a task; Method: logged data
Robot Capabilities: attaches car parts by anticipating the user's actions
Adaptive Features: user- and game-based adaptations

Svenstrup et al., 2008
Robot: FESTO (anthropomorphic: yes)
Study Design: Subjects: 48 adults; No. of Interactions: one-off; Interaction Type: autonomous; Measures: user experience; Method: questionnaires and interviews
Robot Capabilities: detects individuals, plays music, and shows expressions
Adaptive Features: user-identification-based adaptation

Lee et al., 2009
Robot: SnackBot (anthropomorphic: yes)
Study Design: Subjects: 21 adults; No. of Interactions: four-month field study; Interaction Type: autonomous with WoZ-controlled speech recognition; Measures: user experience; Method: questionnaires and interviews
Robot Capabilities: utters context-aware speech and delivers food by navigating inside the hall
Adaptive Features: user-preference-based adaptation

Kanda et al., 2010
Robot: Robovie-IIF (anthropomorphic: yes)
Study Design: Subjects: 235 adults; No. of Interactions: 25-day field trial; Interaction Type: autonomous with WoZ-controlled speech recognition; Measures: enjoyment, social interaction, and visitors' perception; Method: questionnaires and video analysis
Robot Capabilities: identifies users, provides shopping information and route guidance, and inquires about personal information
Adaptive Features: user- and memory-based adaptation

Shiomi et al., 2013
Robot: Robovie-II and Robovie-miniR2 (anthropomorphic: yes)
Study Design: Subjects: 256 (only interacting users); Conditions: GUI vs. Robovie-miniR2 vs. Robovie-II; No. of Interactions: field trial; Interaction Type: autonomous with WoZ-controlled speech recognition; Measures: robot's learning performance and gaze behaviour; Method: questionnaires and video analysis
Robot Capabilities: utters speech and displays gestures and non-verbal behaviours according to the person's actions
Adaptive Features: user-specific gesture and dialogue-based adaptations

Sekmen et al., 2013
Robot: Pioneer 3-AT mobile robot (anthropomorphic: no)
Study Design: Subjects: 25 adults; Conditions: adaptive vs. non-adaptive; No. of Interactions: one-off; Interaction Type: semi-autonomous; Measures: user preference; Method: questionnaires
Robot Capabilities: detects and recognises faces and speech, understands natural language, filters information from the Internet, and navigates by following a map
Adaptive Features: speech- and user-based adaptation

Rousseau et al., 2013
Robot: Robovie (anthropomorphic: yes)
Study Design: Subjects: 381 visitors; No. of Interactions: field trial; Interaction Type: autonomous; Measures: user preference for the robot's behaviours; Method: video analysis
Robot Capabilities: detects users, navigates toward the user, and displays facial expressions, head movements, and arm gestures
Adaptive Features: user-identification-based adaptation

Aly et al., 2013
Robot: NAO (anthropomorphic: yes)
Study Design: Subjects: 35 children; No. of Interactions: one-off; Interaction Type: WoZ-controlled robot; Measures: engagement; Method: questionnaires
Robot Capabilities: utters speech and displays gestures
Adaptive Features: personality-based adaptations

Keizer et al., 2014
Robot: iCAT (anthropomorphic: yes)
Study Design: Subjects: 37 adults; Conditions: rule-based Social Skill Recognizer (SSR) vs. trained SSR; No. of Interactions: one-off; Interaction Type: autonomous; Measures: customer detection, detection time, system response time, drink-serving time, and number of engagement changes; Method: questionnaires and video analysis
Robot Capabilities: detects customers, tracks multiple customers, serves drinks, and takes orders
Adaptive Features: speech- and user-based adaptation

Kato et al., 2015
Robot: Robovie (anthropomorphic: yes)
Study Design: Subjects: 26 visitors; Conditions: intention-estimation algorithm vs. proactive vs. non-adaptive; No. of Interactions: field trial; Interaction Type: autonomous with WoZ-controlled speech recognition; Measures: interaction-intention success rate; Method: video analysis
Robot Capabilities: detects visitors and interacts with them
Adaptive Features: user-interaction-intention-based adaptation

Shiomi et al., 2015
Robot: NEO-PR45 (anthropomorphic: yes)
Study Design: Subjects: 28 adults; Conditions: simple robot vs. adaptive robot vs. human caregiver; No. of Interactions: one-off; Interaction Type: autonomous; Measures: ease of use, enjoyment, and degree of comfort; Method: questionnaires and interviews
Robot Capabilities: provides speech-based feedback and adapts its speed based on user preference
Adaptive Features: speed- and speech-based adaptation

Dang et al., 2015
Robot: NAO
Study Design: Subjects: 17 adults; Conditions: robot vs. no robot, under a normal and a stressful alarm system; No. of Interactions: one-off; Interaction Type: semi-autonomous; Measures: interaction preference; Method: video analysis
Robot Capabilities: generates true or false alarms through speech and gestures
Adaptive Features: user-stress-level- and personality-based adaptations

Liu et al., 2016
Robot: Robovie (anthropomorphic: yes)
Study Design: Subjects: 33 adults; No. of Interactions: one-off; Interaction Type: WoZ-controlled robot; Measures: naturalness, understandability, perceived politeness, and overall goodness of the robot's deictic behaviours; Method: questionnaire
Robot Capabilities: displays deictic behaviours through gaze and through casual and precise pointing
Adaptive Features: user-pointing-behaviour-based adaptations
Table 4. Summary of Adaptive Interaction studies in Home.
Torrey et al., 2006
Robot: Pearl (anthropomorphic: yes)
Study Design: Subjects: 49 adults; No. of Interactions: one-off; Interaction Type: autonomous; Measures: information exchange and social relations; Method: questionnaire and video analysis
Robot Capabilities: utters context-aware speech
Adaptive Features: dialogue-based adaptations

Torrey et al., 2007
Robot: Pearl (anthropomorphic: yes)
Study Design: Subjects: 66 adults; Conditions: "Question Only", "Gaze Added", "Delay Added", "Immediate Added"; No. of Interactions: one-off; Interaction Type: autonomous; Measures: performance, communication, and subjective evaluation; Method: video analysis, questionnaires, and interviews
Robot Capabilities: utters context-aware speech with gaze movement
Adaptive Features: dialogue-based adaptations

Cooney et al., 2014
Robot: SPONGE (anthropomorphic: no)
Study Design: Subjects: 20 adults (within-subject); Conditions: (naïve design vs. proposed design) and (reward vs. suggestion); No. of Interactions: one-off; Interaction Type: autonomous; Measures: total interacting users, total printed coupons, and interaction initiation; Method: video analysis
Robot Capabilities: provides rewards and suggestions based on understanding human gestures
Adaptive Features: dialogue adaptation based on human gestures

Gross et al., 2015
Robot: CompanionAble ’Max’ (anthropomorphic: yes)
Study Design: Subjects: 9 elderly adults; No. of Interactions: three-day trial; Interaction Type: semi-autonomous; Measures: technology acceptance; Method: interviews
Robot Capabilities: displays emotions; recognises, detects, and tracks a person; gives recommendations; understands haptic input
Adaptive Features: user-preference- and emotion-based adaptations

Youssef et al., 2015
Robot: DISH (anthropomorphic: no)
Study Design: Subjects: 3 groups (18 graduate students, 6 children with typical hearing, 18 hearing-impaired children); No. of Interactions: one-off; Interaction Type: semi-autonomous; Measures: learning performance; Method: video analysis
Robot Capabilities: understands knocking behaviour and moves on the table
Adaptive Features: user-knocking-behaviour-based adaptations

Share and Cite

MDPI and ACS Style

Ahmad, M.; Mubin, O.; Orlando, J. A Systematic Review of Adaptivity in Human-Robot Interaction. Multimodal Technol. Interact. 2017, 1, 14. https://doi.org/10.3390/mti1030014