Article

The Town Crier: A Use-Case Design and Implementation for a Socially Assistive Robot in Retirement Homes †

by Ana Iglesias 1,‡,§, Raquel Viciana 2,‡, José Manuel Pérez-Lorenzo 2,‡, Karine Lan Hing Ting 3,‡, Alberto Tudela 4,‡, Rebeca Marfil 4,‡, Malak Qbilat 1,‡, Antonio Hurtado 1,‡, Antonio Jerez 4,‡ and Juan Pedro Bandera 4,*,‡
1 Computer Science and Engineering Department, University Carlos III of Madrid, 28911 Leganés, Spain
2 Telecommunication Engineering Department, University of Jaén, 23700 Jaén, Spain
3 Living Lab ActivAgeing/LIST3N, Technological University of Troyes, 10004 Troyes, France
4 Department of Electronic Technology, University of Málaga, 29071 Málaga, Spain
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in Iglesias, A.; José, R.V.-A.; Perez-Lorenzo, M.; Hing Ting, K.L.; Tudela, A.; Marfil, R.; Dueñas, Á.; Bandera, J.P. Towards Long Term Acceptance of Socially Assistive Robots in Retirement Houses: Use Case Definition. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Ponta Delgada, Portugal, 15–17 April 2020; pp. 134–139.
‡ These authors contributed equally to this work.
§ Current address: Department of Electronic Technology, Telecommunications School, Universidad de Málaga, Campus de Teatinos s/n, 29071 Málaga, Spain.
Robotics 2024, 13(4), 61; https://doi.org/10.3390/robotics13040061
Submission received: 26 February 2024 / Revised: 3 April 2024 / Accepted: 4 April 2024 / Published: 9 April 2024
(This article belongs to the Special Issue Social Robots for the Human Well-Being)

Abstract:
The use of new assistive technologies in general, and Socially Assistive Robots (SARs) in particular, is becoming increasingly common for supporting people’s health and well-being. However, it still faces many issues regarding long-term adherence, acceptability and utility. Most of these issues stem from design processes that insufficiently take into account the needs, preferences and values of intended users. Others are related to the currently very limited number of long-term evaluations of SARs performed in real-world settings. This study presents the results of two regional projects whose starting hypothesis is that assessment in controlled environments and/or with short exposures may not be enough when designing an SAR for deployment in a retirement home, and that it is necessary to design for and with users. Thus, the proposed methodology has focused on use-case definitions that follow a human-centred and participatory design approach. The main goals have been facilitating system acceptance and attachment by involving stakeholders in the robot’s design and evaluation, overcoming usage barriers and integrating users’ needs. The implementation of the first deployed use-case and the two-phase pilot test performed in a retirement home are presented. In particular, a detailed description of the interface redesign process, based on improving a basic prototype with users’ feedback and recommendations, is presented, together with the main results of a formal evaluation that highlights the impact of the changes and improvements addressed in the first redesign loop of the system.

1. Introduction

The worldwide population is growing older, especially in certain regions such as the south of Spain [1]. The development of new tools to support the work of formal caregivers in retirement homes (homes where older people live together to be cared for) in certain monotonous tasks motivates the use of different technologies, which usually fall under the term Ambient Assisted Living (AAL). Among these technologies, Socially Assistive Robots (SARs), designed to provide assistance through social interaction [2], have emerged as a promising option due to their proactivity, autonomy, adaptability and potential acceptability. Moreover, they can act as powerful social facilitators when correctly integrated into shared environments [3,4,5]. However, successfully integrating these technologies in real-world contexts remains one of the main challenges that projects in this research field face. The lack of results based on long-term experimentation in real-world settings and the limited consideration of the user perspective in the early design and evaluation stages, along with difficulties in correctly describing, evaluating and regulating these devices [6,7], are currently preventing SARs from being widely used. As for the evaluation, the performance assessment should consider a social perspective beyond the technical dimension. Moreover, SAR performance should be evaluated in real scenarios and under everyday task conditions. However, approaches that consider long-term evaluation “in the wild” [8], i.e., in real-world settings without the constraints of test-bed or lab scenarios, are still uncommon [7].
This study aims to contribute to the development of socially accepted and efficient SARs. Hence, the methodology used to actively involve users in the design process of the SAR, combining aspects of human-centred design and participatory design approaches, is described in Section 2. Here, the users are both residents (the older people living in the retirement home) and the formal caregivers taking care of them. The possible use-cases that were designed based on the insights gained from interviews and a focus group with the stakeholders are enumerated in Section 3. Section 3.2 and Section 4 provide details about the implementation of the first use-case with a working platform within a pilot test deployed in the retirement home, which is described in Section 5. The first results of the pilot test are detailed in Section 5.6. In addition, the interface of the robotic platform was improved throughout the design, implementation and roll-out of the pilot test, with feedback from the stakeholders being sought at several stages of the process. The interface design and improvement process is detailed in Section 6. The study presented in this paper was conducted during the COVID-19 pandemic years, with strict limitations on access to the evaluation environment. The details of the problems encountered during the evaluation, and how we addressed them, are also provided.
Throughout the whole process, the AUSUS evaluation framework, an evolution of the USUS framework [9] developed within the authors’ previous research [10], was used to detect possible interaction problems with the robotic platform, and stakeholders’ feedback and emerging needs were considered during the continuous and iterative definition of the use-case and interface design. The main purpose of the AUSUS evaluation framework is to achieve a holistic and ‘in place’ evaluation of Human–Robot Interaction (HRI) for Socially Assistive Robots (SARs), considering first and foremost the needs of older adults, in terms of usability, accessibility, user experience and acceptance criteria.

2. Current Challenges for Socially Assistive Robots

SARs have become the research focus of many recent studies [5,11,12], in particular those conceived to support caregivers in retirement homes. SARs are proposed to take on simple and repetitive tasks, such as reminding residents of their medical appointments or of taking their pills at the right time, leaving more time for the caregivers to provide personalized care. However, despite commercial social robots having existed since the beginning of the 21st century, they still face limitations in adapting their behaviour to their possible interlocutors. Moreover, the design of SARs deployed in a retirement home should consider the frequently limited interaction capabilities of residents, the design of accessible and natural interfaces, and content adapted to cultural aspects and personal tastes [13].
A possible approach to endowing the robot with social skills is based on the definition of architectures such as ROCARE (Robotic Coach Architecture for Elder Care) [14], which provides the robot with social skills in a way similar to the development of human social cognition: starting with models of dyadic (one-to-one) interaction, passing through triadic interaction, in which intention is already involved, and reaching collaborative interaction. Other approaches consider usability and accessibility the key aspects when endowing social robots with interaction capabilities. Thus, they focus on the interface design and consider the naturalness of interaction the key feature for engaging different users [15]. A well-designed interface should allow users to socialize with, or at least interact with, robots without previous knowledge or experience of robots. Thus, as in everyday interaction, naturalness is usually related to the need for multimodal interfaces that facilitate interaction with users, avoiding accessibility barriers that hinder interaction [16,17].
However, including accessibility criteria in the interaction metaphors and interfaces is still uncommon in HRI, even in social robotics. Indeed, in the case of SARs intended to act as companion robots (e.g., PARO, AIBO, ifbot), whose main goal is improving the emotional state, evaluation in terms of accessibility and usability requirements is more limited [18]. One of the main reasons for this limitation is the added complexity arising from the additional specifications of SARs in terms of task variety, security and technological challenges. To the authors’ knowledge, only a few exceptions, such as the CLARC EU Project (ECHORD++ FP7-ICT-601116) [19], have considered accessibility in HRI for an SAR.
Even with the consequent generational narrowing of the digital gap, recent studies still show limitations in terms of the acceptability of current technology. In this sense, there are studies that evaluate design requirements with different stakeholders, as well as the impact and degree of acceptance of new technologies [17,18,20]. In particular, the study by Winkle et al. [21] provides a complete analysis of a mutual shaping approach, combining elements of human-centred design and participatory design of an SAR. As the main outcome, a significant shift in participants’ acceptance was found as a consequence of sharing and shaping the knowledge. Our research follows this same approach, and it deepens and puts into practice previous outcomes about the contribution of stakeholders’ involvement in the SAR design in terms of acceptability and adequacy [19], with the aim of obtaining both pragmatic and reflexive methodological conclusions.
Following the principles of user-centred design for an SAR conceived for a retirement home, a key aspect to consider is the User Interface (UI) requirements, in particular the design and development of interfaces that are accessible and usable. Indeed, as stated in previous studies with older adults, the lack of previous experience with similar technologies, together with natural changes and limitations associated with ageing [15], usually entails additional constraints in the interaction with Information and Communication Technologies (ICT). Hence, the robotic platform needs to be designed in an inclusive way, providing an accessible interface in order to allow all users to interact with the system with the same opportunities. In this paper, the main accessibility guidelines and recommendations in the Human–Computer Interaction (HCI) [22,23,24] and Human–Robot Interaction (HRI) [25] fields are followed and specified as system requirements. Moreover, our experience in previous research projects [19], where older adults interacted with robotic platforms, is considered for the analysis and design process.

3. Definition of the Use-Case

HCI, Software Engineering, Human-Factors Engineering and Assistive and Rehabilitation Technology disciplines propose different guidelines for the design of human–robot interfaces. These guidelines have been considered for the definition of this use-case, using an iterative design approach (Figure 1) that is explained in detail in the next subsections.
The particularity and added value of our approach is the degree of involvement of the stakeholders and its focus on understanding and meeting their needs. The principles of human-centred approaches [26,27] and participatory design [28,29] have been combined, not as a “toolkit” of methods, but to be constantly adapted to the situated and specific context [30] of the retirement home organisational environment, the users’ characteristics and the use-case scenario. This study details how this approach has been followed during the whole design and evaluation process.
Participants have been classified into three categories, depending on the expected level of interaction with the SAR and their role in the AAL context. Thus, the primary group consists of residents and the healthcare professionals and caregivers working in the retirement home; the secondary group comprises relatives, visitors, and healthcare professionals and caregivers (formal and informal) not working in the retirement home; and the tertiary group refers to other workers (e.g., management staff, external hardware and software providers).

3.1. Capture of Retirement Home Needs and Viability Study

In this phase, the needs of the users at a specific retirement home (Vitalia Teatinos in Málaga, Spain) were analysed through ethnographic observations and workshops with the primary stakeholders and the engineers. First, the needs and technical limitations of the robotic platform (e.g., physical barriers) were analysed by observing the environment and daily life routines in three different rooms. During one of these visits, several participatory co-design methods (collaborative mind-mapping, post-it exercises and task sorting) applied during a workshop allowed a preliminary assessment of tasks in which a basic robot without arms could be useful. For this workshop, eight primary participants were recruited by the occupational therapist: four men and four women, comprising three residents and five professionals (occupational therapist, physical therapist, psychologist, social worker and nurse). The session lasted 90 min, and as a result the main needs of the residents and caregivers working at the retirement home were identified and sorted. The advantage of using a participatory design approach in this case was that the users themselves were the ones who decided which robotic functionality was most appropriate for the retirement home according to their own needs, thus contributing more actively to the design of the robotic system and increasing the likelihood of gaining acceptance of the platform in the retirement home.
The engineering team took this list of tasks and requirements as the end-user input and analysed the feasibility of properly implementing those tasks given the limitations of the available robotic platform. Some examples of the SAR tasks, sorted according to the primary stakeholders’ priorities, are as follows: (i) postural control; (ii) selection of the menu options for lunch and dinner; (iii) announcing events at the retirement home. According to the analysis of needs, priorities and viability, the integration of the robot into the retirement home started with the simplest task: the announcer (Section 3.2).

3.2. Design of the First Use-Case: Announcer Task

This use-case was selected to be implemented first on the basis of (i) its technical simplicity, which allows designing basic prototype interfaces that can be iteratively improved, and (ii) the active role of the robot as a social entity in the retirement home, able to approach users and provide relevant information. Thus, with a view to a long-term evaluation, this task could promote a faster integration of the SAR into daily life routines. The use-case was designed collaboratively during the participatory workshop detailed in the previous section. The workshop involved describing the different activities the robot would perform, taking into account the needs of both staff and residents, as well as the limitations of the technology. The procedure of the use-case consisted of the robot moving once a day through two of the selected rooms in the retirement home and announcing the agenda of the day (activities or events scheduled for each of the resident groups). Although for the first tests in the real environment the robot was teleoperated, autonomous navigation was also integrated and tested in controlled environments during the design phase. It allowed the robot to move to the most appropriate position to make the announcement, while also including some randomization in the selection of these positions to avoid mechanical repetition. Additionally, the teleoperator was allowed to choose different positions before and during the experiment, to make sure that everyone present properly understood the announced information regardless of the visual, hearing or mobility limitations of the residents. This autonomous behaviour has been gradually integrated into the real use-case, as soon as it fulfilled robustness, safety and efficiency requirements. In fact, the robot currently performs the use-case autonomously in the retirement home, having met the conception of a “human-friendly robot”, that is to say, one in which interaction and safety issues are inseparable [31]. It is no coincidence that the terms “human-friendly robots” or “human-friendly robot design” in the literature [32] are more technology- than human-centred. However, in this research, design decisions have been made in accordance with the human-centred approach.
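To make the announcer procedure concrete, the following is a minimal Python sketch of the task loop with the randomized announcement positions described above. All names (AnnouncerTask, go_to, play_sound, say) are hypothetical placeholders for illustration, not the actual CORTEX component API.

```python
import random

class AnnouncerTask:
    """Minimal sketch of the daily announcer routine (hypothetical robot API)."""

    def __init__(self, robot, rooms, home_position):
        self.robot = robot                # assumed wrapper around navigation/speech
        self.rooms = rooms                # each room lists candidate announcement spots
        self.home_position = home_position

    def run(self, agenda_lines):
        for room in self.rooms:
            # Randomize the announcement spot to avoid mechanical repetition.
            spot = random.choice(room["announce_spots"])
            self.robot.go_to(spot)
            self.robot.play_sound("horn")      # attract attention before speaking
            self.robot.say("Good morning! Here is today's agenda.")
            # The announcement is repeated twice, as many residents miss it the first time.
            for _ in range(2):
                for line in agenda_lines:
                    self.robot.say(line)
            self.robot.say("Goodbye!")
        self.robot.go_to(self.home_position)   # return to the charging place
```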

4. Robotic Platform and Interfaces

This study uses the CLARA robot (see Figure 2) for the pilot experiment. It is an SAR developed in the concluded CLARC EU Project (ECHORD++ FP7-ICT-601116) [19]. The CLARA robot uses the CORTEX cognitive architecture [33], which eases the adaptation of the robot to new scenarios and the incorporation of new use-cases. Hence, it is possible to use this robotic platform, initially designed to drive functional, cognitive and motor tests in one-to-one interactions, for a completely new set of tasks, without changing its inner software architecture. Figure 3 shows the CORTEX components for the announcer use-case, which are in charge of different tasks (monitoring, interfacing, speech generation, battery management, etc.). All of them are connected to the Deep State Representation (DSR) graph. This graph works as the inner representation of the world and as a shared blackboard through which all the modules, the so-called agents, communicate. Most of the proposed architecture is programmed using the Robocomp framework [33], although the Robot Operating System (ROS) framework [34] is also used for some components connected via dedicated proxies. All modules rely on the Ice middleware [35] for communication.
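As an illustration of the blackboard pattern described above, the following Python sketch shows agents communicating through a shared graph. It mimics the DSR idea only conceptually; the class and method names (SharedGraph, update_node, subscribe) are invented for this example and are not taken from the actual Robocomp/CORTEX API.

```python
from threading import Lock

class SharedGraph:
    """Toy stand-in for the DSR graph: a thread-safe node store with subscribers."""

    def __init__(self):
        self._nodes = {}
        self._subscribers = []
        self._lock = Lock()

    def update_node(self, name, attributes):
        with self._lock:
            self._nodes.setdefault(name, {}).update(attributes)
        for callback in self._subscribers:
            callback(name, attributes)        # notify every agent of the change

    def subscribe(self, callback):
        self._subscribers.append(callback)

# Two "agents" sharing the graph, loosely analogous to CORTEX agents sharing the DSR.
graph = SharedGraph()

def speech_agent(node, attrs):
    # Reacts when the battery monitor publishes a low-battery state.
    if node == "battery" and attrs.get("level", 100) < 20:
        print("Speech agent: announcing 'I need to recharge'")

graph.subscribe(speech_agent)
graph.update_node("battery", {"level": 15})   # the battery agent writes its reading
```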
The main specifications that were considered in the announcer’s design were centred around two interfaces:
  • The teleoperator interface, which allows a remote operator to control the robot from a computer, tablet or mobile phone through a web browser. Its first version (Figure 4a) provided the following options to the operator: (i) announcing a message, which can be written by the teleoperator or selected from a list; (ii) announcing a randomly generated message to greet the residents; (iii) playing some specific sounds (horn); (iv) setting the robot’s speaking volume; (v) going to a mapped location. Additionally, this interface displays live video streaming from an IP camera installed in the robot’s head, the battery levels for both the robot and the teleoperation joystick, the selected navigation mode (joystick/autonomous) and a text console with information about the actions executed by the robot. Later, the interface was updated with some additional features, as explained in Section 5.6.
  • The interface of the robot with the residents. This interface fulfils the following requirements, gathered iteratively via the recommendations of the stakeholders in the participatory design, and also based on the users’ abilities, disabilities and interaction needs according to a user-centred approach: (i) be multimodal, supporting voice, images and text displayed on a screen to inform about the events; (ii) display specific events associated with birthdays or anniversaries; (iii) warn about the information announcement with a repetitive horn sound in order to attract attention; (iv) say goodbye and return to the home position after the announcement. The touch-screen placed on the robot’s torso (see Figure 2) has been used as the screen interface to present the visual information targeted at the residents. In a first version, the screen showed a list with a brief version of the daily agenda, as well as a text transcript of the robot’s speech at any given moment, or a loudspeaker icon when the horn sound was being played (Figure 5a). This interface was deeply modified after the pilot tests, as explained in Section 5.6. Its design and evaluation process is detailed in Section 6.
Both visual interfaces displayed in Figure 4 and Figure 5 have been implemented with web technologies and connected to the CORTEX architecture through a web server (see Figure 3); a minimal sketch of this command path is shown below.
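The following sketch shows how a browser-based teleoperation command could be relayed to the architecture through a web server, assuming a Flask server and a hypothetical forward_to_cortex bridge; the actual web server and its CORTEX bindings are not detailed in this paper.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def forward_to_cortex(command, payload):
    """Hypothetical bridge: in the real system this would write to the DSR graph."""
    print(f"CORTEX <- {command}: {payload}")

@app.route("/say", methods=["POST"])
def say():
    # The teleoperator types or selects a phrase in the browser and posts it here.
    message = request.get_json().get("message", "")
    forward_to_cortex("say", {"text": message})
    return jsonify(status="ok")

@app.route("/go_to", methods=["POST"])
def go_to():
    # Send the robot to one of the pre-mapped locations.
    location = request.get_json().get("location", "home")
    forward_to_cortex("navigate", {"target": location})
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=8080)
```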

5. Pilot Experiment

The SAR capabilities, features and use-cases were conceived to follow an incremental redesign process (see Section 3), in which users were involved in the entire process of SAR development, from the capture of requirements to the evaluation of the system. The needs of the users are always the central focus of the entire process. Stakeholders have collaborated in the design of the system and provided valuable feedback in successive iterations, during evaluations of the interfaces and use-cases. Moreover, in the future the system will incorporate different use-cases to be evaluated in the long term, as well as new interaction and interface capabilities.
This section describes the first pilot experiment, which was performed in the retirement home in February 2020 in two phases, following the AUSUS evaluation framework. Each phase consisted of one execution of the announcer use-case. A fortnight separated the two phases, giving the technical team time to incorporate several updates into the SAR, as described below. This section presents a summary of the preliminary results, which are the main qualitative outcome of the first pilot experiment. The next section provides a detailed evaluation of one of the robot’s interfaces, specifically the information interface, and assesses the interaction between the older adults and the SAR.

5.1. Aims

The first of these phases was designed with the following objectives:
  • From a Technological Perspective, the objectives were (i) to detect possible issues in the SAR performance during the interaction in the retirement home (in a real environment, in contrast to a controlled one) and (ii) to technically evaluate the interfaces to detect accessibility and usability barriers.
  • From a Social Perspective, the objectives were (i) to introduce the robot to the residents, explain to them the main aims of the project and make them aware that it would be present in their daily lives for a few weeks during the first pilot experiment; and (ii) to evaluate the residents’ first impressions of the SAR as a phase of the participatory design, by involving them in the process of deciding certain aspects of the design of the use-case and the robot’s functionality, and in making recommendations about the interface design.
The second phase of the pilot experiment was intended to prepare the long-term experiments to be carried out with the residents. The main aims were (i) to design the evaluation setup and protocol for executing the announcer use-case, following a participatory design approach and involving the caregivers in the process (time to be spent in each room, frequency and positions where the message should be repeated, etc.); (ii) to draw conclusions about the best way to gather users’ feedback (through interviews, questionnaires, focus groups or observation); and (iii) to supervise the extent to which the prototyping cycle was able to incorporate valuable improvements from the collected feedback, by analysing the SAR performance once the minor technical improvements suggested in the first phase had been implemented.

5.2. Participants

Two categories of end-users were involved in this first pilot experiment: residents and professionals in the retirement home. Visitors were not yet included in these experiments, although they will be present in the long-term evaluation process.
From an initial participatory workshop with the occupational therapist and the technical team, it was found that only two of the seven existing common rooms in the retirement home were suitable for the pilot experiment, due to the lower degree of dependency of the residents who used them and the physical characteristics of the rooms, which allowed the robotic platform to move around. Hence, the inclusion criteria for this pilot test admitted participants who had no cognitive impairments, or only very mild ones (i.e., a Mini Mental Test score over 24 [36]). In the two phases of the pilot, the robot interacted with approximately 40 residents (approx. 20 per room), most of whom were present in the introduction session. For this first pilot experiment, inclusion criteria related to their abilities/disabilities or preferences were not taken into account. The occupational therapist presenting the robot, along with several caregivers (2–3 per room), were also present during the experiments.
The tests were performed in a real environment, without changing the daily routines of the retirement home at all. Hence, people who were not initially included in the experiments, including relatives and professionals, met the SAR in the corridors. Moreover, during the experiment there were residents and caregivers going in and out of the rooms. Finally, during the first phase of the experiment, and due to the very positive response of the residents, the therapists asked for the robot to move around an open-air terrace, where other residents were enjoying some activities related to the upcoming carnival, so that they could also meet it.
The technical team for these pilot tests consisted of two engineers, who were responsible for teleoperating the robot using a tablet and a joystick. These operators were also in charge of observing the residents’ interactions with the robot and providing feedback about the usability and accessibility of the interfaces, following a user-centred approach. The technical team also included a rehabilitation specialist, who prepared and conducted the initial contact with the residents.

5.3. Material

The pilot test was linked to complementary research questions (technical viability, usability–accessibility, utility and acceptability), aiming to evaluate both the technical performance of the SAR and its interfaces and, regarding social aspects, the first impressions that the robot produced in the residents and professionals at Vitalia. Data were collected using two main sources: (i) video recordings from the camera mounted on the robot and (ii) informal interviews with the residents, caregivers and technicians. Thus, video-ethnographic activity analysis [37,38] is combined with observation and experiential user feedback.
Before the experiments, the participants filled in an informed consent form. As detailed above, testing in a daily-life field scenario meant that unexpected people could be around the robot or even interact with it. Privacy was always guaranteed, as images and videos taken inside the retirement home were subject to Vitalia’s regulations regarding this particular aspect.

5.4. Environment

Two shared rooms in the retirement home were used for the pilot experiment. These rooms were the living rooms used by the participants to interact with other residents or to perform daily activities. The rooms were located consecutively along the same corridor, and both were also connected to an outdoor terrace. They were spacious, with several chairs. Some residents were sitting in these chairs during the tests; others were walking around or standing up. Many of them were participating in other activities or speaking with other residents when the robot arrived. Moreover, 2–3 caregivers were also in the rooms. No TV or music was played in the rooms during the evaluation phases, in order to allow people to listen to the robot.

5.5. Procedure

Before the pilot experiment took place, consent forms were distributed among the caregivers working in Vitalia Teatinos, who helped the residents fill them in during their daily routines, some days before the pilot started.
In the first phase, the residents and professionals in the retirement home were informed about the project aims and told that a robotic platform would go to the two specified rooms to announce events and the daily menu, so the residents and professionals who wanted to participate in the test stayed in these rooms. Moreover, in order to help with the introduction of the SAR into the daily life of the retirement home, the occupational therapist of Vitalia suggested the activity of naming the robot, following a participatory design approach, by choosing and agreeing on a name together. The robot was finally named ‘Felipe Vitálico’ by the residents participating in the experiment. The naming procedure occurred naturally, without any intervention from the researchers or staff. During the process, a resident compared the robot’s appearance with that of the current King of Spain, Felipe VI, describing it as sophisticated and haughty. The remaining participants in the naming process found the comment amusing and agreed to use the suggested name. As a surname, they preferred to reference the name of the retirement home: “Vitálico” means “from the Vitalia retirement home” in Spanish.
During the two phases, the robotic platform was configured to be fully teleoperated by the technicians. The robotic voice volume was fixed at the maximum level, to allow the robot to speak aloud and be heard throughout the room. Different voices were tested, the first option being the Microsoft Helena Spanish voice. The technicians stayed out of the rooms while the robot was performing the use-case. However, they always kept a line of sight to monitor the interaction between the users and the robot. In addition, the teleoperator could make the robot utter simple phrases such as introductions, greetings and farewells, to exhibit more interaction capabilities when a resident approached the robot to initiate a conversation.
The robot started the session in the corridor. Then, it moved to the first room. Once it was located in an adequate spot in the middle of the room, it made a horn sound (similar to a town crier’s horn) to capture the attention of the users around it. Then, it said hello and started announcing the events and the menu. The announcement was repeated twice, as many residents did not hear it properly the first time. Once the announcements were done, it said goodbye and moved to the other room. In that second room, due to the hour at which the tests were performed, tables and chairs were distributed in groups, ready for lunch-time. Residents were walking around or sitting in groups. Hence, the robot was teleoperated to approach each of the three main groups and made the announcements for each of them. After it finished announcing the events and the menu, it said goodbye and moved directly to its charging place.

5.6. Results

This subsection presents the preliminary results based on observation and informal interviews with the professionals, residents and robot teleoperators of the evaluation group, obtained following a user-centred design approach. These preliminary results about additional needs and requirements were used to detect possible interaction problems. Moreover, during the interviews and participatory design sessions, users proposed changes to the interface design and functionality. These changes are described below; users were also able to draw new interface mock-ups when needed during the co-design sessions.

5.6.1. Technical Issues and Recommendations

The first impression from the pilot experiment, even with such a simple and controlled use-case, is the large gap between the interaction capabilities in the real scenario and those in normal operation in a controlled environment. The complexity of interaction in a crowded environment (noise, people with limited interaction capabilities, barriers, uncontrolled events, etc.) became evident when contrasted with the first impressions obtained in the lab. However, interesting conclusions could be drawn for the redesign of the teleoperation interface for the second intervention, and for the initial design of other use-cases of incremental complexity, as detailed below.
Teleoperation interface. Selecting phrases from pop-up lists quickly proved not to be a valid method for fluent interaction. The teleoperator did not have time to write, or even select, a proper phrase when, for example, someone said ‘hello’ to the robot and expected a response. The “delay”, from a conversational mechanics perspective [39], limits fluid interaction and can be perceived as a disadvantage rather than an improvement. Moreover, while this issue might have been mitigated if the robot incorporated some mechanism to acknowledge the reception of a phrase, the difficulty of automatically recognizing users’ speech in a robust way within the environment of the retirement home made this option unfeasible for the employed platform.
Hence, we changed the teleoperation interface to allow the teleoperator to make the robot speak by simply clicking a button. The cognitive architecture of the robot selects a random phrase (or a phrase composed of random components) when that button is clicked. Figure 4b shows the new teleoperation interface. As depicted, the interface incorporates more commands that allow selecting a phrase from a pop-up list or randomly picking a phrase rather than writing it. Four categories were used to group these phrases: greetings, goodbyes, the events agenda and the food menu choices for the day. A minimal sketch of this phrase selection is shown below.
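The following sketch illustrates the one-click random phrase mechanism, assuming a small phrase bank per category; the category names, templates and composition scheme are illustrative, not the system’s actual phrase database.

```python
import random

# Hypothetical phrase bank grouped into the four categories used by the interface.
PHRASES = {
    "greeting": ["Hello everyone!", "Good morning!", "Nice to see you all!"],
    "goodbye": ["Goodbye!", "See you tomorrow!", "Have a nice day!"],
    "agenda": ["Today we have {activity} at {time}."],
    "menu": ["Today's lunch options are {first} or {second}."],
}

def random_phrase(category, **slots):
    """Pick a random template from the category and fill its slots, if any."""
    template = random.choice(PHRASES[category])
    return template.format(**slots)

# One button click -> one immediate utterance, with no typing delay.
print(random_phrase("greeting"))
print(random_phrase("agenda", activity="bingo", time="5 pm"))
```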
Person detection. The robot’s ability to detect people facing it, and willing to interact with it, is significantly poorer when the robot works in crowded daily-life environments than in a controlled lab environment. The person detection system allows the teleoperator to send a greeting command each time the SAR detects a person. However, using only a vision-based human detection system, the robot’s ability to detect people is limited. Therefore, the system was changed to allow the teleoperator to send a greeting command manually when needed. In addition, a more robust multimodal detection system is currently being developed and tested on the robot, fusing visual features, deep neural network training, laser scans and RFID readings. This system will be further reinforced by an automatic speech recognition module able to detect when a person greets the robot. A simple sketch of such a fusion scheme is given below.
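As an illustration of the multimodal fusion idea (the paper does not detail the actual fusion algorithm), the sketch below combines per-sensor detection confidences with a simple weighted vote; the weights and threshold are arbitrary placeholder values.

```python
# Placeholder confidence weights per modality (arbitrary values for illustration).
WEIGHTS = {"vision": 0.5, "laser": 0.3, "rfid": 0.2}
THRESHOLD = 0.4

def person_detected(confidences):
    """confidences: dict mapping modality -> detection confidence in [0, 1]."""
    score = sum(WEIGHTS[m] * confidences.get(m, 0.0) for m in WEIGHTS)
    return score >= THRESHOLD

# Vision alone is unreliable in a crowded room, but laser + RFID can back it up.
print(person_detected({"vision": 0.2, "laser": 0.9, "rfid": 1.0}))  # True
print(person_detected({"vision": 0.3}))                              # False
```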
Voice volume. While the maximum volume of the robot’s loudspeakers was adequate in a moderately quiet laboratory environment, it was not enough for the robot to be heard in the crowded shared rooms of the retirement home. The situation was even worse when the robot moved outdoors. Only people within an approximate distance of 1.5 m could properly understand it. In situations where the therapists asked people to keep silent in the shared rooms, that range could reach 2.5–3 m around the robot. Therefore, many residents expressed that they could only barely understand what the robot said.
This intelligibility problem was corrected between phases one and two of the pilot test by providing the robot with more powerful (10 W) loudspeakers, and by modifying the robot chassis to increase the voice projection (holes were drilled where the loudspeakers are located behind the chassis). In the second phase of the experiment, these upgrades allowed the robot to be heard anywhere in the room.
Attention-catching mechanisms. Residents and professionals in the shared rooms were expecting the robot to come in, as they had been informed about it beforehand. However, their attention easily drifted after the robot began to speak. Some of the reasons were as follows: (i) the robot remained static while speaking; (ii) its voice was monotonous; (iii) some listeners did not understand it properly.
We recommend addressing these issues as follows: (i) providing the robot with back-channelling behaviours that allow acknowledging or simply attracting attention; the first proposal to be tested will make the robot perform randomized, spontaneous small movements while speaking, which could help keep the residents’ attention, although some tuning will be necessary to find a compromise that avoids annoying and useless motion; (ii) evaluating different voice generation systems for the robot, or using recorded messages; (iii) using more sounds to attract attention during the speech, in the same way that the horn sound is played before the robot begins the announcements.

5.6.2. User’s Acceptance of the Robotic System

Acceptance evaluation. In this first pilot experiment, acceptance was evaluated through observations and informal interviews. The acceptability questionnaires that had been prepared were ultimately not administered, due to the lack of time and the busy schedule of the residents that day. This decision was taken following the guidelines signed in the agreements, which consider ethical aspects: the experiment should avoid over-tiring the residents, becoming invasive in any manner or disrupting their usual organisation. Moreover, the retirement home was closed to visits only a few days after the second phase of the pilot, due to the COVID-19 pandemic.
Our recommendation for future experiments is to be ready for busy schedules with different options: (i) asking the professionals whether they could help the residents fill in online post-test questionnaires; (ii) gathering general information from the workers in focus groups or participatory workshops; (iii) completing the post-questionnaires with a reduced number of users (not all the users in the room).
First impressions. These were very positive, considering only the observations. Indeed, only one of the forty residents involved objected to the robot’s presence, and the occupational therapist indicated that this was the normal behaviour of this participant when facing changes in their routine.
During the first few minutes, all the residents focused their attention on the robot, to the point where some of them noticed that one of the robot’s eyes was slightly bigger than the other. The reason for this difference is that two different cameras are mounted in the robot’s head: an IP camera that helps in the teleoperation process and a webcam connected to the embedded PC to record the sessions. Many residents indicated their willingness to integrate the robot into their daily activities and, in some cases, even introduced it to their grandchildren.
The participatory process of naming the robot was particularly enjoyable and was a determining factor in the users’ acceptance of the robot. As a relevant example of how the perception of the robot evolved during these first few minutes, the initially hesitant resident wanted to name it after her child after seeing it move and speak.
Both residents and caregivers seemed to use the SAR (their ‘Felipe’) as a good excuse to have a good time. Hence, its role as a social facilitator was fully demonstrated, even in this pilot test. It was also interesting to note their initiative in bringing the robot to the terrace to meet the other residents.
The involvement of the residents who took part in the participatory workshop organized before the pilot test was also very positive: these people talked about the robot (even before seeing it for the first time) to the other residents, and they were eager to see how the robot looked and performed.
Some relatives met the robot in the corridor when it was moving from one room to another. They wanted to take pictures, asked the researchers about the capabilities and features of the robot and welcomed the initiative.
The reaction and acceptance of the professionals working in the retirement home when they saw the robot were also noteworthy, as they could see how the robot was able to successfully perform the functionality they had decided on in the first participatory workshops. Furthermore, they were also interested in the current and future features of the robot and explored the possibility of using it for different purposes and users in the future (e.g., for people with dementia, or in interventions for drug addiction).
False expectations. Although the first interventions were positive in showing the robot’s potential, false expectations were also easily created. Thus, despite the robot’s non-human appearance and the occupational therapist’s explanations of its basic task-adapted functionality and limitations, most residents demanded greater functionality. As an example, when it announced the menu choices for the day, four of them asked it for additional new dishes, genuinely believing that the robot could understand what they were saying and could intercede to change the menu. In general, residents expected behaviours of human face-to-face interaction [40] to be maintained with Felipe. Thus, these types of robots are understood from the beginning to act more as a social facilitator than as a mere tool, so dyadic and even triadic interaction [14] modalities must be considered from the beginning. These results led to a redesign phase based, first, on improving the speech module, either by improving the teleoperation interface shown in Figure 4 or by using advanced language models to automate this module with artificial intelligence algorithms (although it does not yet include speech recognition), and second, on including autonomous human recognition in the capabilities of the robot.
Information interface. Many residents indicated that they would prefer to access information such as that provided by the robot through touch-screen or voice-accessible menus. Therefore, the use of the chest screen was modified not only to present the announced information in a visible form but also to make it accessible through touch interaction. Figure 5b shows our first prototype for the new chest-screen interface, based on the stakeholders’ proposals gathered as part of the evaluation process in a participatory co-design session, in which accessibility criteria were considered a key part of the design process. Hence, messages are delivered by the robot using a voice announcement while their transcription is displayed on the screen, augmented with adequate representative icons. The displayed colours and text have been selected considering accessibility criteria. The icons have been downloaded from the ARASAAC Augmentative and Alternative Communication symbols database (available online: https://arasaac.org/, accessed on 3 April 2024). The ongoing process of designing and evaluating this interface is described in Section 6.

6. Robot Information Interface: Co-Design Process

Following participatory design principles, the users’ feedback and the results of the co-design session from the pilot experiment were employed to enhance the interfaces of the announcer use-case in different iterations, as explained below. After the interfaces were implemented, it was necessary to evaluate these updated versions with real users and the final robotic platform (an autonomous robot that approaches the users and announces events), before deploying the CLARA robot in the retirement home (second phase of the pilot).
This section describes the iterative process of the robot’s information interface co-design and evaluation based on the AUSUS framework [10] and its continuous adaptation based on the results obtained in previous iterations.
As this project was carried out during the COVID-19 pandemic, it was not always possible to involve older adults in the retirement home, so the evaluations were carried out in different phases:
  • First evaluation. Evaluation in a controlled environment: the process of improving the interfaces by taking into account the opinions of real users was carried out in a controlled environment, where people who were not as vulnerable to COVID-19 as the older adults could participate in the improvement of the interfaces, give their feedback and collaborate in participatory sessions on the design of the interfaces. This evaluation was carried out in one of the authors’ research laboratories. It is described in detail in Section 6.1 and led to the improvement of the interfaces prior to their use with older people in the retirement home.
  • Second evaluation. Evaluation in the retirement home: this evaluation implements the second phase of the pilot experiment, in which the improved interfaces were used for five months in the retirement home. Older people and the staff at the retirement home could interact with the robot, and objective data about this interaction are detailed in Section 6.2. After the evaluation, stakeholders participated in a co-design session to provide feedback on the interface again and to propose new interfaces and protocols, including the colour and position of buttons, navigation, the robot’s voice, the sequence of activities performed by the robot, etc.

6.1. Evaluation in a Controlled Environment

This section describes the user evaluation that was conducted to assess the usability, accessibility, user experience and acceptability of the robot’s information interface according to the AUSUS methodology. This evaluation was performed in two phases, consisting of the following: (1) partial and informal evaluations, performed during the development of the interfaces; the procedure combined user-centred design and participatory design approaches, allowing the participants to express their impressions, recommendations and feedback after free interaction with the robot interfaces, which helped the researchers redesign the interfaces according to refined requirements; (2) formal evaluations after redesigning the interfaces with the previous outcomes, following specific procedures to assess usability, accessibility and acceptability.

6.1.1. Participants

Ten participants (six males and four females) were voluntarily involved in the evaluation. Table A2 shows that all participants were familiar with using phones, tablets or computers for communication, work, study or entertainment. Only 30% of the participants had interacted with drones or robots for research and study purposes. It is worth noting that 30% of participants considered their interaction with intelligent virtual assistants to be interaction with robots.
The selected sample reflects diversity in terms of communication abilities, age, gender and experience in using electronic devices and robots. Cognitive disabilities were considered an exclusion criterion when selecting participants for the evaluation. There were nine Spanish participants and one Iranian participant. In Table A1 (Appendix A), the severity of each disability is described using a 5-point Likert scale, where 1 is the most severe case (I can’t hear, see, move, etc.) and 5 means not having a disability at all (I can hear, see and move perfectly, etc.). Table A1 shows that a quarter of the participants have severe hearing and visual disabilities. The majority of the participants would like to interact with robots in the future in the following areas: welfare, work spaces, home, entertainment and as a personal assistant. One of them stressed the importance of using robots in all aspects of life, provided that humans are not replaced by them. Only two participants expressed their unwillingness to interact with robots, because they preferred to interact with people instead of machines; both of them had never interacted with any robot before.

6.1.2. Environment

The evaluation sessions were carried out in the Artificial Intelligence SCALAB and GigaDB laboratories at the Leganés campus of the Carlos III University of Madrid, in June 2021. Only the participants, the robot and the evaluator were present during the sessions, to provide a comfortable and private environment, as well as to help participants focus while interacting and accomplishing the required evaluation tasks.

6.1.3. Materials

The following methods and materials were used:
  • CLARA Robot. Software and hardware architecture as detailed in Section 4 and Section 5.
  • Questionnaires. Pre-test and post-test questionnaires were used to collect data from the participants. In the pre-test questionnaire, eighteen questions were used to capture participants’ socio-demographic information and their experience with electronic devices and robots. These data allowed the researchers to perform a deeper analysis of the results, correlated with previous experience and socio-demographic information. The post-test questionnaire, detailed in Appendix A, included 26 questions investigating participants’ responses on usability, accessibility and acceptance aspects of both the hardware and software interfaces of the use-case. These questions were answered on a 5-point Likert scale, where 1 corresponds to “strongly disagree” and 5 corresponds to “totally agree”.
  • Informal interview. After completing the questionnaires, users provided valuable informal feedback on their interactions with the robotic platform, proposing changes to the information interface design and reporting issues encountered during the interactions.

6.1.4. Procedure

The sessions were conducted as follows:
  • Introduction and pre-test information. Once the participants were selected, the goal of the evaluation process was explained to them. Then, the participants signed a consent form. Finally, the ten participants answered the pre-test questionnaire.
  • Evaluation session. In order to test the robot interface in depth, the participants were provided with a list of tasks, to be accomplished during their interaction with the robot. Seventeen short and direct tasks were designed to ensure all possible robot interaction modes were used during the experiment. These tasks involved obtaining information about the scheduled activities in their daily agenda, checking the date and time, information about the weather and information about the birthdays of the residents and staff. Moreover, the users could adjust the robot’s voice settings.
  • Post-test feedback. After each evaluation session, the researchers had an informal interview with the participants, in which they also filled in the post-test questionnaire using a computer.

6.1.5. Evaluation Results

The completeness of the questionnaires was checked, and the responses obtained from the post-test questionnaires and interviews were analysed and correlated with the participants’ characteristics and previous experience, which were defined based on the responses to the pre-test questionnaires. The results obtained for the three factors (usability, accessibility and user acceptance) can be found in Appendix B in Table A3, Table A4, Table A5, Table A6 and Table A7, and Figure 6 depicts the average values and standard deviations for every table; a minimal sketch of this aggregation follows the list below.
  • Usability. Table A3 shows the six questions that evaluated usability: five had a 5-point Likert scale answer format and one was open-ended. Global usability was rated 4.3 on average, with a standard deviation of 0.76 (Figure 6). The questions focused on four factors: intuitive interaction, effectiveness, interface appearance and robustness. In summary, the majority of participants (7 of 10) agreed that the interfaces were intuitive and easy to understand. Understanding was limited (neutral answer) for one user due to a foreign mother tongue and for another due to a moderate visual disability. All participants agreed that the robot was effective in assisting them to complete the required tasks, except for one user who reported little experience using electronic devices and had both hearing and visual disabilities (neutral answer). The interface’s appearance was well appreciated by all users, except those with moderate and severe visual disabilities. The interface was robust in general, and only 4 out of 10 users detected some issues: (i) the robot’s voice did not match the subtitle displayed on the screen (essentially, they were not synchronized, because the subtitle lagged one to two seconds behind the robot’s speech) and (ii) the buttons were small, and users were confused about the functionality of the buttons controlling the voice parameters, such as volume, velocity or tone.
  • Accessibility. Fourteen 5-point Likert scale questions assessed the accessibility of the robot’s interfaces. More precisely, these questions evaluated to what extent the robot’s interfaces were perceivable, understandable and operable. The results for each subfactor are detailed in Table A4, Table A5 and Table A6, with average values of 4.2, 4.7 and 4.6 and standard deviations of 0.91, 0.45 and 0.74, respectively (see Figure 6). The users indicated that the robot’s voice was not clear enough and should be corrected before its integration into the retirement home; the information provided through the display (including the captioning of the robot’s voice) was clearly perceived by the majority of the users, all of whom were able to read everything from one metre away from the screen. Regarding the understanding factor, the participants were able to understand the robot’s speech, the purpose of each interface of the events announcer application and the flow of interaction with these interfaces. Moreover, they all found the application windows logically ordered. Finally, regarding the operability factor, the participants were able to operate the application’s software and hardware interaction components in general, such as the touch screen and voice volume buttons, and they were able to navigate through the town crier application windows. Eight out of ten users attempted to increase the volume of the robot platform. Only one user encountered difficulties adjusting the volume, due to the small size of the buttons.
  • User acceptance. Table A7 contains questions designed to ascertain whether participants were satisfied with how the robot performed its functions and to determine their willingness to use the robot in the future. An average value of 4.5 was obtained for this factor, with a standard deviation of 0.76 (see Figure 6), and in the interviews the participants generally indicated that they were satisfied with the robot’s interface and that they would like to use it in the future, both for the same purpose and for other tasks.
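As a minimal illustration of how such Likert aggregates can be computed (the paper does not specify its analysis tooling, so the response values below are made up):

```python
import statistics

# Hypothetical 5-point Likert responses (1-5) for one factor, one list per question.
responses = {
    "intuitive": [5, 4, 4, 3, 5, 4, 5, 3, 4, 5],
    "effective": [5, 5, 4, 4, 5, 3, 5, 4, 4, 5],
}

# Pool all answers for the factor, then report mean and standard deviation.
all_scores = [s for scores in responses.values() for s in scores]
mean = statistics.mean(all_scores)
std = statistics.stdev(all_scores)   # sample standard deviation
print(f"factor average = {mean:.2f}, std = {std:.2f}")
```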

6.1.6. Evaluation Recommendations

After this in-depth evaluation, important recommendations were elicited from the users’ feedback, and they were incorporated into the version used in the long-term experiment in the retirement home. Some examples of recommendations were linked directly to the interface design, such as the following: the letter size of the interfaces should be configurable, in order to make the interface more accessible to users with visual disabilities; a more intuitive interface is needed for users with a combination of hearing and visual disabilities who are not used to digital devices; it is especially important to synchronize the robot’s voice with the captioning; and a clearer, more natural voice would help users understand the robot better. A minimal sketch of one way to keep captions in step with the speech is shown below.
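One simple way to address the caption lag noted above is to display each caption fragment immediately before speaking it, rather than streaming the transcript independently. The sketch below assumes a hypothetical blocking tts.speak() call and a display.show() method; the platform’s real speech and screen APIs are not described at this level in the paper.

```python
import re

def announce_with_captions(text, tts, display):
    """Speak sentence by sentence, updating the caption just before each utterance."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for sentence in sentences:
        display.show(sentence)   # caption appears first, so it can never lag behind
        tts.speak(sentence)      # assumed to block until the utterance finishes
    display.show("")             # clear the caption when the announcement ends
```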

6.1.7. Advantages and Limits

One of the main advantages of this evaluation with younger users is that, thanks to the active participation of the users, usability and accessibility barriers were detected, and the researchers could improve the interfaces to achieve a better user experience and acceptance before integrating them into the next pilot phase in the retirement home. Moreover, it allowed obtaining and formally evaluating a prototype of the interfaces, based on a scientific evaluation framework, while following the criterion of not disturbing the final users too much. However, as a limitation, we are aware that the interaction characteristics of the people involved in this evaluation do not necessarily match those of the stakeholders in the retirement home. Also, it must be noted that the novelty factor may have introduced a bias in the outcomes, which may disappear once the long-term evaluation is completed.
Nevertheless, this evaluation is considered valuable because it has helped the researchers to improve the interfaces, but evaluations of the interfaces with the users of the retirement home are needed. The next subsection details a mid-term evaluation carried out in the retirement home.

6.2. Evaluation in the Retirement Home

An initial and mid-term evaluation of the performance of the interfaces with the users in the retirement home was also conducted, based on quantitative parameters. The main aim of this evaluation was to objectively measure how the older adults used the touch interface on the robotic torso.
For these tests at the Vitalia Teatinos retirement home (Málaga), the users' ages ranged from 65 to 85 years. Nine residents were actively involved in the tests, four men and five women. The evaluation ran from 15 November 2021 to 21 July 2022, amounting to roughly five months of active testing, since a further COVID-19 lockdown interrupted it from December 2021 to March 2022.
The environment and procedure were similar to the pilot experiment described in Section 5.4 and Section 5.5, respectively. The main difference in the procedure was that, while the users were interacting with the robotic platform, it saved anonymized interaction data for further objective analysis. The main outcomes of this analysis are detailed next.
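As a rough illustration of the kind of logging involved, the sketch below shows one way such anonymized interaction data could be recorded: each touch event is stored with a random per-session pseudonym instead of any personal identifier. This is our own reconstruction under stated assumptions; the function name, file name and pseudonym scheme are hypothetical, not taken from the actual system.

```python
# Minimal sketch of anonymized interaction logging: session pseudonym,
# timestamp and pressed widget, with no personal data.
import csv
import time
import uuid

SESSION_ID = uuid.uuid4().hex[:8]  # random pseudonym, new for every session

def log_event(button: str, path: str = "interaction_log.csv") -> None:
    """Append one anonymized touch event to a CSV file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([SESSION_ID, time.time(), button])

# Example: the resident taps the 'Weather' icon on the chest screen.
log_event("weather")
```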
First, the number of times each interface was selected through the main interface on the robot's chest screen was measured (see Figure 5b). Figure 7 shows that most requests were related to the weather information.
How the users managed the volume control from the robot interface was also monitored. During their interactions with the robot, 50% of the participants regularly used the volume-up button, while none of them showed interest in using the volume-down button. Moreover, only in 5.26% of the tasks of Figure 7 was there an attempt to raise the volume of the interface above its maximum value; based on this figure, it can be concluded that the loudspeakers' volume was satisfactory most of the time. Six out of ten participants used the volume button while interacting with the robotic platform.
Regarding the tactile control of the screen, the researchers observed that some users tried to press the same button several times in a row. For this reason, the number of times users pressed a button on the screen more than once within a window of three seconds was also measured. Figure 8 shows the individual percentage of attempts to press a button with one or more consecutive taps. This percentage is low for the buttons that select the main tasks: “Activities”, “Birthdays”, “Calendar” and “Weather”. However, it is significantly higher for the “Back”, “Home” and “Reload” buttons. As can be seen in Figure 5b, the main difference among these buttons in the graphical interface is their size, the smaller ones presenting the higher percentages. Since the feedback and response times of the actions linked to these buttons are similar, it may be deduced that the differences observed in Figure 8 are explained by the buttons' size; larger buttons are therefore preferable in these situations, an issue that will be considered in the next interface redesign. A sketch of both measurements is given below.
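The following sketch reconstructs, under our own assumptions, the two measurements just described: the per-interface selection counts of Figure 7 and the percentage of repeated presses within three seconds shown in Figure 8. The event list and variable names are illustrative; this is not the project's actual analysis code.

```python
# Minimal sketch: count selections per button and the fraction of presses
# that repeat the same button within a three-second window (which we take
# as a sign the user did not perceive the first tap as registered).
from collections import Counter, defaultdict

# (timestamp in seconds, button name) pairs, e.g. read from the event log.
events = [(0.0, "weather"), (1.2, "weather"), (9.0, "back"),
          (10.5, "back"), (11.9, "back"), (30.0, "activities")]

selections = Counter(b for _, b in events)

repeats = defaultdict(int)   # presses arriving within 3 s of the previous one
last_press = {}              # button -> time of its previous press
for t, button in events:
    if button in last_press and t - last_press[button] <= 3.0:
        repeats[button] += 1
    last_press[button] = t

for button, total in selections.items():
    pct = 100.0 * repeats[button] / total
    print(f"{button}: {total} presses, {pct:.0f}% repeated within 3 s")
```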
Finally, in addition to the collection of objective measures during the interaction sessions and observations, informal interviews were conducted with the stakeholders after each session. In summary, as with the evaluation with the younger users, the users found the interfaces usable (intuitive, effective and robust, and they liked their look and feel), but they also reported some accessibility barriers, most of them already detected in previous evaluations: occasionally insufficient loudspeaker volume, captioning delay while the robot was speaking, the position of the screen, and the robot's inability to establish a social dialogue with the older adults, among others. Despite these inconveniences, they were generally very satisfied with the functionality and interfaces of the robot and were willing to interact with it in the future for this or other use-cases. In fact, after the evaluation sessions, most of the users offered ideas for future tasks for the robot, such as making appointments with doctors, reminding them to take medication, or even reminding them when their relatives are coming to visit.

7. Discussion

The main scientific contributions of this study are fourfold. First, it details a use-case where an SAR is integrated into a retirement home to study mid-term user adherence, acceptability and utility of the robotic system. Second, it describes in detail the research and evaluation methodology, which follows a user-centred and participatory design, and explains how the different methods were employed during the specification and implementation of the use-case. Third, it validates the usefulness and effectiveness of the AUSUS framework, previously proposed by the authors in [10]. Fourth, it has allowed the researchers to iteratively improve the platform by establishing new technological challenges derived from its evaluation in real environments.

7.1. Methodology

In particular, a participatory design workshop in this initial definition of the use-case allowed the researchers, from both a pragmatic and a reflexive perspective, to identify needs and encourage stakeholders' participation. Also, consistent with the mutual-learning characteristic of participatory design, technical information about the robot was shared between researchers, engineers and participants, so as to eliminate false expectations at this early stage. Furthermore, the active participation of the users throughout the process (capture of requirements, successive designs of the use-case, functionality of the robot and interfaces) made them much more involved in the project, facilitating the acceptance of, and attachment to, the robotic platform in the retirement home, and the success of the project.
Regarding the user-centred design approach, from the initial interviews to the final evaluation of the system, and by conceiving the robot and its ecosystem with person-centred care as the main goal, we found that this approach could significantly increase the well-being of residents. Indeed, as reported by the stakeholders, attention should be paid, first, to allowing caregivers to spend more quality time with the older adults by reducing their workload in repetitive or simple tasks and, second, to making the robot a social facilitator by adapting its behaviour to the older adults' acceptability criteria. Moreover, in each iteration of the interface design and evaluation process, users could see that their recommendations and initial interaction problems were taken into account and that the system improved step by step thanks to their help. This made the users co-participants responsible for the success of the integration. Following similar approaches [12,17,21], the use-case presented here helped gather first empirical insights, which constitute intermediary hypotheses to be examined during the next iterative phases. Indeed, these insights have already allowed us to delimit a simple task that fulfils both premises, and that will also facilitate the definition of more complex tasks with accessible interaction adapted to the residents' expectations and needs.

7.2. Main Outcomes

The first pilot test presented in this study has shown that the announcer robot produces a very good first impression on all users. It is important to highlight that social facilitation is a key added value of social robots with respect to other assistive technologies. This pilot experiment has also shown that the design process, which is iterative and combines human-centred and participatory design approaches, must contend with two risks: rejection, caused by false expectations, on the one hand, and boredom, caused by a too naive robot, on the other. We strongly believe that keeping users in the design process is the key to success in this challenging scenario.
This study also describes in depth the user evaluation carried out during and after the programming of the robot's interfaces. The users' participation allowed us to improve the interfaces before starting the long-term pilot integration at the retirement home, which is currently being conducted.
As part of the study, a list of design tips for the development of new SARs for older adults can be summarised from the experiment.
  • People detection. Use an effective detection system for crowded environments.
  • Robot voice and volume. The speakers should be powerful enough to allow interaction with people with moderate hearing loss.
  • Attention. Use attention-grabbing mechanisms such as a non-monotonous voice, moving while speaking, non-distracting noises, etc.
  • Expectations. Avoid creating false expectations about the robot's specific tasks.
  • Teleoperator interface. If a teleoperator is responsible for interacting with users, the interface should be as simple and fast as clicking on predefined phrases to avoid wasting time writing new phrases or selecting phrases from a large list.
  • Information interface. Since people with different abilities, disabilities and contexts of use will interact with the robot, it is essential to implement accessible interfaces using multiple communication channels: a loud robot voice, captioning, audio description, touch-screen and voice-accessible menus, support for connecting assistive tools, and responsible use of icons and colours, which is particularly helpful for people with cognitive disabilities. A proposal of accessibility guidelines has been published by some of the authors in [25]. It is also important to avoid delays between the robot's voice and the subtitles. Navigation through the interface should be usable, including the position, size and icons of buttons and information elements. We recommend allowing different configurations of all these factors according to the abilities and context of use of the users interacting with the robot at any given moment; a minimal sketch of such a per-user configuration is given after this list.
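As a minimal sketch of the per-user configurability recommended above (our own illustration; all field names and default values are assumptions, not part of the deployed system), an accessibility profile that the interface layer consults could look as follows:

```python
# Minimal sketch: a per-user accessibility profile the interface layer can
# consult to adjust presentation for whoever interacts with the robot.
from dataclasses import dataclass

@dataclass
class AccessibilityProfile:
    font_size_pt: int = 18         # configurable letter size
    captions_enabled: bool = True  # caption the robot's speech
    caption_delay_ms: int = 0      # keep captions in sync with the voice
    audio_description: bool = False
    min_button_px: int = 96        # larger buttons reduce repeated taps
    default_volume: int = 7        # 0..10; loud enough for hearing loss

# Example: profile for a resident with low vision and moderate hearing loss.
maria = AccessibilityProfile(font_size_pt=28, default_volume=9,
                             min_button_px=128)
print(maria)
```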

7.3. Ongoing Work and Research Directions

After the pilot study, the robot was updated in the lab with all the suggested improvements. Moreover, two other use-cases were defined and implemented, so that Felipe was able to return to the retirement home in 2022 with a complete new set of features to be shown, used, evaluated and redesigned by the residents and professionals of Vitalia Teatinos. These long-term evaluations, which last for months, are still being performed; while their analysis lies outside the scope of this paper, the experiments for the announcer use-case so far validate the proposed methodology by showing very good qualitative results in terms of acceptability and utility.
We are currently working on improving the robot's interfaces and the implementation of the use-case, together with new use-cases for long-term evaluations in the retirement home. The long-term evaluation will avoid bias in the results due to the novelty of the robotic platform. A more effective recognition system and a communication system based on generative artificial intelligence tools and large language models are two of the main improvements we are working on to achieve more natural human–robot interaction; a purely illustrative sketch of the latter idea follows.
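Purely as an illustration of this direction, and not as a description of the implemented system, the sketch below shows how the announcer's fixed messages could be passed through a large language model to obtain more natural phrasings. Here, generate_with_llm is a hypothetical placeholder for whichever LLM backend is eventually chosen.

```python
# Illustrative sketch only: rephrase a fixed announcement with an LLM so
# the robot speaks more naturally. The LLM call is a placeholder.
def generate_with_llm(prompt: str) -> str:
    """Placeholder for a call to an external LLM service (hypothetical)."""
    raise NotImplementedError("plug in the chosen LLM backend here")

def announce(event: str, audience: str = "older adults") -> str:
    prompt = (f"Rephrase this announcement in warm, simple Spanish suitable "
              f"for {audience}, in at most two short sentences: {event}")
    return generate_with_llm(prompt)
```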
We are also involved in new research projects in this area, including SARs in hospitals and nursing homes to help people make appointments with doctors, remember to take medication, initiate videoconferences with the resident’s relatives, and more.

Author Contributions

Conceptualization, J.P.B., A.I. and R.V.; methodology, A.I., K.L.H.T., M.Q. and R.V.; software, A.T., R.M. and J.M.P.-L.; validation, A.I., K.L.H.T., A.T., A.J. and A.H.; formal analysis, A.I. and K.L.H.T.; investigation, A.T., A.I., M.Q., R.M. and J.P.B.; resources, A.J.; data curation, A.I., A.J. and J.M.P.-L.; writing—original draft preparation, A.I., R.V., A.T., J.P.B. and J.M.P.-L.; writing—review and editing, A.I., R.V., M.Q., J.P.B. and J.M.P.-L.; visualization, J.P.B.; supervision, A.I. and J.P.B.; project administration, J.P.B.; funding acquisition, J.P.B., A.I. and R.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been funded by the regional Project AT17-5509-UMA ‘ROSI’ (Plan Andaluz de Investigación, Desarrollo e Innovación - PAIDI 2020, Junta de Andalucía) and UMA18-FEDERJA-074 ‘ITERA’ (Programa Operativo FEDER Andalucía 2014-2020. Convocatoria 2018), the National Research Projects PID2022-137344OB-C32, RTI2018-099522-A-C44, TED2021-131739B-C21 and PDC2022-133597-C42 and a grant for the requalification of the Spanish University System (2021–2023) by the Ministry of Science, Innovation and Universities and UC3M.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Provincial Ethics Committee of the Andalusian Public Healthcare System (Comité de Ética de la Investigación Provincial de Málaga, session held on 24 September 2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data generated or analysed during this study are included in this published article. The data sources (use case recordings, interviews, etc.) employed to generate this data are not publicly available because they contain sensitive personal information.

Acknowledgments

The authors would like to thank the people who were involved in the evaluation of the system, at UC3M and Vitalia Teatinos retirement home. In particular, we would like to thank Ana for her willingness, enthusiasm and help in making this project a reality. We would also like to thank Kyriaki Papageorgiou, who has enriched this project by being part of it. Finally, we would like to thank Mario Siles and José Javier Bosh from the Universidad Carlos III de Madrid for their ideas on how to improve the robot display interface.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AAL  Ambient Assisted Living
DSR  Deep State Representation
HCI  Human–Computer Interaction
HRI  Human–Robot Interaction
ICT  Information and Communication Technologies
ROS  Robot Operating System
SAR  Socially Assistive Robot

Appendix A. HRI User’s Evaluation: Participant’s Characteristics

The characteristics of the users enrolled in the evaluation of the robot interface are detailed in Table A1 and Table A2, where the participants self-reported their abilities regarding vision, hearing, etc., using Likert scales.
Table A1. Participants' characteristics (1).

| Characteristics | Categories | Percentages |
|---|---|---|
| Age | 20–30 | 60% |
| | 40–50 | 10% |
| | 50–60 | 20% |
| | 60–70 | 10% |
| Gender | Male | 60% |
| | Female | 40% |
| Nationality | Spanish | 90% |
| | Iranian | 10% |
| Type of disabilities (if any) | Hearing disabilities | High 10%; Moderate 10%; Low 30%; None 50% |
| | Hearing aids | None 100% |
| | Visual disabilities | High 20%; Moderate 40%; Low 20%; None 20% |
| | Visual aids | Glasses 70% |
| | Motor disabilities | Moderate 10%; Low 20%; None 70% |
| | Motor aids | None 100% |
| | Reading agility | Moderate 20%; Low 30%; None 50% |
Table A2. Participants' characteristics (2).

| Characteristics | Categories | Percentages |
|---|---|---|
| Experience in using electronic devices | Using mobile phones | Frequently 30%; Moderate use 40%; Low use 20%; None 10% |
| | Other uses of the phone besides calls | Messaging, calls, social networks and surfing the internet 80%; Games and shopping 20% |
| | Using computers or tablets | Frequently/High use 30%; Moderate use 50%; Low use 10%; None 10% |
| | Using computer or tablet for | Work and study 60%; Work, study and entertainment 40% |
| Experience in interaction with robots | I have interacted with a robot | None 30%; Rarely 30%; Low use 20%; Moderate use 10%; Frequently/High use 10% |
| | Type of used robot and what for | Intelligent virtual assistant 30%; Study and research 30%; Drones 10% |
| | Opinions about interacting with robots and the preferred use cases of robots | I don't like to interact with robots 20%; Welfare 10%; Work and home 40%; Personal assistive 10%; Entertainment 10%; In all aspects of life without replacing individuals 10% |

Appendix B. HRI User’s Evaluation: Questionnaire Results

This section details the questions answered by the users, as well as their mean and standard deviation:
  • Usability Questionnaire in Table A3
  • Accessibility (perception) Questionnaire in Table A4
  • Accessibility (understandable) Questionnaire in Table A5
  • Accessibility (operating) Questionnaire in Table A6
  • Satisfaction Questionnaire in Table A7
Table A3. Questions dedicated to evaluating the usability aspect.

| # | Factor | Question | Mean | SD |
|---|---|---|---|---|
| 1 | Intuitive interaction | I find the robotic application intuitive. | 4.2 | 0.63 |
| 2 | Intuitive interaction | The displayed information on the screen is easy to understand. | 4.2 | 0.91 |
| 3 | Effectiveness | I have been able to perform tasks easily. | 4.3 | 0.67 |
| 4 | Effectiveness | I have not needed any help during the completion of the tasks. | 4.4 | 0.52 |
| 5 | Interface appearance | The interfaces' appearance helped me to clearly distinguish the different available functions. | 4.3 | 1.06 |
| 6 | Robustness | I have found errors in the interfaces, and they are… | ** | |
Table A4. Questions dedicated to evaluating accessibility—perception factor.

| # | Factor | Question | Mean | SD |
|---|---|---|---|---|
| 7 | Perception | The robot voice was clear to me. | 3.3 | 0.82 |
| 8 | | I was able to read the displayed subtitles on the robot screen at all times. | 4.6 | 0.70 |
| 9 | | It was easy to perceive the displayed messages and subtitles at the same time and along with the robot voice. | 4.2 | 0.79 |
| 10 | | The colors chosen for the interfaces made it easy to read the information. | 4.8 | 0.42 |
| 11 | | The used font size was appropriate for reading at a one-meter distance. | 4.2 | 1.03 |
Table A5. Questions dedicated to evaluating accessibility—understanding factor.

| # | Factor | Question | Mean | SD |
|---|---|---|---|---|
| 12 | Understanding | I was able to understand what the robot was saying at all times. | 4.7 | 0.48 |
| 13 | | I have clearly understood that the purpose of the main screen was to click on each button to access the calendar, birthdays, weather and activity interfaces, respectively. | 4.9 | 0.32 |
| 14 | | I have clearly understood that the purpose of the calendar interfaces was to know the current day and time, and to click on any day to check the scheduled activities for that day. | 4.6 | 0.52 |
| 15 | | I have clearly understood that the purpose of the birthday interface was to show all of today's birthdays and those for the next two days. | 4.7 | 0.48 |
| 16 | | I have clearly understood that the purpose of the weather interface was to know the weather forecast for today and the next few days. | 5.0 | 0.0 |
| 17 | | I have clearly understood that the purpose of the activities interface was to know the schedule of activities for today. | 4.8 | 0.42 |
| 18 | | I have found the order of the application windows logical. | 4.4 | 0.52 |
Table A6. Questions dedicated to evaluating accessibility—operating factor.

| # | Factor | Question | Mean | SD |
|---|---|---|---|---|
| 19 | Operating | At all times, I have been able to tap the robot screen to navigate through the application windows. | 4.9 | 0.33 |
| 20 | | The application has enabled me to control the volume of the robot voice, so I can hear it perfectly. | 4.3 | 0.95 |
| 21 | | At all times, I have known what the running function of the application is, and how to return to the main interface. | 4.5 | 0.71 |
Table A7. Questions dedicated to evaluating user acceptance—satisfaction factor.

| # | Factor | Question | Mean | SD |
|---|---|---|---|---|
| 22 | Satisfaction | I liked how the robot told me what day it was, and what activities were on the calendar for that day. I'd also like it to do so in the future. | 4.6 | 0.70 |
| 23 | | I liked how the robot told me the birthdays for today and for the next few days. I'd also like it to do so in the future. | 4.4 | 0.84 |
| 24 | | I liked how the robot told me the forecast for today and for the next few days. I'd also like it to do so in the future. | 4.6 | 0.70 |
| 25 | | I liked how the robot told me the scheduled activities of the day. I'd like it to do so in the future. | 4.5 | 0.71 |
| 26 | | In the future, I'd like the robot to be more complete and to include new tasks. | 4.2 | 0.92 |

References

  1. Servicio de Difusión y Publicaciones del Instituto de Estadística y Cartografía de Andalucía. Proyección de Población de Andalucía por Ámbitos Subregionales 2009–2035; Junta de Andalucía: Seville, Spain, 2012. [Google Scholar]
  2. Feil-Seifer, D.; Mataric, M. Defining Socially Assistive Robotics. In Proceedings of the 2005 IEEE 9th International Conference on Rehabilitation Robotics, Chicago, IL, USA, 28 June–1 July 2005; pp. 465–468. [Google Scholar]
  3. Li, Y.; Liang, N.; Effati, M.; Nejat, G. Dances with Social Robots: A Pilot Study at Long-Term Care. Robotics 2022, 11, 96. [Google Scholar] [CrossRef]
  4. Abdi, J.; Al-Hindawi, A.; Ng, T.; Vizcaychipi, M.P. Scoping review on the use of socially assistive robot technology in elderly care. BMJ Open 2018, 8, e018815. [Google Scholar] [CrossRef] [PubMed]
  5. Anghel, I.; Cioara, T.; Moldovan, D.; Antal, M.; Pop, C.D.; Salomie, I.; Pop, C.B.; Chifu, V.R. Smart Environments and Social Robots for Age-Friendly Integrated Care Services. Int. J. Environ. Res. Public Health 2020, 17, 3801. [Google Scholar] [CrossRef] [PubMed]
  6. Hall, A.; Brown, C.; Stanmore, E.; Todd, C. Implementing monitoring technologies in care homes for people with dementia: A qualitative exploration using Normalization Process Theory. Int. J. Nurs. Stud. 2017, 7, 60–70. [Google Scholar] [CrossRef] [PubMed]
  7. Seibt, J.; Damholdt, M.F.; Vestergaard, C. Integrative social robotics, value-driven design, and transdisciplinarity. Interact. Stud. 2020, 21, 111–144. [Google Scholar] [CrossRef]
  8. Brown, B.; Reeves, S.; Sherwood, S. Into the wild: Challenges and opportunities for field trial methods. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11), Vancouver, BC, Canada, 7–12 May 2011; pp. 1657–1666. [Google Scholar]
  9. Weiss, A.; Bernhaupt, R.; Lankes, M.; Tscheligi, M. The USUS evaluation framework for human-robot interaction. In Proceedings of the AISB2009: Proceedings of the Symposium on New Frontiers in Human-Robot Interaction, Edinburgh, UK, 8–9 April 2009; Volume 4, pp. 11–26. [Google Scholar]
  10. Iglesias, A.; Viciana, R.; Pérez-Lorenzo, J.; Lan Hing Ting, K.; Tudela, A.; Marfil, R.; Dueñas, A.; Bandera, J. Towards long term acceptance of Socially Assistive Robots in retirement houses: Use-case definition. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Ponta Delgada, Portugal, 15–17 April 2020; pp. 134–139. [Google Scholar]
  11. Booth, K.E.; Mohamed, S.C.; Rajaratnam, S.; Nejat, G.; Beck, J.C. Robots in retirement homes: Person search and task planning for a group of residents by a team of assistive robots. IEEE Intell. Syst. 2017, 32, 14–21. [Google Scholar] [CrossRef]
  12. Kriegel, J.; Grabner, V.; Tuttle-Weidinger, L.; Ehrenmüller, I. Socially Assistive Robots (SAR) in In-Patient Care for the Elderly. Stud. Health Technol. Inform. 2019, 260, 178–185. [Google Scholar] [PubMed]
  13. Kachouie, R.; Sedighadeli, S.; Khosla, R.; Chu, M.T. Socially Assistive Robots in Elderly Care: A Mixed-Method Systematic Literature Review. Int. J. Hum.-Comput. Interact. 2014, 30, 369–393. [Google Scholar] [CrossRef]
  14. Fan, J.; Bian, D.; Zheng, Z.; Beuscher, L.; Newhouse, P.; Mion, L.; Sarkar, N. A Robotic Coach Architecture for Elder Care (ROCARE) Based on Multi-user Engagement Models. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 1153–1163. [Google Scholar] [CrossRef] [PubMed]
  15. Breazeal, C. Affective Interaction between Humans and Robots. In Advances in Artificial Life; Kelemen, J., Sosík, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 582–591. [Google Scholar]
  16. Obrenovic, Z.; Abascal, J.; Starcevic, D. Universal accessibility as a multimodal design issue. Commun. ACM 2007, 50, 83–88. [Google Scholar] [CrossRef]
  17. Courbet, L.; Morin, A.; Bauchet, J.; Rialle, V. Preliminary Evaluation of a Digital Diary for Elder People in Nursing Homes. In Smart Technologies in Healthcare; CRC Press: Boca Raton, FL, USA, 2017; pp. 178–194. [Google Scholar] [CrossRef]
  18. Olde Keizer, R.A.; van Velsen, L.; Moncharmont, M.; Riche, B.; Ammour, N.; Del Signore, S.; Zia, G.; Hermens, H.; N’Dja, A. Using socially assistive robots for monitoring and preventing frailty among older adults: A study on usability and user experience challenges. Health Technol. 2019, 9, 595–605. [Google Scholar] [CrossRef]
  19. Voilmy, D.; Suárez, C.; Romero-Garcés, A.; Reuther, C.; Pulido, J.C.; Marfil, R.; Manso, L.J.; Ting, K.L.H.; Iglesias, A.; González, J.C.; et al. CLARC: A Cognitive Robot for Helping Geriatric Doctors in Real Scenarios. In ROBOT (1), Proceedings of Advances in Intelligent Systems and Computing, Madrid, Spain, 26–28 July 2017; Springer: Berlin/Heidelberg, Germany, 2017; Volume 693, pp. 403–414. [Google Scholar]
  20. Astorga, M.; Cruz-Sandoval, D.; Favela, J. A Social Robot to Assist in Addressing Disruptive Eating Behaviors by People with Dementia. Robotics 2023, 12, 29. [Google Scholar] [CrossRef]
  21. Winkle, K.; Caleb-Solly, P.; Turton, A.; Bremner, P. Mutual shaping in the design of socially assistive robots: A case study on social robots for therapy. Int. J. Soc. Robot. 2019, 12, 847–866. [Google Scholar] [CrossRef]
  22. World Wide Web Consortium. Web Content Accessibility Guidelines (WCAG) 2.0; World Wide Web Consortium: San Francisco, CA, USA, 2008. [Google Scholar]
  23. Nu, F. Mobile Navigation Guideline. 2014. Available online: https://www.funka.com/contentassets/d005946001ef460eb4df58a4fc967b83/mobile-navigation-guidelines-funka-2014.pdf (accessed on 3 April 2024).
  24. BBC. Accessibility Standards and Guidelines. 2014. Available online: https://www.w3.org/WAI/GL/mobile-a11y-tf/wiki/BBC_Mobile_Accessibility_Standards_and_Guidelines (accessed on 3 April 2024).
  25. Qbilat, M.; Iglesias, A. Accessibility Guidelines for Tactile Displays in Human-Robot Interaction. A Comparative Study and Proposal. In Proceedings of the International Conference on Computers Helping People with Special Needs, Linz, Austria, 11–13 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 217–220. [Google Scholar]
  26. Abras, C.; Maloney-Krichmar, D.; Preece, J. User-centered design. In Encyclopedia of Human-Computer Interaction; Bainbridge, W., Ed.; Sage Publications: Thousand Oaks, CA, USA, 2004; Volume 37, pp. 445–456. [Google Scholar]
  27. Bannon, L. Reimagining HCI: Toward a More Human-Centered Perspective. Interactions 2011, 18, 50–57. [Google Scholar] [CrossRef]
  28. Bannon, L.J.; Ehn, P. Design: Design matters in Participatory Design. In Routledge International Handbook of Participatory Design; Routledge: London, UK, 2012; pp. 37–63. [Google Scholar]
  29. Vargas, C.; Whelan, J.; Brimblecombe, J.; Allender, S. Co-creation, co-design, co-production for public health: A perspective on definition and distinctions. Public Health Res. Pract. 2022, 32, e3222211. [Google Scholar] [CrossRef] [PubMed]
  30. Suchman, L. Human-Machine Reconfigurations: Plans and Situated Actions, 2nd ed.; Learning in Doing: Social, Cognitive and Computational Perspectives; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar] [CrossRef]
  31. Heinzmann, J.; Zelinsky, A. Building Human-Friendly Robot Systems. In Robotics Research; Springer: London, UK, 2000. [Google Scholar]
  32. Zinn, M.; Roth, B.; Khatib, O.; Salisbury, J.K. A New Actuation Approach for Human Friendly Robot Design. Int. J. Robot. Res. 2004, 23, 379–398. [Google Scholar] [CrossRef]
  33. Bustos, P.; Manso, L.J.; Bandera, A.J.; Bandera, J.P.; Garcia-Varea, I.; Martinez-Gomez, J. The CORTEX cognitive robotics architecture: Use cases. Cogn. Syst. Res. 2019, 55, 107–123. [Google Scholar] [CrossRef]
  34. Stanford Artificial Intelligence Laboratory. Robot Operating System. 2007. Available online: https://historyofinformation.com/detail.php?id=3661 (accessed on 3 April 2024).
  35. Henning, M. A new approach to object-oriented middleware. IEEE Internet Comput. 2004, 8, 66–75. [Google Scholar] [CrossRef]
  36. Tombaugh, T.N.; McIntyre, N.J. The mini-mental state examination: A comprehensive review. J. Am. Geriatr. Soc. 1992, 40, 922–935. [Google Scholar] [CrossRef] [PubMed]
  37. Nielsen, J. Usability Engineering; Morgan Kaufmann Publishers, Inc.: San Francisco, CA, USA, 1993. [Google Scholar]
  38. Knoblauch, H.; Tuma, R. Videography: An interpretive approach to video-recorded micro-social interaction. In The Sage Handbook of Visual Methods; Sage Publications: Thousand Oaks, CA, USA, 2011; pp. 414–430. [Google Scholar]
  39. Sacks, H.; Schegloff, E.; Jefferson, G. A Simplest Systematics for the Organization of Turn Taking for Conversation. In Studies in the Organization of Conversational Interaction; Schenkein, J., Ed.; Academic Press: New York, NY, USA, 1978; pp. 7–55. [Google Scholar] [CrossRef]
  40. Heritage, J. Conversation analysis as social theory. In The New Blackwell Companion to Social Theory; Turner, B.S., Ed.; Blackwell: Oxford, UK, 2008; pp. 300–320. [Google Scholar]
Figure 1. Human-centred methodology and life cycle.
Figure 2. Felipe Vitálico working at Vitalia Teatinos.
Figure 3. Software components for the announcer use case.
Figure 4. Teleoperation web interface. (a) Old interface; (b) new interface. The interfaces (in Spanish) show the messages spoken by the robot below the box for the IP camera video. They also include buttons to say certain texts, change the voice volume, play sounds and go to a certain room.
Figure 5. Screen robot interface. (a) Old interface showing the daily agenda and a greeting message in Spanish; (b) new interface for the announcer ('Pregonero' in Spanish), where each icon is labelled with an appropriate Spanish text.
Figure 6. Average outcomes for the AUSUS evaluation.
Figure 7. Number of times that every interface has been selected.
Figure 8. Percentage of the number of times that every button has been consecutively pressed.

