1. Summary
Social robots are rapidly emerging as a transformative technology aimed at enhancing human well-being. By addressing needs in healthcare, eldercare, accessibility, safety, and emotional support, these robots have the potential to improve quality of life across diverse populations. However, designing robots that are trustworthy, competent, and empathetic poses unique challenges, requiring interdisciplinary collaboration across robotics, psychology, human–computer interaction, and social sciences. This Special Issue brings together a collection of cutting-edge research that explores the mechanisms, applications, and long-term integration of social robots. From empirical studies on trust and emotional engagement to innovative applications in caregiving, education, and public safety, these contributions aim to advance our understanding of how robots can effectively interact with people and address societal needs.
2. Introduction
As artificial intelligence (AI) continues to make dizzying progress, social robots are increasingly being considered as a way to improve the lives of individuals who lack sufficient assistance from human caregivers. This includes a pressing need among aging populations worldwide [1,2,3,4], as well as among persons with dementia [5,6] or disabilities [7]. A challenge is that socially assistive robots (SARs) are not yet a reality in our daily surroundings. These robots must first demonstrate effective human–robot interaction capabilities and then provide various forms of real value at low cost. Researchers must identify (1) the mechanisms underlying good interactions with robots that support well-being [8,9,10], (2) promising new opportunities and challenges [1,2,5,6,7,11], and (3) how acceptance and usage of robots progress over the long term [3,4].
This Special Issue presents recent research on innovative designs for social robots intended to enhance the well-being of the people who interact with them, and on how such robots are perceived. The contributions involve eleven robot platforms (Buddy [8], BuSaif [9], CLARA [3], Double 3 [4], Embodied Voice Assistant (EVA) [5], Furhat [10], Husky [9], KUKA [7], NAO/Pepper [2,9], and VGo [6]), which, in conjunction with technologies such as computer vision [3,5,9], cloud computing [9], and electroencephalography (EEG) [8], were used to gain insights from users in at least six countries (Canada [2], France [8], Germany [7], Italy [10], Spain [3], and the United States [4,6]).
(1) Mechanisms for Well-being. Robots that seek to help people should appear trustworthy, competent, and empathetic in order to cooperate effectively and build close relationships. In an interesting study, Gigandet et al. used Buddy from Blue Frog Robotics and EEG to empirically explore people’s trust in a robot’s physical and emotional capabilities [8]. Physically impossible claims were accepted by some participants (e.g., a robot without arms that claims to knit could have hidden limbs), whereas emotional claims (such as a robot “feeling happy”) appeared to more generally elicit feelings of incongruity, posing a potential challenge for social robots seeking to be trusted in emotional interactions with people. Additionally, Elfaki and colleagues challenged the common practice of using limited, stand-alone, embedded computing for the “brains” of robots, introducing a cloud-based framework that could allow for enhanced competence; this was tested on four robots: BuSaif, Husky, NAO, and Pepper [9] (a brief illustrative sketch of the general offloading idea is given after this item). A perception of competence could help people to heed a robot’s suggestions, such as reminders to take medicine prescribed by a doctor. Furthermore, Staffa et al. found that the initial emotional states and personalities of their participants significantly influenced their perceptions of a virtual humanoid robot, Furhat, which was presented as a bartender [10]. Such customization of a robot’s identity could encourage empathy and help achieve good interactions.
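To make the general idea of cloud-offloaded robot “brains” concrete for readers outside the area, the following minimal Python sketch shows one common pattern: the robot captures a camera frame locally, sends it to a remote service for heavier processing, and receives a result it can act on. The endpoint URL, payload schema, and response fields are hypothetical placeholders and do not represent the actual framework of Elfaki et al. [9].

# Minimal illustrative sketch of cloud-offloaded robot perception.
# NOTE: the endpoint, payload schema, and response fields below are
# hypothetical placeholders, not the framework described in [9].
import base64
import requests

CLOUD_ENDPOINT = "https://example.com/robot-brain/analyze"  # hypothetical URL

def analyze_in_cloud(image_path: str, robot_id: str) -> dict:
    """Send a camera frame to a remote service and return its analysis."""
    with open(image_path, "rb") as f:
        payload = {
            "robot_id": robot_id,
            "image_b64": base64.b64encode(f.read()).decode("ascii"),
        }
    response = requests.post(CLOUD_ENDPOINT, json=payload, timeout=5.0)
    response.raise_for_status()
    # Example (hypothetical) response: {"persons": 2, "suggested_utterance": "..."}
    return response.json()

if __name__ == "__main__":
    result = analyze_in_cloud("frame.jpg", robot_id="pepper-01")  # placeholder inputs
    print(result.get("suggested_utterance", "No suggestion received."))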
(2) Promising Applications. Various tasks will be required of SARs. Within the realm of elderly care, Asgharian et al. conducted a useful review of tasks performed by mobile service robots, whose visual depictions help readers to envision and compare the characteristics of each robot [1]. The reviewed studies suggested that robots can help with daily activities such as reminders, household tasks, safety, and health monitoring, and can decrease caregiver workloads [1]. Moreover, FakhrHosseini and colleagues explored, through online interviews, how 20 caregiving dyads (each consisting of a care recipient with dementia and a primary caregiver) perceived VGo, a telepresence social robot, reporting generally positive feelings on both sides [6]. The participants also commented on the desirability of, inter alia, voice commands, light- or scenario-based reminders, and integration with external services such as AI home assistants, YouTube, automatic doors, and smoke alarms.
Specific tasks have also been explored, related to dancing, eating, working, and safety, as in the following studies. Li et al. considered that dancing with robots could provide enjoyment and improve mood, cognitive flexibility, endurance, balance, and motor control skills in older adults who might have depression, loneliness, mild dementia, or Parkinson’s disease [2]. In a first pilot of robot dance therapy at a long-term care home, the authors found that 81 participants, comprising 54 staff and 37 residents, felt positively about robot-facilitated dance sessions with two robots, Pepper and NAO, with some differences related to position, gender, and prior experience with robots. Additionally, Astorga and colleagues proposed that a robot accompanying persons with dementia during meals could alleviate a major concern of caregivers by reducing disruptive behaviors, such as becoming distracted while eating, throwing food, or refusing to eat [5]. Positive feedback was obtained (a) through a user-centered process of interviewing six professional caregivers and showing videos to another fourteen caregivers. The authors also (b) adapted an open-source robot, EVA, to detect food, dishes, eating, and human faces via an Intel RealSense D435i depth camera and the Google Cloud Vision API, toward enabling autonomous interactions (a simplified sketch of this kind of perception step is given after this paragraph). In addition, Drolshagen et al. pointed out that working with a robot could help persons with disabilities to find, and enjoy the benefits of, meaningful employment [7]. After the authors developed a robotic system comprising a manipulator, depth camera, and screen (to assist disabled persons via personalized action sequences ranging from waving to pointing), a user study indicated that participants became three times more successful at a task and accepted the robot as a tutor. Furthermore, Cooney and colleagues presented the idea that robots in our surroundings could also defend us in our time of need [11]. The same article (1) reviewed the latest police/military robots, ethical and legal questions, and required capabilities, and (2) highlighted cultural differences between Japan and the US in how such robots are perceived, based on an experiment with 611 participants.
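For readers curious about what such a perception step might look like in code, the following simplified Python sketch sends a single image (e.g., a frame saved from a depth camera) to the Google Cloud Vision API for label and face detection. It is an illustration only, assuming an image file on disk and credentials configured for the google-cloud-vision client; it is not the EVA implementation reported in [5].

# Simplified sketch of label and face detection with the Google Cloud Vision
# API, in the spirit of the meal-assistance pipeline described in [5].
# This is not the EVA robot's actual code; the file name is a placeholder.
from google.cloud import vision

def detect_food_and_faces(image_path: str):
    """Return food-related labels and the number of detected faces."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    labels = client.label_detection(image=image).label_annotations
    faces = client.face_detection(image=image).face_annotations

    food_like = [label.description for label in labels
                 if label.description.lower() in {"food", "dish", "plate", "tableware"}]
    return food_like, len(faces)

if __name__ == "__main__":
    labels, num_faces = detect_food_and_faces("frame.jpg")  # placeholder frame
    print(f"Food-related labels: {labels}; faces detected: {num_faces}")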
(3) The Long Term. Given that various challenges can emerge over the course of a person’s life, robots intended to help people in a meaningful way should be accepted not only over the short term but also over longer periods of time. Iglesias et al. explored using a robot for five months as an announcer at a retirement home, in a process that included initial workshops with eight participants, technical trials with ten young adults, and a study with nine elderly residents [3]. For this, the CLARA robot was adopted, based on the CORTEX cognitive architecture, which uses a Deep State Representation (DSR) graph, the RoboComp framework, the Robot Operating System (ROS), and the Ice middleware (a minimal illustrative sketch of an announcer component is given after this paragraph). Based on the results, designers of such robots were advised to use robust person detection algorithms that work in crowded environments, ensure a sufficiently loud robot voice, incorporate attention-holding mechanisms, avoid creating false expectations about the robot’s tasks, keep teleoperator interfaces simple and fast, and provide end users with a multimodal, timely, and navigable information interface. Furthermore, Baggett and colleagues tracked the progress of four older adults in accepting a Double 3 mobile telepresence robot over seven months [4]. High complexity and fluidity were observed across seven phases: Expectation, Encounter, Adoption, Adaptation, Integration, Identification, and Non-Use. Some participants simultaneously experienced characteristics of multiple phases, with their prior experience with technology affecting how they transitioned between phases.
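As a small, purely illustrative glimpse of what an announcer role can involve at the software level, the following Python sketch shows a bare-bones ROS (rospy) node that periodically publishes announcement text on a topic that a text-to-speech component could subscribe to. The node name, topic, and messages are hypothetical and are not taken from the CLARA/CORTEX system described in [3].

#!/usr/bin/env python3
# Bare-bones announcer node sketch using rospy. The topic name and messages
# are placeholders; this is not the CLARA/CORTEX implementation from [3].
import rospy
from std_msgs.msg import String

ANNOUNCEMENTS = [
    "Good morning. Breakfast is served in the dining room.",
    "Reminder: the music workshop starts at eleven o'clock.",
]

def run_announcer():
    rospy.init_node("announcer_sketch")
    pub = rospy.Publisher("/tts/say", String, queue_size=1)  # hypothetical topic
    rate = rospy.Rate(1.0 / 60.0)  # one announcement per minute, for illustration
    while not rospy.is_shutdown():
        for text in ANNOUNCEMENTS:
            pub.publish(String(data=text))
            rate.sleep()

if __name__ == "__main__":
    try:
        run_announcer()
    except rospy.ROSInterruptException:
        pass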