Article

Hold My Hand: Development of a Force Controller and System Architecture for Joint Walking with a Companion Robot

by Enrique Coronado 1,2,*, Toshifumi Shinya 2 and Gentiane Venture 1,2
1 National Institute of Advanced Industrial Science and Technology (AIST), Tokyo 135-0064, Japan
2 Department of Mechanical Systems Engineering, Faculty of Engineering, Koganei Campus, Tokyo University of Agriculture and Technology (TUAT), Tokyo 184-8588, Japan
* Author to whom correspondence should be addressed.
Sensors 2023, 23(12), 5692; https://doi.org/10.3390/s23125692
Submission received: 27 March 2023 / Revised: 30 May 2023 / Accepted: 16 June 2023 / Published: 18 June 2023

Abstract

In recent years, there has been a growing interest in the development of robotic systems for improving the quality of life of individuals of all ages. Specifically, humanoid robots offer advantages in terms of friendliness and ease of use in such applications. This article proposes a novel system architecture that enables a commercial humanoid robot, specifically the Pepper robot, to walk side by side with a person while holding hands and to communicate by responding to the surrounding environment. This control requires an observer that estimates the force applied to the robot, which was accomplished by comparing the joint torques calculated from the dynamics model with those derived from the measured motor currents. Additionally, object recognition was performed using Pepper’s camera to facilitate communication in response to surrounding objects. By integrating these components, the system demonstrated its capability to achieve its intended purpose.

1. Introduction

The robotics discipline has rapidly expanded in recent years, introducing not only industrial and collaborative robots (cobots) in factories but also social and service robots in everyday-life scenarios such as restaurants [1], schools [2], shops [3], homes [4,5] and other public spaces [4,6]. Robotic systems in these scenarios are designed for different types of applications, such as care [7], entertainment [8], and education [9]. In many of these applications, researchers explore alternative approaches that propose the use of robots for improving people’s quality of life. A very recent example is described in [10,11], where a novel type of robot, denoted a “robject”, is designed to strengthen the bond between people and thereby improve the mental health of potential users. Mental health is a state of overall psychological well-being [12]. It encompasses all aspects of human cognitive functioning, including thoughts, ideas, motivations, and directions that originate from the mind. These elements have a profound impact on how individuals communicate, behave, and function in both personal and social contexts. By prioritizing mental health, individuals can optimize their well-being and lead fulfilling lives [13]. In fact, ensuring suitable mental health levels is an essential component of maintaining a high quality of life.
Robots have played a crucial role in assisting individuals with autism, dementia, and other cognitive challenges, contributing to the improvement of their treatment. Moreover, these robots have provided valuable companionship to those facing feelings of isolation and loneliness. Additionally, they have been instrumental in enhancing the quality of care and support that individuals with disabilities receive from healthcare professionals. Therefore, in recent years, the use of robots to address mental health issues has gained significant attention, particularly to provide solutions that help mitigate the increase of mental health issues, such as autism [14], dementia [15], and social anxiety [16]. Relevant and recent literature reviews in this area have been presented in [17,18,19]. In this context, one of the primary drivers behind this trend is the growing sense of social isolation and loneliness that many individuals are experiencing [17,20,21]. Moreover, these issues have recently been exacerbated by the numerous changes the COVID-19 pandemic has brought to our daily lives. In the case of older adults, who are already at higher risk of social isolation, the pandemic has only intensified feelings of loneliness [22]. This loneliness can exacerbate self-destructive tendencies, especially when previously active roles are lost and more passive positions are assumed [23]. This can lead to a loss of motivation to accomplish activities of daily living (ADL), which in turn can lead to unsafe circumstances and reduced quality of life [24]. However, loneliness is not limited to older adults. Younger generations have also felt the impact of social isolation and disrupted routines, an issue compounded by the rise in single-person households in countries such as Korea and Japan [25]. As the loneliness pandemic continues to affect individuals of all ages worldwide, the search for technological solutions that can help alleviate its negative mental and physical effects will become increasingly relevant. One possible solution is the use of regular physical activity or exercise to mitigate these negative effects. In particular, moderate exercise plays a significant role in maintaining a healthy lifestyle by offering a range of physical and mental health benefits: it releases endorphins, elevates mood, and improves mental well-being [26,27]. Therefore, the development of technological solutions that help establish a routine and sense of purpose through exercise can potentially combat feelings of isolation, resulting in a safer and more fulfilling life.
Robotics is an emerging technology that has the potential to assist and motivate individuals in their hobbies and entertainment activities [28,29]. Therefore, well-designed robotics applications can represent a suitable solution for enabling individuals to enjoy a better quality of life. However, usability challenges associated with robotics technology can hinder its adoption [30]. These challenges, which include acceptance, attitudes, user-friendliness, trust, utility, safety, anxiety, perceived intelligence, social presence, human-likeness, and aesthetics, have been described and reviewed in [30,31,32,33]. In this context, social robots are one of the most promising technologies for assisting people in their daily lives. Compared to other types of robots, social robots are designed to be more approachable and user-friendly [34]. This facilitates the development of suitable and intuitive applications for the general public [35].
This study aims to develop a system that helps humans take walks comfortably with a commercial humanoid robot, specifically the Pepper robot [36]. To leverage the advantage of the humanoid robot’s familiarity, we aim to implement a Human-Robot Interaction (HRI) system that behaves similarly to humans when walking hand-in-hand. When two humans walk while holding hands, they need to adjust their direction and velocity depending on the force applied by their partner. To emulate this behavior, we propose a method to estimate the force applied to the robot and a technique to control the robot’s movement based on the estimated force. Moreover, the proposed system can recognize objects and use this information to adapt the robot’s behaviors (speech and gestures).
This paper is organized as follows. Section 2 discusses related work and clarifies the contributions of this article. Section 3 introduces the proposed system architecture enabling a social robot to walk hand-in-hand with a human. Section 4 presents the experimental settings and discusses the obtained results. Section 5 concludes the article.

2. Related Work and Contributions

Different robotic devices and solutions have been proposed to assist humans in the last few decades. Examples include meal-assistance robots [37,38,39], brain-computer interfaces controlling wearable robots [40,41], and robot-assisted therapy [42,43]. In this context, some researchers have proposed smart robotic walkers to sustain the mobility of elderly people or people with motion disabilities [44,45,46]. Smart robotic walkers “intend to assist the mobility function of disabled people that present reduced lower motor function and low balance, by improving their autonomy” [47] by facilitating a stable gait and easy maneuverability. However, the focus of this article is not to support mobility. Instead, we propose a system architecture enabling an industrially produced social robotic platform to walk hand-in-hand with humans. The persuasive and motivational capabilities of robots are particularly relevant in various cases, especially those involving people’s health recovery and maintenance [48]. Numerous studies have demonstrated the effectiveness and practicality of robotic systems in motivating people during rehabilitation [49,50] as well as in promoting healthy activities such as exercise [51,52] across different age groups. The application discussed in this work shares similar objectives by developing a system architecture that can potentially be used to promote a healthy lifestyle through walking. Therefore, the development of a smart robotic walker for individuals with motion disabilities falls outside the scope of this work.
Despite the increasing popularity of social and humanoid robots, few researchers have proposed solutions for comfortable physical human-robot interaction (pHRI) during walking. In this context, Granados et al. [53] proposed a mobile humanoid robot that uses pHRI to lead humans during dance training. These dances involve various motion directions, including forward and backward walking. The authors of [54] recently presented Ibuki, an actuated child-like android, and an application that enables walking hand-in-hand with this robot. This application allows the user to pull on Ibuki, causing it to change direction and follow the user closely. They also developed a behavior where Ibuki can lead a person by holding their hands. For this, the Ibuki robot is equipped with an internal sensor, specifically a potentiometer embedded at the shoulder. This sensor is used to detect changes in the shoulder roll joint angle. In this way, the robot can detect when the user pulls on the robot’s arm. Additionally, a Laser Range Finder (LRF) was implemented in Ibuki to avoid obstacles during movement. In fact, walking hand-in-hand with a robot requires sophisticated interaction skills. In this scenario, a crucial skill that a robot needs to possess is the ability to respond to the partner’s movements or applied force and adapt its movements accordingly. Moreover, the robot needs to have a suitable level of situational awareness to help humans avoid obstacles or create more engaging interactions. However, few commercially available or industrially produced social robots are sophisticated enough to meet these requirements.
Pepper [36], developed by the French-Japanese company Aldebaran/SoftBank Robotics, is a sophisticated social robot that is popular and accessible in some countries, such as Japan. Pepper has been designed to interact with humans in various contexts, such as child-robot interaction [8], rehabilitation care [55] and public spaces [56], making it a suitable candidate for enabling humans to walk hand-in-hand through pHRI. In [57], a solution enabling Pepper to act as a walking trainer for elderly people and rehabilitation patients is proposed. The authors use a wheel compliance approach to match the motion intention of patients with the robot’s pace. For this, they utilize a method that does not require sensors to detect soft pressure on the robot’s surface. This enables them to evaluate the difference between the external and required displacements necessary to keep the patients in a stable position. Then, the robot’s wheels receive direction and intensity commands that enable the robot to match the user’s speed. However, patients must lean on the robot, which can cause overheating of Pepper’s knee joint due to the substantial amount of weight that Pepper must support. As noted in [57], this issue can sometimes result in the system needing to be restarted. To the best of the authors’ knowledge, only a limited number of research works have explored Pepper’s capabilities for creating a hand-in-hand walking companion. Notable studies, such as [58,59], are among the few that have investigated this area. However, the main target of both studies was primarily child-robot interaction. In both studies, children can pull the robot’s hand in the desired direction of movement. The robot uses a direction recognition algorithm that compares the current position of the end effector (left hand) with a default pose. For example, when a child pulls the robot forward, it moves forward in response. Similarly, when the child pulls the robot backward, it moves backward. If the child pulls the robot’s hand to the right, the robot rotates clockwise, while pulling the hand to the left causes the robot to rotate counterclockwise. Moreover, the velocity of the movements varies according to the height of the left arm. Our approach differs significantly from the studies by Kochigami and colleagues in [58,59], as we use the force applied to the Pepper robot, rather than the hand position, to control the robot’s direction and velocity of movement.
We summarize the contributions of this article as follows:
  • Development of a system architecture for pHRI that allows humans to walk while holding hands with the Pepper robot.
  • Integration of a vision-based interactive system that recognizes objects in the environment and adapts the robot’s behaviors accordingly.

3. System Architecture

This article presents a system architecture for pHRI enabling a Pepper robot to walk with humans while holding hands. Pepper has physical dimensions comparable to those of a primary school child, with a height of 1210 mm, a width of 480 mm, a depth of 425 mm, and a weight of 29 kg. Pepper consists of a head, a torso, two arms, and one leg. Moreover, Pepper stands on a holonomic mobile base equipped with three omnidirectional wheels [60]. These wheels allow Pepper to move forward, backward, left, and right, and to rotate in both directions. Pepper has 20 degrees of freedom throughout its body, allowing it to replicate human-like movements. When operating normally, Pepper runs a function called Autonomous Life. This operational mode was designed to create greater familiarity with people by replicating movements similar to human breathing and enabling the robot to track people’s gaze, among other things. In addition, Pepper is equipped with a feature that stops the movement of its arms and body when objects are detected in its vicinity using infrared sensors, bumpers, and cameras. For our research, we disabled the Autonomous Life mode and the stop feature, since they could affect force estimation and navigation when a person stands close to the robot (a minimal sketch of this step is shown below).
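As a concrete illustration, the following minimal sketch (Python 2, NAOqi/Pepper SDK) shows how Autonomous Life and the external collision protection can be switched off before running the controller. The robot address is a placeholder, and the availability of these methods should be verified against the installed NAOqi version.

```python
# -*- coding: utf-8 -*-
# Minimal sketch (Python 2, NAOqi SDK): disable Autonomous Life and the
# self-protective stop feature before running the hand-in-hand controller.
# PEPPER_IP is a placeholder; supervise the robot whenever protection is off.
from naoqi import ALProxy

PEPPER_IP, PORT = "192.168.1.10", 9559   # hypothetical robot address

life = ALProxy("ALAutonomousLife", PEPPER_IP, PORT)
if life.getState() != "disabled":
    life.setState("disabled")            # stop breathing/gaze-tracking animations

motion = ALProxy("ALMotion", PEPPER_IP, PORT)
motion.wakeUp()                          # stiffen joints after leaving Autonomous Life
# Removing the reflex that freezes the arms/base when something is close lets
# the user stand next to the robot and hold its hand, but removes a safety layer.
motion.setExternalCollisionProtectionEnabled("All", False)
```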
To enable the Pepper robot to walk while holding hands with a person, the robot’s movements must be synchronized with the person’s movements. In this study, the person’s relative position is estimated by analyzing the force exerted by the person’s hand on the robot’s hand. Based on this estimation, the robot’s speed and direction are adjusted to ensure smooth and coordinated walking. Figure 1 shows the proposed system architecture for achieving this goal. It consists of four main sub-systems: (i) a data acquisition sub-system used to acquire information from Pepper’s sensors, (ii) an object recognition interface used to provide basic situation awareness, (iii) a Python 3 script using the Robotics Toolbox [61] to estimate the forces exerted on the robot, and (iv) a cognition and control module enabling Pepper to adapt its behaviors. More details about the functionalities of these sub-systems are given below. In this study, we established connectivity between Python 2 and Python 3 modules using the NEP libraries [62], designed by our research team members. Our goal in selecting this approach was to create a system architecture that is accessible to different types of users. For this, the NEP libraries are designed to be cross-platform, ensuring that other researchers can easily replicate and utilize our system, regardless of their preferred platform.
The general control algorithm, depicted in Figure 2, enables the robot to walk hand-in-hand with a human while interacting with the environment. In this particular scenario, the force estimation is performed only when the user is holding the robot’s left hand, allowing the human to guide the robot’s movements. Concurrently, the object recognition system facilitates simple interaction with the environment. It is important to note that object recognition is not utilized for obstacle avoidance purposes. If a new object is detected after a predefined time interval, the robot stops and suggests an action to the human. Otherwise, it continues to follow the human’s guidance.
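The loop in Figure 2 can be summarized with the following minimal Python sketch. The callables passed to the function (hand_is_held, estimate_force, detect_objects, and so on) are hypothetical stand-ins for the modules described in Sections 3.1–3.4, and the cooldown value is an assumption; the sketch only illustrates the ordering of the checks, not the actual implementation.

```python
# Minimal sketch (Python 3) of the control loop in Figure 2. All callables are
# injected, so the sketch stays independent of the Pepper SDK; they stand in
# for the data acquisition, force estimation, recognition and control modules.
import time

def run_control_loop(hand_is_held, estimate_force, detect_objects,
                     move_toward, stop, point_and_speak,
                     cooldown_s=10.0, period_s=0.05, steps=None):
    last_reaction = {}
    n = 0
    while steps is None or n < steps:
        n += 1
        now = time.time()
        reacted = False
        # React to a recognized object only after a per-object cooldown,
        # so the robot does not stop and talk too often.
        for label in detect_objects():
            if now - last_reaction.get(label, 0.0) > cooldown_s:
                stop()
                point_and_speak(label)
                last_reaction[label] = now
                reacted = True
                break
        if not reacted:
            if hand_is_held():
                fx, fy = estimate_force()   # planar force from Section 3.3
                move_toward(fx, fy)         # base velocity follows the pull
            else:
                stop()                      # no contact: keep the base still
        time.sleep(period_s)

# Dry run with stand-in callables (no robot required):
run_control_loop(hand_is_held=lambda: True,
                 estimate_force=lambda: (3.0, 0.0),
                 detect_objects=lambda: [],
                 move_toward=lambda fx, fy: print("move", fx, fy),
                 stop=lambda: print("stop"),
                 point_and_speak=lambda label: print("talk about", label),
                 steps=3, period_s=0.0)
```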

3.1. Data Acquisition

Pepper is equipped with different sensors that enable it to respond to its surroundings. In this study, we primarily used the joint angle and current sensors implemented in Pepper to monitor the status of the motors in each joint. This information is crucial for the control strategy proposed in this article. We also employed the contact sensors in the foot to initiate and reset the robot application. To acquire data from these sensors, we utilized a Python 2 script that runs on the Pepper Software Development Kit (SDK) (http://doc.aldebaran.com/2-5/dev/python/index.html, accessed on 20 May 2023). Additionally, we used images from the Pepper camera to recognize objects in the environment. For this purpose, we developed an interface (shown in Figure 3) to capture images from the Pepper robot. The interface runs on a Node.js (https://nodejs.org/en/, accessed on 20 May 2023) application that executes Python scripts with the necessary arguments (e.g., Figure 3 demonstrates how to set up the Pepper camera to obtain images at a resolution of 320 × 240). The Python 2 script executed by this interface uses the OpenCV (https://opencv.org/, accessed on 20 May 2023) and Pepper SDK frameworks to capture and stream images from the Pepper robot. These images are published to the network and read by the Object Recognition module.
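The following minimal sketch (Python 2, NAOqi SDK with OpenCV and NumPy) illustrates this kind of data acquisition: sensed joint angles, motor currents, and one camera frame. The robot address, the joint list, and the ALMemory key pattern for the currents are assumptions to be checked against the Pepper documentation; the sketch is not the exact script used in this work.

```python
# -*- coding: utf-8 -*-
# Minimal sketch (Python 2, NAOqi SDK + OpenCV/NumPy): read sensed joint
# angles and motor currents, and grab one camera frame. IP address, joint
# list and ALMemory key pattern are assumptions; check them for your robot.
from __future__ import print_function
import numpy as np
import cv2
from naoqi import ALProxy

PEPPER_IP, PORT = "192.168.1.10", 9559
ARM_JOINTS = ["LShoulderPitch", "LShoulderRoll", "LElbowYaw", "LElbowRoll", "LWristYaw"]

motion = ALProxy("ALMotion", PEPPER_IP, PORT)
memory = ALProxy("ALMemory", PEPPER_IP, PORT)
video = ALProxy("ALVideoDevice", PEPPER_IP, PORT)

# Joint angles measured by the position sensors (True -> sensed, not commanded).
angles = motion.getAngles(ARM_JOINTS, True)

# Motor currents read from ALMemory (absolute values on Pepper).
currents = [memory.getData("Device/SubDeviceList/%s/ElectricCurrent/Sensor/Value" % j)
            for j in ARM_JOINTS]

# One QVGA RGB frame from the top camera (resolution 1 = 320x240, color space 11 = RGB).
handle = video.subscribeCamera("hold_my_hand", 0, 1, 11, 10)
frame = video.getImageRemote(handle)
width, height, raw = frame[0], frame[1], frame[6]
video.unsubscribe(handle)

img = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))
cv2.imwrite("pepper_view.png", cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
print(angles, currents)
```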

3.2. Object Recognition

We developed an additional interface on top of Node.js to set up and launch a Python 3 script running the YOLO (You Only Look Once) algorithm. This user interface, shown in Figure 4, allows the selection of the trained model to use, the topic where the robot images are published, and the image resolution for processing with YOLO (e.g., selecting yolo-320 sets a resolution of 320 × 320). Using smaller image sizes in YOLO models can speed up object recognition but may result in lower accuracy. The Object Recognition module outputs a list of recognized objects, their confidence scores, and their bounding box positions (origin, height, width, and center). With this interface, the proposed system can detect objects of interest and provide valuable data to adapt the robot’s behaviors. A pre-trained deep learning model using YOLO is composed of three main files: a .cfg file that defines the deep neural network architecture and parameters, a .names file listing the possible classes to recognize, and a .weights file with the values of the weights of the neural network. These files must be saved in a folder with an easily recognizable name (defined as yolo4 in the figure) in a specific path of the computer, where several pre-trained models can be located. The location of this default path depends on the operating system on which the interface is executed. In the case of Windows, the path is “C:/nep/deep_models/objects”. The interface locates the models saved in the default path and sets the arguments that enable the Python 3 script running YOLO to load the required files.
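A minimal Python 3 sketch of such a YOLO inference step using the OpenCV DNN module is shown below. The file names inside the model folder and the confidence threshold are assumptions; the actual module also publishes the detections over the network, which is omitted here.

```python
# Minimal sketch (Python 3, OpenCV DNN): run a pre-trained YOLO model on one
# image. The file names inside the model folder are assumptions; adapt them
# to the .cfg/.names/.weights files of the model actually installed.
import cv2
import numpy as np

MODEL_DIR = "C:/nep/deep_models/objects/yolo4"   # default path on Windows
net = cv2.dnn.readNetFromDarknet(MODEL_DIR + "/yolov4.cfg",
                                 MODEL_DIR + "/yolov4.weights")
with open(MODEL_DIR + "/coco.names") as f:
    classes = [line.strip() for line in f]

img = cv2.imread("pepper_view.png")              # frame streamed from Pepper
h, w = img.shape[:2]
# 320x320 input (the "yolo-320" option): faster, but less accurate than larger sizes.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (320, 320), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

detections = []
for out in outputs:
    for row in out:                  # row = [cx, cy, bw, bh, objectness, class scores...]
        scores = row[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.3:         # assumed base threshold
            cx, cy = row[0] * w, row[1] * h
            bw, bh = row[2] * w, row[3] * h
            detections.append((classes[class_id], confidence,
                               (cx - bw / 2, cy - bh / 2, bw, bh)))
print(detections)
```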

3.3. Force Estimation Using Python Robotic Toolbox

Pepper has neither force nor torque sensors. Therefore, we estimate the force exerted by the human partner on Pepper’s arm using a dynamic model. We used the Robotics Toolbox Python library to calculate the forces applied by humans on Pepper when holding hands. This library, developed by Peter Corke et al. [61], specializes in robot-related calculations, such as kinematics and dynamics, and facilitates the creation of robot models through the specification of their Denavit-Hartenberg (DH) parameters. Additionally, it can be used to create 3D representations of robot models, allowing visualization of their movements in simulations. In this work, we used the Robotics Toolbox to create Pepper’s dynamic model and calculate the torques required for posture control. Moreover, the Jacobian used in the force estimation calculations was also computed with the Robotics Toolbox. For simplicity, Pepper was modeled as a 9-link robot from its foot to its left hand, as shown in Figure 5. The DH parameters used to create the Pepper model with the Robotics Toolbox Python library are shown in Table 1. The coordinate system of Pepper and the directions of the force applied to Pepper’s left hand are shown in Figure 6.
To estimate the force exerted by the human partner on Pepper’s arm, we use the dynamic model expressed by the following equation:
\tau = M(\theta)\ddot{\theta} + C(\theta, \dot{\theta}) + G(\theta) + h(\dot{\theta}) \quad (1)
Equation (1) describes the relationship between the driving torques $\tau$ and the joint angles $\theta$. In this equation, $M$ represents the inertia matrix, $C$ is the vector of Coriolis and centrifugal torques, $G$ is the gravity torque vector, and $h$ is the vector of friction torques. When walking while holding hands, the robot arm moves mainly at a constant speed, with few sudden changes in posture. Thus, it is assumed that the robot arm’s motion during walking is nearly static, and the joint velocities $\dot{\theta}$ and joint accelerations $\ddot{\theta}$ are close to zero. Therefore, the influence of the terms $M(\theta)\ddot{\theta}$, $C(\theta, \dot{\theta})$, and $h(\dot{\theta})$ is considered small and is ignored in the calculation. We estimate the actual driving torque from the readings of the current sensors and the motor constants. By applying the measured joint angles to Equation (1), the torque required for posture control can be derived from the dynamic model. When an external force is exerted on the robot arm, the difference between the torque calculated by the dynamic model and the actual torque generated under the external force is referred to as $\tau_d$. The force applied to Pepper’s hand by the human is calculated using the estimated torque values obtained from Equation (1). According to the principle of virtual work, the following equation is derived:
f_d = (J^T)^{-1} \tau_d \quad (2)
where $f_d$ represents the external force and $\tau_d = \hat{\tau} - \tau$ is the difference between the estimated driving torque $\hat{\tau}$ and the actual torque $\tau$. $J$ represents the Jacobian matrix, which relates small changes in the joint angles $\Delta\theta$ to the change in end-effector position $\Delta x$ as follows:
\Delta x = J(\theta) \Delta\theta \quad (3)
The actual operating torque is calculated by obtaining the electric current value from the current sensor installed in each of Pepper’s motors and multiplying it by the motor constant. The motor constants used for calculating the actual torque in Pepper are shown in Table 2. However, the current sensors installed in Pepper can only provide absolute values and cannot determine the direction of the current. Therefore, the direction of the actual operating torque obtained from the current was assumed to be the same as that of the estimated driving torque.
The Jacobian matrix can be calculated from the link parameters and joint angles. Since the Pepper model has 9 degrees of freedom and the end-effector position is three-dimensional, the Jacobian is a non-square matrix (its transpose $J^T$ has size 9 × 3). As a result, the inverse in Equation (2) does not exist, so the pseudo-inverse of the transposed Jacobian is used in the actual calculations instead:
f_d = \left(J(\theta)^T\right)^{+} \tau_d \quad (4)
To build the dynamic model of the robot, we used the values of the mass, center of mass (CoM), and inertia matrix $I_0$ of each link from the official webpage of the Pepper robot (http://doc.aldebaran.com/2-0/family/juliette_technical/masses_juliette.html, accessed on 30 May 2023).
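The following sketch (Python 3, roboticstoolbox-python and NumPy) illustrates how Equations (1)–(4) can be combined into a force observer. The DH geometry, dynamic parameters, and motor constants below are placeholders rather than Pepper’s real values from Table 1, Table 2 and the official documentation; the sketch only shows the structure of the calculation.

```python
# Minimal sketch (Python 3, roboticstoolbox-python + NumPy) of the force
# observer in Equations (1)-(4). The DH geometry, dynamic parameters and motor
# constants below are placeholders, NOT Pepper's real values from Table 1,
# Table 2 and the official documentation.
import numpy as np
import roboticstoolbox as rtb

# Nine identical placeholder links standing in for the foot-to-left-hand chain.
links = [rtb.RevoluteDH(a=0.0, alpha=np.pi / 2, d=0.1,
                        m=1.0, r=[0.0, 0.0, 0.05], I=np.diag([1e-3, 1e-3, 1e-3]))
         for _ in range(9)]
pepper = rtb.DHRobot(links, name="pepper_foot_to_left_hand")

K_MOTOR = np.full(9, 0.0275)   # placeholder torque constants [N*m/A], cf. Table 2

def estimate_force(q, currents):
    """Estimate the external force at the hand from joint angles and currents."""
    q = np.asarray(q, dtype=float)
    zeros = np.zeros_like(q)
    # Quasi-static assumption (Equation (1) without M, C and h): rne() with
    # zero velocity and acceleration returns the gravity-compensation torque.
    tau_model = pepper.rne(q, zeros, zeros)
    # Actual torque from the current sensors; the sign is copied from the model
    # torque because Pepper's current sensors only report absolute values.
    tau_actual = np.sign(tau_model) * K_MOTOR * np.abs(np.asarray(currents, dtype=float))
    tau_d = tau_model - tau_actual
    J = pepper.jacob0(q)[:3, :]              # translational Jacobian, 3 x 9
    # Equation (4): pseudo-inverse of J^T, since J is not square.
    return np.linalg.pinv(J.T) @ tau_d       # estimated [fx, fy, fz]

print(estimate_force(np.zeros(9), np.zeros(9)))
```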
The Robotics Toolbox can only be executed in Python 3, while the Pepper SDK can only be executed in Python 2. Therefore, we use the Python libraries of the NEP framework, which support both Python 2 and Python 3, to send the outputs of the force estimation module to the robot cognition and control module executing the Pepper SDK.
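A minimal sketch of this bridge is given below. The NEP calls (node, new_pub, new_sub, publish, listen) follow the pattern described in the NEP documentation [62], but the exact names and signatures should be treated as assumptions and verified against the installed version.

```python
# Minimal sketch of the bridge between the Python 3 force-estimation module and
# the Python 2 cognition/control module. The NEP calls below follow the pattern
# in the NEP documentation [62]; treat the exact names and signatures as
# assumptions and verify them against the installed version.

# --- Python 3 side: publish the estimated force ------------------------------
import nep
node = nep.node("force_estimation")
pub = node.new_pub("estimated_force", "json")
pub.publish({"fx": 1.2, "fy": -0.4, "fz": 0.0})   # example payload

# --- Python 2 side (run in the Pepper SDK process) ---------------------------
# import nep
# node = nep.node("robot_control")
# sub = node.new_sub("estimated_force", "json")
# ok, msg = sub.listen()                       # non-blocking read
# if ok:
#     command_velocity(msg["fx"], msg["fy"])   # hypothetical helper, Section 3.4
```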

3.4. Robot Cognition and Control

This module is in charge of sending the commands that control the direction and speed of the robot platform based on the detected force. Moreover, this module receives the outputs of the object recognition module to adapt the robot’s behaviors according to the status of the environment. The robot behaviors adapted by this module are speech and gestures. To execute these behaviors, we used the functionalities provided by the Pepper SDK. One example of the implemented behaviors enables Pepper to draw the user’s attention to objects of potential danger or interest in their surroundings. To accomplish this, the robot makes arm and head movements in the direction of the detected object while providing verbal communication. In a second example, Pepper can suggest taking a seat when a chair is detected. For this, the robot obtains the coordinates of the recognized object in the image and makes individualized movements accordingly. Figure 7 provides a visual representation of how Pepper points to the position of detected objects. In this application, speech is performed only when Pepper stops its movement. Moreover, two types of intervals were set: one that prevents the robot from reacting to the same object for a certain period, and another that prevents it from performing a speech act for a certain period. These intervals prevent the robot from reacting too frequently to objects and stopping too often.
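The sketch below (Python 2, NAOqi SDK) illustrates this kind of behavior adaptation: the robot turns its head toward the detected object, raises an arm, and speaks. The mapping from the bounding-box center to a head yaw angle (based on an assumed horizontal field of view of about 57 degrees) and the arm angles are rough assumptions used only for illustration.

```python
# -*- coding: utf-8 -*-
# Minimal sketch (Python 2, NAOqi SDK): look and point toward a detected object
# and speak about it. The field-of-view value and the arm angles are rough
# assumptions used only to illustrate the behavior in Figure 7.
import math
from naoqi import ALProxy

PEPPER_IP, PORT = "192.168.1.10", 9559   # hypothetical address
tts = ALProxy("ALTextToSpeech", PEPPER_IP, PORT)
motion = ALProxy("ALMotion", PEPPER_IP, PORT)

def point_and_speak(bbox_center_x, image_width, sentence):
    # Horizontal offset of the object in the image mapped to a head yaw angle,
    # assuming a horizontal field of view of roughly 57 degrees.
    h_fov = math.radians(57.0)
    yaw = (0.5 - float(bbox_center_x) / image_width) * h_fov
    motion.setAngles("HeadYaw", yaw, 0.2)                 # look at the object
    motion.setAngles(["RShoulderPitch", "RShoulderRoll"],
                     [0.3, -0.3], 0.2)                    # rough pointing gesture
    tts.say(sentence)

point_and_speak(250, 320, "There is a chair. Do you need a rest?")
```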
The Pepper robot is equipped with tactile sensors located on its palms and fingertips. These tactile sensors allow the robot to sense when someone is holding its hand. Once the sensors detect pressure, they send signals to the robot’s internal system, enabling it to interpret the touch and respond appropriately. In this work, the control algorithms are executed only when the robot’s internal system detects that Pepper’s hand is touched. Therefore, force estimation is not performed when the user is not holding the robot, which prevents the robot from unintentionally navigating in an unexpected direction.
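A minimal sketch of this gating is shown below (Python 2, NAOqi SDK). The ALMemory key for the left-hand touch sensor is an assumption based on the usual Device/SubDeviceList naming and should be confirmed for the specific robot.

```python
# Minimal sketch (Python 2, NAOqi SDK): run the controller only while the left
# hand is being held. The ALMemory key is an assumption based on the usual
# Device/SubDeviceList naming; confirm it for the specific robot.
from naoqi import ALProxy

memory = ALProxy("ALMemory", "192.168.1.10", 9559)   # hypothetical address
LEFT_HAND_TOUCH = "Device/SubDeviceList/LHand/Touch/Back/Sensor/Value"

def hand_is_held():
    return memory.getData(LEFT_HAND_TOUCH) > 0.5

if hand_is_held():
    pass   # request a force estimate and send velocity commands (Section 3.3)
else:
    pass   # keep the base stopped so the robot does not drift unexpectedly
```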

4. Experimental Validation

This section presents the experimental results from an initial test of our proposed system architecture, conducted in an open, public coffee space adjacent to the GVlab at the Tokyo University of Agriculture and Technology. As depicted in Figure 8, this public space is primarily obstacle-free. It is important to note that in this particular application, the human assumes control of the robot, and therefore, no obstacle avoidance features were implemented. A video demonstrating the technical suitability of our proposed system architecture in a 3-min experiment is available online (https://youtu.be/aVKmIPbaSO8, accessed on 30 May 2023). Figure 8 shows some captures from this video. This figure shows how our proposed system can accurately detect objects in the environment using images obtained from Pepper’s camera. Moreover, Figure 9 shows an example of the values obtained from the force estimation module when applying a forward force on the Pepper robot. The plot shown on the left of Figure 9 represents the real-time results from the module using the Robotics Toolbox to calculate the force applied to the robot.

4.1. Object Recognition

In this test, object recognition was carried out using images captured by Pepper’s camera to detect different objects, such as “refrigerator”, “chair”, “bottle” and “monitor”. These objects were placed between 2 m and 4 m from Pepper, as shown in Figure 10. The Pepper SDK provides the ALVideoDevice module to facilitate image acquisition by establishing a socket connection over Wi-Fi between Pepper’s internal computer and an external computer (in this case, the computer where object recognition is performed). To use the ALVideoDevice features, a suitable resolution and color space must be set. In this scenario, the framerate of the obtained images depends strongly on the resolution defined in the ALVideoDevice module. For this application, it was essential to select the resolution carefully in order to provide acceptable accuracy levels in the object recognition system. Figure 10 depicts object recognition results using a 640 × 480 pixel resolution. Our findings indicate that the recognition of objects was successful if the object occupied about a quarter of the image. As shown in Figure 10, objects with a size of approximately 80 cm can be detected with high accuracy up to a distance of 4 m. However, the object recognition system also misidentified a training machine (not in the learning dataset) on the right side as a monitor. In Figure 11, we present object detection results at 2 m and 4 m distances using a 320 × 240 pixel resolution. We observed that the accuracy of the object recognition system significantly decreased at this resolution, even at close distances, compared to the high-resolution case. To improve the accuracy of our system, we use the reliability score obtained by the object recognition module. In the case of the training machine, which was mistakenly recognized as a monitor in Figure 10, the reliability score was smaller and more unstable than that of the chair recognized at the same time, and it frequently changed between the recognized and unrecognized states. Therefore, to reduce the misrecognition of objects in the walking assistance system, we exclude objects with low average reliability scores or those recognized with low reliability scores. Furthermore, objects with ambiguous outlines, such as bottles, can be difficult to detect, resulting in low reliability scores. To address this issue, we established a different reliability-score threshold for each object. However, it is important to note that these values may vary depending on the scenario, camera resolution, and illumination conditions.
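The per-object thresholding described above can be sketched as a simple filter over the detections returned by the YOLO module. The threshold values below are illustrative assumptions, not the ones used in the experiments.

```python
# Minimal sketch (Python 3): filter YOLO detections with per-class reliability
# thresholds, as described above. The threshold values are illustrative
# assumptions, not the ones used in the experiments.
CLASS_THRESHOLDS = {"chair": 0.5, "refrigerator": 0.5, "bottle": 0.3, "tvmonitor": 0.6}
DEFAULT_THRESHOLD = 0.5

def filter_detections(detections):
    """detections: list of (label, confidence, bbox) tuples from the YOLO module."""
    return [(label, conf, bbox) for label, conf, bbox in detections
            if conf >= CLASS_THRESHOLDS.get(label, DEFAULT_THRESHOLD)]

print(filter_detections([("chair", 0.8, None), ("bottle", 0.35, None),
                         ("tvmonitor", 0.4, None)]))
```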
The novelty of this work does not lie in the training or analysis of specific object recognition methods or algorithms. Instead, we propose a system architecture that offers a user-friendly interface, allowing the loading of any pre-trained YOLO-v4 model using OpenCV and TensorFlow. Table 3 presents the list of objects available for recognition using the selected pre-trained model. This specific pre-trained model was chosen because it includes common objects found in indoor scenarios.
According to the results of object recognition, short utterances were spoken along with gestures. Examples of the spoken utterances are shown below; if multiple lines are assigned to a single object, the utterances are spoken sequentially as the object is recognized repeatedly (a minimal sketch of this mapping is given after the list).
  • “chair”: “There is a chair. Do you need a rest?”
  • “chair”: “There is a chair again. Don’t you still need a rest?”
  • “person”: “Hello. It’s a good day for walking”
  • “laptop”: “There is a laptop. Someone is working hard on the research”
  • “bottle”: “Oh, there is a bottle. We can’t buy them in TUAT”
  • “bottle”: “I found the bottle again. Waste should be thrown away”
  • “bottle”: “Would you like to go get something to drink?”
  • “cell phone”: “I found a cell phone. Did someone leave it?”
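A minimal sketch of this mapping is shown below: several lines can be associated with one object and are spoken in order on successive recognitions. The wording follows the list above, while the data structure itself is an assumption about how such a table could be implemented.

```python
# Minimal sketch (Python 3) of the utterance table above: several lines per
# object are spoken in order on successive recognitions. The wording follows
# the list; the data structure itself is an assumption about the implementation.
UTTERANCES = {
    "chair": ["There is a chair. Do you need a rest?",
              "There is a chair again. Don't you still need a rest?"],
    "person": ["Hello. It's a good day for walking"],
    "laptop": ["There is a laptop. Someone is working hard on the research"],
    "bottle": ["Oh, there is a bottle. We can't buy them in TUAT",
               "I found the bottle again. Waste should be thrown away",
               "Would you like to go get something to drink?"],
    "cell phone": ["I found a cell phone. Did someone leave it?"],
}
_counts = {}

def next_utterance(label):
    lines = UTTERANCES.get(label, [])
    if not lines:
        return None
    i = _counts.get(label, 0)
    _counts[label] = i + 1
    return lines[min(i, len(lines) - 1)]   # stay on the last line once exhausted

print(next_utterance("chair"))
print(next_utterance("chair"))
```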

4.2. Force Estimation

To evaluate the suitability of the force estimation, we conducted an experiment in which a person applied forces of varying magnitudes in six directions (forward, backward, left, right, up, and down) to the Pepper robot’s hand while the robot was fixed in a specific posture. The force estimation results for the x, y, and z axes are shown in Figure 12, where the x-axis points forward, the y-axis points to the left, and the z-axis points vertically upward. We observed that the estimated values in the direction of the applied force increased proportionally when force was applied in the corresponding direction, indicating the system’s ability to measure both the magnitude and direction of the force. However, we observed errors in some cases, particularly when forces were applied along the x or y axis. For instance, as seen in Figure 12, an applied force in the negative x direction is detected when the force is mainly applied in the positive z direction. Moreover, when force was applied in the positive x direction or in the positive and negative y directions, we observed significant force components in the z direction and the x direction, respectively, even though no force was being applied in those directions. This can be attributed to the fact that the force applied by the human hand is not purely axial. Moreover, we noticed that errors in the readings of angles and torques increased as the joints exceeded their operating range. In this work, we propose a method that uses the force estimation results to regulate Pepper’s walking speed while holding hands with a human. However, only the force estimation results in the x and y directions are employed to adapt Pepper’s walking. Thus, the effect of the significant errors in the force estimation results along z is considered negligible for this particular application. Furthermore, a precise force measurement is not necessary; the direction and a rough amplitude are sufficient for this application, and the proposed force observer therefore meets its requirements.
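As an illustration of how the estimated planar force can be turned into a walking command, the sketch below (Python 2, NAOqi SDK) applies a deadband and a saturation before sending a normalized velocity with moveToward. The gain, deadband, and velocity cap are assumptions, not the values used in the experiments.

```python
# -*- coding: utf-8 -*-
# Minimal sketch (Python 2, NAOqi SDK): map the estimated planar force to a
# normalized base velocity. Deadband, gain and velocity cap are illustrative
# assumptions; only the x/y components of the estimate are used, as above.
from naoqi import ALProxy

motion = ALProxy("ALMotion", "192.168.1.10", 9559)   # hypothetical address
DEADBAND_N = 2.0    # ignore small forces caused by estimation noise
GAIN = 0.05         # Newtons -> normalized velocity
V_MAX = 0.5         # cap on the commanded speed

def clamp(v):
    return max(-V_MAX, min(V_MAX, v))

def command_velocity(fx, fy):
    vx = clamp(GAIN * fx) if abs(fx) > DEADBAND_N else 0.0
    vy = clamp(GAIN * fy) if abs(fy) > DEADBAND_N else 0.0
    # moveToward takes normalized velocities in [-1, 1] for x, y and rotation.
    motion.moveToward(vx, vy, 0.0)

command_velocity(6.0, -1.0)   # forward pull: move forward, ignore the small lateral force
```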

4.3. Limitations

Our tests suggest that the estimated force can be detected accurately enough to provide a smooth feeling when walking while holding hands with the Pepper robot. However, there were some cases in which the force control accuracy could be reduced. We observed that this issue arises for two specific reasons. On the one hand, the results of the force estimation were affected by Pepper’s slight posture changes and variations in the current values. This can result in frequent changes in speed, leading to unnatural movements. On the other hand, errors in the force estimation also appear when the body shakes over small asperities on the ground or when the arm is suddenly pulled strongly. Additionally, the dynamics model represents Pepper as a single chain, with no branching, from its leg to its left hand, so errors in the force calculation may occur when changes in the posture of the right arm or head shift Pepper’s center of gravity. In our system, when significant discrepancies occur in the force estimation during force control, the parameters of the force controller can be readjusted by pushing the bumper on Pepper’s foot.
Since the main objective of this study is to demonstrate the technological suitability of the system, no participants were recruited for experiments; the system was tested by the authors of this article. As a result, this research did not require consent from any individuals, as no human subjects (aside from the authors) were involved in the study. The involvement and recruitment of external participants will be considered in future work, which can potentially involve the analysis of usability and user experience factors. Furthermore, these studies can be expanded to encompass various types of ground surfaces on which Pepper walks or different interaction scenarios.

5. Conclusions

This article presented a novel software architecture enabling an industrially produced humanoid robot to walk hand-in-hand with humans using a pHRI approach. The proposed system integrates object recognition capabilities to enable the robot to react to the surrounding environment. Due to the lack of force sensors in Pepper, an estimation of the force applied by the human hand to the robot hand was required. The force estimation was performed using the dynamic model, based on the difference between the actual and estimated joint torques. Although we were able to create a suitable system for our purpose, some issues arose from the assumptions made during modeling, which caused problems in certain cases involving large changes in posture.
The walking speed of humans can vary based on factors such as age, fitness level, terrain, and purpose of movement. While the maximum velocity of the Pepper robot may be slower than a healthy human’s typical walking speed, it is crucial to consider that the robot’s primary objective is not to replicate human locomotion but to engage with humans in social and service contexts. When walking hand in hand with other humans, the average velocity tends to be slower, influenced by individual walking speeds, physical condition, and comfort level. This slower pace offers opportunities for social bonding, conversations, shared experiences, and a sense of togetherness. Individuals naturally adjust their pace to accommodate slower walkers, ensuring the group remains connected. In this context, future research could investigate potential discomfort or impatience, as well as the impact on social bonding, by considering different walking velocities and examining how the Pepper robot’s slower walking pace during hand-in-hand interactions affects human partners of various age groups.
Another important aspect to consider in future work is the influence of Pepper’s structure on the walking activity. This includes not only the shoulder angle but also the size and height of individuals. Children, with their shorter height and smaller size, are more prone to potential collisions with the robot’s base during walking. They may encounter challenges in maintaining a safe distance and may not fully comprehend the associated risks. Additionally, children tend to have less awareness of their surroundings and exhibit more unpredictable movements, further increasing the likelihood of accidental contact with the robot’s base. To mitigate these collision risks, the utilization of Pepper’s range sensors becomes crucial. By leveraging the range sensor data, the robot can effectively detect obstacles and potential collision hazards, including objects and body parts that may come into contact with its base. This enables the robot to adapt its walking trajectory, slow down, or stop altogether when necessary to avoid accidents.
Finally, as part of our future work, we aim to enhance Pepper’s capabilities by incorporating additional vision-based skills, such as face and speech recognition. These advanced functionalities would enable Pepper to exhibit more sophisticated and interactive behaviors.

Author Contributions

Conceptualization, E.C., T.S. and G.V.; methodology, E.C., T.S. and G.V.; formal analysis, T.S.; investigation, E.C., T.S. and G.V.; resources, E.C. and T.S.; data curation, T.S.; writing—original draft preparation, E.C.; writing—review and editing, E.C. and G.V.; visualization, T.S.; supervision, E.C. and G.V.; project administration, G.V.; funding acquisition, G.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guan, X.; Gong, J.; Li, M.; Huan, T.C. Exploring key factors influencing customer behavioral intention in robot restaurants. Int. J. Contemp. Hosp. Manag. 2022, 34, 3482–3501. [Google Scholar] [CrossRef]
  2. Venture, G.; Indurkhya, B.; Izui, T. Dance with me! Child-robot interaction in the wild. In Proceedings of the Social Robotics: 9th International Conference (ICSR 2017), Tsukuba, Japan, 22–24 November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 375–382. [Google Scholar]
  3. Chen, Y.; Wu, F.; Shuai, W.; Chen, X. Robots serve humans in public places—KeJia robot as a shopping assistant. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417703569. [Google Scholar] [CrossRef] [Green Version]
  4. Capy, S.; Rincon, L.; Coronado, E.; Hagane, S.; Yamaguchi, S.; Leve, V.; Kawasumi, Y.; Kudou, Y.; Venture, G. Expanding the Frontiers of Industrial Robots beyond Factories: Design and in the Wild Validation. Machines 2022, 10, 1179. [Google Scholar] [CrossRef]
  5. Do, H.M.; Pham, M.; Sheng, W.; Yang, D.; Liu, M. RiSH: A robot-integrated smart home for elderly care. Robot. Auton. Syst. 2018, 101, 74–92. [Google Scholar] [CrossRef]
  6. Gasteiger, N.; Hellou, M.; Ahn, H.S. Deploying social robots in museum settings: A quasi-systematic review exploring purpose and acceptability. Int. J. Adv. Robot. Syst. 2021, 18, 17298814211066740. [Google Scholar] [CrossRef]
  7. Martinez-Martin, E.; del Pobil, A.P. Personal robot assistants for elderly care: An overview. In Personal Assistants: Emerging Computational Technologies; Springer: Cham, Switzerland, 2018; pp. 77–91. [Google Scholar]
  8. Coronado, E.; Indurkhya, X.; Venture, G. Robots meet children, development of semi-autonomous control systems for children-robot interaction in the wild. In Proceedings of the 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM), Toyonaka, Japan, 3–5 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 360–365. [Google Scholar]
  9. Alam, A. Educational robotics and computer programming in early childhood education: A conceptual framework for assessing elementary school students’ computational thinking for designing powerful educational scenarios. In Proceedings of the 2022 International Conference on Smart Technologies and Systems for Next Generation Computing (ICSTSN), Villupuram, India, 25–26 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7. [Google Scholar]
  10. Capy, S.; Osorio, P.; Hagane, S.; Aznar, C.; Garcin, D.; Coronado, E.; Deuff, D.; Ocnarescu, I.; Milleville, I.; Venture, G. Yōkobo: A Robot to Strengthen Links Amongst Users with Non-Verbal Behaviours. Machines 2022, 10, 708. [Google Scholar] [CrossRef]
  11. Deuff, D.; Milleville-Pennel, I.; Ocnarescu, I.; Garcin, D.; Aznar, C.; Capy, S.; Hagane, S.; Osorio Marin, P.F.; Coronado Zuniga, E.; Rincon Ardila, L.; et al. Together alone, Yōkobo, a sensible presence robject for the home of newly retired couples. In Proceedings of the Designing Interactive Systems Conference, Virtual Event, 13–17 June 2022; pp. 1773–1787. [Google Scholar]
  12. Sharma, U.; Kumar, R. Positivity in mental and physical health. Int. J. Indian Psychol. 2015, 2, 65. [Google Scholar]
  13. Nara, K. Study of mental health among sportspersons. Int. J. Phys. Educ. Sport. Health 2017, 4, 34–37. [Google Scholar] [CrossRef]
  14. Kim, E.S.; Berkovits, L.D.; Bernier, E.P.; Leyzberg, D.; Shic, F.; Paul, R.; Scassellati, B. Social robots as embedded reinforcers of social behavior in children with autism. J. Autism Dev. Disord. 2013, 43, 1038–1049. [Google Scholar] [CrossRef] [PubMed]
  15. Šabanović, S.; Bennett, C.C.; Chang, W.L.; Huber, L. PARO robot affects diverse interaction modalities in group sensory therapy for older adults with dementia. In Proceedings of the 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR), Seattle, WA, USA, 24–26 June 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1–6. [Google Scholar]
  16. Rasouli, S.; Gupta, G.; Nilsen, E.; Dautenhahn, K. Potential applications of social robots in robot-assisted interventions for social anxiety. Int. J. Soc. Robot. 2022, 14, 1–32. [Google Scholar] [CrossRef] [PubMed]
  17. Riek, L.D. Robotics technology in mental health care. In Artificial Intelligence in Behavioral and Mental Health Care; Elsevier: Amsterdam, The Netherlands, 2016; pp. 185–203. [Google Scholar]
  18. Kabacińska, K.; Prescott, T.J.; Robillard, J.M. Socially assistive robots as mental health interventions for children: A scoping review. Int. J. Soc. Robot. 2021, 13, 919–935. [Google Scholar] [CrossRef]
  19. Scoglio, A.A.; Reilly, E.D.; Gorman, J.A.; Drebing, C.E. Use of social robots in mental health and well-being research: Systematic review. J. Med. Internet Res. 2019, 21, e13322. [Google Scholar] [CrossRef]
  20. Gasteiger, N.; Loveys, K.; Law, M.; Broadbent, E. Friends from the future: A scoping review of research into robots and computer agents to combat loneliness in older people. Clin. Interv. Aging 2021, 16, 941–971. [Google Scholar] [CrossRef]
  21. Odekerken-Schröder, G.; Mele, C.; Russo-Spena, T.; Mahr, D.; Ruggiero, A. Mitigating loneliness with companion robots in the COVID-19 pandemic and beyond: An integrative framework and research agenda. J. Serv. Manag. 2020, 31, 1149–1162. [Google Scholar] [CrossRef]
  22. Ernst, M.; Niederer, D.; Werner, A.M.; Czaja, S.J.; Mikton, C.; Ong, A.D.; Rosen, T.; Brähler, E.; Beutel, M.E. Loneliness before and during the COVID-19 pandemic: A systematic review with meta-analysis. Am. Psychol. 2022, 77, 660. [Google Scholar] [CrossRef]
  23. Hacihasanoğlu, R.; Yildirim, A.; Karakurt, P. Loneliness in elderly individuals, level of dependence in activities of daily living (ADL) and influential factors. Arch. Gerontol. Geriatr. 2012, 54, 61–66. [Google Scholar] [CrossRef]
  24. Edemekong, P.F.; Bomgaars, D.L.; Sukumaran, S.; Levy, S.B. Activities of daily living. In StatPearls [Internet]; StatPearls Publishing: Tampa, FL, USA, 2021. [Google Scholar]
  25. Ronald, R. The Remarkable Rise and Particular Context of Younger One–Person Households in Seoul and Tokyo. City Community 2017, 16, 25–46. [Google Scholar] [CrossRef]
  26. Marashian, F.; Khorami, N.S. The effect of early morning physical exercises on Academic self-concept and Loneliness foster home children in Ahvaz City. Procedia Soc. Behav. Sci. 2012, 46, 316–319. [Google Scholar] [CrossRef] [Green Version]
  27. Lee, I.M.; Buchner, D.M. The importance of walking to public health. Med. Sci. Sport. Exerc. 2008, 40, S512–S518. [Google Scholar] [CrossRef] [PubMed]
  28. Catricalà, B.; Coffaro, D.; Manca, M.; Mattioli, A.; Paternò, F.; Santoro, C. An Approach to Exploiting Personal Memories in Humanoid Robots Serious Games for Cognitive Stimulation of Older Adults. In Proceedings of the 3rd International Workshop on Empowering People in Dealing with Internet of Things Ecosystems, Rome, Italy, 6 June 2022. [Google Scholar]
  29. Cruz-Sandoval, D.; Morales-Tellez, A.; Sandoval, E.B.; Favela, J. A social robot as therapy facilitator in interventions to deal with dementia-related behavioral symptoms. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 161–169. [Google Scholar]
  30. Dino, M.J.S.; Davidson, P.M.; Dion, K.W.; Szanton, S.L.; Ong, I.L. Nursing and human-computer interaction in healthcare robots for older people: An integrative review. Int. J. Nurs. Stud. Adv. 2022, 4, 100072. [Google Scholar] [CrossRef]
  31. Naneva, S.; Sarda Gou, M.; Webb, T.L.; Prescott, T.J. A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int. J. Soc. Robot. 2020, 12, 1179–1201. [Google Scholar] [CrossRef]
  32. Coronado, E.; Kiyokawa, T.; Ricardez, G.A.G.; Ramirez-Alpizar, I.G.; Venture, G.; Yamanobe, N. Evaluating quality in human-robot interaction: A systematic search and classification of performance and human-centered factors, measures and metrics towards an industry 5.0. J. Manuf. Syst. 2022, 63, 392–410. [Google Scholar] [CrossRef]
  33. Bevilacqua, R.; Felici, E.; Marcellini, F.; Glende, S.; Klemcke, S.; Conrad, I.; Esposito, R.; Cavallo, F.; Dario, P. Robot-era project: Preliminary results on the system usability. In Proceedings of the Design, User Experience, and Usability: Interactive Experience Design: 4th International Conference, DUXU 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, 2–7 August 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 553–561. [Google Scholar]
  34. Vincent, J.; Taipale, S.; Sapio, B.; Lugano, G.; Fortunati, L. Social Robots from A Human Perspective; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  35. Görer, B.; Salah, A.A.; Akın, H.L. An autonomous robotic exercise tutor for elderly people. Auton. Robot. 2017, 41, 657–678. [Google Scholar] [CrossRef]
  36. Pandey, A.K.; Gelin, R.; Robot, A. Pepper: The first machine of its kind. IEEE Robot. Autom. Mag. 2018, 25, 40–48. [Google Scholar] [CrossRef]
  37. Song, W.K.; Kim, J. Novel assistive robot for self-feeding. In Robotic Systems-Applications, Control and Programming; Books on Demand: Norderstedt, Germany, 2012; pp. 43–60. [Google Scholar]
  38. Oka, T.; Solis, J.; Lindborg, A.L.; Matsuura, D.; Sugahara, Y.; Takeda, Y. Kineto-elasto-static design of underactuated chopstick-type gripper mechanism for meal-assistance robot. Robotics 2020, 9, 50. [Google Scholar] [CrossRef]
  39. Naotunna, I.; Perera, C.J.; Sandaruwan, C.; Gopura, R.; Lalitharatne, T.D. Meal assistance robots: A review on current status, challenges and future directions. In Proceedings of the 2015 IEEE/SICE International Symposium on System Integration (SII), Nagoya, Japan, 11–13 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 211–216. [Google Scholar]
  40. Bandara, D.; Arata, J.; Kiguchi, K. A noninvasive brain–computer interface approach for predicting motion intention of activities of daily living tasks for an upper-limb wearable robot. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418767310. [Google Scholar] [CrossRef] [Green Version]
  41. Randazzo, L.; Iturrate, I.; Perdikis, S.; Millán, J.d.R. mano: A wearable hand exoskeleton for activities of daily living and neurorehabilitation. IEEE Robot. Autom. Lett. 2017, 3, 500–507. [Google Scholar] [CrossRef] [Green Version]
  42. Peter, O.; Tavaszi, I.; Toth, A.; Fazekas, G. Exercising daily living activities in robot-mediated therapy. J. Phys. Ther. Sci. 2017, 29, 854–858. [Google Scholar] [CrossRef] [Green Version]
  43. Lee, M.J.; Lee, J.H.; Lee, S.M. Effects of robot-assisted therapy on upper extremity function and activities of daily living in hemiplegic patients: A single-blinded, randomized, controlled trial. Technol. Health Care 2018, 26, 659–666. [Google Scholar] [CrossRef]
  44. Itadera, S.; Cheng, G. In-Hand Admittance Controller for a Robotic Assistive Walker Based on Tactile Grasping Feedback. IEEE Robot. Autom. Lett. 2022, 7, 8845–8852. [Google Scholar] [CrossRef]
  45. Zhao, X.; Zhu, Z.; Liu, M.; Zhao, C.; Zhao, Y.; Pan, J.; Wang, Z.; Wu, C. A smart robotic walker with intelligent close-proximity interaction capabilities for elderly mobility safety. Front. Neurorobot. 2020, 14, 575889. [Google Scholar] [CrossRef] [PubMed]
  46. Mostofa, N.; Feltner, C.; Fullin, K.; Guilbe, J.; Zehtabian, S.; Bacanlı, S.S.; Bölöni, L.; Turgut, D. A smart walker for people with both visual and mobility impairment. Sensors 2021, 21, 3488. [Google Scholar] [CrossRef] [PubMed]
  47. Martins, M.; Santos, C.; Frizera, A.; Ceres, R. A review of the functionalities of smart walkers. Med. Eng. Phys. 2015, 37, 917–928. [Google Scholar] [CrossRef] [PubMed]
  48. Baroni, I.; Nalin, M.; Zelati, M.C.; Oleari, E.; Sanna, A. Designing motivational robot: How robots might motivate children to eat fruits and vegetables. In Proceedings of the The 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 796–801. [Google Scholar]
  49. Colombo, R.; Pisano, F.; Mazzone, A.; Delconte, C.; Micera, S.; Carrozza, M.C.; Dario, P.; Minuco, G. Design strategies to improve patient motivation during robot-aided rehabilitation. J. Neuroeng. Rehabil. 2007, 4, 3. [Google Scholar] [CrossRef] [Green Version]
  50. Schneider, S.; Kummert, F. Motivational effects of acknowledging feedback from a socially assistive robot. In Proceedings of the Social Robotics: 8th International Conference (ICSR 2016), Kansas City, MO, USA, 1–3 November 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 870–879. [Google Scholar]
  51. Fasola, J.; Mataric, M.J. Using socially assistive human–robot interaction to motivate physical exercise for older adults. Proc. IEEE 2012, 100, 2512–2526. [Google Scholar] [CrossRef]
  52. Sackl, A.; Pretolesi, D.; Burger, S.; Ganglbauer, M.; Tscheligi, M. Social Robots as Coaches: How Human-Robot Interaction Positively Impacts Motivation in Sports Training Sessions. In Proceedings of the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Napoli, Italy, 29 August–2 September 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 141–148. [Google Scholar]
  53. Granados, D.F.P.; Kinugawa, J.; Hirata, Y.; Kosuge, K. Guiding human motions in physical human-robot interaction through COM motion control of a dance teaching robot. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15–17 November 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 279–285. [Google Scholar]
  54. Nakata, Y.; Yagi, S.; Yu, S.; Wang, Y.; Ise, N.; Nakamura, Y.; Ishiguro, H. Development of ‘ibuki’an electrically actuated childlike android with mobility and its potential in the future society. Robotica 2022, 40, 933–950. [Google Scholar] [CrossRef]
  55. Sato, M.; Yasuhara, Y.; Osaka, K.; Ito, H.; Dino, M.J.S.; Ong, I.L.; Zhao, Y.; Tanioka, T. Rehabilitation care with Pepper humanoid robot: A qualitative case study of older patients with schizophrenia and/or dementia in Japan. Enferm. Clin. 2020, 30, 32–36. [Google Scholar] [CrossRef]
  56. Niemelä, M.; Heikkilä, P.; Lammi, H. A social service robot in a shopping mall: Expectations of the management, retailers and consumers. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 6–9 March 2017; pp. 227–228. [Google Scholar]
  57. Garcia, F.; Pandey, A.K.; Fattal, C. Wait for me! towards socially assistive walk companions. arXiv 2019, arXiv:1904.08854. [Google Scholar]
  58. Kochigami, K.; Jiang, J.; Kakehashi, Y.; Au, C.; Kakiuchi, Y.; Okada, K.; Inaba, M. Walking together hand in hand: Design and evaluation of autonomous robot system that a robot recognizes moving direction with a child’s assistance of pulling its hand. In Proceedings of the 2015 IEEE/SICE International Symposium on System Integration (SII), Nagoya, Japan, 11–13 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 738–743. [Google Scholar]
  59. Kochigami, K.; Okada, K.; Inaba, M. Effect of walking with a robot on child-child interactions. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 468–471. [Google Scholar]
  60. Lafaye, J.; Gouaillier, D.; Wieber, P.B. Linear model predictive control of the locomotion of Pepper, a humanoid robot with omnidirectional wheels. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 336–341. [Google Scholar]
  61. Corke, P.; Haviland, J. Not your grandmother’s toolbox–the robotics toolbox reinvented for python. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 11357–11363. [Google Scholar]
  62. Coronado, E.; Venture, G. Towards iot-aided human–robot interaction using nep and ros: A platform-independent, accessible and distributed approach. Sensors 2020, 20, 1500. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Proposed system architecture.
Figure 2. General control algorithm integrated in this work.
Figure 3. Camera acquisition interface used to collect and publish images from the Pepper camera to the network. The user can also select the resolution of the images obtained. Lowering the resolution of images obtained from Pepper’s camera reduces the latency between the images.
Figure 4. Recognition application developed to enable Pepper to recognize objects in its environment using the YOLO algorithm. On the interface shown on the left side, users can select the parameters and pre-trained deep learning model to use, while on the right side, the user can see which objects are detected from the camera of the Pepper robot.
Figure 5. Pepper robot and its geometric model represented as a 9-link mechanism that includes the elements from its foot to its left hand.
Figure 6. Coordinate system of Pepper. The thin arrows indicate the directions of the applied force on Pepper’s left hand.
Figure 7. Gesture of looking and pointing at the recognized object.
Figure 8. Example of object recognition results when walking with the Pepper robot in a public space.
Figure 9. Example of force estimation results when walking with the Pepper robot. The module using the Robotics Toolbox is used to plot the current values of each joint and the results of the force estimation method.
Figure 10. Result of object recognition (left: 2 m, right: 4 m).
Figure 11. Result of object recognition in low resolution (left: 2 m, right: 4 m).
Figure 12. Result of force estimation experiment.
Table 1. Denavit-Hartenberg parameters of Pepper robot.
Joint    a_i    α_i    d_i    θ_i
Base0 π 2 3390
Knee Pitch26800 π 2
Hip Pitch79 π 2 0−57
Hip Roll π 2 01810
Head Yaw0 π 2 0150
Head Pitch001150
Right Shoulder Pitch0 π 2 0181
Right Shoulder Roll226 π 2 10
Right Elbow Roll0 π 2 1500
Right Elbow Yaw226 π 2 0 π 2
Right Wrist Yaw0000
Left Shoulder Pitch0 π 2 1810
Left Shoulder Roll226 π 2 0 π 2
Left Elbow Yaw1 π 2 1500
Left Elbow Roll226 π 2 0 π 2
Left Wrist Yaw0010
Table 2. Motor torque constants of Pepper.
Joint            Motor Torque Constant (mN·m/A)
Knee Pitch       36.9
Hip Pitch        36.9
Hip Roll         47.5
Shoulder Pitch   27.5
Shoulder Roll    19.2
Elbow Yaw        27.5
Elbow Roll       19.2
Wrist Yaw        20.1
Table 3. Detectable objects.
Index  Object           Index  Object          Index  Object
0      person           17     horse           34     baseball bat
1      bicycle          18     sheep           35     baseball glove
2      car              19     cow             36     skateboard
3      motorbike        20     elephant        37     surfboard
4      airplane         21     bear            38     tennis racket
5      bus              22     zebra           39     bottle
6      train            23     giraffe         43     wine glass
7      truck            24     backpack        44     cup
8      boat             25     umbrella        45     fork
9      traffic light    26     handbag         46     knife
10     fire hydrant     27     tie             47     spoon
11     stop sign        28     suitcase        48     bowl
12     parking meter    29     frisbee         49     banana
13     bench            30     skis            50     apple
14     bird             31     snowboard       51     sandwich
15     cat              32     sports ball     52     orange
16     dog              33     kite            53     broccoli