Article

Assistance in Picking Up and Delivering Objects for Individuals with Reduced Mobility Using the TIAGo Robot

by Francisco J. Naranjo-Campos 1,*,†, Ainhoa De Matías-Martínez 1,†, Juan G. Victores 1,†, José Antonio Gutiérrez Dueñas 2, Almudena Alcaide 2 and Carlos Balaguer 1

1 RoboticsLab, Systems and Automation Engineering Department, Universidad Carlos III de Madrid, 28911 Leganés, Spain
2 Department of Accessible Technology and R&D, Universal Accessibility and Innovation Directorate, Inserta Innovación, Fundación Once, 28012 Madrid, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2024, 14(17), 7536; https://doi.org/10.3390/app14177536
Submission received: 17 July 2024 / Revised: 13 August 2024 / Accepted: 22 August 2024 / Published: 26 August 2024
(This article belongs to the Special Issue Intelligent Rehabilitation and Assistive Robotics)

Abstract

Individuals with reduced mobility, including the growing elderly demographic and those with spinal cord injuries, often face significant challenges in daily activities, leading to a dependence on assistance. To enhance their independence, we propose a robotic system that facilitates greater autonomy. Our approach involves a functional assistive robotic implementation for picking, placing, and delivering containers using the TIAGo mobile manipulator robot. We developed software and routines for detecting containers marked with an ArUco code and manipulating them using the MoveIt library. Subsequently, the robot navigates to specific points of interest within a room to deliver the container to the user or another designated location. This assistance task is commanded through a user interface based on a web application that can be accessed from the personal phones of patients. The functionality of the system was validated through testing. Additionally, a series of user trials were conducted, yielding positive feedback on the performance and the demonstration. Insights gained from user feedback will be incorporated into future improvements to the system.

1. Introduction

Ensuring a decent quality of life for individuals with reduced mobility is a significant social issue. This demographic encompasses all individuals with limited motor skills, whether these limitations are temporary or permanent.
A diverse range of populations face mobility issues, including the growing elderly demographic and individuals with spinal cord injuries. The latter can experience a spectrum of impairments, ranging from complete loss of mobility to alterations in basic bodily functions [1,2], depending on the affected area [3]. The necessity of assistance for certain tasks as a result of mobility limitations creates a situation of dependence in daily activities. Typically, assistance for individuals with functional diversity is provided by close family members or healthcare assistants.
In this context, robotics serves as a tool to enhance user independence. This field is known as Assistive Robotics, which is defined as the use of robots to directly and adaptively assist and interact with humans [4]. Assistive Robotics emerges as a means to help people meet their needs and facilitate their daily activities. To achieve this objective, several proposals have been put forth.
Among them, social robotics represents a significant focus, centered on developing robots that can provide support and entertainment [5,6,7,8]. This approach seeks to make the interaction with the user natural and sociable so that the robot is perceived positively and pleases the user. It is crucial for user acceptance that individuals feel at ease and comfortable interacting with the robot, thus enabling its intended functionality [8]. Conversely, if the user is reluctant to interact with the robot, its effectiveness may be hindered [9].
Assistive Robotics has also been developed to assist individuals with reduced mobility, with the goal of increasing independence and self-sufficiency. Prostheses and exoskeletons have been developed [10,11,12,13] to provide strength assistance for movement or manipulation and are widely used in the medical field of rehabilitation. Additionally, other robotic systems are being developed. For example, some propose the use of manipulator robots to assist users in object manipulation tasks [14,15,16,17], while others use mobile robots to serve people as autonomous transport or walking guides in various environments [18,19,20,21,22].
However, the potential of mobile manipulator robots appears to be broader than that of other types of robots, as they can both manipulate and navigate. These platforms can be teleoperated by the user to perform the needed tasks; for instance, teleoperation of a robotic arm can be achieved through an application interface [23], human gestures [24], or a 3D controller [25].
On the other hand, autonomous use of these devices promises a more useful form of assistance, although it also presents several challenges. The primary challenges [26] arise from the dynamic and unpredictable nature of the environment, the lack of expertise among users, the need for tasks that are well defined yet adaptable, and the necessity to ensure the safety and integrity of users and their belongings.
With this aim, some have proposed a multi-functional autonomous platform with a friendly appearance [27], leveraging artificial intelligence, a flexible decision engine, and autonomous navigation. This platform is designed to be completely safe, featuring collision detection and a soft artificial leather covering. Others suggest a multi-robot approach [28] involving a sensorized environment, a mobile social robot, and a mobile dual-manipulator robot that collaborates to assist, accompany, and serve elderly individuals in their homes in an autonomous manner.
In this context, we propose an approach to assist people with reduced mobility using the TIAGo robot. The robotic system implemented can navigate around a domestic or healthcare setting and perform picking up, placing, or delivering containers for objects to the user. The user is assumed to be bedridden but with mobility in the upper extremities, allowing them to control the robot from a tablet or personal phone. The primary objective of this approach is to implement a preliminary assistance system that can be tested with users to ascertain their needs and incorporate their feedback into future developments.

2. Materials and Methods

This section outlines the scope of the assistive task we implemented. It also describes the materials used, such as the TIAGo robot and containers for objects. Finally, it provides a detailed explanation of the implementation architecture, covering navigation, vision, manipulation, routine coordination, and the user web app.

2.1. Task and Materials

The assistive task implemented in this work involves picking up containers for objects and placing or delivering them within a home environment. This setting is assumed to be a common room with furniture, such as tables, shelves, and a bed, whose locations and dimensions are known and mapped. The user is assumed to be in bed due to their reduced mobility.
For that purpose, the user can command the robot through a web app running on a tablet or phone. The user can request the robot to pick up a specific container located at a certain point in the room, and then deliver it to the user or place it on another table.

2.1.1. TIAGo

TIAGo (see https://pal-robotics.com/es/robot/tiago/, date accessed 1 August 2024) is a mobile manipulator robot from PAL Robotics oriented toward research. Different model configurations are available, varying in the end-effector, the number of arms, and the user interface. The model we worked with is shown in Figure 1 and consists of a differential mobile base, a prismatic torso, a manipulator arm with 7 degrees of freedom, a gripper as the end-effector, an RGB-D camera installed on a pan-and-tilt system at the head position, and an endoscopic camera added to the gripper.

2.1.2. Containers for Objects

The objects required by the user are placed in containers to facilitate their detection, grasping, and delivery. These containers are designed as rectangular boxes with cubic handles, as shown in Figure 2. The handle is specifically shaped to accommodate the gripper and is 6 cm wide, compatible with the 8 cm aperture of the gripper. In addition, the handle bears an ArUco marker [29] that can be recognized by the robot using computer vision. This ArUco marker encodes a number that also serves to distinguish between the available containers.

2.2. Implementation

In this subsection, the software implementation we carried out is explained. Figure 3 shows the architecture of the implementation, which consists of various hierarchical nodes in a ROS Melodic framework (see http://wiki.ros.org, date accessed 14 July 2024). The mid-level nodes send commands to the controllers of the TIAGo robot; the coordinator node directs the mid-level nodes to organize the execution of assistance routines; and the web interface node, at the highest level, receives requests from the user and transmits them to the routine coordinator.
The functionalities provided by each component are detailed below, starting with the mid-level nodes, followed by a detailed description of each routine organized by the coordinator, and finally the web interface.

2.2.1. Mid-Level Nodes

The mid-level nodes process information provided by the sensors and services of the TIAGo robot and issue commands to the actuators and services. These nodes are categorized by their functionality:
  • Navigation node: The robot implements the ROS navigation stack (see http://wiki.ros.org/navigation, date accessed 4 June 2024) for autonomous navigation, which is leveraged for the assistive tasks.
    For this purpose, a map of the environment has to be generated beforehand. The points of interest (POIs) are then saved on this map in a list, each with an associated ID. POIs are the locations where containers can be found (e.g., the position of the robot in front of the table) and where the user can be found (the position in front of the bed). In this way, the coordinator or web app nodes request this navigation node to move to a POI by providing its ID.
  • Vision node: This node scans images to identify ArUco markers and calculate their 3D positions in space.
    This node constantly executes a loop, scanning RGB images from both cameras (head and gripper) using the aruco_ros library (see https://wiki.ros.org/aruco_ros, date accessed 13 August 2024), which provides real-time marker-based 3D pose estimation; however, only the detected corner pixels are used. When an ArUco marker is detected, its pixel centroid is computed and published along with its ID number on a ROS topic. In addition, a modified image with the centroid, ID, and bounding box is plotted for visualization, as in the example in Figure 4.
    Regarding markers detected with the RGB-D camera on the head, their position in space can also be determined. For this purpose, the value of the depth image corresponding to the centroid pixel is used, and the coordinates are computed using the camera’s intrinsic parameters (a minimal back-projection sketch is given after this list). This information is also published via ROS topics.
  • Manipulation node: This node is composed of three sub-nodes to control the head, the gripper, and the arm–torso group, as previously shown in Figure 3.
    The head control enables the routine coordinator to request a specific point in space for the camera to focus on. This is achieved by calculating the values of the pan and tilt controllers based on the intrinsic parameters of the camera.
    The gripper control involves commanding the gripper controllers to either open to their maximum aperture or close until no further movement is achieved, indicating a grip.
    On the other hand, the MoveIt library [30] is used to control the arm–torso group. This allows us to plan safe trajectories that avoid self-collision and to define constraints or virtual obstacles.
  • TTS node: This node provides the text-to-speech (TTS) functionality. Explanatory behavior is a crucial feature of a human assistive robot [31]. For this purpose, at the beginning of each routine or movement, the coordinator node requests the TTS node to announce which task or action is about to be performed. This node also receives requests to ‘speak’ social expressions, e.g., salutations, when initiated.
  • Diagnosis node: This node collects information about the robot’s state and publishes it to the web app for its display. This information includes the battery level, current state (stop or move), current sub-task (e.g., navigating), localization values, and actuator state (normal or overheat). In case an alert occurs, like a low battery level, this node commands the TTS to inform about the issue.
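As a concrete illustration of the vision node’s 3D position computation, the following minimal Python sketch back-projects the detected ArUco centroid pixel to a camera-frame 3D point using the depth image and the pinhole intrinsics. The function name and the way the depth image and intrinsics are obtained are assumptions for illustration, not the authors’ actual code.

```python
import numpy as np

def centroid_to_3d(u, v, depth_image, fx, fy, cx, cy):
    """Back-project the marker centroid pixel (u, v) to camera-frame XYZ in meters.

    depth_image is the registered depth frame of the head RGB-D camera;
    fx, fy, cx, cy are the intrinsic parameters from the camera calibration.
    """
    z = float(depth_image[v, u])        # depth reading at the centroid pixel
    if z <= 0.0 or np.isnan(z):
        return None                     # no valid depth measurement at that pixel
    x = (u - cx) * z / fx               # pinhole model back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])          # point expressed in the camera frame
```

In the real system, the resulting point is published on a ROS topic; it would presumably then be transformed into the robot’s base frame before being used for base correction and grasp planning.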

2.2.2. Routine for Picking a Container from the Table

This routine consists of detecting and picking up a requested container placed on a table. This process is illustrated in the flowchart shown in Figure 5 and in the sequence of frames shown in Figure 6. The routine is assumed to start with the robot positioned in front of the table where the container is located, and the execution steps are as follows:
  • Look for the container: The coordinator initiates the process by searching for the specified ArUco marker ID received by the user’s request. The process is represented by the finite state machine shown in Figure 7. It begins with a simultaneous tilt and scan of the camera using vision and head node services, which continue until the marker is detected. Once detected, the image is centered on the marker, and the coordinates are computed. If the marker is not found during centering, the head moves slightly to re-detect the marker, addressing any issues caused by delays or occlusions in the initial position computation.
  • Correct base position: The optimal distance for container pick-up has been empirically determined to be between 0.8 and 0.9 m. Once the marker position is obtained, the base is moved until it falls within this range.
  • Gripper centering: The marker position obtained may have some errors, which can cause the gripper to miss the container handle, resulting in a failed pick-up. To avoid this, the camera on the gripper is used. This process follows the finite state machine shown in Figure 8. Once the gripper is positioned in front of the marker, the end-effector moves along the Cartesian axes proportionally to the error between the detected centroid and the image center. If the marker is not detected, a spiral movement in that plane is executed to locate the marker. Once the image is centered, the pose of the end-effector provides a more accurate position of the marker in height and width, while the distance to the marker is still obtained from the RGB-D camera (a sketch of this centering step is given after this list).
  • Pick up the container: Once the pose of the container is determined, the pick-up process is executed. The goal in this step is to achieve the grasping position shown in the scheme of Figure 9. This involves moving the end-effector in front of the container handle with the gripper open. The end-effector then brings the grasping frame to the determined ArUco position and closes the gripper. After securing the handle, the end-effector is elevated and retracted to a distance of 0.6 m from the base. The pick-up process concludes by moving the base backward 0.2 m.
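To make the gripper-centering step more concrete, below is a minimal sketch of the proportional correction and spiral-search behavior described above. The gain, step sizes, and the helper functions detect_marker and jog_end_effector are hypothetical; the authors’ implementation is organized as the finite state machine of Figure 8.

```python
import math

K_P = 0.0005          # assumed gain: meters of end-effector motion per pixel of error
TOL_PX = 5            # stop when the centroid is this close to the image center

def center_gripper(image_w, image_h, detect_marker, jog_end_effector, max_iters=200):
    """Iteratively align the gripper camera's image center with the ArUco centroid."""
    angle, radius = 0.0, 0.0
    for _ in range(max_iters):
        detection = detect_marker()                       # returns (u, v) or None
        if detection is None:
            # Marker lost: take a small step along an expanding spiral to find it again.
            angle += 0.5
            radius += 0.002
            jog_end_effector(radius * math.cos(angle), radius * math.sin(angle))
            continue
        u, v = detection
        err_u, err_v = u - image_w / 2.0, v - image_h / 2.0
        if abs(err_u) < TOL_PX and abs(err_v) < TOL_PX:
            return True                                   # image is centered on the marker
        # Proportional correction in the plane parallel to the image.
        jog_end_effector(-K_P * err_u, -K_P * err_v)
    return False
```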

2.2.3. Routine for Picking a Container from the Shelf

This routine consists of detecting and picking a requested container placed on a designated shelf. This process is an adaptation of the previous routine for picking a container from a table, adjusted for picking from elevated positions. It follows the flowchart shown in Figure 10 and the sequence of images illustrating the task in Figure 11. Starting with the robot positioned in front of the shelf where the container is located, the steps are as follows:
  • Look for the container: This sub-process is the same as the one used for picking from the table and follows the finite state machine shown in Figure 7.
  • Correct base position: Again, this sub-process is the same as the one implemented before.
  • Gripper centering: This sub-process includes a modification. First, it moves backward 0.3 m to create space for raising the gripper to an elevated position. Then, the previously presented finite state machine from Figure 8 is executed. Once the more precise location of the container is obtained, the robot moves forward 0.3 m.
  • Pick up the container: The ‘pick’ described before is executed by grabbing the container with the end-effector and then retreating from the shelf, as shown in Figure 11c.
  • Post-pick: Finally, the end-effector is commanded through predefined intermediate safe poses between the high pose and the final retreated pose of Figure 11d. For this trajectory, a constraint of maintaining the orientation is set via MoveIt (a minimal sketch of such a constraint follows this list).
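As an illustration of the orientation constraint mentioned in the post-pick step, the sketch below shows how such a constraint can be set with the MoveIt Python interface. The planning group name, reference frame, tolerances, and the intermediate pose are assumptions for a TIAGo-like setup rather than the authors’ exact configuration.

```python
import sys
import rospy
import moveit_commander
from moveit_msgs.msg import Constraints, OrientationConstraint

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("post_pick_sketch")

group = moveit_commander.MoveGroupCommander("arm_torso")   # assumed planning group

# Keep the current end-effector orientation (container upright) along the path.
oc = OrientationConstraint()
oc.header.frame_id = "base_footprint"                      # assumed reference frame
oc.link_name = group.get_end_effector_link()
oc.orientation = group.get_current_pose().pose.orientation
oc.absolute_x_axis_tolerance = 0.1
oc.absolute_y_axis_tolerance = 0.1
oc.absolute_z_axis_tolerance = 0.1
oc.weight = 1.0

constraints = Constraints()
constraints.orientation_constraints.append(oc)
group.set_path_constraints(constraints)

# Hypothetical intermediate safe pose: lower the arm while keeping the orientation.
intermediate = group.get_current_pose().pose
intermediate.position.z -= 0.20

group.set_pose_target(intermediate)
group.go(wait=True)
group.clear_path_constraints()
```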

2.2.4. Routine for Picking a Container from the User

This routine facilitates the picking of a container offered by the user. The process starts with the robot in front of the user’s location and proceeds as follows, represented in Figure 12 and illustrated by the sequence in Figure 13:
  • Pre-pick: The routine begins by commanding the arm to a pre-defined configuration as shown in Figure 13a.
  • Pick: The end-effector is commanded to move forward. An advisory speech message is issued to the user, instructing them to place the container handle between the finger grippers; after a brief wait, the grab is executed.
  • Post-pick: The end-effector is retracted closer to the torso, thereby completing the routine.

2.2.5. Routine of Placing the Container on the Table

This routine involves placing the container on a table. The process starts with the robot positioned in front of the table, with knowledge of its height and distance, and with the container secured by the end-effector. The task is described by the diagram in Figure 14, while Figure 15 shows the sequence of frames:
  • Pre-place: The process starts with moving the end-effector over the designated position.
  • Place: The end-effector is lowered to a position 1 cm above the known table height, and the gripper is opened (a sketch of this lowering motion follows this list).
  • Post-place: Subsequently, the end-effector is raised and retracted, completing the sequence by moving the arm to its home configuration.
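The lowering motion of the place step can be sketched with MoveIt’s Cartesian path planning, bringing the end-effector to 1 cm above the known table height. This is an illustrative sketch, not the authors’ implementation; the group name and table height value are assumptions, and error handling is omitted.

```python
import sys
import copy
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("place_sketch")

group = moveit_commander.MoveGroupCommander("arm_torso")   # assumed planning group
TABLE_HEIGHT = 0.75                                        # known table height in meters (assumed)

target = copy.deepcopy(group.get_current_pose().pose)
target.position.z = TABLE_HEIGHT + 0.01                    # stop 1 cm above the table surface

# Plan a straight-line descent; execute only if (nearly) the whole path was found.
plan, fraction = group.compute_cartesian_path([target], 0.01, 0.0)
if fraction > 0.95:
    group.execute(plan, wait=True)
    # The gripper would then be opened through its controller, and the arm raised and retracted.
```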

2.2.6. Routine of Delivering the Container

This is the complete routine that integrates both navigation and manipulation processes. For this task, the user must specify through the interface the ID number of the container to be picked up, the name of the location where it is situated (e.g., table), the destination point for delivery, and, if the container is to be delivered to the user, whether it should be dropped or held for the user to access the contents only.
The sequence is shown in the flowchart of Figure 16; the steps in this routine are as follows (a simplified sketch of this sequencing is given after the list):
  • Navigate to the container: The routine starts with the robot navigating to the point where the user has indicated the container is located.
  • Pick up the container: The point at which the container is located determines the routine that is executed. This is one of the previously presented routines, namely picking from a table or shelf, or picking from the user.
  • Navigate to the delivery point: With the container securely held, the robot is commanded to navigate to a position near the delivery point. If the point is the bed, upon arrival, the robot notifies the user of its presence and the action it will perform (holding or dropping the container). Then, the robot navigates to the delivery point.
  • Deliver container: If the delivery point is a table, the robot performs the previously described routine of placing the container. If the delivery point is to the user, the process is illustrated in Figure 17 and is as follows: The routine begins by moving the end-effector forward to the user. The robot then notifies the user of the action it will perform (either holding or dropping the container). It executes the action by either maintaining its grip or opening the gripper to release the container. After a brief wait, the end-effector is retracted to a distance of 0.6 m. If the action involves dropping the container, the arm is then commanded to return to its home position.
  • Navigation to rest: Finally, the robot is commanded to navigate to a rest POI to finish and wait for another request.
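The sequencing performed by the coordinator can be summarized with the following simplified Python sketch. The routine and navigation helpers shown here are illustrative stand-ins for the coordinator’s calls to the mid-level nodes (made via ROS services and topics in the real system), not the authors’ actual interfaces.

```python
# Hypothetical stand-ins for the mid-level node interfaces.
def navigate_to(poi):          print("navigating to POI:", poi)
def pick_from_table(cid):      print("picking container", cid, "from the table")
def pick_from_shelf(cid):      print("picking container", cid, "from the shelf")
def pick_from_user():          print("picking the container offered by the user")
def place_on_table():          print("placing the container on the table")
def deliver_to_user(hold):     print("delivering to the user, hold =", hold)
def announce(text):            print("TTS:", text)

def deliver_container(container_id, origin, destination, action="hold"):
    """Chain navigation and manipulation routines for one delivery request."""
    navigate_to(origin)                              # 1. go to where the container is located
    if origin == "user":                             # 2. pick with the routine matching the origin
        pick_from_user()
    elif origin == "shelf":
        pick_from_shelf(container_id)
    else:
        pick_from_table(container_id)
    navigate_to(destination)                         # 3. go to the delivery point
    if destination == "user":
        announce("I will %s the container" % action)
        deliver_to_user(hold=(action == "hold"))
    else:
        place_on_table()
    navigate_to("rest")                              # 4. return to the rest POI

deliver_container(container_id=3, origin="table", destination="user", action="drop")
```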

2.2.7. User Web App

For the user interface (UI), a React-based web application (see https://es.react.dev/, date accessed 10 June 2024) was created. When this application is launched, it is served on a local port and IP address of the robot’s network. Therefore, this UI can be used on any device connected to the TIAGo network by accessing this address, e.g., the user’s personal phone. In addition, roslibpy (see https://roslibpy.readthedocs.io/en/latest/, date accessed 14 July 2024) is used for interfacing with the TIAGo ROS software (distro Ferrum, Ubuntu 18.04 LTS and ROS Melodic LTS), whereby the commands transmitted to the robot by the app are conveyed via ROS topics and services.
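For illustration, the snippet below shows how a client can forward a delivery request over rosbridge with roslibpy, in the spirit of what the web app does. The host, port, topic name, and message format are assumptions rather than the actual interface of the system.

```python
import roslibpy

# Connect to the rosbridge server exposed on the robot's network (address assumed).
client = roslibpy.Ros(host='tiago-network-address', port=9090)
client.run()

# Publish a delivery request on an illustrative topic; the real system defines its own
# ROS topics and services for this purpose.
request_topic = roslibpy.Topic(client, '/assistance/delivery_request', 'std_msgs/String')
request_topic.publish(roslibpy.Message(
    {'data': 'container:3;origin:table;destination:user;action:hold'}))

request_topic.unadvertise()
client.terminate()
```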
Furthermore, this UI is designed with accessibility in mind, taking into account the user profile. Consequently, the interaction can be conducted through two actions, namely change and selection, which can be performed, for example, by using the TAB and ENTER keys. This allows the adaptation of the web app to the user’s capabilities and control devices.
The UI has two modes: one for the common user and another for an expert or administrator user. The mode is selected on the main page when connecting (Figure 18a), depending on whether the admin user is accessed.
The common user page allows the selection of the ID of the container to be picked, its location, the delivery destination, and the action to be taken if the user is the intended recipient. These options are depicted in Figure 18b. Additionally, the app indicates whether it is connected to the robot’s ROS network and the current state of the robot, so when the robot executes a task, the action and the moving element are noted there.
For the administrator, a greater range of options is available, as shown in Figure 18b. “Automatic” is equivalent to the common user page, while the “Teleop” section enables control of specific movements, including the base, the pick-from-user routine, the opening of the gripper, and sending the arm to its home position. The “Map” section displays the current map and allows for the relocalization of the robot. The “Setting” section displays the diagnostic information from the robot (battery level, task state, element in movement, localization coefficient, etc.) and allows for the entry of the IP address of the ROS software if it differs from the web server.

3. Results

This section presents the experimental results. Two types of experiments were conducted: one focused on the execution of the different routines and another focused on the interaction with users. Both types of experiments were conducted in a home environment containing a table, a shelf, and a bed where the user was or would be located.

3.1. Execution Experiments

A meticulous examination of the execution of the routines was conducted in order to identify any potential shortcomings. Two studies were conducted: one to analyze the main manipulation routine of picking up the container from the table and another to examine the execution of the delivery combinations.

3.1.1. Experiments on Picking from Table

During the execution of the delivery from the table to the user, various initial positions of the container were set. The variation involved changing the initial position of the container relative to the saved POI to which the robot navigated to execute the picking routine, differing along both horizontal axes. Figure 19 and Table 1 show the resulting time measurements.
The results indicate that the robot completes the routine in less time when the container is positioned at the optimal manipulation distance upon arrival at the POI. However, when the container is positioned slightly off-center from the POI, there is a slight increase in execution time. It should be noted that every execution was successfully completed.

3.1.2. Experiments on Delivering

This experiment encompasses the entire delivery routine, including the different combinations of routines. All experiments were successfully completed, and Table 2 presents the execution time measurements.
A preliminary analysis of the data indicates that picking from the table and shelf appears to be more time-consuming than picking from the user and placement tasks. This discrepancy may be attributed to the additional steps required for container detection, which necessitate a more complex process. Furthermore, picking from the shelf requires additional base corrections and post-pick movements. Conversely, picking from the user and placement tasks involve more predefined movements that do not depend on detection.
However, the efficacy of each task should be noted, including the minimal variation in time, which suggests that the routines are highly reliable in execution.

3.2. User Experiments

To evaluate the usability and usefulness of our assistance approach, we conducted user experiments. These were designed to collect qualitative data on user interactions, performance, satisfaction, and impressions. The following subsections detail the methodology, participant demographics, experimental set-up, and survey results.

3.2.1. Methodology

The user experiments follow a structured methodology to ensure consistent and reliable data collection through user surveys. The experiment is divided into two phases: pre-demo and post-demo:
  • Pre-demo: The patients are introduced to the experimental setup and informed about the robot and its technical features, but not about the specific tasks and implementations it can perform, e.g., “the robot has a camera”. They are then asked about the potential for assistance.
  • Post-demo: The patients act as users of the robot’s assistance and are asked to command the desired assistance routines from their own mobile phones. They are then surveyed about their satisfaction, the usefulness of the robot, the attractiveness of the user interface, and their future needs and suggestions for improvements.

3.2.2. Participant Demographics

The experiments were conducted with six individuals from the FLM (the Spanish acronym of Fundación de Lesionado Medular), a medical center for the treatment, rehabilitation, and residency of people with reduced mobility due to spinal cord injuries. The patients have limited mobility in their upper bodies. Participants were middle-aged, with a distribution of 60% men and 40% women.

3.2.3. Experiment Set-Up

Experiments were conducted in a common room of the FLM with the set-up presented in Figure 20. The room is equipped with a table, a shelf, and a bed for testing each delivery routine. The assistance task was pre-configured for this environment by mapping the room and storing the POIs. Each participant was introduced to the same robot and set-up to ensure consistency. During the demo execution, the patient was positioned in their own wheelchair next to the bed to simulate being in bed.

3.2.4. Survey Results

The results of the user experiments provided valuable insights into user expectations and potential usability issues. Table 3 and Table 4 present the answers to the surveys and feedback from users. Key findings are as follows:
  • In general, participants were satisfied and impressed with the robot’s potential, the assistance demonstration, the usability, and the user interface, giving high scores to all of them.
  • Regarding the demo execution—although they found it useful, they felt the execution was slow. This is notable because the robot’s movements were deliberately executed at this speed both for safety and to meet safety expectations.
  • Regarding their needs after seeing the demo—they expressed a desire for the robot to pick up objects not only from the table and shelf but also from the floor and cupboard. They explained that they often dropped objects and needed assistance from a nurse to retrieve them. They also suggested that the robot should be able to pick up objects from the wardrobe because they needed to pick up clothes from there without help as well.
  • Regarding the user interface (UI)—the majority of users were pleased. However, some noted the need for larger typography, while others suggested the inclusion of pictures to facilitate easier understanding.
Despite the small number of participants, it is interesting to note that they came to similar conclusions regarding their expectations, impressions, and needs. This consistency helps to focus future routine implementations and highlights the need to optimize execution time—or at least manage expectations around it—due to safety concerns.

4. Discussion

The overall aim of the presented approach is to implement an assistive system for individuals with reduced mobility that facilitates the picking, placing, and delivery of objects within a domestic environment. For this purpose, a system capable of operating in a dynamic, human-designed space is needed, which requires safe interactions with both humans and the environment. However, the predefined tasks implemented in the system may not fully cover the needs of users, making it essential to conduct tests with potential users. This will help assess the acceptance, usefulness, and performance of the implementation and gather feedback for future improvements.

4.1. System Performance

The experimental results indicate that the assistive system performs well in domestic rooms with the predefined elements for the task (a room with a bedridden user, a table, and shelves). The well-established navigation system and the implemented autonomous detection and manipulation routines made it possible to complete the tasks in all tests. However, user testing revealed several challenges. Users reported that while the system generally accomplished the assistive task, it was slow. There are two main reasons for this slowness: detection and safety.
The former involves detecting the containers and determining their positions in space. The user only gives a hint of the container’s location, for example, on the table, so locating it and adapting the robot for manipulation takes time. This is reflected in the fact that the routines that include this process are the slowest.
The latter is safety. To avoid collisions with the environment, the robot continues to plan and follow manipulation trajectories that are sometimes not the most optimal. It also executes them at a slow speed, so in the event of a collision, the safety system of the robot can stop the movement before it causes damage to the user or the environment. This is most evident in the shelf-picking task, where the trajectory it follows to lower the arm includes passing through pre-defined configurations that ensure the arm is confined to a safe space and does not tip the contents.
Moreover, physical safety needs to be considered, as well as the perception of safety by the users. Perceived safety improves when the robot’s motion is not performed at high speed and is predictable [31]. This was also considered when setting the speed of the robot, accompanied by continuous explanation of the actions through the TTS, balancing the need for efficiency with the comfort of users.

4.2. User Acceptance and Interaction

An important aspect of user testing is the acceptance of the robot by the users. After the presentation of the robot and before the demo, the users were already positive about the robot and thought it had a lot of potential. After the demo, this appreciation did not diminish, with high scores for satisfaction and usefulness. As mentioned at the beginning, user acceptance is important for the robot to fulfill its function and not to inconvenience the user.
This is complemented by the interaction through the UI. Users found the UI easy and comfortable to use due to its simplicity and the ability to operate it from their personal mobile phones. This eliminated potential technological barriers typically associated with this type of technology, making it as accessible as other devices that connect to their phones.

4.3. Feedback and Future Developments

User feedback is crucial for determining the development path of the assistance task according to their preferences [32]. In this case, the users expressed the need for the robot to be able to reach objects placed at higher levels, such as shelves, and found this feature useful in the demonstration.
However, they noted the inability of the robot to pick up objects from the floor, which was inconvenient as they had to ask for help if they dropped something. They were also interested in the ability to retrieve clothes from the wardrobe. This task would be more challenging to implement due to interactions with doors and the handling of more complex items such as clothes on hangers.
Regarding interaction, although they were satisfied with the control from their personal phones, they suggested the possibility of incorporating voice commands. This could make the interaction more natural and address situations where the user does not have their mobile device with them. Additionally, some noted the need for larger typography, while others suggested the inclusion of pictures to facilitate easier understanding. Despite these suggestions, they felt comfortable and considered the interaction with the UI to be easy.

5. Conclusions

In summary, we present a functional assistive robotic implementation for picking, placing, and delivering containers of objects using the TIAGo robot. The functionality of the application was validated through the execution of several tests. In addition, a series of experiments with users was conducted, who expressed positive impressions of the robot and the demonstration. Furthermore, feedback provided by the users will be incorporated into future improvements.
Developing an assistive system for individuals with reduced mobility holds great promise for improving independence and quality of life. Although initial results and user feedback are encouraging, more assistive tasks may need to be implemented.

Author Contributions

Conceptualization, J.G.V., A.A. and C.B.; methodology, J.G.V., A.A. and C.B.; software, F.J.N.-C., A.D.M.-M. and J.G.V.; validation, F.J.N.-C., A.D.M.-M. and J.G.V.; formal analysis, F.J.N.-C. and A.D.M.-M.; investigation, F.J.N.-C., A.D.M.-M. and J.G.V.; resources, A.A. and C.B.; data curation, A.D.M.-M., A.A. and C.B.; writing—original draft preparation, F.J.N.-C.; writing—review and editing, J.G.V.; visualization, F.J.N.-C., A.D.M.-M. and J.G.V.; supervision, J.G.V., J.A.G.D., A.A. and C.B.; project administration, J.G.V., J.A.G.D., A.A. and C.B.; funding acquisition, J.G.V., A.A. and C.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Asociación Inserta Innovación (part of Grupo Social ONCE) in the context of the project ROBOASIST2.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the nature of the research being low-risk and involving only voluntary human participants who provided informed consent.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

During the writing process, the artificial intelligence tools ChatGPT-4o and ChatGPT-3o were employed to enhance and verify the accuracy of orthography and grammar. Their outputs were always reviewed and edited by the authors. The contributions and content of the paper are entirely from the authors.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ArUco: Augmented Reality University of Cordoba
FLM: Fundación de Lesionado Medular
ID: identification
POI: point of interest
ROS: robot operating system
TTS: text-to-speech
UI: user interface

References

  1. Eckert, M.J.; Martin, M.J. Trauma: Spinal Cord Injury. Surg. Clin. N. Am. 2017, 97, 1031–1045. [Google Scholar] [CrossRef] [PubMed]
  2. Silva, N.A.; Sousa, N.; Reis, R.L.; Salgado, A.J. From basics to clinical: A comprehensive review on spinal cord injury. Prog. Neurobiol. 2014, 114, 25–57. [Google Scholar] [CrossRef] [PubMed]
  3. Purves, D.; Augustine, G.J.; Fitzpatrick, D.; Hall, W.C.; LaMantia, A.S.; Mooney, R.D.; Platt, M.L.; White, L.E.; Coquery, J.M.; Gailly, P.; et al. Neurosciences, 6th ed.; Sinauer Associates: Sunderland, MA, USA, 2019. [Google Scholar]
  4. Bemelmans, R.; Gelderblom, G.J.; Jonker, P.; de Witte, L. Socially Assistive Robots in Elderly Care: A Systematic Review into Effects and Effectiveness. J. Am. Med. Dir. Assoc. 2012, 13, 114–120.e1. [Google Scholar] [CrossRef] [PubMed]
  5. Salichs, M.A.; Castro-González, Á.; Salichs, E.; Fernández-Rodicio, E.; Maroto-Gómez, M.; Gamboa-Montero, J.J.; Marques-Villarroya, S.; Castillo, J.C.; Alonso-Martín, F.; Malfaz, M. Mini: A New Social Robot for the Elderly. Int. J. Soc. Robot. 2020, 12, 1231–1249. [Google Scholar] [CrossRef]
  6. Maroto-Gomez, M.; Carrasco-Martinez, S.; Marques-Villarroya, S.; Malfaz, M.; Castro-Gonzalez, A.; Salichs, M.A. Bio-inspired Cognitive Decision-making to Personalize the Interaction and the Selection of Exercises of Social Assistive Robots in Elderly Care. In Proceedings of the IEEE International Workshop on Robot and Human Communication RO-MAN, Busan, Republic of Korea, 28–31 August 2023; pp. 2380–2386. [Google Scholar] [CrossRef]
  7. Eirale, A.; Martini, M.; Tagliavini, L.; Gandini, D.; Chiaberge, M.; Quaglia, G. Marvin: An Innovative Omni-Directional Robotic Assistant for Domestic Environments. Sensors 2022, 22, 5261. [Google Scholar] [CrossRef] [PubMed]
  8. Gross, H.M.; Mueller, S.; Schroeter, C.; Volkhardt, M.; Scheidig, A.; Debes, K.; Richter, K.; Doering, N. Robot companion for domestic health assistance: Implementation, test and case study under everyday conditions in private apartments. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 5992–5999. [Google Scholar] [CrossRef]
  9. Hall, A.K.; Backonja, U.; Painter, I.; Cakmak, M.; Sung, M.; Lau, T.; Thompson, H.J.; Demiris, G. Acceptance and perceived usefulness of robots to assist with activities of daily living and healthcare tasks. Assist. Technol. 2019, 31, 133–140. [Google Scholar] [CrossRef] [PubMed]
  10. Ghadage, D.; Bagde, R.; Jha, S.; Dhadi, M.; Barhate, C. A Review On Current Technological Advancements in Prosthetic Arms. In Proceedings of the ACCESS 2023—2023 3rd International Conference on Advances in Computing, Communication, Embedded and Secure Systems, Kalady, Ernakulam, India, 18–20 May 2023; pp. 328–333. [Google Scholar] [CrossRef]
  11. Huamanchahua, D.; Toledo-Garcia, P.; Aguirre, J.; Huacre, S. Hand Exoskeletons for Rehabilitation: A Systematic Review. In Proceedings of the 2022 IEEE International IOT, Electronics and Mechatronics Conference, IEMTRONICS 2022, Toronto, ON, Canada, 1–4 June 2022. [Google Scholar] [CrossRef]
  12. Aparna, R.P.; Iyer, V.; Dhivya, J. Advancing Exoskeleton Research: A Comprehensive Review. In Proceedings of the International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation, ICAECA 2023, Coimbatore, India, 16–17 June 2023. [Google Scholar] [CrossRef]
  13. Serrano, D.; Copaci, D.; Arias, J.; Moreno, L.E.; Blanco, D. SMA-Based Soft Exo-Glove. IEEE Robot. Autom. Lett. 2023, 8, 5448–5455. [Google Scholar] [CrossRef]
  14. Dragoi, M.; Mocanu, I.; Cramariuc, O. Object Manipulation for Assistive Robots. In Proceedings of the 2021 9th E-Health and Bioengineering Conference, EHB 2021, Iasi, Romania, 18–19 November 2021. [Google Scholar] [CrossRef]
  15. Kyrarini, M.; Zheng, Q.; Haseeb, M.A.; Graser, A. Robot learning of assistive manipulation tasks by demonstration via head gesture-based interface. In Proceedings of the IEEE International Conference on Rehabilitation Robotics, Toronto, ON, Canada, 24–28 June 2019; pp. 1139–1146. [Google Scholar] [CrossRef]
  16. Jardón Huete, A.J.; Victores, J.G.; Martínez, S.; Giménez, A.; Balaguer, C. Personal autonomy rehabilitation in home environments by a portable assistive robot. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2012, 42, 561–570. [Google Scholar] [CrossRef]
  17. Naranjo-Campos, F.J.; Matías-Martínez, A.D.; Victores, J.G.; Álvarez, N.; Alcaide, A.; Balaguer, C. Manipulación de objetos dirigida a la asistencia de personas con movilidad reducida [Object manipulation aimed at assisting people with reduced mobility]. In Proceedings of the XLIII Conference on Automation (Libro de Actas de las XLIII Jornadas de Automática), Logroño, Spain, 7–9 September 2022; pp. 798–803. [Google Scholar] [CrossRef]
  18. Garrote, L.; Paulo, J.; Perdiz, J.; Peixoto, P.; Nunes, U.J. Robot-Assisted Navigation for a Robotic Walker with Aided User Intent. In Proceedings of the RO-MAN 2018—27th IEEE International Symposium on Robot and Human Interactive Communication, Nanjing, China, 27–31 August 2018; pp. 348–355. [Google Scholar] [CrossRef]
  19. Lin, Z.; Luo, J.; Yang, C. A teleoperated shared control approach with haptic feedback for mobile assistive robot. In Proceedings of the ICAC 2019—2019 25th IEEE International Conference on Automation and Computing, Lancaster, UK, 5–7 September 2019. [Google Scholar] [CrossRef]
  20. Mohebbi, A. Human-Robot Interaction in Rehabilitation and Assistance: A Review. Curr. Robot. Rep. 2020, 1, 131–144. [Google Scholar] [CrossRef]
  21. Hsu, P.E.; Hsu, Y.L.; Chang, K.W.; Geiser, C. Mobility assistance design of the intelligent robotic wheelchair. Int. J. Adv. Robot. Syst. 2012, 9, 244. [Google Scholar] [CrossRef]
  22. Paulo, J.; Peixoto, P.; Nunes, U.J. ISR-AIWALKER: Robotic Walker for Intuitive and Safe Mobility Assistance and Gait Analysis. IEEE Trans.-Hum.-Mach. Syst. 2017, 47, 1110–1122. [Google Scholar] [CrossRef]
  23. Navarro, J.L.O. Interfaz de Teleoperación Para el Manipulador Móvil Manfred. 2015. Available online: https://e-archivo.uc3m.es/entities/publication/0e72adc4-9c81-4ae1-839a-c669276d7b52 (accessed on 10 August 2024).
  24. Islam, J.; Ghosh, A.; Iqbal, M.I.; Meem, S.; Ahmad, N. Integration of Home Assistance with a Gesture Controlled Robotic Arm. In Proceedings of the 2020 IEEE Region 10 Symposium, TENSYMP 2020, Dhaka, Bangladesh, 5–7 June 2020; pp. 266–270. [Google Scholar] [CrossRef]
  25. Calzada, A.; Łukawski, B.; Victores, J.G.; Balaguer, C. Teleoperation of the robot TIAGo with a 3D mouse controller. In Proceedings of the Symposium on Robotics, Bioengineering, and Computer Vision (Libro de Actas del Simposio de Robótica, Bioingeniería y Visión por Computador), Badajoz, Spain, 29–31 May 2024; pp. 133–138, ISBN 978-84-9127-262-5. [Google Scholar]
  26. Jardón Huete, A. Metodología de Diseño de Robots de Asistenciales. Aplicación al Robot Portátil ASIBOT [Methodology for the Design of Assistive Robots. Application to the Portable Robot ASIBOT]. Ph.D. Thesis, Universidad Carlos III de Madrid, Leganés, Spain, 2006. [Google Scholar]
  27. Miseikis, J.; Caroni, P.; Duchamp, P.; Gasser, A.; Marko, R.; Miseikiene, N.; Zwilling, F.; Castelbajac, C.D.; Eicher, L.; Fruh, M.; et al. Lio-A Personal Robot Assistant for Human-Robot Interaction and Care Applications. IEEE Robot. Autom. Lett. 2020, 5, 5339–5346. [Google Scholar] [CrossRef] [PubMed]
  28. Barber, R.; Ortiz, F.J.; Garrido, S.; Calatrava-Nicolás, F.M.; Mora, A.; Prados, A.; Vera-Repullo, J.A.; Roca-González, J.; Méndez, I.; Mozos, Ó.M. A Multirobot System in an Assisted Home Environment to Support the Elderly in Their Daily Lives. Sensors 2022, 22, 7983. [Google Scholar] [CrossRef] [PubMed]
  29. Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Marín-Jiménez, M.J. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 2014, 47, 2280–2292. [Google Scholar] [CrossRef]
  30. Chitta, S. Moveit!: An introduction; Springer: Berlin/Heidelberg, Germany, 2016; Volume 625, pp. 3–27. [Google Scholar] [CrossRef]
  31. Rubagotti, M.; Tusseyeva, I.; Baltabayeva, S.; Summers, D.; Sandygulova, A. Perceived safety in physical human–robot interaction—A survey. Robot. Auton. Syst. 2022, 151, 104047. [Google Scholar] [CrossRef]
  32. Bhattacharjee, T.; Gordon, E.K.; Scalise, R.; Cabrera, M.E.; Caspi, A.; Cakmak, M.; Srinivasa, S.S. Is more autonomy always better? Exploring preferences of users with mobility impairments in robot-assisted feeding. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 181–190. [Google Scholar] [CrossRef]
Figure 1. Robot TIAGo and its elements used in the assistive task.
Figure 2. Object container with the ArUco marker, 3D printed.
Figure 3. Diagram of the software structure implemented.
Figure 4. Scanned images from cameras with detected ArUco markers plotted.
Figure 5. Flowchart of the routine of detecting and picking up the container from a table.
Figure 6. Sequence of frames showing the routine for picking a container from a table.
Figure 7. Finite state machine for the process of searching for the ArUco marker.
Figure 8. Finite state machine for the process of centering the gripper with the ArUco marker.
Figure 9. Grasping scheme.
Figure 10. Flowchart of the routine of detecting and picking up the container from the shelf.
Figure 11. Sequence of frames showing the routine for picking a container from the shelf.
Figure 12. Flowchart of the routine of picking up the container from the user.
Figure 13. Sequence of frames demonstrating the routine for picking from the user.
Figure 14. Flowchart of the routine of placing the container on a table.
Figure 15. Sequence of frames demonstrating the routine for placing the container on a table.
Figure 16. Flowchart of the implemented assistive delivery routine: this routine consists of navigating to the container’s location, picking it up, and delivering it to the user or table.
Figure 17. Sequence of frames of the delivery to the user (drop container option).
Figure 18. User interface web app.
Figure 19. Graph of time measurements for the pick-from-table routine. The X-axis represents the initial position of the container relative to the saved pick point, and the Y-axis represents the execution time. The blue diamond markers correspond to the initial positions at the optimal distance, the orange squares indicate initial positions closer to the containers, and the gray triangles represent the initial positions further away from the containers.
Figure 20. Set-up of the FLM room for user experiments. Map on the left, with furniture tagged in gray, the POIs in red and the patient location in blue. View of the room on right, with furniture and user tagged.
Table 1. Execution results of the pick-from-the-table routine. x_m and y_m are the mean initial coordinates with respect to the robot, dx_m is the robot’s travel distance to match the gripping distance, and t_e is the execution time.
y_m (m)        x_m (m)        dx_m (m)       t_e (s)
0.01 ± 0.00    0.89 ± 0.02    0 ± 0          39.5 ± 1.9
0.06 ± 0.01    0.74 ± 0.01    0.09 ± 0.01    51.5 ± 0.8
0.01 ± 0.01    1.02 ± 0.02    0.08 ± 0.01    46.6 ± 1.5
0.28 ± 0.00    0.85 ± 0.01    0 ± 0          43.1 ± 1.6
0.36 ± 0.03    0.75 ± 0.03    0.06 ± 0.02    59.5 ± 6.4
0.30 ± 0.01    0.97 ± 0.1     0.08 ± 0.01    50.4 ± 1.5
0.23 ± 0.00    0.83 ± 0.01    0 ± 0          46.9 ± 0.8
0.25 ± 0.03    0.76 ± 0.01    0.07 ± 0.01    55.8 ± 2.9
0.22 ± 0.02    0.92 ± 0.02    0.06 ± 0.01    55.9 ± 5.5
Table 2. Execution results of the routine of delivering the container. The origin is the POI where the container is located, the destination is where to deliver the container, t_m is the mean execution time, and σ is the standard deviation of the execution time.
Origin     Destination    t_m (s)    σ (s)
table      user           106.73     3.72
shelf      user           121.52     2.88
user       table          91.36      2.26
Table 3. The responses of users to surveys about the experiment.
                           User A    User B    User C    User D    User E    User F    Mean
Potential (1–5)            4.5       5         4         4         5         5         4.6
Satisfaction (1–5)         4         5         4         4         4         4         4.1
Usefulness (1–5)           4         5         4         3.5       4.5       5         4.3
UI attractiveness (1–5)    5         5         4         3         4         5         4.3
Table 4. Improvements and future needs noted by users.
Improvements: Faster execution, UI with larger typography, UI with pictures
Future needs: Voice control, picking up not only containers, opening a closet, picking up from the floor, tasks not only in the main room
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
