Article

A Modular Mobile Robotic Platform to Assist People with Different Degrees of Disability

by Jose M. Catalan 1,*, Andrea Blanco 1,*, Arturo Bertomeu-Motos 2,*, Jose V. Garcia-Perez 1, Miguel Almonacid 3, Rafael Puerto 1 and Nicolas Garcia-Aracil 1

1 Biomedical Neuroengineering Group, Bioengineering Institute, Universidad Miguel Hernández, 03202 Elche, Spain
2 Department of Software and Computing Systems, University of Alicante, 03690 San Vicente del Raspeig, Spain
3 Robotics and Rehabilitation Laboratory, Columbia University, New York, NY 10027, USA
* Authors to whom correspondence should be addressed.
Appl. Sci. 2021, 11(15), 7130; https://doi.org/10.3390/app11157130
Submission received: 21 June 2021 / Revised: 27 July 2021 / Accepted: 29 July 2021 / Published: 2 August 2021
(This article belongs to the Special Issue Robotic Platforms for Assistance to People with Disabilities)

Abstract

Robotic systems that support elderly people in living independently and assist disabled people in carrying out the activities of daily living have demonstrated good results. Basically, there are two approaches: one is based on mobile robot assistants, such as Care-O-bot, PR2, and Tiago, among others; the other uses an external robotic arm or a robotic exoskeleton fixed or mounted on a wheelchair. In this paper, a modular mobile robotic platform to assist moderately and severely impaired people, based on an upper limb robotic exoskeleton mounted on a robotized wheelchair, is presented. This mobile robotic platform can be customized for each user’s needs by exploiting its modularity. Finally, experimental results obtained in a simulated home environment with a living room and a kitchen area, designed to simulate the interaction of the user with different elements of a home, are presented. In this experiment, a subject suffering from multiple sclerosis performed different activities of daily living (ADLs) using the platform in front of a group of clinicians composed of nurses, doctors, and occupational therapists. Afterwards, the subject and the clinicians answered a usability questionnaire. The results were quite good, but two key factors emerged that need to be improved: the complexity and the cumbersome appearance of the platform.

1. Introduction

There is evidence that early and intensive rehabilitation therapies are associated with better functional gains in patients with acquired brain damage [1]. Rehabilitation robots have shown good results in delivering high-intensity therapies and maximizing patients’ recovery [2,3,4]. However, some motor functions cannot be recovered. In such cases, assistive robotics have shown good results in assisting patients with acquired brain damage in performing activities of daily living and/or in supporting elderly people in staying active, socially connected, and living independently. Principally, there are two kinds of assistive robotic devices: one is based on mobile robot assistants, such as Care-O-bot, PR2, and Tiago, among others; the other is based on the use of an external robotic arm or a robotic exoskeleton fixed or mounted on a wheelchair.
The second approach is based on the use of: (i) an external robotic arm fixed or mounted on a wheelchair; or (ii) an exoskeleton robotic device. JACO and iARM are two of the most popular external robotic arms fixed or mounted on wheelchairs. Both robotic arms were designed to be mounted on a user’s motorized wheelchair; they have six degrees of freedom and can reach objects at a distance of 90 cm [5]. A study on the practical demands of the potential users of external robotic arms and upper limb exoskeletons for assistance with ADLs can be found in [6]. The study concluded that eating and hairdressing, as well as cleaning, handling food, dressing, and moving nearby items, were the ADLs that received relatively high scores regarding the necessity of external robotic arms. The FRIEND robotic platform is an example of a well-known external robotic arm that assists disabled people in performing ADLs. The FRIEND platform, which belongs to the group of intelligent wheelchair-mounted manipulators, is intended to support disabled people with impairments of the upper limbs in ADLs [7]. In contrast, dressing, toilet use, transfer, wheelchair control, moving nearby items, and handling food showed high demand regarding the necessity of upper limb exoskeletons. Kiguchi et al. presented a mechanism and control method of a mobile exoskeleton robot for three-degree-of-freedom upper-limb motion assistance (shoulder vertical and horizontal flexion/extension and elbow flexion/extension motion assistance) [8]. In addition, Meng et al. presented a mobile robotic exoskeleton with six degrees of freedom (DOFs) based on a wheelchair [9].
In this paper, a mobile robotic platform for assisting moderately and severely impaired people in performing daily activities and fully participating in society is presented. The mobile robotic platform was based on an upper limb robotic exoskeleton mounted on a robotized wheelchair. The platform is modular and composed of different hardware components: an unobtrusive and wireless hybrid brain/neural–computer interaction (BNCI) system (electroencephalography (EEG) and electrooculography (EOG)) [10], a physiological signal monitoring system, an electromyography (EMG) system, a rugged, small form-factor, high-performance computer, a robotized wheelchair, RGB-D cameras, a voice control system, eye-tracking glasses, a small monitor, a robotic arm exoskeleton attached to the wheelchair, and a robotic hand exoskeleton including a mechatronic device to control the pronation/supination of the arm. Moreover, the robotic exoskeleton can be replaced with an external robotic device if needed. The platform has open-source software components as well, such as algorithms to estimate the user’s intention based on the hybrid BNCI system, to process the user’s physiological reactions, to estimate the indoor location and to navigate, to estimate gaze and to recognize objects, to compute the 3D pose of objects and of the user’s mouth, to recognize user activity, and a high-level controller to control the robotic exoskeleton or external robotic device, as well as the environment and the wheelchair control system. The modularity of the presented mobile robotic platform can be exploited by adapting the multimodal interface to the residual capabilities of the disabled person. In particular, the platform can be adapted mainly to three groups of end users with different residual capabilities:
  • Group 1: users with residual motor capabilities to control the arm and/or hand, but who need assistance to carry out activities of daily living in an efficient way. In this group of users, residual EMG signals could be used to control a wearable robot to assist in performing ADLs. In addition, the multimodal interface could be composed of a voice semantic recognition system (for users with non-speech disorders) or a wearable EOG system (for users with speech disorders) to tune some parameters of the high-level controller of the wearable robot and to interact with the user control software, a commercial wearable device for physiological signal monitoring, and RGB depth cameras to sense and understand the environment and context to automatically recognize the abilities necessary for different ADLs;
  • Group 2: users without functional control of the arm and/or hand and who are unable to speak (due to a speech disorder or aphasia). In this group, the multimodal interface could be composed of a hybrid BMI system to send commands to the high-level control of the wearable robot, a wearable EOG system to interact with the user control software, a commercial wearable device for physiological signal monitoring, and RGB depth cameras to sense and understand the environment and context to automatically recognize the abilities necessary for different ADLs;
  • Group 3: users without functional motor control of the arm and/or hand, with speech disorders, and with limited ability to control the movement of their eyes. In this case, the multimodal interface could be composed of a BMI system to send commands to the high-level control of the wearable robot and to interact with the user control software, a commercial wearable device for physiological signal monitoring, and RGB depth cameras to sense and understand the environment and context to automatically recognize the abilities necessary for different ADLs.
For users belonging to Groups 1 and 2, a set of application scenarios was identified as possible targets for the AIDE system: drinking tasks, eating tasks, pressing a sensitive dual switch, performing personal hygiene, touching another person, and so on. For users belonging to Group 3, the identified scenarios were related to communication, the control of home devices, and entertainment.

2. Modular Assistive Robotic Platform

The system is a fully autonomous prototype consisting mainly of a robotized wheelchair with autonomous navigation capabilities, a multimodal interface, and a novel arm exoskeleton attached to the wheelchair (Figure 1).

2.1. Biosignal Acquisition System

The proposed platform is capable of measuring and storing data from several physiological signals. Some of these signals are used for decision making when controlling the system, such as the EOG or EEG, while others are only used to monitor the condition of the patient (respiratory rate, galvanic skin response, heart rate, etc.). The system allows adapting the use of the physiological signals to the patient’s needs. In addition, new biosignals and processing techniques can be integrated. The performance, signal processing, and adaptation of the different physiological signals of the system have been tested in several studies [11,12,13,14,15].

2.1.1. ExG Cap

An ExG cap, developed by Brain Vision, can be used to perform three different biosignal measurements: (1) EEG acquisition, through eight electrodes, to perform BNCI tasks and allow the user to control the assistive robotic device and interact with the control interface; (2) EOG acquisition, using two electrodes placed on the outer canthi of the eyes, to detect left and right eye movements and provide the user with the opportunity to navigate through the menus of the control interface; (3) ECG acquisition, to be combined with the respiration and galvanic skin response (GSR) data in order to estimate the affective state of the user [16].
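As an illustration of how such left/right eye movements can be detected from a horizontal EOG channel, the minimal Python sketch below band-pass filters the trace and thresholds the deflections. The sampling rate, filter band, and threshold are assumptions chosen for illustration, not the parameters used on the platform.

```python
# Minimal sketch (not the authors' implementation): detecting left/right eye
# movements from a bipolar horizontal EOG channel by band-pass filtering and
# thresholding. Sampling rate, band, and threshold are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt

def detect_horizontal_saccades(eog, fs=200.0, threshold_uv=50.0):
    """Return +1 (right), -1 (left) or 0 (none) per sample of a horizontal EOG trace."""
    # Band-pass 0.1-10 Hz to keep slow saccadic deflections and reject drift/EMG.
    b, a = butter(2, [0.1 / (fs / 2), 10.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eog)
    events = np.zeros_like(filtered, dtype=int)
    events[filtered > threshold_uv] = 1    # rightward gaze deflection
    events[filtered < -threshold_uv] = -1  # leftward gaze deflection
    return events

# Example: 2 s of synthetic EOG with a rightward deflection after t = 1 s.
fs = 200.0
t = np.arange(0, 2, 1 / fs)
eog = 10 * np.random.randn(t.size)
eog[t > 1.0] += 120.0  # simulated deflection in microvolts
print(np.unique(detect_horizontal_saccades(eog, fs)))
```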

2.1.2. Electrocardiogram and Respiration Sensor

The system incorporates the Zephyr BioHarness™ (Medtronic Zephyr, Boulder, CO, USA) physiological monitoring telemetry device to measure the electrocardiogram (ECG) and the respiration rate. This device has a built-in signal-processing unit; therefore, we only applied a 0.004 Hz high-pass filter to remove the DC component of the signals. The heart rate (HR) was extracted from the ECG signal, and the time-domain indices of the heart rate variability (HRV) were also extracted. In particular, the SDANN was used as a feature of the HRV, defined as the standard deviation of the average normal-to-normal (NN) inter-beat intervals calculated over short periods. In this case, the SDANN was computed over a moving window of 300 s.
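A minimal sketch of how the SDANN feature can be computed from a sequence of NN intervals is shown below; it uses consecutive 300 s segments and synthetic data, so the window handling and parameter values are assumptions rather than the exact implementation running on the platform.

```python
# Minimal sketch (assumption, not the authors' code) of the SDANN feature:
# standard deviation of the mean NN interval computed over 300 s windows.
import numpy as np

def sdann(nn_intervals_ms, window_s=300.0):
    """nn_intervals_ms: sequence of NN (inter-beat) intervals in milliseconds."""
    nn = np.asarray(nn_intervals_ms, dtype=float)
    beat_times_s = np.cumsum(nn) / 1000.0          # time of occurrence of each beat
    window_means, t0 = [], 0.0
    while t0 + window_s <= beat_times_s[-1]:
        mask = (beat_times_s >= t0) & (beat_times_s < t0 + window_s)
        if mask.any():
            window_means.append(nn[mask].mean())   # average NN interval in this window
        t0 += window_s
    return float(np.std(window_means)) if len(window_means) > 1 else 0.0

# Example: about 30 min of synthetic beats around 800 ms with a slow drift.
nn = 800 + 40 * np.sin(np.linspace(0, 6 * np.pi, 2250)) + 10 * np.random.randn(2250)
print(f"SDANN = {sdann(nn):.1f} ms")
```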

2.1.3. Galvanic Skin Response

A GSR sensor, developed by Shimmer, measures the skin conductivity between two reusable electrodes mounted on two fingers of one hand. These data are used, together with the ECG and the respiratory rate, to estimate the affective state of the user [12]. GSR is a common measure in psychophysiological paradigms and is therefore often used in affective state detection. The GSR signal was processed using a band-pass filter of 0.05–1.5 Hz (the frequency range of the skin conductance response (SCR)) in order to remove artifacts.
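The band-pass step can be reproduced with a few lines of Python; the filter order, sampling rate, and synthetic signal below are illustrative assumptions, and only the 0.05–1.5 Hz band comes from the text.

```python
# Minimal sketch (assumed parameters, not the authors' code): band-pass filtering
# of a raw GSR trace in the 0.05-1.5 Hz skin conductance response (SCR) band.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_scr(gsr, fs=32.0, low_hz=0.05, high_hz=1.5):
    """Keep the SCR band of a GSR trace sampled at fs Hz."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, gsr)

# Example with a synthetic 60 s recording containing one SCR-like peak at t = 20 s.
fs = 32.0
t = np.arange(0, 60, 1 / fs)
gsr = 5.0 + 0.3 * np.exp(-((t - 20) ** 2) / 4.0) + 0.01 * np.random.randn(t.size)
print(filter_scr(gsr, fs).shape)
```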

2.2. Environment Perception and Control System

The system integrates a computer vision system to recognize the environment with which the system will interact [17]. In addition, it has a user interface so that the user can interact with the environment.

2.2.1. Computer Vision System

The activities of daily living (ADLs) require the capability to perform reaching tasks within a complex and unstructured environment. This problem should be solved in real time in order to deal with the possible disturbances that the objects may undergo during the interaction. Moreover, the objects are commonly textureless.
Several methods have been proposed to date. However, despite the great advances in the field (especially using deep learning techniques), the problem has not been solved effectively yet, especially for nontextured objects. Some authors have used commercial tracking systems such as OptiTrack or ART Track [18,19,20]. The main limitation of these devices is the need to modify the objects to be tracked by attaching optical markers in order to reconstruct their position and orientation. The main lines of research in the field of 3D textureless object pose estimation are methods based on geometric 3D descriptors, template matching, deep learning techniques, and random forests.
Our system incorporates a computer vision system based on the use of three devices (Figure 1). The first one is the Tobii Pro Glasses 2 eye-tracking system, which allows the user to select the desired object. The second one is the Orbbec Astra S RGB-D camera, used for the 3D pose estimation of the textureless objects with which the system can interact. This camera is attached directly to the back of the wheelchair by means of a structure that places it above the user’s head, focusing on the scene. Finally, a full HD 1080p camera able to work at 30 fps is placed in front of the user, under the screen. This camera is used to estimate the 3D pose of the user’s mouth. This information lets the system know which position the exoskeleton must reach for tasks such as eating or drinking.
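To give an idea of how RGB-D information is turned into 3D targets for the exoskeleton, the sketch below back-projects a pixel (for instance, the one fixated through the eye tracker) and its depth reading into a 3D point in the camera frame using the pinhole model. The intrinsic parameters are assumed values, not those of the Orbbec Astra S, and this is not the authors' full pose estimation pipeline [17].

```python
# Minimal sketch (pinhole model, assumed intrinsics): converting a selected pixel
# plus its depth reading from the RGB-D camera into a 3D point in the camera frame.
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (metres) to a 3D camera-frame point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example with intrinsics typical of a 640x480 depth stream (assumed values).
fx, fy, cx, cy = 570.0, 570.0, 320.0, 240.0
print(deproject(400, 260, 0.85, fx, fy, cx, cy))  # a point roughly 0.85 m in front of the camera
```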
This computer vision system was tested in real conditions with patients and was also thoroughly evaluated both qualitatively and quantitatively. The results and a more detailed explanation of the algorithms developed can be seen in [17].

2.2.2. User Interface

The system also has a screen attached to the wheelchair and located in front of the user (Figure 1). On this screen, the interface menus are displayed. The interface offers many different options to the user (e.g., go to another room, drink, grab an object, entertainment, etc.) and gives some information about the selected task and the exoskeleton status.

2.3. Mobile Platform

The mobile platform was based on the Summit XL Steel, from Robotnik. It has omnidirectional wheels that allow the user to move within a room. Furthermore, it has its own computer that executes a navigation system, which makes it possible to move between different rooms. Laser-based simultaneous localization and mapping (SLAM) is used to map each room, and the navigation and localization across the different rooms are performed using the adaptive Monte Carlo localization (AMCL) probabilistic localization system, as can be observed in Figure 2. In addition, the platform is equipped with two laser sensors that provide the wheelchair with an obstacle avoidance algorithm, increasing safety during navigation.
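Assuming the navigation system follows the standard ROS navigation stack (AMCL plus a move_base planner), which is common on Robotnik platforms although not stated explicitly in the paper, sending the wheelchair to a predefined location point could look like the sketch below; the node name, frame, and coordinates are illustrative.

```python
# Minimal sketch of sending a navigation goal, assuming a standard ROS
# navigation stack (move_base + AMCL) runs on the platform computer.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def go_to(x, y, yaw_w=1.0):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = yaw_w  # identity orientation by default
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("location_point_example")
    go_to(2.5, 1.0)  # e.g., a predefined "kitchen" location point (made-up coordinates)
```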

2.4. Electric Power System

Three batteries power the whole system. First, the mobile platform incorporates a 15/30 Ah @ 48 V LiFePO4 battery, which gives an autonomy of up to 10 h. The main computer of the system also has its own 91 Wh battery. The third and last battery is dedicated to supplying the arm and hand exoskeletons. This battery was built with Panasonic 18650B cells and has a capacity of 1.18 kWh, which gives an autonomy of up to 3 h in continuous operation.

2.5. Safety Buttons

Safety is a key issue in wearable robotics, so there are three emergency stop switches (Figure 3): (1) on the left side of the robotized wheelchair; (2) on the back side of the robotized wheelchair; and (3) connected through a wire to the left side of the wheelchair.
By default, there is only one emergency button that kills the exoskeleton power supply from the battery, located on a panel on the wheelchair. However, there is a second plug that offers the possibility of wiring a second button, which allows halting the device from a distance.
To restart the exoskeleton operation after a safety stop, the emergency button must be released and the lit green button of the left panel must be pressed.
The mobile robotic platform has its own emergency button located on the back side of the robotized wheelchair.
To restart the movement of the robotized wheelchair after a safety stop, the platform must be restarted by following these steps: (1) press the green CPU button for 2 s; (2) when the green LED of the CPU button is off, put the ON-OFF switch in the OFF position; (3) put the ON-OFF switch back in the ON position, which turns on the platform electronics again; (4) press the green CPU button for 2 s; and (5) release the safety button.

2.6. Assistive Robotic Devices

The system is able to integrate two different types of robotic devices to assist people with disabilities: (i) an external robotic arm; or (ii) a robotic exoskeleton. Both of them are mounted on the robotized wheelchair.
The control architecture of the robot is independent of the type of robot used as an assistive device. This architecture was implemented in two layers. The low layer implements the low-level control of the robotic device, i.e., a joint trajectory controller that executes the trajectories received from the high-level controller. The high layer is responsible for managing the communication of the robot with the rest of the system and also implements a motion planning system. This motion planning system relies on the learning by demonstration (LbD) method based on dynamic movement primitives (DMPs) proposed and evaluated in [21].
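To make the LbD/DMP idea concrete, the following is a minimal one-dimensional DMP sketch: it learns the forcing term of a demonstrated reach with locally weighted regression and then reproduces the motion toward the same goal. It is a toy illustration of the technique, not the motion planner of [21], and all gains and basis-function settings are assumptions.

```python
# Minimal one-dimensional dynamic movement primitive (DMP) sketch illustrating
# learning by demonstration; gains and basis widths are assumed values.
import numpy as np

class DMP1D:
    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
        self.n, self.az, self.bz, self.ax = n_basis, alpha_z, beta_z, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centres in phase space
        self.h = n_basis ** 1.5 / self.c                        # heuristic basis widths
        self.w = np.zeros(n_basis)

    def _forcing(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) / (psi.sum() + 1e-10) * x * (self.g - self.y0)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from one demonstrated trajectory (LWR)."""
        T = len(y_demo)
        self.tau = T * dt
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.ax * np.arange(T) * dt / self.tau)     # phase variable
        f_target = self.tau ** 2 * ydd - self.az * (self.bz * (self.g - y_demo) - self.tau * yd)
        s = x * (self.g - self.y0)
        for i in range(self.n):
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = np.sum(s * psi * f_target) / (np.sum(s * s * psi) + 1e-10)

    def rollout(self, T, dt):
        """Reproduce the learned motion by integrating the transformation system."""
        y, z, x, out = self.y0, 0.0, 1.0, []
        for _ in range(T):
            z += dt * (self.az * (self.bz * (self.g - y) - z) + self._forcing(x)) / self.tau
            y += dt * z / self.tau
            x += dt * (-self.ax * x) / self.tau
            out.append(y)
        return np.array(out)

# Example: learn and reproduce a minimum-jerk-like reach of 0.3 m over 2 s.
dt, T = 0.01, 200
s = np.linspace(0, 1, T)
demo = 0.3 * (10 * s ** 3 - 15 * s ** 4 + 6 * s ** 5)
dmp = DMP1D()
dmp.fit(demo, dt)
print(dmp.rollout(T, dt)[-1], demo[-1])  # both should end close to the 0.3 m goal
```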

2.6.1. Exoskeleton Robotic Device

An upper limb exoskeleton was designed with five active degrees of freedom corresponding to the following arm movements: shoulder abduction/adduction, shoulder flexion/extension, shoulder internal/external rotation, elbow flexion/extension, and wrist pronation/supination [11,12,21,22]. This device allows the user’s right arm to be moved to reach objects, thus facilitating the performance of ADLs (Figure 1).
In addition to the arm exoskeleton, an active hand exoskeleton was designed to assist the opening and closing of the right/left hand [23,24]. It consists of four independent modules anchored to a hand orthosis that actuate the movements of the thumb, index finger, and middle finger, and jointly move the ring and little finger. The configuration of the hand can be adapted according to the size of the hand.

2.6.2. Robotic Manipulator

The system can also integrate an external robotic manipulator. Experimental tests of the complete system were carried out with the JACO robot produced by Kinova (Boisbriand, QC, Canada) [25]. It is a very light manipulator (4.4 kg for the arm and 727 g for the hand) that can be installed on a motorized wheelchair (on the right or left side) to help people with upper extremity mobility limitations. It has seven degrees of freedom and a two- or three-finger gripper with a maximum opening of 17.5 cm. The JACO robot is capable of lifting loads of 3.5 kg to 4.4 kg and can reach objects within a radius of 75 cm.

2.7. Processing and Control System

The system has two computers, the main computer of the system and the computer integrated within the mobile robotic platform (Figure 1 and Figure 4).
The computer of the mobile robotic platform executes the navigation algorithms of the mobile platform using all the information from the sensors. It communicates with the main computer to execute the actions received from the system, as well as to inform the system about the current state during the navigation.
The main computer performs the communication between all the components of the system, processes all the information gathered from the sensors and cameras, and controls the arm and hand exoskeletons. This computer has its own 91 Wh battery.
Both computers communicate through a WiFi router. In this way, we can monitor the operation of the entire system by connecting an external computer to the router.

2.8. Finite State Machine

The integration of environmental data acquired by 3D sensors and user intentions has been evaluated in several studies [11,12,13,14,15]. The AIDE system also incorporates an activity recognition algorithm to improve the performance of the control interfaces, which has been evaluated with patients [16]. The experience gained in these studies resulted in two different state machines (Figure 5 and Figure 6). Both finite state machines (FSMs) describe the general operation of the system, so they have to be adapted according to the user’s residual capabilities, in other words, depending on the user control interfaces employed. The system can be controlled by means of EEG, EMG, EOG, gaze, voice commands, etc., and/or a combination of these. In this way, the system is adapted to the user’s needs or preferences. These FSMs, and their different functions, were evaluated and explained in the studies cited above.
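As a simplified illustration of how such an FSM can be driven by multimodal commands, the sketch below encodes a reduced eating cycle as a transition table; the state names and events are assumptions inspired by the task description in Section 3.3, not the exact states of Figures 5 and 6.

```python
# Minimal sketch of a task finite state machine driven by multimodal user events;
# states and events are simplified assumptions, not the platform's actual FSM.
TRANSITIONS = {
    ("idle", "object_selected"): "reach_object",
    ("reach_object", "object_reached"): "wait_grasp_command",
    ("wait_grasp_command", "eeg_close"): "bring_to_mouth",
    ("bring_to_mouth", "eog_continue"): "reach_object",
    ("bring_to_mouth", "eog_finish"): "release_object",
    ("release_object", "eeg_open"): "idle",
}

def step(state, event):
    """Advance the FSM; an 'abort' event always returns the system to idle."""
    if event == "abort":
        return "idle"
    return TRANSITIONS.get((state, event), state)  # unknown events leave the state unchanged

# Example: one eating cycle driven by user commands.
state = "idle"
for event in ["object_selected", "object_reached", "eeg_close", "eog_finish", "eeg_open"]:
    state = step(state, event)
    print(event, "->", state)
```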

2.8.1. Hygiene Task

Due to the complexity of this type of task, the hygiene task is primarily intended to allow the user to clean his/her face or brush his/her teeth. Figure 5 shows the state machine developed to carry out this type of task.

2.8.2. Preparing and Eating a Meal

In this scenario, the complex task of preparing and eating a meal is broken up into two subtasks. First, the user has to prepare a meal (Figure 6). In this FSM, the user takes the food from the fridge and heats it in the microwave. To do this, the user moves the wheelchair, opens/closes the fridge, opens/closes the microwave, and moves the robotic arm and hand exoskeleton to grasp and release the food tray. Several elements of the AIDE system are involved in this, such as the environmental control to move the wheelchair, the robotic arm and the hand exoskeleton, and the object detection and 3D pose estimation.
After this, the system will continue to the eating and drinking task. In this task, the wheelchair remains in the same position, so that the user only has to interact with the exoskeleton to manipulate the glass and the cutlery.

3. Experimental Session

The study presented in this paper aimed to determine the degree of usability of the complete system in its main application environment, assistance in activities of daily living. In other experiments carried out throughout the project [11,12,16,17,21], the different elements that compose the robotic system described here were validated, as well as the different user interfaces used (EEG, EOG, EMG) [13,14,15].
This experiment was performed in a home environment developed for this purpose. It consisted of a room divided into two areas, one that simulated the living room and the other the kitchen. These two areas were used by a user in order to simulate the interaction with different elements of a home.
For this purpose, we enlisted the collaboration of a subject suffering from multiple sclerosis. In addition, a group of clinicians composed of nurses, doctors, and occupational therapists provided us with an objective view of the system in its main field of application after observing this experiment (see Figure 7).
The results of this study were obtained by performing the System Usability Scale (SUS), which determines the degree of system usability as perceived by the user and the clinicians.

3.1. Interface

The whole system proposed for this experiment was controlled through an environmental control interface (ECI). This interface was developed under the AIDE project. It consists of three abstraction levels through which the user has to navigate in order to perform a specific activity (Figure 8). The first level shows the available rooms of the proposed scenario; the second level contains a grid with all the possible activities the user can perform; and the last level contains the actions the user can carry out within the selected activity. The control of this interface was performed with a hybrid EEG/EOG system [26]. In addition, the control of the ECI was supported by an intelligent system, proposed in [16], to help the navigation through the interface and streamline the completion of the desired task.
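A compact way to picture these three abstraction levels is a nested menu structure, as in the sketch below; the rooms, activities, and actions listed are illustrative examples loosely taken from the experiment, and the real ECI additionally integrates the activity recognition of [16].

```python
# Minimal sketch of the rooms -> activities -> actions structure of the ECI;
# the menu entries are illustrative assumptions, not the deployed interface.
ECI_MENU = {
    "kitchen": {
        "eating": ["grab spoon", "bring to mouth", "finish"],
        "fridge": ["open", "close"],
    },
    "living room": {
        "entertainment": ["turn on TV", "turn off TV"],
        "lighting": ["light lamp", "turn off lamp"],
    },
}

def navigate(room, activity, action):
    """Validate a room/activity/action path through the three menu levels."""
    if action in ECI_MENU.get(room, {}).get(activity, []):
        return f"execute: {room} / {activity} / {action}"
    raise ValueError("option not available at this menu level")

print(navigate("living room", "lighting", "light lamp"))
```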

3.2. Navigation

In this experiment, two different rooms were mapped, the kitchen and the living room, as can be seen in Figure 9. After mapping the different rooms, the user could freely navigate through them using the proposed interface. The navigation to each room was performed in two steps using the interface. First, three different location points were established so that a direct displacement to them could be performed. Then, a fine approach could be carried out through small displacements to reach the place where the task had to be executed.

3.3. Activities of Daily Living

Throughout the experiment, the user interacted with several elements of the home through the use of the environmental control interface (Figure 9). These elements were located in two different rooms, the kitchen and the living room. The user navigated through the environmental control menu using the EOG and EEG interfaces described above.
Environmental control allowed the user to choose the destination he wanted to reach (kitchen or living room), and the mobile platform would take him there automatically. First, as shown in Figure 10, he moved to the kitchen area and adjusted the height of the worktop. Next, he moved to the living room, where he lit a lamp and then turned on the television. The times indicated are those that the user took to complete the activity, from the time he initiated the order to select the task to be performed until the activity was completely finished.
Once the user had interacted with the different elements of the room, he was ready to perform the eating task. As before, the user selected the object, in this case the spoon, using the eye-tracking system, and he confirmed the selected object using an EOG command. Then, the exoskeleton started to move. When the robot reached the object, the user had to think “close” in order to close the hand (EEG command). When the robot reached his mouth, the user used EOG commands to indicate whether he wanted to finish the task or continue eating. To release the spoon, the user had to think “open” in order to open the hand (EEG command). At that point, the exoskeleton returned to the idle position, and the finite state machine was left waiting for a new command.
The user was able to complete all the tasks in reasonably short times, since the longest activities were the navigation to the kitchen (1 min and 15 s) and the eating task (whose duration depended on the number of repetitions the user wanted to perform). In addition, the user had the ability to abort the ongoing activity at any time if he deemed it necessary, which provides greater safety to the system.

3.4. Subjective Assessment of Usability

The System Usability Scale (SUS) provides a quick tool for measuring the usability aspects of a technology. The SUS consists of 10 questions with five response options from strongly agree to strongly disagree; a sketch of the standard scoring procedure is given after the list below. The questions are the following:
Q1. I think that I would like to use this system frequently.
Q2. I found the system unnecessarily complex.
Q3. I thought the system was easy to use.
Q4. I think that I would need the support of a technical person to be able to use this system.
Q5. I found the various functions in this system were well integrated.
Q6. I thought there was too much inconsistency in this system.
Q7. I would imagine that most people would learn to use this system very quickly.
Q8. I found the system very cumbersome to use.
Q9. I felt very confident using the system.
Q10. I needed to learn a lot of things before I could get going with this system.
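For reference, the standard SUS scoring procedure converts the 1–5 responses into a 0–100 score: odd-numbered (positive) items contribute the response minus 1, even-numbered (negative) items contribute 5 minus the response, and the sum is multiplied by 2.5. The sketch below implements this; the example responses are made up and are not the data collected in this study.

```python
# Standard SUS scoring: responses are on a 1-5 scale (Q1..Q10).
def sus_score(responses):
    """responses: list of 10 integers, each between 1 and 5."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # positive vs. negative items
    return total * 2.5  # final score on a 0-100 scale

# Hypothetical example, not the data of this study.
print(sus_score([4, 2, 4, 3, 4, 2, 4, 3, 4, 2]))  # -> 70.0
```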

3.5. Results

As mentioned above, the system developed was validated in different experiments that allowed improving not only the robotic device, but also the control and the different user interfaces. In the study presented in this paper, the main objective was to obtain the view of the user himself and the opinion of a group of experts regarding the usability of the final system in assisting with ADLs.
When answering the questionnaire, factors such as the time taken by the user to carry out each activity with the robotic system (which cannot be too long) had to be taken into account, as well as whether the user completed each of the tasks without problems. To this end, the experts were present as members of the audience throughout the experiment, in order to be able to evaluate the aforementioned issues first hand.
All the clinicians filled in the SUS questionnaire, and the results are shown in Figure 11. The median of all the questions was equal to or above 2.5. However, the two questions with the lowest median values were related to the complexity and the cumbersome aspect of the system. This may be because the system is a prototype still at an early development stage and because, on first use, it takes a relatively long time to calibrate the control interfaces to the user. We are working on improving future prototypes of the system by taking these aspects into account.

4. Conclusions

In this paper, a modular robotic platform to provide assistance to moderately and severely impaired people in performing daily activities and participating in society was presented. The main innovation of our robotic platform is its modularity, which allows customizing the platform (hardware and software components) to the needs of each potential user. We presented the results of an experiment with a subject suffering from multiple sclerosis. In the experiment, the subject had to carry out different tasks in a simulated scenario while being observed by a group of clinicians composed of nurses, doctors, and occupational therapists. After that, the subject and the clinicians replied to a usability questionnaire. The results showed a high degree of usability of the system, although there were also several areas for improvement. These aspects were taken into account in the new version of the device, with the aim of reducing the users’ perception of the complexity of the system.

Author Contributions

J.M.C., A.B., A.B.-M. and J.V.G.-P. designed and developed the platform. J.M.C. and A.B. worked on the construction of the experimental setup. A.B.-M. and J.V.G.-P. performed the experiments. J.M.C. analyzed the data. J.M.C., A.B. and A.B.-M. drafted the paper. J.V.G.-P. thoroughly revised the manuscript. N.G.-A., M.A. and R.P. contributed to the design of the robotic platform and the experiment and thoroughly revised the manuscript. All authors checked and approved the final submitted version of the manuscript. All authors read and agreed to the published version of the manuscript.

Funding

This work was supported by the AIDE project through Grant Agreement No. 645322 of the European Commission, by the Conselleria d’Educacio, Cultura i Esport of Generalitat Valenciana, by the European Social Fund—Investing in your future, through the grant ACIF 2018/214, and by the Promoción de empleo joven e implantación de garantía juvenil en I+D+I 2018 through the grant PEJ2018-002670-A.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Miguel Hernandez University (2017.32.E.OEP).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest. The funder institutions had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AMCL: Adaptive Monte Carlo localization
ADL: Activities of daily living
BNCI: Brain/neural–computer interaction
DMPs: Dynamic movement primitives
ECI: Environmental control interface
EEG: Electroencephalography
EMG: Electromyography
EOG: Electrooculography
FSM: Finite state machine
GSR: Galvanic skin response
LbD: Learning by demonstration
SUS: System Usability Scale
SLAM: Simultaneous localization and mapping

References

1. Turner-Stokes, L.; Pick, A.; Nair, A.; Disler, P.B.; Wade, D.T. Multi-disciplinary rehabilitation for acquired brain injury in adults of working age. Cochrane Database Syst. Rev. 2015, 22, CD004170.
2. Chaparro-Rico, B.; Cafolla, D.; Ceccarelli, M.; Castillo-Castaneda, E. Design and Simulation of an Assisting Mechanism for Arm Exercises. In Advances in Italian Mechanism Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2017.
3. Rodríguez-León, J.; Chaparro-Rico, B.; Russo, M.; Cafolla, D. An Autotuning Cable-Driven Device for Home Rehabilitation. J. Healthc. Eng. 2021, 2021, 6680762.
4. Duret, C.; Grosmaire, A.G.; Krebs, H.I. Robot-Assisted Therapy in Upper Extremity Hemiparesis: Overview of an Evidence-Based Approach. Front. Neurol. 2019, 10, 412.
5. Beaudoin, M.; Lettre, J.; Routhier, F.; Archambault, P.; Lemay, M.; Gélinas, I. Impacts of robotic arm use on individuals with upper extremity disabilities: A scoping review. Can. J. Occup. Ther. 2018, 85, 397–407.
6. Nam, H.S.; Seo, H.G.; Leigh, J.H.; Kim, Y.J.; Kim, S.; Bang, M.S. External Robotic Arm vs. Upper Limb Exoskeleton: What Do Potential Users Need? Appl. Sci. 2019, 9, 2471.
7. Gräser, A.; Kuzmicheva, O.; Ristic-Durrant, D.; Natarajan, S.K.; Fragkopoulos, C. Vision-based control of assistive robot FRIEND: Practical experiences and design conclusions. at-Automatisierungstechnik 2012, 60, 297–308.
8. Kiguchi, K.; Rahman, M.H.; Sasaki, M.; Teramoto, K. Development of a 3DOF mobile exoskeleton robot for human upper-limb motion assist. Robot. Auton. Syst. 2008, 56, 678–691.
9. Meng, Q.; Xie, Q.; Shao, H.; Cao, W.; Wang, F.; Wang, L.; Yu, H.; Li, S. Pilot Study of a Powered Exoskeleton for Upper Limb Rehabilitation Based on the Wheelchair. BioMed Res. Int. 2019, 2019, 9627438.
10. Soekadar, S.R.; Witkowski, M.; Vitiello, N.; Birbaumer, N. An EEG/EOG-based hybrid brain-neural computer interaction (BNCI) system to control an exoskeleton for the paralyzed hand. Biomed. Eng. Tech. 2015, 60, 199–205.
11. Crea, S.; Nann, M.; Trigili, E.; Cordella, F.; Baldoni, A.; Badesa, F.J.; Catalán, J.M.; Zollo, L.; Vitiello, N.; Aracil, N.G.; et al. Feasibility and safety of shared EEG/EOG and vision-guided autonomous whole-arm exoskeleton control to perform activities of daily living. Sci. Rep. 2018, 8, 10823.
12. Badesa, F.J.; Diez, J.A.; Catalan, J.M.; Trigili, E.; Cordella, F.; Nann, M.; Crea, S.; Soekadar, S.R.; Zollo, L.; Vitiello, N.; et al. Physiological responses during hybrid BNCI control of an upper-limb exoskeleton. Sensors 2019, 19, 4931.
13. Nann, M.; Cordella, F.; Trigili, E.; Lauretti, C.; Bravi, M.; Miccinilli, S.; Catalan, J.M.; Badesa, F.J.; Crea, S.; Bressi, F.; et al. Restoring activities of daily living using an EEG/EOG-controlled semiautonomous and mobile whole-arm exoskeleton in chronic stroke. IEEE Syst. J. 2020, 15, 2314–2321.
14. Trigili, E.; Grazi, L.; Crea, S.; Accogli, A.; Carpaneto, J.; Micera, S.; Vitiello, N.; Panarese, A. Detection of movement onset using EMG signals for upper-limb exoskeletons in reaching tasks. J. Neuroeng. Rehabil. 2019, 16, 1–16.
15. Accogli, A.; Grazi, L.; Crea, S.; Panarese, A.; Carpaneto, J.; Vitiello, N.; Micera, S. EMG-based detection of user’s intentions for human-machine shared control of an assistive upper-limb exoskeleton. In Wearable Robotics: Challenges and Trends; Springer: Berlin/Heidelberg, Germany, 2017; pp. 181–185.
16. Bertomeu-Motos, A.; Ezquerro, S.; Barios, J.A.; Lledó, L.D.; Domingo, S.; Nann, M.; Martin, S.; Soekadar, S.R.; Garcia-Aracil, N. User activity recognition system to improve the performance of environmental control interfaces: A pilot study with patients. J. Neuroeng. Rehabil. 2019, 16, 1–9.
17. Ivorra, E.; Ortega, M.; Catalán, J.M.; Ezquerro, S.; Lledó, L.D.; Garcia-Aracil, N.; Alcañiz, M. Intelligent multimodal framework for human assistive robotics based on computer vision algorithms. Sensors 2018, 18, 2408.
18. Onose, G.; Grozea, C.; Anghelescu, A.; Daia, C.; Sinescu, C.; Ciurea, A.; Spircu, T.; Mirea, A.; Andone, I.; Spânu, A.; et al. On the feasibility of using motor imagery EEG-based brain–computer interface in chronic tetraplegics for assistive robotic arm control: A clinical test and long-term post-trial follow-up. Spinal Cord 2012, 50, 599–608.
19. Li, M.; Yin, H.; Tahara, K.; Billard, A. Learning object-level impedance control for robust grasping and dexterous manipulation. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 6784–6791.
20. Ahmadzadeh, S.R.; Kormushev, P.; Caldwell, D.G. Autonomous robotic valve turning: A hierarchical learning approach. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 4629–4634.
21. Lauretti, C.; Cordella, F.; Ciancio, A.L.; Trigili, E.; Catalan, J.M.; Badesa, F.J.; Crea, S.; Pagliara, S.M.; Sterzi, S.; Vitiello, N.; et al. Learning by demonstration for motion planning of upper-limb exoskeletons. Front. Neurorobot. 2018, 12, 5.
22. Díez, J.A.; Blanco, A.; Catalán, J.M.; Badesa, F.J.; Sabater, J.M.; Garcia-Aracil, N. Design of a prono-supination mechanism for activities of daily living. In Converging Clinical and Engineering Research on Neurorehabilitation II; Springer: Berlin/Heidelberg, Germany, 2017; pp. 531–535.
23. Díez, J.A.; Blanco, A.; Catalán, J.M.; Bertomeu-Motos, A.; Badesa, F.J.; García-Aracil, N. Mechanical design of a novel hand exoskeleton driven by linear actuators. In Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2017; pp. 557–568.
24. Díez, J.A.; Blanco, A.; Catalán, J.M.; Badesa, F.J.; Lledó, L.D.; Garcia-Aracil, N. Hand exoskeleton for rehabilitation therapies with integrated optical force sensor. Adv. Mech. Eng. 2018, 10.
25. Campeau-Lecours, A.; Maheu, V.; Lepage, S.; Lamontagne, H.; Latour, S.; Paquet, L.; Hardie, N. Jaco assistive robotic device: Empowering people with disabilities through innovative algorithms. In Proceedings of the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) Annual Conference, 2016. Available online: https://www.resna.org/sites/default/files/conference/2016/pdf_versions/other/campeau_lecours.pdf (accessed on 1 August 2021).
26. Soekadar, S.; Witkowski, M.; Gómez, C.; Opisso, E.; Medina, J.; Cortese, M.; Cempini, M.; Carrozza, M.; Cohen, L.; Birbaumer, N.; et al. Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia. Sci. Robot. 2016, 1, eaag3296.
Figure 1. The parts of the platform are shown. The computer, battery, connections with the other components, and the safety relay are located on the back side of the prototype protected by a carbon fiber cover. Safety is a key issue in wearable robotics, so there are three emergency stop switches: (1) on the left side of the robotized wheelchair; (2) on the back side of the robotized wheelchair; and (3) connected through a wire to the left side of the wheelchair.
Figure 2. Pictures of the monitoring of the navigation algorithm with obstacle avoidance in a real test.
Figure 3. Safety buttons: (left) two safety buttons on the left side of the robotized wheelchair for the main CPU; (right) one safety button on the back of the wheelchair for the mobile platform.
Figure 4. Diagram with the connections between all the components of the platform.
Figure 5. Finite state machine (FSM) for the eating, drinking, and hygiene tasks. The states are sequentially implemented, allowing the user to continue or abort the task at any time. Black arrows refer to automatic processes. Green arrows refer to an action confirmed by the user, and red arrows refer to the decision by the user to abort the current activity.
Figure 6. FSM for preparing a meal. The states are sequentially implemented, allowing the user to continue or abort the task at any time. Black arrows refer to automatic processes. Green arrows refer to an action confirmed by the user, and red arrows refer to the decision by the user to abort the current activity.
Figure 7. Pictures of the experimental session in the simulated home environment with the subject and the group of clinicians.
Figure 8. Images of the experimental session where the user navigates through the menus of the control interface to perform the different tasks of the protocol.
Figure 9. Simulated home scenario.
Figure 10. Study protocol.
Figure 11. System Usability Scale (SUS) results.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

