2.8.2. Preparing and Eating a Meal

In this scenario, the complex task of preparing and eating a meal is broken into two subtasks. First, the user has to prepare a meal (Figure 6). In this FSM, the user takes the food from the fridge and heats it in the microwave. To do this, the user moves the wheelchair, opens/closes the fridge, opens/closes the microwave, and moves the robotic arm and hand exoskeleton to grasp and release the food tray. Performing this task involves several elements of the AIDE system, such as the environmental control to move the wheelchair, the robotic arm, and the hand exoskeleton, as well as the object detection and 3D pose estimation.
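The FSM described above can be sketched as a table of allowed transitions. The state names and their ordering below are illustrative assumptions for exposition, not the actual AIDE implementation:

```python
# Minimal sketch of the meal-preparation FSM: each state lists the
# transitions the system will accept next. State names are assumed.
MEAL_PREP_FSM = {
    "idle":              ["move_to_fridge"],
    "move_to_fridge":    ["open_fridge"],
    "open_fridge":       ["grasp_tray"],
    "grasp_tray":        ["close_fridge"],
    "close_fridge":      ["move_to_microwave"],
    "move_to_microwave": ["open_microwave"],
    "open_microwave":    ["release_tray"],
    "release_tray":      ["close_microwave"],
    "close_microwave":   ["heat_food"],
    "heat_food":         ["idle"],
}

def step(state, requested):
    """Advance the FSM only if the requested transition is allowed."""
    if requested in MEAL_PREP_FSM.get(state, []):
        return requested
    return state  # invalid commands leave the state unchanged
```

A guard of this kind keeps the high-level controller from issuing actions out of order, e.g. trying to heat the food before the tray has been placed in the microwave.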

After this, the system continues to the eating and drinking task. In this task, the wheelchair always remains in the same position, so that the user only has to interact with the exoskeleton to manipulate the glass and the cutlery.

#### **3. Experimental Session**

The study presented in this paper aimed to determine the degree of usability of the complete system in its main application environment, assistance in activities of daily living. In other experiments carried out throughout the project [11,12,16,17,21], the different elements that compose the robotic system described here were validated, as well as the different user interfaces used (EEG, EOG, EMG) [13–15].

This experiment was performed in a home environment developed for this purpose. It consisted of a room divided into two areas, one simulating the living room and the other the kitchen. A user employed these two areas to simulate interaction with the different elements of a home.

For this purpose, we enlisted the collaboration of a subject suffering from multiple sclerosis. In addition, a group of clinicians composed of nurses, doctors, and occupational therapists provided us with an objective view of the system in its main field of application after observing this experiment (see Figure 7).

**Figure 7.** Pictures of the experimental session in the simulated home environment with the subject and the group of clinicians.

The results of this study were obtained using the System Usability Scale (SUS), which determines the degree of system usability as perceived by the user and the clinicians.

#### *3.1. Interface*

The whole system proposed for this experiment was controlled through an environmental control interface (ECI) developed under the AIDE project. It consists of three abstraction levels through which the user navigates in order to perform a specific activity (Figure 8). The first level shows the available rooms of the proposed scenario; the second level presents a grid with all the activities the user can perform; and the last level lists the actions available for the selected activity. This interface was controlled with a hybrid EEG/EOG system [26]. In addition, the ECI was equipped with an intelligent system, proposed in [16], to assist navigation through the interface and streamline the completion of the desired task.
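The three abstraction levels (rooms, activities, actions) can be sketched as a nested lookup structure. The menu contents below are illustrative assumptions, not the actual ECI configuration:

```python
# Hedged sketch of the three-level ECI: room -> activity -> actions.
# All names are assumed for illustration.
ECI_MENU = {
    "kitchen": {
        "meal":    ["prepare_meal", "eat", "drink"],
        "worktop": ["raise", "lower"],
    },
    "living_room": {
        "lighting": ["lamp_on", "lamp_off"],
        "tv":       ["tv_on", "tv_off"],
    },
}

def navigate(room, activity, action):
    """Validate a selection made by descending the three menu levels."""
    actions = ECI_MENU.get(room, {}).get(activity, [])
    if action not in actions:
        raise ValueError(f"invalid selection: {room}/{activity}/{action}")
    return (room, activity, action)
```

Structuring the menu this way means the interface only ever offers valid next choices at each level, which is also what allows an intelligent assistant to pre-select likely paths.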

**Figure 8.** Images of the experimental session where the user navigates through the menus of the control interface to perform the different tasks of the protocol.

#### *3.2. Navigation*

In this experiment, two different rooms were mapped, the kitchen and the living room, as can be seen in Figure 9. After the rooms had been mapped, the user could freely navigate through them using the proposed interface. Navigation to each room was performed in two steps using the interface. First, three location points were established, to which a direct displacement could be commanded. Then, a fine approach could be performed through small displacements to reach the place where the task had to be executed.
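The two-step scheme can be sketched as a coarse jump to a predefined waypoint followed by incremental fine-approach moves. The location names, coordinates, and step size below are illustrative assumptions:

```python
import math

# Predefined location points for the direct (coarse) displacement.
# Coordinates in metres are assumed for illustration.
LOCATIONS = {"kitchen": (4.0, 2.0), "living_room": (1.0, 5.0), "table": (3.5, 4.0)}
FINE_STEP = 0.1  # metres per fine-approach command (assumed)

def coarse_move(target_name):
    """Step 1: displace directly to a predefined location point."""
    return LOCATIONS[target_name]

def fine_approach(pos, goal, step=FINE_STEP):
    """Step 2: advance one small displacement towards the goal."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return goal
    return (pos[0] + step * dx / dist, pos[1] + step * dy / dist)
```

Splitting navigation this way keeps the number of interface commands low: one menu selection covers most of the distance, and only the final positioning requires repeated small commands.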

**Figure 9.** Simulated home scenario.

#### *3.3. Activities of Daily Living*

Throughout the experiment, the user interacted with several elements of the home through the use of the environmental control interface (Figure 9). These elements were located in two different rooms, the kitchen and the living room. The user navigated through the environmental control menu using the EOG and EEG interfaces described above.

Environmental control allowed the user to choose the destination he wanted to reach (kitchen or living room), and the mobile platform would take him there automatically. First, as shown in Figure 10, he moved to the kitchen area and adjusted the height of the worktop. Next, he moved to the living room, where he lit a lamp and then turned on the television. The times indicated are those that the user took to complete the activity, from the time he initiated the order to select the task to be performed until the activity was completely finished.

**Figure 10.** Study protocol.

Once the user had interacted with the different elements of the room, he was ready to perform the eating task. As previously, the user selected the object, in this case the spoon, using the eye-tracking system, and confirmed the selected object with an EOG command. The exoskeleton then started to move. When the robot reached the object, the user had to think "close" in order to close the hand (EEG command). When the robot reached his mouth, the user used EOG commands to indicate whether he wanted to finish the task or continue eating. To release the spoon, the user had to think "open" in order to open the hand (EEG command). At that point, the exoskeleton returned to the idle position, and the finite state machine waited for a new command.
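This control flow can be sketched as a small event-driven state machine, with eye tracking and EOG handling selection/confirmation and EEG "close"/"open" operating the hand exoskeleton. The state and event names are illustrative assumptions, not the actual AIDE implementation:

```python
# Hedged sketch of the eating-task state machine. Each key is a
# (state, event) pair; missing pairs are ignored (state unchanged).
TRANSITIONS = {
    ("idle", "eog_confirm_object"):      "approach_object",  # spoon confirmed
    ("approach_object", "object_reached"): "await_close",
    ("await_close", "eeg_close"):        "to_mouth",         # hand closes
    ("to_mouth", "mouth_reached"):       "at_mouth",
    ("at_mouth", "eog_continue"):        "approach_object",  # another spoonful
    ("at_mouth", "eog_finish"):          "release_object",
    ("release_object", "eeg_open"):      "idle",             # hand opens, robot idles
}

def handle(state, event):
    """Apply an event; unknown (state, event) pairs leave the state as-is."""
    return TRANSITIONS.get((state, event), state)
```

Ignoring events that are invalid in the current state is what lets spurious biosignal detections pass through harmlessly instead of triggering unintended motions.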

The user was able to complete all the tasks in reasonably short times; the longest activities were navigation to the kitchen (1 min and 15 s) and the eating task (whose duration depended on the number of repetitions the user wanted to perform). In addition, the user could abort the ongoing activity at any time if he deemed it necessary, providing greater safety to the system.

#### *3.4. Subjective Assessment of Usability*

The System Usability Scale (SUS) provides a quick tool for measuring the usability aspects of a technology. The SUS consists of 10 questions with five response options ranging from strongly agree to strongly disagree. The questions are the following:

1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.
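Although responses are given on a 1-5 scale, SUS results are conventionally reported on a 0-100 scale using the standard scoring rule (Brooke, 1996): odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5. A minimal sketch of this conversion:

```python
def sus_score(responses):
    """Convert ten SUS responses (integers 1..5, in item order) to 0-100.

    Odd items are positively worded (higher is better); even items are
    negatively worded, so their contribution is reversed.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, answering 3 (neutral) to every item yields a score of 50, while the best possible answers (5 on odd items, 1 on even items) yield 100.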


#### *3.5. Results*

As mentioned above, the developed system was validated in different experiments that allowed us to improve not only the robotic device, but also the control and the different user interfaces. The main objective of the study presented in this paper was to gather the perspective of the user himself and the opinion of a group of experts on the usability of the final system in the assistance with ADLs.

To answer the questionnaire, factors such as the time the user took to carry out each activity with the robotic system had to be taken into account (it cannot be too high), as well as whether the user completed each of the tasks without problems. To this end, the experts were present as observers throughout the experiment, so that they could evaluate these issues first hand.

All the clinicians filled in the SUS questionnaire, and the results are shown in Figure 11. The median of all the questions was equal to or above 2.5. However, the two questions with the lowest median values were those related to the complexity and cumbersomeness of the system. This may be because the system is a prototype still at an early stage of development, and because, on first use, calibrating the control interfaces to the user takes a relatively long time. We are working on improving future prototypes of the system by taking these aspects into account.
