*2.1. Biosignal Acquisition System*

The proposed platform is capable of measuring and storing data from several physiological signals. Some of these signals are used for decision making when controlling the system, such as the EOG or EEG, but others are only used to measure the condition of the patient (respiratory rate, galvanic skin response, heart rate, etc.). The system allows adapting the use of the physiological signals based on the patient's need. In addition, new biosignals and processing techniques can be integrated. The performance, signal processing, and adaptation of the different physiological signals of the system have been tested in several studies [11–15].

#### 2.1.1. ExG Cap

An ExG cap, developed by Brain Vision, can be used to perform three different biosignal measurements: (1) EEG acquisition, through eight electrodes, to perform BNCI tasks, allowing the user to control the assistive robotic device and interact with the control interface; (2) EOG acquisition, using two electrodes placed on the outer canthi of the eyes, to detect left and right eye movements so that the user can navigate through the menus of the control interface; (3) ECG acquisition, to be combined with the respiration and galvanic skin response (GSR) data in order to estimate the affective state of the user [16].
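One common way to turn horizontal EOG into discrete left/right commands is a simple amplitude-threshold detector on the baseline-corrected signal. The sketch below illustrates this idea only; the threshold value, polarity convention, and function name are assumptions for illustration, not the algorithm used in the published system:

```python
import numpy as np

def detect_horizontal_saccades(eog, fs, threshold_uV=50.0):
    """Naive threshold detector on a horizontal EOG channel.

    A large positive deflection is labelled a rightward gaze shift and a
    large negative one a leftward shift. Returns (label, time_s) pairs.
    Threshold and polarity depend on the electrode montage.
    """
    events = []
    above = np.asarray(eog, dtype=float) - np.median(eog)  # remove baseline
    i = 0
    while i < len(above):
        if above[i] > threshold_uV:
            events.append(("right", i / fs))
            # skip the rest of this deflection (simple hysteresis)
            while i < len(above) and above[i] > threshold_uV / 2:
                i += 1
        elif above[i] < -threshold_uV:
            events.append(("left", i / fs))
            while i < len(above) and above[i] < -threshold_uV / 2:
                i += 1
        else:
            i += 1
    return events
```

In practice, such a detector would run on the filtered EOG stream and map each detected saccade to a menu-navigation command.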

#### 2.1.2. Electrocardiogram and Respiration Sensor

The system incorporates the Zephyr BioHarness™ (Medtronic Zephyr, Boulder, CO, USA) physiological monitoring telemetry device to measure the electrocardiogram (ECG) and the respiration rate. This device has a built-in signal-processing unit; therefore, we only applied a 0.004 Hz high-pass filter to remove the DC component of the signals. The heart rate (HR) was extracted from the ECG signal, and the time-domain indices of the heart rate variability (HRV) were also computed. In particular, the SDANN was used as an HRV feature; it is defined as the standard deviation of the average normal-to-normal (NN) intervals calculated over short segments. In this case, the SDANN was computed over a moving window of 300 s.
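The SDANN computation described above can be sketched as follows. This is a minimal illustration, assuming NN intervals (in seconds) with their occurrence times are already available from the ECG beat detector; the segmentation here uses consecutive 300 s segments, and the function name is hypothetical:

```python
import numpy as np

def sdann(nn_intervals_s, timestamps_s, window_s=300.0):
    """SDANN: standard deviation of the per-segment average NN interval.

    `nn_intervals_s` are NN intervals in seconds, `timestamps_s` their
    occurrence times; the signal is split into consecutive segments of
    `window_s` seconds.
    """
    nn = np.asarray(nn_intervals_s, dtype=float)
    t = np.asarray(timestamps_s, dtype=float)
    averages = []
    start, end = t[0], t[0] + window_s
    while start < t[-1]:
        mask = (t >= start) & (t < end)
        if mask.any():
            averages.append(nn[mask].mean())  # mean NN of this segment
        start, end = end, end + window_s
    return float(np.std(averages))  # spread of the segment averages
```

A sliding-window variant would simply re-evaluate this quantity as the 300 s window advances over the recording.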

#### 2.1.3. Galvanic Skin Response

A GSR sensor, developed by Shimmer, measures the skin conductivity between two reusable electrodes mounted on two fingers of one hand. These data are used, together with the ECG and the respiratory rate, to estimate the affective state of the user [12]. GSR is a common measure in psychophysiological paradigms and is therefore often used in affective state detection. The GSR signal was processed with a 0.05–1.5 Hz band-pass filter (the frequency range of the skin conductance response (SCR)) in order to remove artifacts.
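A band-pass stage of this kind could be implemented, for example, with a zero-phase Butterworth filter. The sketch below is an assumption for illustration: the filter order and the sampling rate in the usage example are not values reported in the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_gsr(signal, fs, low=0.05, high=1.5, order=2):
    """Band-pass the raw GSR signal to the SCR band (0.05-1.5 Hz),
    removing slow drift, the DC level, and high-frequency artifacts."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)  # zero-phase filtering
```

For instance, a 32 Hz GSR recording with a constant offset and an in-band 0.5 Hz oscillation would keep the oscillation while the offset is suppressed.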

*2.2. Environment Perception and Control System*

The system integrates a computer vision system to recognize the environment with which the system will interact [17]. In addition, it has a user interface so that the user can interact with the environment.

#### 2.2.1. Computer Vision System

The activities of daily living (ADLs) require the capability to perform reaching tasks within a complex and unstructured environment. This problem must be solved in real time in order to cope with the possible disturbances that the object may undergo during the interaction. Moreover, the objects involved are commonly textureless.

Several methods have been proposed to address this problem. However, despite the great advances in the field (especially with deep learning techniques), it has not yet been solved effectively, particularly for nontextured objects. Some authors have used commercial tracking systems such as Optitrack or ART Track [18–20]. The main limitation of these devices is the need to modify the objects to be tracked by attaching optical markers in order to reconstruct their position and orientation. The main lines of investigation in the field of 3D textureless object pose estimation are methods based on geometric 3D descriptors, template matching, deep learning techniques, and random forests.

Our system incorporates a computer vision system based on three devices (Figure 1). The first is Tobii Pro Glasses 2, an eye-tracking system that allows the user to select the desired object. The second is an Orbbec Astra S RGB-D camera, used for the 3D pose estimation of the textureless objects with which the system can interact. This camera is attached to the back of the wheelchair by means of a structure that places it above the user's head, pointing at the scene. Finally, a full HD 1080p camera working at 30 fps is placed in front of the user, under the screen. This camera is used to estimate the 3D pose of the user's mouth, which allows the system to determine the position the exoskeleton must reach for tasks such as eating or drinking.
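Because both cameras are rigidly mounted to the wheelchair, a pose measured in a camera frame can be expressed in a common wheelchair/robot frame through a fixed, calibrated homogeneous transform. The sketch below illustrates this standard step only; the frame names and the calibration matrix are assumptions for illustration, not details published for this system:

```python
import numpy as np

def to_robot_frame(T_robot_cam, p_cam):
    """Map a 3D point from the camera frame to the robot frame.

    `T_robot_cam` is the fixed 4x4 homogeneous transform obtained from
    extrinsic calibration of the rigidly mounted camera.
    """
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous point
    return (T_robot_cam @ p_h)[:3]
```

The same transform chain would be applied to the estimated mouth pose before commanding the exoskeleton end-effector.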

This computer vision system was tested under real conditions with patients and was also evaluated thoroughly, both qualitatively and quantitatively. The results and a more detailed explanation of the algorithms developed can be found in [17].

#### 2.2.2. User Interface

The system also has a screen attached to the wheelchair and located in front of the user (Figure 1). The interface menus are displayed on this screen, which offers the user several options (e.g., go to another room, drink, grab an object, entertainment) and provides information about the selected task and the exoskeleton status.
