### *3.2. Resources Used and EEG Acquisition*

The Enobio-8, a wireless and portable sensor system, was used for EEG recording. The device consists of a neoprene helmet with 39 holes covering the main positions of the 10–20 system distribution. The helmet allows the use of both dry and wet electrodes; dry electrodes were chosen for this experiment. Eight electrodes were used, located at F3, F4, T7, C3, Cz, C4, T8 and Pz according to the 10–20 system. Figure 1 shows the location of the electrodes.

**Figure 1.** Electrode locations.

Two additional electrodes must also be taken into account; they serve as the ground for the system, allowing the rest of the signals to be correctly referenced. They are attached to Ag/AgCl EEG measurement patches with a low-impedance conductive semi-liquid gel and placed under the lobe of each ear. The electrical signals from the cortex are collected and then sent to a computer via Bluetooth.

The resolution of the system is approximately 0.005 μV. Given that the voltages involved range between 10 and 100 μV, the relative error of the measurements lies between 0.005% (at 100 μV) and 0.05% (at 10 μV).
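As a quick check, the resolution-to-signal arithmetic behind these bounds can be reproduced directly (the variable names are ours, for illustration):

```python
resolution = 0.005            # system resolution in microvolts
signal_range = (10.0, 100.0)  # working EEG amplitudes in microvolts

# Relative error (%) at each end of the amplitude range
rel_err = [resolution / v * 100 for v in signal_range]
# spans 0.05 % at 10 uV down to 0.005 % at 100 uV
```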

The channels were recorded at a sampling frequency of 500 Hz, with a dynamic range of ±100 μV for all sessions. A 50 Hz notch filter was enabled. Two electrodes placed at the mastoid bone served as the EEG ground.
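The mains-rejection step can be sketched as follows. This is a minimal illustration with SciPy, not the acquisition software's own implementation; the quality factor `Q` and the function name `apply_notch` are our assumptions, only the 500 Hz sampling rate and 50 Hz notch frequency come from the text.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 500.0        # sampling frequency (Hz), as in the recordings
NOTCH_HZ = 50.0   # mains frequency to suppress
Q = 30.0          # quality factor (assumed; not stated in the text)

def apply_notch(eeg, fs=FS, f0=NOTCH_HZ, q=Q):
    """Remove mains interference from an (n_channels, n_samples) array."""
    b, a = iirnotch(f0, q, fs)
    # filtfilt applies the filter forward and backward (zero phase shift)
    return filtfilt(b, a, eeg, axis=-1)

# Example: 8 channels, 2 s of synthetic data contaminated at 50 Hz
t = np.arange(0, 2, 1 / FS)
clean = np.random.randn(8, t.size)
noisy = clean + 5 * np.sin(2 * np.pi * NOTCH_HZ * t)
filtered = apply_notch(noisy)
```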

In addition, standard software packages were used in the experiment for signal acquisition and post-processing:


### *3.3. Experimental Timeline*

To determine whether the research hypothesis was fulfilled, an experiment was carried out in which the EEG signals of a group of pianists and of a control group were analyzed. To do so, the motor imagery of two dichotomous movements, one of the left hand and one of the right hand, was used to control a BCI system, and the success rate of each individual and each group was analyzed.

The most widely used training method for BCIs (and, therefore, the one followed in this study) is based on the Graz principle [42], developed at the Institute of Neural Engineering of Graz University of Technology (Austria). This procedure is divided into two stages:


Three sessions of three trials each, with 40 sequences per trial, were carried out with each subject. The first trial of each session corresponds to the first stage described above; the second trial develops the second stage; and the third trial repeats the second stage in order to increase the amount of data collected. Table 3 summarizes this structure.



### *3.4. EEG Processing*

In the first trial, no feedback was produced: only data collection was carried out, to be used to train the system through feature extraction and a classification algorithm. To do this, a spatial filter obtained by Common Spatial Patterns (CSP) and a Linear Discriminant Analysis (LDA) classifier were used, as explained later. For trials 2 and 3 of each session, however, feedback was provided, showing the user what the BCI system interpreted they were thinking according to the previously trained algorithms (online analysis). Once data collection for each subject was completed over the three sessions, an offline analysis was carried out, again using CSP and LDA.

CSP is a mathematical technique used in signal processing to separate multivariate signals into sub-components with different variances. The CSP method was first proposed under the name Fukunaga–Koontz Transform in [43] as an extension of Principal Component Analysis (PCA) and has been widely used in BCI to maximize the distance between two classes of motions.

A CSP filter maximizes the variance of the filtered EEG signals from one motion class while minimizing it for signals from the other class. The technique arises naturally when trying to maximize the difference in variance between the two signal classes by spatially filtering them.
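This variance-ratio formulation reduces to a generalized eigenvalue problem on the two class covariance matrices. The sketch below is a minimal, generic CSP implementation for illustration, not the code used in the study; the function name and the trace-normalization choice are our assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs_a, epochs_b, n_pairs=2):
    """Compute CSP spatial filters for two motor-imagery classes.

    epochs_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (2 * n_pairs, n_channels) matrix of spatial filters.
    """
    def mean_cov(epochs):
        covs = []
        for x in epochs:
            c = x @ x.T
            covs.append(c / np.trace(c))  # normalize out per-trial power
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(epochs_a), mean_cov(epochs_b)
    # Generalized eigendecomposition: eigenvalues are the fraction of
    # variance class A retains after filtering, so the two extremes of
    # the spectrum give filters that favor one class over the other.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T
```

Applying the returned filters to a trial (`W @ trial`) yields signals whose variances discriminate between the two imagined movements.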

LDA is a Machine Learning technique used to perform supervised linear classification, based on a set of observations that can be divided into groups or classes [44]. The problem is basically to assign the right class to each observation. Linear classifiers are those that base their decision on some hyperplane, assigning each class to one side of the evaluated subspace.

LDA is a simple and very stable technique, which allows us to perform the type of classification we need without tuning additional parameters. The method assumes that the two classes follow normal distributions with a shared covariance. For each class, the mean and variance parameters are fitted to obtain the distribution that best describes it, and Bayes' theorem is then used to compute the probability of belonging to each class.
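Under these Gaussian, shared-covariance assumptions, the Bayes decision rule collapses to a linear function of the input. A minimal two-class LDA can be written from scratch as below; this is an illustrative sketch (class and attribute names are ours), not the study's implementation, and it assumes equal class priors.

```python
import numpy as np

class TwoClassLDA:
    """Minimal two-class LDA with a pooled (shared) covariance matrix.

    Equal Gaussian covariances per class yield a linear decision
    boundary w.x + b = 0, with the threshold placed (via Bayes' rule
    under equal priors) at the midpoint of the projected class means.
    """

    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        # Pooled within-class covariance estimate
        cov = (np.cov(X0, rowvar=False) * (len(X0) - 1)
               + np.cov(X1, rowvar=False) * (len(X1) - 1)) / (len(X) - 2)
        inv = np.linalg.pinv(cov)
        self.w = inv @ (m1 - m0)
        self.b = -0.5 * (m0 + m1) @ self.w
        return self

    def predict(self, X):
        # Side of the hyperplane decides the class
        return (X @ self.w + self.b > 0).astype(int)
```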

The combination of CSP and LDA has been widely used in MI-based BCIs to maximize the distance between two classes of movements [9,45]. Some methods observe the differing activity between the two hemispheres during imagery, but several of them are too complex or demand too much computing time to be applied in real-time systems. CSP and LDA are also inherently two-class methods, which matches our task. After CSP filtering, the goal is to train a classifier able to estimate the class of a new observation from observations whose class is known; LDA was used for this purpose.
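The handoff from the spatial filter to the classifier is conventionally done through log-variance features of the CSP-filtered signals, which is the standard practice in MI-BCI pipelines; the text does not detail this step, so the sketch below (and the function name `csp_features`) is our assumption of that conventional form.

```python
import numpy as np

def csp_features(trial, W):
    """Log-variance features commonly fed from CSP filters into LDA.

    trial: (n_channels, n_samples) EEG epoch.
    W:     (n_filters, n_channels) CSP spatial filter matrix.
    """
    z = W @ trial               # spatially filtered signals
    v = np.var(z, axis=1)       # band power per filtered channel
    return np.log(v / v.sum())  # normalized log-variance feature vector
```

One such feature vector per sequence, labeled left or right, is what the classifier is trained and evaluated on.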

### *3.5. Experiment Deployment*

Setting up and conducting each session took approximately 45 min, which required a certain availability from the subjects. A rest period of at least four days was ensured between sessions for each subject, and a 10-min rest was provided between trials.

The procedure is similar to that described in BCI Competition IV [46]. All subjects were right-handed and had normal vision. Each volunteer sat in a chair, with a flat-screen monitor placed 1 m away at eye level.

Taking the existing literature into account, the subjects were asked to imagine specific movements of the hands and fingers. The movement consisted of drumming the fingers of one hand, accompanied by a swinging of the wrist up and down. Before the experiment, each subject was advised to first perform the movement physically, to assimilate the physical sensation, and subsequently to repeat it in imagination. Furthermore, during the experiment, subjects were advised to avoid parasitic movements of the hands, eyes or head, which could jeopardize the accuracy of the classification. This was explained at the beginning of the experiment and reiterated in the following sessions.

The structure of the entire experiment is described in the diagram shown in Figure 2. Each session began with the donning of the helmet for EEG recording. A first phase was then carried out with each subject, allowing the adjustment of both the feature extraction and the training of the classifier. Once the parameters were adjusted, the system was ready to present real-time feedback on the screen. Two further iterations were then carried out, in which the subject could see in real time the interpretation the system made of their motor imagery (online analysis). This process was repeated over three days, and the data were later analyzed offline.

**Figure 2.** Experiment deployment.

The timing detail of each sequence can be seen in Figure 3. At second 0, the system presents a black screen. At second 2, a green cross appears and, 2 s later, an arrow is shown for 1.25 s. The arrow indicates the motor imagery that the user must perform: if the arrow points left, the user must imagine the movement of the right hand and, if it points right, the movement of the other hand. Only in Trials 2 and 3 does the system show feedback, as a blue bar appearing at second 5 whose length changes according to the output of the classifier. At second 8 the screen turns black, followed by a random interval of about 2 s to avoid synchronization between the user and the timing protocol. The previously mentioned OpenViBE software was used for the presentation of the stimuli.
