Article

Evaluating Classifiers to Detect Arm Movement Intention from EEG Signals

Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Avda. de la Universidad S/N, 03202, Elche (Alicante), Spain
* Authors to whom correspondence should be addressed.
Sensors 2014, 14(10), 18172-18186; https://doi.org/10.3390/s141018172
Submission received: 18 July 2014 / Revised: 16 September 2014 / Accepted: 17 September 2014 / Published: 29 September 2014
(This article belongs to the Collection Sensors for Globalized Healthy Living and Wellbeing)

Abstract

This paper presents a methodology to detect, in healthy subjects, the intention to make a reaching movement with the arm before the movement actually starts. This is done by measuring brain activity through electroencephalographic (EEG) signals registered by electrodes placed over the scalp. The preparation and performance of an arm movement generate a phenomenon called event-related desynchronization (ERD) in the mu and beta frequency bands. A novel methodology to characterize this cognitive process, based on three sums of power spectral frequencies involved in ERD, is presented. The main objective of this paper is to benchmark several classifiers and to choose the most suitable one. The best results are obtained with an SVM classifier, with around 72% accuracy. This classifier will be used in further research to generate the control commands to move a robotic exoskeleton that helps people with motor disabilities to perform the movement. The final aim is that this brain-controlled robotic exoskeleton improves the current rehabilitation processes of disabled people.

1. Introduction

Over the last few years, diverse research has been undertaken in neuroscience in order to obtain a deeper knowledge of the human brain. As a consequence, the development of brain-computer interfaces (BCIs) has increased. A BCI is a system that provides a communication channel between the human brain and a computer [1]. Thus, it offers a new way to interact with the environment, which can be useful for many people with motor disabilities. These systems are able to capture brain signals using invasive or non-invasive methods and then process these signals in order to generate control commands for a specific device. Invasive techniques register the activity of one neuron or small groups of neurons through microelectrodes implanted in the brain, and they have been used to determine movement intention in animals [2] or to control a computer cursor [3]. On the other hand, electroencephalographic (EEG) recordings are non-invasive and allow registering brain activity without surgery, so they are preferable because of ethical concerns and medical risks. Therefore, this paper uses a non-invasive BCI based on EEG signals measured through superficial electrodes placed on the scalp.

Many cognitive processes performed by the brain are well localized in the cortex. The occipital lobe is responsible for processing visual information, and it can provide relevant information with a good signal-to-noise ratio, whilst the parietal and frontal lobes take part in the intention, planning and decision to make a movement [4]. Some potentials generated in the occipital area as a response to a visual stimulus have been extensively studied, such as the steady state visually evoked potential (SSVEP) or the P300 potential, a kind of event related potential (ERP) wave [5]. These potentials can be used to know where the subject is focusing his gaze, which can then be applied, for example, to control a wheelchair [6,7] or to write words in a web browser [8]. The main problem is that these potentials are evoked, so an external visual stimulus is necessary to provoke them, which implies the need for a graphical interface to interact with and, therefore, limits their applicability. On the other hand, there is a family of potentials generated at the subject's own will, called spontaneous potentials. Current research has already used these kinds of signals to control different devices, e.g., a robotic arm [9,10] or a wheelchair [11].

One of the main objectives of studies based on BCIs is motor rehabilitation. The parietal and frontal lobes are of particular interest because, due to their relation with motor action, the signals acquired from them make it possible to know that an arm movement is about to be performed before it actually starts. It is known that when a person is going to perform a movement, the body runs a chain of events that ends with the action of the muscles and, therefore, the actual movement [12]. This chain starts in the brain only a few tenths of a second before the movement onset, and after that, the electrical signal passes through the spinal cord and reaches the muscles that exert the necessary force. Current technology allows collecting and processing electroencephalographic (EEG) signals from the cortex even in real-time applications, and therefore, a wide range of applications can be developed.

In this study, we are interested in detecting an arm movement before it actually happens. There are two appropriate neuro-physiological phenomena that begin before a voluntary action occurs, and they have different sources [13]. On the one hand, there is a slow potential called the Bereitschaftspotential (also known as the readiness potential), which manifests as a decrement in the frequencies closest to the DC component of the EEG signals [14]. This occurs in two phases: the first one, with a small decrease of voltage, starts around 1.5 s before the movement onset, and the second one, with a pronounced decrease, starts around 0.5 s before the movement. On the other hand, the event-related desynchronization (ERD) refers to a decrease in the spectral power of EEG signals in the mu and beta frequency bands [15]. ERD starts up to two seconds before the movement onset and ends approximately when the movement is finished. After that, the spectral power recovers its magnitude, generating the event-related synchronization (ERS). Some studies have already used these phenomena to know the intention of movement, for instance to anticipate a wrist movement [16,17] or an ankle movement [18]. In these cases, the movement to perform was not complex, but, for example, in [19], a reaching movement that requires the use of several muscles and coordination is studied. Moreover, there is research related to the lower limb, where gait is under consideration [20].

The study and detection of the electrical signals that occur in the brain just before undertaking a particular movement can be very useful to assist the movement of people with motor disabilities that make it difficult or impossible for them to perform the action on their own. A way to help these people could be through an exoskeleton attached to the impaired segment of their body [21]. The orthosis releases the effort from the patient's muscles. The user will think about starting an arm movement, and a suitable processing and classification of the EEG signals will detect it before it happens. The classifier output could be used to activate the exoskeleton engines, in the case of an active exoskeleton [22]. Furthermore, it could activate functional electrical stimulation (FES) of specific muscles when the exoskeleton is passive [23]. Therefore, users will perform the movement when they wish, namely when the classifier detects a pre-movement. In a motor rehabilitation process, this coordination between the desire to execute a movement and the performance of the action might improve the effects of rehabilitation. Nowadays, exoskeletons are already used in research to help with different tasks. The Armeo Spring system is an upper limb orthosis that can be used as a tool to train people who suffer from multiple sclerosis [24]. In the WOTAS project (wearable orthosis for tremor assessment and suppression) [25], another system is able to suppress tremors in the arm. Moreover, there are exoskeletons for lower limb rehabilitation, like those developed in the HYPER project, hybrid neuroprosthetic and neurorobotic devices for functional compensation and rehabilitation of motor disorders [26].

In this paper, a methodology to detect the intention to perform a reaching movement with the upper limb, using a non-invasive system with spontaneous EEG signals based on the ERD phenomenon, is presented. The main objective is to compare classifiers in order to choose the most suitable one to detect the movement intention; the best classifier will be used in further research. The final goal is to obtain a system that activates FES or the exoskeleton engines to move the arm of people who have motor damage. Moreover, an exoskeleton attached to the upper limb will be used to support the weight of the weak limb. Neither the exoskeleton nor FES is discussed in this paper. The current research is developed as a part of the project Brain2Motion (Development of a Multimodal Brain-Neural Interface to Control an Exoskeletal-Neuroprosthesis Hybrid Robotic System for the Upper Limb, DPI2011-27022-C02-01), funded by the Spanish Ministry of Economy and Competitiveness.

The remainder of the paper is organized as follows. In Section 2, the performed experiment is explained. In Section 3, the results obtained with several classifiers are shown and discussed, and Section 4 contains the conclusions and future work.

2. Experimental Section

2.1. Recording

During the experimental tests, a commercial amplifier (g.USBamp, g.Tec, GmbH, Austria) was used. The amplifier has 16 channels and independent reference and ground inputs. The EEG signals acquired by the amplifier were registered with a 256-Hz sampling frequency. Software developed in MATLAB (Matrix Laboratory, MathWorks) read and processed the acquired data, and the MATLAB API (application programming interface) provided with the amplifier was used to manage it.

The acquisition of EEG signals was done using 16 active Ag/AgCl electrodes distributed over the scalp. The sensors were placed at Fz, FC5, FC1, FCz, FC2, FC6, C3, Cz, C4, CP5, CP1, CP2, CP6, P3, Pz and P4, while the mono-auricular reference was placed on the right ear lobe and the ground was located at AFz, according to the International 10/10 System (Figure 1). To ensure proper placement of the electrodes, a cap (g.GAMMAcap, g.Tec, GmbH, Austria) was used. Furthermore, this system is able to reduce electromagnetic interference.

In the experimental tests, the user had to move a computer mouse. The position of the mouse on the computer screen determined the beginning and also the phase of the movement. A sampling frequency of 16 Hz was used to register the mouse position. The complete setup can be seen in Figure 2.

2.2. Test Description

The experiment was performed by six healthy subjects between 23 and 31 years old (26.17 ± 3.31 on average), all male and right-handed. All volunteers had normal vision and hearing and no history of neurological or psychiatric disorders. The test was done in an isolated room to prevent noise and distractions. Each subject was instructed to perform a reaching movement forward and backward with the mouse, returning to the starting position. A graphical interface was used to guide the subject in each performance and to separate the data between resting and movement periods. The interface showed a cross for 3 s, during which the subject had to remain at rest with the cursor at the bottom of the screen, and then a point for 5 s. During this period, the subject could freely make a movement and go back to the initial position (Figure 3). Users were instructed not to start the movement immediately after the point was shown (at least 1 s later), because the EEG signals could be affected by the visual stimulus. Each subject performed one session of six runs, each lasting 256 s.

2.3. Data Selection

The first step of the analysis was to select the data. In our study, it was necessary to differentiate between pre-movement data, obtained in a 1-s window before the beginning of the arm movement, and resting data, obtained in a 1-s window while the subject had to remain at rest.

From the position of the mouse, it was possible to know the position of the arm. The subject had to remain immobile with the cursor at the bottom of the screen and then move the mouse forward to the top and backward to the initial position. Therefore, the time instant of movement initiation was when the mouse changed its position, and the data to analyze were the 1 s preceding it. On the other hand, the resting data were defined as 1 s in the middle of a period when the interface showed a cross.
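As an illustration only (the authors' implementation was in MATLAB), the following Python sketch shows one way such 1-s epochs could be cut from a continuous run. The function and argument names (extract_windows, onset_samples, rest_centers) are hypothetical, and the mouse onsets are assumed to have already been converted to EEG sample indices.

```python
import numpy as np

FS_EEG = 256          # EEG sampling rate (Hz), Section 2.1
WIN = FS_EEG          # 1-s window = 256 samples

def extract_windows(eeg, onset_samples, rest_centers):
    """Cut 1-s pre-movement and resting epochs from one continuous run.

    eeg           : array (n_samples, n_channels) of a full run
    onset_samples : EEG sample indices at which the mouse first moved
    rest_centers  : EEG sample indices at the middle of each resting (cross) period
    """
    pre = [eeg[t - WIN:t] for t in onset_samples if t >= WIN]
    rest = [eeg[c - WIN // 2:c + WIN // 2] for c in rest_centers if c >= WIN // 2]
    return np.stack(pre), np.stack(rest)
```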

2.4. Signal Pre-Processing

The EEG signals are of the order of microvolts and, due to their poor signal-to-noise ratio, it was necessary to use some filters to improve their quality. Firstly, two frequency filters were applied. The first was a 50-Hz notch filter to eliminate the power line interference, implemented with the internal hardware filter of the amplifier. After that, an 8th-order Butterworth band-pass filter (5 to 40 Hz), programmed in MATLAB, was applied to remove some artifacts and the DC component, preserving only the information of the frequencies of interest, namely the mu and beta bands (8–30 Hz).
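A minimal Python sketch of an equivalent offline pre-processing stage is shown below (the paper's notch filter is applied in hardware; here a software notch stands in for it, and the notch quality factor Q and the zero-phase filtering are assumptions made for this illustration).

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt, sosfiltfilt

FS = 256.0  # EEG sampling frequency (Hz)

# Software stand-in for the amplifier's hardware 50-Hz notch filter
b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=FS)

# 8th-order Butterworth band-pass, 5-40 Hz (second-order sections for numerical stability)
sos_bp = butter(N=8, Wn=[5.0, 40.0], btype="bandpass", output="sos", fs=FS)

def preprocess(eeg):
    """eeg: array (n_samples, n_channels); zero-phase filtering along the time axis."""
    x = filtfilt(b_notch, a_notch, eeg, axis=0)
    return sosfiltfilt(sos_bp, x, axis=0)
```

Note that zero-phase filtering is convenient for offline analysis; a causal filter would be required in a real-time setting.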

Secondly, a spatial filter was applied to all EEG channels to reduce the contribution of the remaining electrodes in each channel and, therefore, to better isolate the information collected at each position [27]. A Laplacian algorithm was applied to every electrode. This algorithm uses the information received from all of the remaining electrodes and their distances to the electrode under consideration. The visual result is a smoother time signal, which should contain only the contribution coming from the particular position of the electrode. The Laplacian was computed according to the formula:

V_i^{LAP} = V_i^{CR} - \sum_{j \in S_i} g_{ij} V_j^{CR}
where V_i^{LAP} is the result of applying the algorithm to electrode i, V_i^{CR} is the signal of electrode i before the transformation, and
g_{ij} = \frac{1 / d_{ij}}{\sum_{j \in S_i} 1 / d_{ij}}
where S_i contains all of the electrodes except electrode i, and d_{ij} is the distance between electrodes i and j.
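The weighting scheme above can be implemented directly once electrode coordinates are available. The sketch below is an illustrative Python version (not the authors' MATLAB code); the function name and the assumption that 10/10 electrode coordinates are supplied externally are ours.

```python
import numpy as np

def laplacian_filter(eeg, positions):
    """Apply the surface Laplacian defined above to every channel.

    eeg       : array (n_samples, n_channels) of band-passed EEG
    positions : array (n_channels, 3) of electrode coordinates on the scalp
    """
    n_ch = eeg.shape[1]
    # pairwise Euclidean distances d_ij between electrodes
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    out = np.empty_like(eeg)
    for i in range(n_ch):
        others = [j for j in range(n_ch) if j != i]
        w = 1.0 / d[i, others]
        g = w / w.sum()                       # the g_ij weights of the equation above
        out[:, i] = eeg[:, i] - eeg[:, others] @ g
    return out
```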

2.5. Feature Extraction and Classifiers

The selected EEG data were processed with a fast Fourier transform (FFT) to obtain the spectral power. Due to the difficulty of detecting the ERD phenomenon in real time in previously performed experiments, a novel methodology to characterize the cognitive process was used. This technique allows using data mining and more sophisticated classifiers instead of thresholds. The features were the sums of spectral power in three frequency bands, 8–12 Hz, 13–24 Hz and 25–30 Hz, computed with a 1-Hz resolution per electrode, which together cover the mu and beta bands, making a total of 48 features. Figure 4 shows the FFT output of selected pre-movement and resting data. It can be seen that, in pre-movement, the spectral power between 11 and 18 Hz is lower than during rest. These features were the inputs to several classifiers typically used in BCI [28,29]. This study analyzes support vector machine (SVM), k-nearest neighbor (k-NN) and naive Bayes (NB) classifiers implemented in MATLAB.
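The feature construction could be sketched as follows in Python (illustrative only). A 1-s window at 256 Hz yields FFT bins spaced 1 Hz apart, so each band sum simply adds the corresponding bins; the exact spectral normalization is an assumption, since the paper does not specify it.

```python
import numpy as np

BANDS = [(8, 12), (13, 24), (25, 30)]   # mu and beta sub-bands (Hz)

def band_power_features(window, fs=256):
    """window: array (n_samples, n_channels), a 1-s epoch (so FFT bins are 1 Hz apart).

    Returns three band-power sums per channel, i.e., 3 x 16 = 48 features for this montage.
    """
    n = window.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)                 # 0, 1, 2, ... Hz
    power = np.abs(np.fft.rfft(window, axis=0)) ** 2 / n   # power spectrum per channel
    sums = [power[(freqs >= lo) & (freqs <= hi)].sum(axis=0) for lo, hi in BANDS]
    return np.concatenate(sums)                            # shape: (3 * n_channels,)
```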

SVM is an approach whose objective is to find the best separation hyperplane, i.e., the one that provides the largest margin between the nearest points of the two classes. Typically, a convex quadratic programming (QP) problem [30] is solved to determine the SVM model, but in this paper, a least squares SVM (LS-SVM) [31] method was also used. Moreover, an alternative algorithm to solve the optimization problem in SVM, called sequential minimal optimization (SMO), has been applied [32]. Both LS-SVM and SMO reduce the computational complexity: the first transforms the convex QP problem into a set of linear equations, and the second subdivides the QP-SVM problem into smaller subproblems. Support vector machine classifiers are widely used in BCI systems, and they usually achieve great results [33].
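For readers who want to reproduce the SVM branch of the comparison, a hedged scikit-learn sketch is given below. It is not the authors' MATLAB implementation: scikit-learn's SVC solves the dual QP with an SMO-type solver (libsvm), LS-SVM would require a dedicated package, and the kernel and hyperparameters shown are assumptions, since the paper does not report them.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# SVC = SMO-type QP solver; kernel and C/gamma values here are illustrative choices.
svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
# svm_clf.fit(X_train, y_train)
# y_pred = svm_clf.predict(X_test)
```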

The k-NN classification rule is based on density estimation using the distance to the nearest neighbors [34]. This classifier is non-linear, and its computational complexity depends on the number of neighbors. Firstly, a training phase from a given population is done with k as the number of nearest neighbors used in the classification (in our case, k = 10, 20 and 30). The distance metric can be chosen among several options, such as the Euclidean or Hamming distance, among others.

The classification paradigm that uses Bayes's theorem in conjunction with the conditional independence hypothesis of the predictor variables given the class is known as naive Bayes [35]. This method computes the probability that a sample belongs to a class and assigns it to the most likely class by comparing a linear combination of the features with a threshold.
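The two remaining classifier families could be instantiated as follows (again an illustrative Python sketch rather than the authors' code). The Euclidean metric for k-NN matches one of the options mentioned above, while the Gaussian naive Bayes variant is an assumption, since the paper does not state which class-conditional model was used.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# k-NN with the three neighborhood sizes considered in the paper and a Euclidean metric
knn_clfs = {f"{k}-NN": KNeighborsClassifier(n_neighbors=k, metric="euclidean")
            for k in (10, 20, 30)}

# Naive Bayes; Gaussian class-conditional densities are assumed here
nb_clf = GaussianNB()
```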

3. Results and Discussion

In this section, seven classifiers are evaluated in order to choose the best one to detect the intention of an arm movement. The classifiers used were LS-SVM, QP-SVM, SMO, k-NN with 10, 20 and 30 neighbors, and naive Bayes. The discussion is based on three parameters: the true positive rate (TPR), the false positive rate (FPR) and the GAP (TPR divided by FPR), obtained in a six-fold cross-validation. Each run was used as a fold; every combination of five folds was used to train a classifier, and the remaining fold was used to test it. The parameters of each user, calculated over all iterations of the cross-validation, and the average over all users are shown in Table 1. In Figure 5, a bar plot with the results of the best user (A) and the worst user (F) in terms of GAP is also shown. The values of TPR and FPR were calculated as follows:

\mathrm{TPR} = \frac{\text{pre-movement samples detected as pre-movement}}{\text{total pre-movement samples in the test set}} \times 100
\mathrm{FPR} = \frac{\text{resting samples detected as pre-movement}}{\text{total resting samples in the test set}} \times 100
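As a rough illustration of this evaluation scheme (not the authors' MATLAB code), the sketch below computes TPR, FPR and GAP with a run-wise six-fold cross-validation using scikit-learn's LeaveOneGroupOut. The function name tpr_fpr_gap and the label coding (1 = pre-movement, 0 = rest) are assumptions made for this example.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

def tpr_fpr_gap(clf, X, y, runs):
    """Run-wise six-fold cross-validation.

    X    : array (n_epochs, 48) of band-power features
    y    : array (n_epochs,) of labels (1 = pre-movement, 0 = rest)
    runs : array (n_epochs,) with the run index (1-6) of each epoch
    """
    tprs, fprs = [], []
    for train, test in LeaveOneGroupOut().split(X, y, groups=runs):
        y_pred = clf.fit(X[train], y[train]).predict(X[test])
        y_true = y[test]
        tprs.append(100.0 * np.mean(y_pred[y_true == 1] == 1))
        fprs.append(100.0 * np.mean(y_pred[y_true == 0] == 1))
    tpr, fpr = float(np.mean(tprs)), float(np.mean(fprs))
    return tpr, fpr, tpr / fpr       # GAP = TPR / FPR
```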

In order to analyze the significant differences between classifiers, a statistical study using ANOVA for TPR and FPR has been done (Figure 6). According to the results obtained for TPR, there is no significant difference between classifiers (p-value = 0.8774 > 0.05). The most noticeable characteristic is the interquartile range: the SVM classifiers show a smaller range than the others, so the SVM family was more stable across users. Regarding FPR, there is a significant difference between classifiers (p-value = 0.0225 < 0.05). In order to identify the classifiers that obtain a significantly lower FPR index, an ANOVA for every pair of classifiers has been performed. The p-value of every combination is shown in Table 2. All SVM classifiers achieve significant differences compared to k-NN (p-value < 0.05) and somewhat less significant ones compared to NB (p-value < 0.1). The remaining combinations do not show significant differences. Therefore, it is not possible to single out one SVM classifier, but at least this family was better than the others. Then, an analysis of the TPR, FPR and GAP indexes obtained was done.
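The statistical comparison could be reproduced along these lines with SciPy. The data layout (one FPR value per user and classifier) and the function name fpr_anova are assumptions; note that a one-way ANOVA on two groups, as used for the pairwise comparisons of Table 2, is equivalent to a two-sample t-test.

```python
from itertools import combinations
from scipy.stats import f_oneway

def fpr_anova(fpr_by_clf):
    """fpr_by_clf: dict mapping classifier name -> list of per-user FPR values."""
    names = list(fpr_by_clf)
    _, p_all = f_oneway(*(fpr_by_clf[n] for n in names))
    print(f"FPR, all classifiers: p = {p_all:.4f}")
    for a, b in combinations(names, 2):          # pairwise comparisons, as in Table 2
        _, p = f_oneway(fpr_by_clf[a], fpr_by_clf[b])
        print(f"{a} vs {b}: p = {p:.3f}")
```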

3.1. True Positive Rate (TPR)

Considering the TPR obtained for each subject, it is possible to see that all users reached almost 70% accuracy, although each one achieved his best mark with a different classifier. Moreover, the standard deviations have reasonable values in all results, since the training data were not equally good in all iterations of the cross-validation. Users A, B and D had their best performance with some kind of k-NN, and their deviations were really small: 85.2% ± 3.5% with 20-NN, 81.9% ± 3.6% with 30-NN and 76.9% ± 3.7% with 10-NN, respectively. In the case of User F, the best classifier was NB with 86.7% ± 5.6%, and Users C and E reached 71.0% ± 10.4% and 74.3% ± 5.4% with SMO, respectively. Focusing on the average over all users, QP-SVM achieved 71.68% ± 6.2% accuracy, above the other classifiers. This value also has one of the smallest deviations, which indicates that the training data affected its results less than those of other classifiers.

3.2. False Positive Rate (FPR)

As can be observed, the FPR for all users and classifiers was above 17%, and in some cases, it reached up to 70%. This is one aspect to improve in further research. The ultimate goal of this research is to use the output of the classifier to command an external system that moves the user's arm depending on whether the user wants to move it or not. Therefore, if the classifier detects pre-movement while the user wants to remain at rest, the engines of the exoskeleton or the FES will be activated, and the user will perform an unintentional movement. This would be an inconvenience to the subject and would surely be counter-productive in the process of rehabilitation of the upper limb. However, if the classifier classifies pre-movement samples as resting, subjects will only need to try again.

3.3. GAP

The GAP is an index that indicates the ratio between TPR and FPR. This parameter should be higher than one; the larger the index, the better the classification. Most users obtained the best value of this parameter with some kind of SVM classifier. Almost all of them reached a value of two for this index, which indicates that the sums of power selected as features of the classifier are at least partly discriminative for classifying the data correctly. It is also true that for some users, like C, E and, above all, F, this value should be higher and reach at least three to be considered in future experiments, because the classifier requires the minimum possible FPR without a poor TPR. The average of the GAP index is best for LS-SVM and QP-SVM, with equal values (2.8).

4. Conclusions

This research has been done in order to select the most suitable classifier, among three families of classifiers commonly used in BCI research, to detect the intention of an arm movement. Our final objective is to help disabled people perform this movement during rehabilitation with an exoskeleton attached to the upper limb. If the system is able to predict each movement with a really low FPR, the classifier output will be used as a command to activate the engines or the FES system of the exoskeleton, moving the user's arm. The relationship between the cognitive process of performing such movements and the real movements could improve rehabilitation due to neuroplasticity.

In summary, SVM classifiers obtain the best results, with a TPR and FPR of around 70% and 28%, respectively. Although these values are good for these initial tests, the FPR is a bit high. In other works related to the detection of the intention to perform a wrist, an ankle and a reaching movement, the true positive rates are on average 52% [17], 82.5% [18] and 80% [19], respectively. Those studies used the ERD or BP phenomena, and the results were reported in different ways. Therefore, the TPR obtained in this study is a good starting point, but the false positive rate in other studies is around 10%, so greater efforts should be made to minimize this index. It is important that the classifier detects fewer resting samples as pre-movement in order to be efficient in a rehabilitation process; an undesired movement might not improve rehabilitation or could even worsen it. In this study, LS-SVM is the best option, since it provides the minimum rate of false positives. However, only six users performed the test, and a wider population would make it possible to achieve more reliable results. In any case, SVM classifiers behave better than the other families.

In future works, new tests with healthy subjects will be performed with some possible improvements in data processing in order to decrease the FPR. EEG signals usually contain artifacts, e.g., produced by eye blinks or eye movements, that should be removed. Moreover, as the users have to perform movements during the experiment, the EEG signals could be affected by muscle artifacts, and these should be carefully considered [36]. Furthermore, a process based on independent component analysis (ICA) [37], principal component analysis (PCA) or even linear discriminant analysis (LDA) [38] should be tested to automatically extract the features of the EEG signals related to the movement onset. Other combinations of frequency sums in the mu and beta bands and more kinds of frequency and spatial filters, such as the common average reference (CAR), should be tested. Additionally, a Welch method to obtain the power spectra could improve the results obtained with the FFT. Afterwards, real-time tests will be carried out to validate the processing and the classifiers. In this sense, the idea is to process the data every 0.5 s with a one-second sliding window; any SVM classifier is quick enough to make a decision in this time interval. Furthermore, a new setup with 32 electrodes placed mainly over the parietal and frontal lobes will be used, since 16 electrodes might not be enough to remove the noise of the EEG signals properly. Moreover, they might not be in the best positions to capture the ERD phenomenon, since its origin is expected to be the motor cortex. Finally, healthy and disabled people will use the best processing and classifier in order to command an exoskeleton attached to the upper limb in an emulated rehabilitation test.
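As a sketch of two of the proposed improvements (Welch spectra as an alternative to the raw FFT, and a 1-s window slid every 0.5 s), a pseudo-online loop could look as follows. This is again an illustrative Python example under our own assumptions (function names, the nperseg segment length, and a pre-trained classifier passed in as clf), not a description of the planned implementation.

```python
import numpy as np
from scipy.signal import welch

BANDS = ((8, 12), (13, 24), (25, 30))

def welch_band_features(window, fs=256):
    """Welch-based alternative to the raw-FFT band sums, on the same 1-s window."""
    freqs, psd = welch(window, fs=fs, nperseg=128, axis=0)   # psd: (n_freqs, n_channels)
    return np.concatenate([psd[(freqs >= lo) & (freqs <= hi)].sum(axis=0)
                           for lo, hi in BANDS])

def sliding_decisions(eeg, clf, fs=256, step_s=0.5):
    """Pseudo-online loop: a 1-s window slid every 0.5 s, classified with a trained model."""
    hop, win = int(step_s * fs), fs
    return [clf.predict(welch_band_features(eeg[t:t + win], fs)[None, :])[0]
            for t in range(0, len(eeg) - win + 1, hop)]
```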

Acknowledgments

This research has been funded by the Spanish Ministry of Economy and Competitiveness through the Brain2Motion project–Development of a Multimodal Brain-Neural Interface to Control an Exoskeletal–Neuroprosthesis Hybrid Robotic System for the Upper Limb (DPI2011-27022-C02-01), and by Conselleria d'Educació, Cultura i Esport of Generalitat Valenciana of Spain, through Grants VALi+d ACIF/2012/135 and FPA/2014/041.

Author Contributions

Daniel Planelles is responsible for the design and implementation of the algorithms presented in the manuscript. Together with Daniel Planelles, Enrique Hortal took part in acquisition, analysis and interpretation of data. Álvaro Costa contributed with programming code and discussions. Both Andrés Úbeda and Eduardo Iáñez supervised the work and contributed with the revision process. José M. Azorín contributed with valuable feedback on technologies and final critical approval of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hintermüller, C.; Kapeller, C.; Edlinger, G.; Guger, C. BCI Integration: Application Interfaces. In Brain-Computer Interface Systems–Recent Progress and Future Prospects; Fazel-Rezai, R., Ed.; InTech: Rijeka, Croatia, 2013; pp. 21–41. [Google Scholar]
  2. Chapin, J.K.; Moxon, K.A.; Markowitz, R.S.; Nicolelis, M.A.L. Real-Time Control of a Robot Arm Using Simultaneously Recorded Neurons in the Motor Cortex. Nat. Neurosci. 1999, 2, 664–670. [Google Scholar]
  3. Serruya, M.D.; Hatsopoulos, N.G.; Paninski, L.; Fellows, M.R.; Donoghue, J. Instant Neural Control of a Movement Signal. Nature 2002, 416, 141–142. [Google Scholar]
  4. Andersen, R.A.; Cui, H. Intention, Action Planning, and Decision Making in Parietal-Frontal Circuits. Neuron 2009, 63, 568–583. [Google Scholar]
  5. Allison, B.Z.; Pineda, J.A. ERPs Evoked by Different Matrix Sizes: Implications for a Brain Computer Interface (BCI) System. IEEE Trans. Neural Syst. Rehabil. Eng. 2003, 11, 110–113. [Google Scholar]
  6. Muller, S.M.T.; Bastos-Filho, T.F.; Sarcinelli-Filho, M. Using a SSVEP-BCI to command a robotic wheelchair. Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE), Gdansk, Poland, 27–30 June 2011; pp. 957–962.
  7. Iturrate, I.; Antelis, J.M.; Kubler, A.; Minguez, J. A Noninvasive Brain-Actuated Wheelchair Based on a P300 Neurophysiological Protocol and Automated Navigation. IEEE Trans Robot. 2009, 25, 614–627. [Google Scholar]
  8. Sirvent, J.L.; Iáñez, E.; Úbeda, A.; Azorín, J.M. Visual evoked potential-based brain-machine interface applications to assist disabled people. Expert Syst. Appl. 2012, 39, 7908–7918. [Google Scholar]
  9. Hortal, E.; Úbeda, A.; Iáñez, E.; Azorín, J.M. Control of a 2 DoF Robot Using a Brain-Machine Interface. Comput. Methods Progr. Biomed. 2014, 116, 169–176. [Google Scholar]
  10. Iáñez, E.; Azorín, J.M.; Úbeda, A.; Ferrández, J.M.; Fernández, E. Mental Tasks-Based Brain-Robot Interface. Robot. Auton. Syst. 2010, 58, 1238–1245. [Google Scholar]
  11. Ferreira, A.; Bastos-Filho, T.F. Improvements of a Brain-Computer Interface Applied to a Robotic Wheelchair. Proceedings of the International Joint Conference, BIOSTEC, Porto, Portugal, 14–17 January 2009; pp. 64–73.
  12. Bronzino, J.D. Principles of Electroencephalography. In The Biomedical Engineering Handbook; CRC Press LLC: Boca Raton, FL, USA, 2000. [Google Scholar]
  13. Babiloni, C.; Carducci, F.; Cincotti, F.; Rossini, P.M.; Neuper, C.; Pfurtscheller, G.; Babiloni, F. Human Movement-Related Potentials vs. Desynchronization of EEG Alpha Rhythm: A High-Resolution EEG Study. NeuroImage 1999, 10, 658–665. [Google Scholar]
  14. Shibasaki, H.; Hallett, M. What is the Bereitschaftspotential? Clin. Neurophysiol. 2006, 117, 2341–2356. [Google Scholar]
  15. Pfurtscheller, G.; Lopes da Silva, F.H. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857. [Google Scholar]
  16. Bai, O. Prediction of human voluntary movement before it occurs. Clin. Neurophysiol. 2011, 122, 364–372. [Google Scholar]
  17. Ibáñez, J.; Serrano, J.I.; del Castillo, M.D.; Barrios, L.; Gallego, J.A.; Rocon, E. An EEG-Based Design for the Online Detection of Movement Intention. Proceedings of Advances in Computational Intelligence, Lecture Notes in Computer Science, Torremolinos-Málaga, Spain, 8–10 June 2011; Volume 6691, pp. 370–377.
  18. Niazi, I.K.; Jiang, N.; Tiberghien, O.; Nielsen, J.F.; Dremstrup, K.; Farina, D. Detection of movement intention from single-trial movement-related cortical potentials. J. Neural Eng. 2011. [Google Scholar] [CrossRef]
  19. Lew, E.; Chavarriaga, R.; Silvoni, S.; Millán, J.R. Detection of self-paced reaching movement intention from EEG signals. Front. Neuroeng. 2012, 5. [Google Scholar] [CrossRef]
  20. Velu, P.D.; de Sa, V.R. Single-trial classification of gait and point movement preparation from human EEG. Front. Neurosci. 2013, 7. [Google Scholar] [CrossRef]
  21. Pons, J.L. Rehabilitation Exoskeletal Robotics. Eng. Med. Biol. Mag. 2010, 29, 57–63. [Google Scholar]
  22. Pons, J.L. Wearable Robots: Biomechatronic Exoskeletons; Pons, J.L., Ed.; Wiley: Hoboken, NJ, USA, 2008. [Google Scholar]
  23. Maleki, A.; Shafaei, R.; Fallah, A. Musculo-Skeletal Model of Arm for FES Research Studies. Proceedings of the Cairo International Biomedical Engineering Conference, Cairo, Egypt, 18–20 December 2008; pp. 1–4.
  24. Gijbels, D. The Armeo Spring as training tool to improve upper limb functionality in multiple sclerosis: A pilot study. J. NeuroEng. Rehabil. 2011, 8. [Google Scholar] [CrossRef]
  25. Rocon, E.; Belda-Lois, J.M.; Ruiz, A.F.; Manto, M.; Moreno, J.C.; Pons, J.L. Design and Validation of a Rehabilitation Robotic Exoskeleton for Tremor Assessment and Suppression. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 367–378. [Google Scholar]
  26. Bortole, M.; Pons, J.L. Development of a Exoskeleton for Lower Limb Rehabilitation. In Converging Clinical and Engineering Research on Neurorehabilitation; Pons, J., Torricelli, D., Pajaro, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 85–90. [Google Scholar]
  27. McFarland, D.J.; McCane, L.M.; David, S.V.; Wolpaw, J.R. Spatial filter selection for EEG-based communication. Electroencephalogr. Clin. Neurophysiol. 1997, 103, 386–394. [Google Scholar]
  28. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A Review of Classification Algorithms for EEG-Based Brain-Computer Interfaces. J. Neural Eng. 2007, 4, R1. [Google Scholar]
  29. Bashashati, A.; Fatourechi, M.; Ward, R.K.; Birch, G.E. A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals. J. Neural Eng. 2007, 4, R32. [Google Scholar]
  30. Thome, A.C.G. SVM Classifiers-Concepts and Applications to Character Recognition. In Advances in Character Recognition; Ding, X., Ed.; InTech: Rijeka, Croatia, 2012; pp. 25–50. [Google Scholar]
  31. Suykens, J.A.K.; Vandewalle, J. Least Squares Support Vector Machine Classifiers. Neural Process. Lett. 1999, 9, 293–300. [Google Scholar]
  32. Shawe-Taylor, J.; Cristianini, N. Implementation Techniques. In Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  33. Li, X.; Chen, X.; Yan, Y.; Wei, W.; Wang, Z.J. Classification of EEG Signals Using a Multiple Kernel Learning Support Vector Machine. Sensors 2014, 14, 12784–12802. [Google Scholar]
  34. Mitchell, T.M. Instance-Based Learning. In Machine Learning; Mitchell, T.M., Ed.; McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  35. Theodoridis, S.; Koutroumbas, K. Classifiers Based on Bayes Decision Theory. In Pattern Recognition; Theodoridis, S., Koutroumbas, K., Eds.; Academic Press: Waltham, MA, USA, 2009. [Google Scholar]
  36. Ma, J.; Tao, P.; Bayram, S.; Svetnik, V. Muscle artifacts in multichannel EEG: Characteristics and reduction. Clin. Neurophysiol. 2012, 123, 1676–1686. [Google Scholar]
  37. Akhtar, M.T.; Mitsuhashi, W.; James, C.J. Employing spatially constrained ICA and wavelet denoising, for automatic removal of artifacts from multichannel EEG data. Signal Process. 2012, 92, 401–416. [Google Scholar]
  38. Friedman, J.H. Regularized Discriminant Analysis. J. Am. Stat. Assoc. 1989, 84, 165–175. [Google Scholar]

Figures and Tables

Figure 1. Placement of the electrodes according to the International 10/10 System.
Figure 2. Experimental set up. The subject was wearing a cap with 16 electrodes and he/she had to move the mouse forward/backward. EEG signals were collected and registered by the amplifier.
Figure 3. Description of the experiment. Resting position (bottom left) and final position of the movement (bottom right).
Figure 4. FFT performed on a single pre-movement (top) and resting (bottom) epoch of electrode Cz. In this case, the features in pre-movement are 0.1760, 0.2993 and 0.0749, and in the resting sample they are 0.1784, 0.5455 and 0.0722. In the pre-movement data, there is lower activity in the 13–24 Hz band sum than in the resting data.
Figure 5. True positive rate (TPR) and false positive rate (FPR) of User A (top) and User F (bottom) for each classifier.
Figure 6. ANOVA results for false positive rate (FPR) (top) and true positive rate (TPR) (bottom) over all users. p-value = 0.0225 and p-value = 0.8774, respectively.
Table 1. Results for all users and classifiers. TPR and FPR are in percentages, and GAP is the ratio of TPR to FPR. SMO, sequential minimal optimization; QP, quadratic programming; NB, naive Bayes.
Method    Index   User A       User B       User C       User D       User E       User F       All Users
LS-SVM    TPR     75.7 ± 4.3   77.4 ± 2.3   67.8 ± 6.9   70.2 ± 12.6  72.2 ± 3.2   65.3 ± 5.4   71.4 ± 5.8
          FPR     18.5 ± 10.8  26.8 ± 6.0   29.6 ± 7.0   18.8 ± 6.2   31.5 ± 7.0   42.7 ± 7.1   28.0 ± 7.4
          GAP     4.1          2.9          2.3          3.7          2.3          1.5          2.8
QP-SVM    TPR     78.5 ± 2.3   76.4 ± 2.8   69.2 ± 6.6   70.9 ± 11.7  70.8 ± 2.6   64.3 ± 5.1   71.7 ± 5.2
          FPR     17.3 ± 11.0  26.5 ± 5.7   31.2 ± 7.8   22.8 ± 2.7   30.4 ± 5.1   40.7 ± 5.0   28.1 ± 6.2
          GAP     4.5          2.9          2.2          3.1          2.3          1.6          2.8
SMO       TPR     70.8 ± 5.0   72.6 ± 6.1   71.0 ± 10.4  69.1 ± 11.9  74.3 ± 5.4   63.1 ± 6.0   70.1 ± 7.5
          FPR     18.8 ± 13.3  23.5 ± 5.2   29.1 ± 8.3   26.9 ± 5.1   32.6 ± 5.8   38.4 ± 4.6   28.2 ± 7.0
          GAP     3.8          3.1          2.4          2.6          2.3          1.6          2.6
10-NN     TPR     82.6 ± 6.4   62.2 ± 2.3   57.3 ± 7.4   76.9 ± 3.7   68.0 ± 3.3   61.4 ± 2.0   68.1 ± 4.2
          FPR     23.1 ± 12.8  42.3 ± 6.5   52.8 ± 10.3  45.4 ± 3.3   42.1 ± 5.1   36.1 ± 8.1   40.3 ± 7.7
          GAP     3.6          1.5          1.1          1.7          1.6          1.7          1.9
20-NN     TPR     85.2 ± 3.5   74.1 ± 1.3   56.8 ± 7.2   71.5 ± 4.9   71.0 ± 4.7   60.8 ± 2.2   69.9 ± 4.0
          FPR     21.7 ± 12.5  42.5 ± 3.7   60.2 ± 9.5   47.0 ± 6.3   43.6 ± 3.2   41.7 ± 6.6   42.8 ± 7.0
          GAP     3.9          1.7          0.9          1.5          1.6          1.5          1.9
30-NN     TPR     84.0 ± 3.1   81.9 ± 3.6   62.0 ± 5.4   76.5 ± 3.0   72.4 ± 3.1   68.1 ± 4.1   74.1 ± 3.7
          FPR     21.5 ± 10.6  50.6 ± 5.6   54.3 ± 11.9  47.1 ± 6.7   43.1 ± 5.4   48.6 ± 10.6  44.2 ± 8.5
          GAP     3.9          1.6          1.1          1.6          1.7          1.4          1.9
NB        TPR     73.3 ± 2.7   80.0 ± 6.9   59.2 ± 11.3  65.6 ± 9.4   71.5 ± 6.0   86.7 ± 5.8   72.7 ± 7.0
          FPR     19.9 ± 10.3  32.3 ± 3.8   41.6 ± 5.7   48.7 ± 5.3   40.6 ± 10.0  70.6 ± 7.4   42.3 ± 7.1
          GAP     3.7          2.5          1.4          1.4          1.8          1.2          2.0
Table 2. Significance of the differences in FPR between classifiers (p-values of pairwise ANOVAs).
          QP-SVM   SMO      10-NN    20-NN    30-NN    NB
LS-SVM    0.978    0.966    0.050    0.040    0.023    0.099
QP-SVM    X        0.988    0.043    0.035    0.020    0.095
SMO       X        X        0.035    0.030    0.016    0.090
10-NN     X        X        X        0.709    0.549    0.809
20-NN     X        X        X        X        0.844    0.954
30-NN     X        X        X        X        X        0.826
