Proceeding Paper

Study of Machine Learning Techniques for EEG Eye State Detection †

CITIC Research Center & University of A Coruña, Campus de Elviña, A Coruña 15071, Spain
* Author to whom correspondence should be addressed.
Presented at the 3rd XoveTIC Conference, A Coruña, Spain, 8–9 October 2020.
Proceedings 2020, 54(1), 53; https://doi.org/10.3390/proceedings2020054053
Published: 31 August 2020
(This article belongs to the Proceedings of 3rd XoveTIC Conference)

Abstract
This paper presents a comparison of different machine learning techniques for eye state identification from Electroencephalography (EEG) signals. (1) Background: We extend our previous work by studying several techniques for extracting the features corresponding to the mental states of open and closed eyes and for their subsequent classification. (2) Methods: A prototype developed by the authors is used to capture the brain signals. We consider the Discrete Fourier Transform (DFT) and the Discrete Wavelet Transform (DWT) for feature extraction; Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) for state classification; and Independent Component Analysis (ICA) for preprocessing the data. (3) Results: The results obtained from seven subjects show the good performance of the proposed methods. (4) Conclusion: Combining several techniques allows us to achieve high accuracy in eye state identification.

1. Introduction

Over the past decades, several studies have shown that each eye state is associated with specific activity patterns in certain brain rhythms. For instance, it has been demonstrated that alpha power increases while the eyes are closed and drops significantly when subjects open their eyes, whereas beta band power does not show relevant differences between the two eye states [1]. Taking these studies into account, in our previous work [2] we proposed a threshold-based classification system for detecting open eyes (oE) and closed eyes (cE) from Electroencephalography (EEG) signals. For this purpose, we employed the mean power ratio between the alpha (8–12 Hz) and beta (13–17 Hz) frequency bands to identify the user's eye state. The system achieved an accuracy higher than 95% for both eye states with a classification delay of 2 s.
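As a rough illustration of this earlier threshold rule, the sketch below computes the mean alpha and beta band power of a signal window via the DFT and compares their ratio against a threshold. The sampling rate, window length and threshold value are illustrative assumptions, not the prototype's actual settings:

```python
import numpy as np

def band_power(window, fs, f_lo, f_hi):
    """Mean power of `window` within [f_lo, f_hi] Hz, computed via the DFT."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].mean()

def classify_eye_state(window, fs=128.0, threshold=1.0):
    """Threshold rule: alpha power rises with closed eyes, so a large
    alpha/beta power ratio suggests the cE state. The threshold value
    here is illustrative, not the one used in the prototype."""
    alpha = band_power(window, fs, 8.0, 12.0)   # alpha band (8-12 Hz)
    beta = band_power(window, fs, 13.0, 17.0)   # beta band (13-17 Hz)
    return "cE" if alpha / beta > threshold else "oE"
```

With alpha activity dominating the window, the ratio exceeds the threshold and the window is labeled cE; otherwise it is labeled oE.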
In this paper, we extend our previous study by considering an EEG device with two sensors and different techniques for feature extraction, namely Independent Component Analysis (ICA), the Discrete Wavelet Transform (DWT) and the Discrete Fourier Transform (DFT). Moreover, two classifiers widely used for EEG, Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA), are also tested and assessed [3].

2. Material and Methods

The same EEG device described in [2] is employed to record the brain signals of the subjects using two electrodes, located at positions O1 and O2 according to the 10–20 International System. The experimental group comprised 7 male volunteers who agreed to participate in the research, with a mean age of 29.67 years (range 24–56). All of them reported no hearing or visual impairments. Participation was voluntary, and informed consent was obtained from each participant to use their EEG data in our study.
A 10 min recording was performed for each subject: 5 min of oE and the remaining 5 min of cE. Of these, 4 min (2 min per eye state) are employed for the training step, and the remaining 6 min (3 min per eye state) are employed for the test step.
The mean power of the alpha and beta bands, denoted by α and β, respectively, is computed for each sensor. From these values, we extract their relative power ratio, given by R = β/α, as a feature to predict the eye state. For this purpose, three different approaches are compared. First, the DFT is applied to the raw EEG data to compute α and β and their ratio. Second, the JADE algorithm [4] is used to extract the independent components from the original data prior to feature extraction with the DFT. Finally, the DWT with 4 levels of decomposition and coif4 as mother wavelet is applied to the raw data; in this approach, the detail coefficients of levels 3 (D3) and 4 (D4) are used to compute the ratio R, due to their equivalence to the beta and alpha rhythms, respectively.
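As one concrete example of the third approach, the following sketch computes R from the DWT detail coefficients using the PyWavelets library. The sampling rate is our assumption (the frequency bands covered by D3 and D4 depend on it; around 200 Hz, D3 spans roughly 12.5–25 Hz and D4 roughly 6.25–12.5 Hz), so the prototype's exact settings may differ:

```python
import numpy as np
import pywt  # PyWavelets

def dwt_ratio(window, wavelet="coif4", level=4):
    """Ratio R = beta/alpha from DWT detail coefficients.

    Assumes a sampling rate near 200 Hz so that D3 approximates the
    beta rhythm and D4 the alpha rhythm, as described in the text."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    # wavedec returns [cA4, cD4, cD3, cD2, cD1]
    d4, d3 = coeffs[1], coeffs[2]
    beta = np.mean(d3 ** 2)   # mean power of D3 ~ beta rhythm
    alpha = np.mean(d4 ** 2)  # mean power of D4 ~ alpha rhythm
    return beta / alpha
```

A beta-dominated window then yields a larger R than an alpha-dominated one, mirroring the DFT-based ratio.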
In addition, according to our previous results [2], overlapping windows are more appropriate for eye state identification, since they improve system performance and reduce detection times. Therefore, the feature extraction techniques are applied over 10 s windows with an 80% overlap.
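The windowing step can be sketched as follows; the sampling rate is an illustrative assumption, while the 10 s window length and 80% overlap are taken from the text (an 80% overlap means a new window starts every 2 s):

```python
import numpy as np

def sliding_windows(signal, fs, win_s=10.0, overlap=0.8):
    """Segment `signal` into windows of `win_s` seconds with the given
    fractional overlap (0.8 -> consecutive windows share 8 of 10 s)."""
    win = int(win_s * fs)
    step = int(round(win * (1.0 - overlap)))
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])
```

Each row of the returned array is one window, to which the feature extraction above would be applied.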
For eye state estimation, the SVM and LDA classification algorithms are assessed. To avoid bias, each experiment is repeated ten times with a cross-validation process, i.e., a different combination of training and test recordings is used in each repetition.
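A minimal sketch of this evaluation scheme, using scikit-learn, is shown below. The synthetic features and the 5-fold split are illustrative assumptions standing in for the real per-window ratios R and the paper's actual train/test partitions; only the two classifiers and the ten repetitions come from the text:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def repeated_cv_accuracy(clf, X, y, repeats=10, folds=5):
    """Mean accuracy over `repeats` cross-validation runs, each with a
    different shuffled split, mirroring the ten repetitions above."""
    scores = []
    for run in range(repeats):
        cv = StratifiedKFold(folds, shuffle=True, random_state=run)
        scores.append(cross_val_score(clf, X, y, cv=cv).mean())
    return float(np.mean(scores))

# Illustrative synthetic features standing in for the ratio R per sensor.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.6, 0.1, (50, 2)),    # cE-like windows
               rng.normal(1.4, 0.1, (50, 2))])   # oE-like windows
y = np.array([0] * 50 + [1] * 50)

acc_svm = repeated_cv_accuracy(SVC(kernel="rbf"), X, y)
acc_lda = repeated_cv_accuracy(LinearDiscriminantAnalysis(), X, y)
```

Averaging over repeated shuffled splits reduces the dependence of the reported accuracy on any single train/test partition.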

3. Experimental Results

The performance of each feature extraction technique is analyzed for both eye states. Table 1 shows the mean accuracy obtained for all subjects by the classifiers and the three feature sets.

4. Discussion and Conclusions

The study presented in this paper is an extension of our previous work on EEG eye state classification. The results show that SVM offers the best performance for all subjects and feature sets for oE, whereas LDA outperforms SVM for cE. Moreover, DWT yields lower accuracies than the other two techniques. In this regard, JADE combined with DFT seems to provide the highest accuracies, for oE using SVM and for cE using LDA. Therefore, it can be concluded that including an ICA algorithm in the feature extraction improves the overall system performance. However, a clear choice between SVM and LDA cannot be made, since both methods provide similar results.

Author Contributions

F.L. and F.J.V.-A. implemented the software used in this paper and performed the experiments; F.L. and D.I. analyzed the data; P.M.C. and A.D. designed the experiments and led the research. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been funded by the Xunta de Galicia (ED431G2019/01), the Agencia Estatal de Investigación of Spain (TEC2016-75067-C4-1-R) and ERDF funds of the EU (AEI/FEDER, UE), and the predoctoral Grant No. ED481A-2018/156 (Francisco Laport).

Abbreviations

The following abbreviations are used in this manuscript:
DFT	Discrete Fourier Transform
DWT	Discrete Wavelet Transform
ICA	Independent Component Analysis
LDA	Linear Discriminant Analysis
SVM	Support Vector Machine

References

  1. Barry, R.J.; Clarke, A.R.; Johnstone, S.J.; Magee, C.A.; Rushby, J.A. EEG differences between eyes-closed and eyes-open resting conditions. Clin. Neurophysiol. 2007, 118, 2765–2773.
  2. Laport, F.; Dapena, A.; Castro, P.M.; Vazquez-Araujo, F.J.; Iglesia, D. A Prototype of EEG System for IoT. Int. J. Neural Syst. 2020, 2050018.
  3. Ortiz-Rosario, A.; Adeli, H. Brain-computer interface technologies: From signal to action. Rev. Neurosci. 2013, 24, 537–552.
  4. Cardoso, J.F.; Souloumiac, A. Blind beamforming for non-Gaussian signals. IEE Proc. F Radar Signal Process. 1993, 140, 362–370.
Table 1. Comparison of the classification accuracies (%). Bold values indicate the best result for each subject.
(a) Accuracy for oE

          DFT             coif4           JADE+DFT
Subject   SVM     LDA     SVM     LDA     SVM     LDA
1         99.52   97.71   98.55   94.58   100.00  98.31
2         90.96   88.43   81.81   81.20   93.61   90.72
3         99.88   95.90   97.47   96.02   100.00  93.73
4         97.95   89.40   85.30   82.77   85.90   81.33
5         96.63   96.51   70.00   66.51   97.83   96.99
6         89.04   70.12   93.25   84.58   95.54   73.25
7         94.70   83.98   85.30   80.72   93.73   88.31

(b) Accuracy for cE

          DFT             coif4           JADE+DFT
Subject   SVM     LDA     SVM     LDA     SVM     LDA
1         100.00  100.00  98.43   100.00  100.00  100.00
2         87.83   89.88   77.83   82.89   91.45   92.65
3         100.00  100.00  98.67   100.00  99.76   100.00
4         97.71   98.92   86.02   91.81   82.05   88.31
5         96.02   98.43   76.87   79.64   95.54   97.95
6         89.40   96.51   94.22   98.19   93.49   99.52
7         96.27   100.00  84.22   89.76   94.58   99.40