Article

EEG-Based BCI System to Detect Fingers Movements

College of Computer and Information Sciences, King Saud University, Riyadh 12372, Saudi Arabia
* Author to whom correspondence should be addressed.
Brain Sci. 2020, 10(12), 965; https://doi.org/10.3390/brainsci10120965
Submission received: 27 September 2020 / Revised: 24 November 2020 / Accepted: 8 December 2020 / Published: 10 December 2020

Abstract

Advances in assistive technologies will go a long way toward restoring the mobility of paralyzed and/or amputated limbs. Herein, we propose a system that adopts brain-computer interface technology to control prosthetic fingers using brain signals. To predict the movements of each finger, complex electroencephalogram (EEG) signal processing algorithms must be applied to remove outliers, extract features, and handle the five human fingers separately. The proposed method therefore addresses a multi-class classification problem. Our machine learning strategy is built on an ensemble of one-class classifiers, each dedicated to predicting the intention to move a specific finger. Regions of the brain that are sensitive to finger movements are identified and located. The average accuracy of the proposed EEG signal processing chain reached 81% for five subjects. Unlike the majority of existing prototypes, which allow only a single finger to be controlled and only one movement to be performed at a time, the proposed system enables multiple fingers to move simultaneously. Although the proposed system classifies five tasks, the obtained accuracy remains high when compared with that of binary classification systems. The proposed system contributes to the advancement of a novel prosthetic solution that allows people with severe disabilities to perform daily tasks easily.

1. Introduction

In 2006, the United Nations adopted the “Convention on the Rights of Persons with Disabilities” (UNCRPD), which recognizes autonomy and independent living as basic human rights [1]. In line with this treaty, the system proposed in this paper aims to help people with severe disabilities restore the mobility of their fingers using Brain-Computer Interface (BCI) technology. BCI technology assists people suffering from severe disabilities in everyday activities by offering an affordable alternative pathway to regain communication and interaction with their environment. Over the past decade, BCI technology has developed rapidly and has been applied to a wide range of fields, especially rehabilitation engineering and functional substitution [2,3,4,5,6].
Different types of bio-signals, such as EEG [7,8,9,10], Magnetoencephalography (MEG) [11,12], Electrocorticogram (ECoG) [13,14,15], Functional Near-Infrared Spectroscopy (fNIRS) [16], and Electromyography (EMG) [17], have been used to design BCI systems that decode finger movements. EEG is the most commonly used technology for acquiring brain signals in BCI systems due to its noninvasive nature, low cost, and high portability [8]. EEG signals corresponding to motor imagery (MI) are characterized in the frequency domain as brain activity triggered by muscular contractions or by thinking about specific tasks. Sensorimotor rhythm (SMR) activity is registered during motor imagery and allows the design of so-called motor-imagery BCIs (MI BCIs). SMRs appear at well-defined locations according to a well-known functional map that links SMRs to the brain regions where they arise [18].
Distinct brain activity patterns related to the movements of the five fingers must be identified to control prostheses. Discriminating finger movements is a complex task due to physiological and non-physiological artifacts in the EEG signals [19]. Unlike movements performed by different parts of the human body, movements of different fingers of the same hand activate relatively small and close regions of the sensorimotor cortex [8]. Furthermore, such applications require an advanced classification strategy that addresses multi-class classification problems. Therefore, all artifacts must be carefully removed, the signal-to-noise ratio must be enhanced, the relevant channels containing useful and non-redundant information must be selected, and an appropriate classification strategy must be applied to discriminate between the different tasks with a high degree of accuracy.
The vast majority of existing EEG-based BCI systems analyze brain activity to identify a single finger movement. Such systems are not reliable enough to control prostheses and robotic hands designed to perform tasks requiring various skills [8]. Ref. [7] analyzed event-related desynchronization (ERD) topography during left or right index finger movement. A degree-based feature extraction algorithm built on graph theory was proposed, together with a Support Vector Machine (SVM), to classify two kinds of movement: left or right index finger movement. The average accuracy of the system was about 62%. Ref. [8] proposed decoding finger movements, including four thumb-related movements as well as the flexion and extension movements of the index, middle, ring, and little fingers. The system predicts the movements using the Choi-Williams distribution and a two-layer classification framework, and obtained an average classification accuracy, computed over the four fingers across all subjects, of 43.5%. Ref. [9] proposed another system that applies the common spatial pattern (CSP) algorithm to extract features and four classifiers, namely random forest, SVM, k-nearest neighbors (kNN), and linear discriminant analysis (LDA), to discriminate between trials. The maximum average accuracy reached by these classifiers is about 54%. Ref. [10] proposed using principal component analysis (PCA), power spectral densities (PSD), and an SVM with a radial basis function (RBF) kernel. The average accuracy of the system was about 77%.
Nevertheless, only a few BCI systems have been proposed in the literature to decode the movements performed by each finger of the same hand [10] or the wrist/grasp-related movements of the same hand [8,20]. Recently, investigations have begun into new algorithms that allow researchers to decode the simultaneous movements of the five fingers.
Previous EEG-based studies [7,8,9,10] usually extracted features from the same group of electrodes for all subjects and all finger movements, whereas neuroscience studies indicate that brain activity is unique to each person. Moreover, brain activity differs between the left and right brain hemispheres during the same motor imagery [7,21]. Thus, there is a growing need to analyze brain activity during finger movements and to identify the electrodes showing relevant and significant changes. We present a novel statistical approach to identify, for each subject, the electrodes that are relevant to the motor imagery task of each finger. The selected electrodes were used as sources from which to extract relevant features. The experimental results show that the proposed method is highly competitive compared with existing studies on multi-class finger movement classification. Five motor imagery tasks are considered in this work, namely movements of the thumb, index, middle, ring, and pinky fingers.
The remainder of this paper is organized as follows. In Section 2, the methodology is presented and the EEG signal acquisition and signal processing chain are described. In Section 3, the experimental results are presented. Section 4 discusses the results, draws a conclusion, and outlines future work.

2. Materials and Methods

The experiments carried out in this paper were approved by the Targeted Research Program committee—Disability Research Grant Program at King Abdulaziz City for Science and Technology (KACST) which granted funding for the research project No 5-18-03-001-0015-10827.
Our methodology addresses the requirements of a multi-class classification problem for a BCI system that decodes finger movements from EEG brain activity signals. Figure 1 presents the general structure of the proposed system. Firstly, our own dataset was created using a g.HIamp from g.tec, an 80-channel amplifier. To do this, volunteers were asked to move their fingers in random order, and every single finger movement was considered a single trial. The EEG signals corresponding to the trials of each subject were recorded during different sessions, and every trial was labeled distinctively. Afterwards, the recorded data were processed by removing artifacts to increase the signal-to-noise ratio (SNR) of the EEG signals. Subsequently, the set of electrodes whose signals are relevant to finger movements was identified. Then, the CSP algorithm was applied to extract the features corresponding to the finger movements. Finally, an ensemble of one-class classifiers was used to decode the five finger movements, with each one-class classifier trained to detect the movements of a given finger. To avoid over-fitting, every classifier was trained on one portion of the dataset and tested on another. The accuracy metric was used to assess the performance of every one-class classifier.

2.1. Data Acquisition

2.1.1. EEG Signal Acquisition

The EEG signals were recorded using a g.tec g.HIamp amplifier. The signals were captured through 64 electrodes placed on the scalp according to the international 10–20 localization system. The sampling frequency was set to 256 Hz, and a Chebyshev band-pass filter was applied to keep the frequency components between 1 Hz and 60 Hz.
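As a rough illustration of this acquisition-side filtering, the following Python sketch applies a Chebyshev band-pass filter between 1 Hz and 60 Hz to signals sampled at 256 Hz. The filter order and passband ripple are not reported in the paper, so the values below are illustrative assumptions.

```python
# Minimal sketch of the 1-60 Hz Chebyshev band-pass stage (order and ripple assumed).
import numpy as np
from scipy.signal import cheby1, filtfilt

FS = 256  # sampling frequency (Hz)

def bandpass_1_60(eeg, order=4, ripple_db=0.5):
    """Chebyshev type-I band-pass filter keeping the 1-60 Hz components.

    eeg: array of shape (n_channels, n_samples).
    """
    b, a = cheby1(order, ripple_db, [1.0, 60.0], btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg, axis=-1)  # zero-phase filtering
```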

2.1.2. Experimental Paradigm

The data used in this project were recorded locally and consist of EEG signals from 5 KSU volunteers aged between 21 and 23 years. All subjects were male and right-handed. During the experiment, subjects sat on a comfortable chair with their right arm resting on a table to avoid muscle fatigue. We recorded EEG signals from every subject over multiple sessions. During every session, the subjects were asked to perform individual finger movements according to the automated scenario program interface (SPI). The subjects performed actions corresponding to the movements of the five fingers using only their right hand. All movements started from a neutral position, with the hand open, the lower arm extended to around 120 degrees, and the thumb placed on the inner side. Subjects were requested to execute sustained movements during every trial. For each finger, 180 trials were recorded over 3 to 4 sessions.

2.1.3. Monitoring the Recording Sessions

Furthermore, an SPI was developed to guide the subjects during the recording sessions. It was responsible for displaying the instructions to the subjects on a screen. The main menu of the SPI indicates the general information related to each subject, the scenario mode, the duration of readiness, the duration of flexing, the duration of waiting, and the number of trials in each session. Once the EEG recording process was launched, a specific state machine with three phases was followed:
  • Get ready phase: during this phase, a random finger/limb movement was selected and the corresponding animated picture “gif file” was displayed by the scenario program on the screen.
  • Action phase: during this phase, the subject moved the selected finger/limb.
  • Rest phase: during this phase, the subject was in a resting state.
The timing within each trial was as follows: 2 s were allotted to each phase, i.e., the get-ready phase, the action phase, and the rest phase.

2.1.4. Labeling Signals

The recording process was carried out over four sessions. Each session was recorded continuously and then discretized into a sequence of six-second trials. The first 20 s of each session were discarded due to the BCI amplifier initialization delay. The EEG signals were subdivided into six-second intervals, where each epoch corresponds to one finger movement, and each interval was labeled with the corresponding label from the scenario program. This process was repeated for every session, and the processed sessions were concatenated to give each subject a fully labeled signal.
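A minimal sketch of this labeling step is given below, assuming each raw session is stored as a NumPy array of shape (n_channels, n_samples) and that the scenario program provides one label per 6-s trial; all names and shapes are illustrative.

```python
# Sketch of session epoching and labeling (names and shapes are assumptions).
import numpy as np

FS = 256               # sampling frequency (Hz)
TRIAL_SEC = 6          # one trial = one finger movement
INIT_DISCARD_SEC = 20  # amplifier initialization delay

def label_session(session, trial_labels):
    """Cut one continuous session into labeled 6-s trials."""
    session = session[:, INIT_DISCARD_SEC * FS:]        # drop the first 20 s
    samples_per_trial = TRIAL_SEC * FS
    n_trials = min(len(trial_labels), session.shape[1] // samples_per_trial)
    trials, labels = [], []
    for i in range(n_trials):
        seg = session[:, i * samples_per_trial:(i + 1) * samples_per_trial]
        trials.append(seg)
        labels.append(trial_labels[i])
    return np.stack(trials), np.array(labels)

# Sessions processed this way can then be concatenated per subject, e.g.:
# X = np.concatenate([X_s1, X_s2]); y = np.concatenate([y_s1, y_s2])
```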

2.2. Artifacts Removal

After labeling the EEG signals, a filtering block was applied to remove artifacts so that only the frequency components related to the intention of finger movement were kept. These frequency components are typically located between 8 Hz and 30 Hz [22]. Thus, a 4th-order finite impulse response filter was applied, removing the frequency components outside this band while preserving a zero-phase response [23]. Subsequently, a common average referencing technique was applied, in which the average signal over all electrodes is computed and subtracted from the EEG signal at every electrode for each time point. This step allows discrimination between positive and negative peaks in the EEG signals and helps locate the signal sources in a noisy environment, leading to an improvement of the SNR [24]. The EEG signals were re-referenced according to Equation (1).
$T_{CAR}(n) = T(n) - \frac{1}{E} \sum_{k=1}^{E} T(k)$    (1)
where $E$ is the total number of electrodes used in the recording of one trial $T$, and $T(k)$ is the EEG signal at electrode $k$.
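A direct NumPy sketch of Equation (1), under the assumption that a trial is stored as an array of shape (n_electrodes, n_samples), could look as follows.

```python
# Common average referencing (Equation (1)) as a NumPy sketch.
import numpy as np

def common_average_reference(trial):
    """Subtract the instantaneous mean over all electrodes from every electrode.

    trial: array of shape (n_electrodes, n_samples).
    """
    return trial - trial.mean(axis=0, keepdims=True)
```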

2.3. Selection of Relevant Electrodes

Unlike existing EEG-based studies on finger movement detection, which extract features from a predefined set of electrodes, this work proposes an adaptive method to analyze the brain activity during finger movements and to identify the electrodes that show relevant changes. The identified electrodes are used as signal sources from which relevant features are extracted before the classification step.

2.3.1. Annotations

  • An electrode (e) is an electrical conductor used to acquire brain signals.
  • E is the set of electrodes (e) on a cap.
  • A motor imagery (m), also called the motor imagery task, is a mental process by which an individual simulates a given movement action.
  • Ψ is a set of motor imagery tasks. The motor imagery tasks which were considered here are the imaginary movements of the thumb, index, middle, ring and pinky fingers.
  • A trial (t) is a set of brain signals that are recorded with a set of electrodes E during a given motor imagery task m.
  • θ is a set of trials, $\theta = \{ t_i \}$.
  • ς is a set of subjects, from each of which a set of trials was recorded.
  • A rest is a set of brain signals, recorded using a set of electrodes E, that corresponds to the mental state during a resting period. In this study, the resting period corresponds to the portion of a trial t recorded during the 0–1 s interval of t; it is denoted $t[0..1]$.

2.3.2. Preliminaries

The following basic functions are required for the selection of the relevant electrodes:
  • σ: For a given motor imagery task $m_i$, this function returns the subset of trials that were recorded during $m_i$. It is defined as follows:
    $\sigma: \Psi \to P(\theta)$
    $\sigma(m_i) = \{ t_j \in \theta \text{ such that } t_j \text{ was recorded during the motor imagery task } m_i \}$
  • δ: For a given subject $s_i$, this function returns the subset of trials that were recorded during the sessions of the subject $s_i$. It is defined as follows:
    $\delta: \varsigma \to P(\theta)$
    $\delta(s_i) = \{ t_j \in \theta \text{ such that } t_j \text{ was recorded during a session of the subject } s_i \}$
    $Pow(e, t)$ is the power, also called the energy, of the electrode e calculated from the trial t. It is computed from a spectral representation of the trial t obtained with the fast Fourier transform, according to the following expression:
    $Pow(e, t) = \sum_{f \in [8, 30]} FFT(t)[e, f]$
    where
    $FFT(t)[e, f] = \sum_{n=0}^{N-1} t[n, e] \, e^{-j 2 \pi f n / N}$
    The power spectrum of each trial obtained with the FFT is available online in the Supplementary Materials (Folder S1).
    $ERD/ERS(e, t, ref\_Power)$ is defined as the percentage of the power increase or decrease at the electrode e during the trial t relative to a reference power $ref\_Power$, according to the following expression (a short computational sketch of these two quantities is given after this list):
    $ERD\_ERS(e, t, ref\_Power) = \frac{Pow(e, t) - ref\_Power}{ref\_Power}$
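Both quantities can be computed directly from the FFT of a trial. The sketch below assumes trials of shape (n_electrodes, n_samples) sampled at 256 Hz and treats the summed spectral magnitude in the 8–30 Hz band as the band power; this interpretation is an assumption.

```python
# Band power and ERD/ERS of one electrode, following the definitions above.
import numpy as np

FS = 256  # sampling frequency (Hz)

def band_power(trial, electrode, fmin=8.0, fmax=30.0):
    """Pow(e, t): summed spectral magnitude of one electrode in the 8-30 Hz band."""
    spectrum = np.abs(np.fft.rfft(trial[electrode]))
    freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / FS)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return spectrum[mask].sum()

def erd_ers(trial, electrode, ref_power):
    """ERD/ERS: relative power change with respect to a reference power."""
    return (band_power(trial, electrode) - ref_power) / ref_power
```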

2.3.3. The Selection Models

Let’s define the following functions:
  • The function $\phi(s_i, m_j)$ calculates the subset of trials recorded during the sessions of the subject $s_i$ while performing the motor imagery task $m_j$. It is defined as follows:
    $\phi: \varsigma \times \Psi \to P(\theta)$
    $\phi(s_i, m_j) = \delta(s_i) \cap \sigma(m_j)$
  • The function $\tau(s_i, m_j, e_k)$ calculates the subset of $\phi(s_i, m_j)$ in which the changes in the brain activity of the subject $s_i$ at the electrode $e_k$ are significant during the motor imagery task $m_j$. Brain activity variations during a given motor imagery are considered significant if they exceed the variation in power at a reference electrode during the same motor imagery. It is defined as follows:
    $\tau: \varsigma \times \Psi \times E \to P(\theta)$
    $\tau(s_i, m_j, e_k) = \{ t_l \in \phi(s_i, m_j) \text{ such that } |ERD\_ERS(e_k, t_l, ref\_Power(s_i, e_k))| \geq |ERD\_ERS(reference_{elec}, t_l, ref\_Power(s_i, reference_{elec}))| \}$
    where $ref\_Power(s_i, e)$ is defined as the average power of the electrode e during the distinct rest periods of $s_i$, according to the following expression:
    $ref\_Power(s_i, e) = \frac{1}{|\delta(s_i)|} \sum_{t_x \in \delta(s_i)} Pow(e, t_x[0..1])$
    Regarding the reference electrode ($reference_{elec}$), we selected C3 as the reference for all electrodes located in the left brain hemisphere and C4 as the reference for all electrodes located in the right brain hemisphere.
A recent study published in Scientific Reports in 2020 [21] presented the distribution of sensorimotor rhythms in the right and left hemispheres during hand motor imagery. These distributions confirm the neuroscience assumption that brain activity corresponding to left-hand movements is located in the right hemisphere and vice versa. This is why we considered the electrodes C3 and C4 as references for the electrodes of the left and right hemispheres, respectively.
Consider, for instance, a subject $s_i$, a motor imagery task $m_j$, and an electrode $e_k$. The electrode $e_k$ is relevant to the motor imagery $m_j$ with respect to the subject $s_i$ if the probability, denoted $\rho(s_i, m_j, e_k)$, of obtaining significant brain activity changes in $e_k$ when the subject $s_i$ performs the motor imagery task $m_j$ exceeds a given threshold called the $\alpha\_threshold$. The probability $\rho(s_i, m_j, e_k)$ is calculated as follows:
  • $\rho: \varsigma \times \Psi \times E \to [0, 1]$
  • $\rho(s_i, m_j, e_k) = \frac{|\tau(s_i, m_j, e_k)|}{|\phi(s_i, m_j)|}$
Here:
  • $|\tau(s_i, m_j, e_k)|$ and $|\phi(s_i, m_j)|$ are the total numbers of trials in $\tau(s_i, m_j, e_k)$ and $\phi(s_i, m_j)$, respectively.
Based on the function specifications described above, the electrode selector, called $\epsilon$, calculates and returns the set of electrodes that are relevant, for a given subject $s_i$, to a given motor imagery $m_j$. It is defined according to the following expression (a code sketch follows the definition):
  • $\epsilon: \varsigma \times \Psi \to P(E)$
  • $\epsilon(s_i, m_j) = \{ e_k \in E \text{ such that } \rho(s_i, m_j, e_k) \geq \alpha\_threshold \}$
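Put together, the selector $\epsilon$ can be sketched as follows, reusing the band_power/erd_ers helpers from Section 2.3.2. The trials are assumed to be grouped per subject and per finger, the reference electrode (C3 or C4) of each electrode is passed in as a mapping, and the $\alpha\_threshold$ value (70% in Section 3) is a parameter; all names are illustrative.

```python
# Sketch of the relevant-electrode selection for one subject and one finger.
def is_significant(trial, electrode, ref_elec, ref_powers):
    """tau membership test: |ERD/ERS at e| >= |ERD/ERS at the reference electrode|."""
    return abs(erd_ers(trial, electrode, ref_powers[electrode])) >= \
           abs(erd_ers(trial, ref_elec, ref_powers[ref_elec]))

def relevant_electrodes(trials, electrodes, ref_elec_of, ref_powers, alpha=0.70):
    """epsilon(s_i, m_j): electrodes whose probability of significant change
    across this subject's trials of one finger exceeds alpha."""
    selected = []
    for e in electrodes:
        hits = sum(is_significant(t, e, ref_elec_of[e], ref_powers) for t in trials)
        if hits / len(trials) >= alpha:  # rho(s_i, m_j, e_k)
            selected.append(e)
    return selected
```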

2.4. Feature Extraction

For every finger movement m of a given subject s, we identified a set of relevant electrodes. These electrodes were used to extract appropriate features using the CSP method, a well-known feature extraction technique. This method aims to extract and keep the significant activity or rhythm while eliminating redundant EEG signal components. The resulting features capture the most significant energy at the relevant electrodes in the α and β bands, which have the highest likelihood of containing significant motor imagery information [22]. Further theoretical details on the CSP algorithm are presented in [25].
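The CSP step itself is standard. A sketch using the MNE-Python implementation is shown below; the use of MNE and the number of components are assumptions, since the paper does not name a library or report this parameter.

```python
# CSP feature extraction for one finger's relevant electrodes (MNE assumed).
from mne.decoding import CSP

def csp_features(X, y, n_components=4):
    """Fit CSP and return log-variance features.

    X: trials restricted to the electrodes selected for one finger,
       shape (n_trials, n_selected_channels, n_samples).
    y: binary labels (1 = this finger's movement, 0 = any other trial).
    """
    csp = CSP(n_components=n_components, log=True)  # log-variance of spatially filtered signals
    features = csp.fit_transform(X, y)              # shape (n_trials, n_components)
    return features, csp
```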

2.5. Finger Movements Classification

For every finger movement m, a one-class classifier denoted $C_m$ was designed. The classifier $C_m$ is responsible for detecting the movements of the corresponding finger. The classification model of a given classifier $C_m$ is built using features extracted by the CSP algorithm from the subset of electrodes relevant to the corresponding motor imagery m. As such, each finger has its own classification model: every incoming signal is fed to the model of each finger, and each model decides whether that signal belongs to its particular finger. This approach makes it possible to move multiple fingers at the same time, as each finger is handled independently.
Several classifiers were tested, such as SVM, logistic regression, Gaussian naive Bayes, and LDA. LDA was more accurate than the other classifiers and was therefore selected as the main classifier of the proposed approach.
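A minimal scikit-learn sketch of this one-classifier-per-finger design is shown below; the use of scikit-learn and all names are assumptions.

```python
# One LDA model per finger, trained on that finger's CSP features.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def train_finger_models(features_per_finger, labels_per_finger):
    """features_per_finger[f]: array (n_trials, n_features) for finger f.
    labels_per_finger[f]:   1 for trials of finger f, 0 for trials of other fingers."""
    models = {}
    for f in FINGERS:
        clf = LinearDiscriminantAnalysis()
        clf.fit(features_per_finger[f], labels_per_finger[f])
        models[f] = clf
    return models

def predict_movements(models, features_per_finger_trial):
    """Each model votes independently, so several fingers can be active at once."""
    return [f for f in FINGERS
            if models[f].predict(features_per_finger_trial[f].reshape(1, -1))[0] == 1]
```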

2.6. Prosthesis Control

The main objective of this work is to control prostheses merely by EEG motor imagery signals. Complex signal pre-processing is performed on the EEG trials to remove artifacts, enhance the signal-to-noise ratio, and reduce the dimensionality of the EEG signals. Every trial is processed simultaneously by each one-class classifier. Every one-class classifier applies the CSP algorithm to the set of electrodes relevant to the corresponding finger in order to extract relevant features. Based on the extracted features, the LDA classifier identifies the trial as a target or as an outlier. If a given one-class classifier recognizes the trial as a target, a control command is sent to the corresponding finger's actuator. A finite state machine corresponding to these steps was designed and validated at the simulation stage.
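The runtime chain can be summarized in a few lines, reusing the CSP and LDA objects sketched in the previous sections; send_command stands for the prosthesis actuator interface and, like the other names, is purely illustrative.

```python
# One pass of the online control chain (all names are assumptions).
import numpy as np

def control_step(raw_trial, selected_electrodes, csp_per_finger, models, send_command):
    # Common average reference (Equation (1)).
    trial = raw_trial - raw_trial.mean(axis=0, keepdims=True)
    for finger, clf in models.items():
        x = trial[selected_electrodes[finger]][np.newaxis, ...]  # relevant channels only
        feats = csp_per_finger[finger].transform(x)              # CSP features for this finger
        if clf.predict(feats)[0] == 1:                           # trial recognized as target
            send_command(finger)                                 # actuate the corresponding finger
```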

3. Results

The extracted features and their corresponding labels were split into two sets, one dedicated to training and the other to testing. The presented results were measured according to a 5-fold cross-validation approach. The training set of each finger was concatenated with 20% of the data from the other fingers, whereas 50% of the data from the other fingers was added to the testing set. Algorithm 1 summarizes the dataset decomposition process.
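A sketch of this decomposition for one finger, under the stated 20%/50% proportions, is given below; the exact sampling procedure of Algorithm 1 is not reproduced, and the positive-trial split shown here simply stands in for one cross-validation fold.

```python
# Per-finger train/test composition sketch (proportions from the text, details assumed).
import numpy as np

def build_finger_dataset(target_feats, other_feats, rng=np.random.default_rng(0)):
    """target_feats: features of this finger's trials.
    other_feats:  features of the four other fingers' trials."""
    n_other = len(other_feats)
    idx = rng.permutation(n_other)
    train_neg = other_feats[idx[:int(0.2 * n_other)]]                       # 20% of other fingers
    test_neg = other_feats[idx[int(0.2 * n_other):int(0.7 * n_other)]]      # a further 50%

    n_pos = len(target_feats)
    train_pos, test_pos = target_feats[:int(0.8 * n_pos)], target_feats[int(0.8 * n_pos):]

    X_train = np.vstack([train_pos, train_neg])
    y_train = np.concatenate([np.ones(len(train_pos)), np.zeros(len(train_neg))])
    X_test = np.vstack([test_pos, test_neg])
    y_test = np.concatenate([np.ones(len(test_pos)), np.zeros(len(test_neg))])
    return X_train, y_train, X_test, y_test
```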
The main objective of this section is to demonstrate the efficiency of the proposed method in distinguishing the movements of the five fingers based on the intention of the user. First, the recorded trials were applied directly to the signal processing chain without the proposed channel selection algorithm; in this baseline approach, the EEG features are extracted using the CSP spatial filter and classified with the LDA algorithm, while maintaining the same data partition as presented in Algorithm 1. The accuracy of the system fluctuates between subjects from 52% to 60%, with an average of 57%. This accuracy is poor, which makes the system inefficient. When the same techniques are combined with the proposed channel selection method, the number of channels used is minimized by identifying the relevant channels and removing the others. Figure 2 shows the identified relevant electrodes for each finger in all subjects. As depicted in Figure 2, despite using 64 channels during the acquisition process, more than 70% of the channels were removed because they did not contain useful information. Furthermore, the most sensitive channels were located in the left hemisphere and the center of the cortex, as this study focused on the movements of the fingers of the right hand. These results were obtained with an $\alpha\_threshold$ set to 70%.
Algorithm 1: Commented algorithm of the basic steps of feature extraction and classification problems
Figure 3 shows the spatial filters obtained with the CSP algorithm after identification of the relevant electrodes. The obtained features are available online in the Supplementary Materials (Folder S2). These maps show that the location of the relevant electrodes changes from one subject to another and, for the same subject, from one finger to another, as expected from the literature [26]. The features are generally smooth and physiologically plausible, with strong weights over the motor cortex areas, except in some cases where the features have large weights at several locations that are unexpected from a neuro-physiological point of view.
Four metrics were used to measure the efficiency of the proposed method in detecting finger movements: accuracy, precision, recall, and F-measure.
Integrating the channel selection algorithm with the same feature extraction and classification algorithms enhanced the accuracy of the system significantly, by more than 20% for each subject, leading to an overall system accuracy of 81%. Table 1a–e present the significance tests for the five subjects. The column “Raw Accuracy” in these tables shows the accuracy of the finger movement classification using CSP and LDA without the proposed statistical channel selection. The results demonstrate the efficiency and validity of our approach. Table 1f presents the accuracy obtained for each subject. Although the system classifies five finger movements simultaneously, the measured accuracy exceeded 83% for some subjects, which surpasses previously reported EEG-based finger decoding results. In fact, with this approach, multiple finger movements can be detected simultaneously while maintaining high model accuracy.

4. Discussion

This research was conducted with a relatively large number of data instances, and the recorded sessions will hopefully help other BCI researchers make further improvements based on these data. The initial prediction models on the raw data ranged from 52% to 59% and were greatly improved by intensive cleaning and the addition of a pre-processing phase. Reaching an accuracy of 83% demonstrates the quality of the chosen approach. Our pre-processing approach achieved highly accurate results by choosing the relevant channels after comparing them with reference channels and excluding unimportant ones. By applying this pre-processing sequence together with the feature extraction approach, the prediction model is given a well-separated dataset that can be classified using a machine learning model such as LDA. The main advantage of giving every finger its own classifier is that it allows the model to predict movements of multiple fingers at the same time, which has not been done or discussed in previous works. Determining the flexion angle of the fingers and distinguishing between finger movements of both hands could be investigated in future work, along with improving the accuracy of the current system.
Table 2 presents the accuracy of the proposed system together with the overall results of existing methods validated according to online and offline approaches. The proposed system significantly improves performance, achieving an average accuracy of 81% with the offline approach. The proposed finger decoding system therefore outperforms those described in previous studies in multiple ways. For example, the average accuracy reported herein is 4% higher than that of the best previously known system, presented in [10]. Moreover, the proposed system significantly improves the runtime by using robust and efficient algorithms, in contrast with the methods presented in [7,8,9,10].
A small number of trials for every finger movement is sufficient to train the system for the online scenario before it is used by a new user. Indeed, the size of the training set can be increased using data augmentation techniques. The data augmentation technique adopted in this work consists of decomposing every trial t of total duration T into fragments of an equal, small window size W. Fragments of short duration do not harm the proposed method, since the features are extracted from the frequency domain of the trial and not from its time domain. Moreover, every two consecutive fragments may overlap. Let us define O as the overlapping ratio, which determines how much data is shared between two consecutive fragments. Thus, every trial t is split into N fragments of window size W, where N is calculated as follows:
$N = \frac{T - W}{shift} + 1$, where $shift = (1 - O) \cdot W$.
Therefore, if we consider a trial t of 6 s with a window size of 2 s and an overlapping ratio of 75%, the trial can be split into $N = (6 - 2)/(0.25 \times 2) + 1 = 9$ fragments of 2 s each. Thus, five trials of 6 s are decomposed into 45 trials of 2 s.
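A compact sketch of this fragmentation, matching the worked example above, is shown below; the array layout and sampling rate constant are assumptions.

```python
# Sliding-window data augmentation: split one trial into overlapping fragments.
import numpy as np

FS = 256  # sampling frequency (Hz)

def fragment_trial(trial, window_sec=2.0, overlap=0.75):
    """Split one trial of shape (n_channels, n_samples) into overlapping fragments."""
    win = int(window_sec * FS)
    shift = int((1.0 - overlap) * win)
    starts = range(0, trial.shape[1] - win + 1, shift)
    return np.stack([trial[:, s:s + win] for s in starts])

# A 6-s trial at 256 Hz yields (6*256 - 2*256) / (0.25 * 2*256) + 1 = 9 fragments of 2 s.
```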

5. Conclusions

This study aimed to develop a BCI system for disabled people suffering from motor mobility impairment. The outcomes of this study may contribute to the development of a next-generation prosthesis, i.e., a brain-controlled prosthesis, which can offer disabled people an alternative way to restore their mobility. In this study, a prediction method consisting of a set of one-class classifiers was developed, and an average accuracy of 81% was achieved, whereas existing EEG BCI systems decode only single finger movements. During the development of this system, different models were trained, including SVM, Gaussian naive Bayes, logistic regression, and LDA; the LDA classifier obtained the best results. The system was trained and tested using a dataset recorded from volunteers. These promising results can significantly increase the controllability of EEG-based BCI technologies and potentially facilitate their development with rich control signals to drive complex applications. The detection of continuous finger movements will be the target of future work on this system.

Supplementary Materials

The following are available online at https://zenodo.org/record/4316450#.X9MdDrMRWUk. Folder S1: Power spectrum of each finger trial for every subject obtained by applying the FFT on the temporal signals. Folder S2: CSP weight filter of each trial for every subject obtained by applying the CSP on the corresponding relevant electrodes.

Author Contributions

S.G., K.B. and H.A. (Hatim Aboalsamh): research design, analysis of the data, wrote manuscript; B.A., Y.A., Z.A. and H.A. (Homoud Alobaedallah): acquisition and analysis of the data, wrote program. All authors were fully involved in the study and approved the final version of this manuscript.

Funding

This research was funded by Deanship of Scientific Research, King Saud University: RG-1440-109.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through Research Group no. RG-1440-109.

Conflicts of Interest

The authors declare that they have no conflicts of interest and no issues regarding ethical approval.

Abbreviations

The following abbreviations are used in this manuscript:
BCI: Brain-Computer Interface
SVM: Support Vector Machine
LDA: Linear Discriminant Analysis
RF: Random Forest
kNN: k-Nearest Neighbors
HCI: Human-Computer Interaction
EEG: Electroencephalogram
MEG: Magnetoencephalography
ECoG: Electrocorticogram
fNIRS: Functional Near-Infrared Spectroscopy
EMG: Electromyography
UN: United Nations
ERPs: Event-Related Potentials
SMR: Sensorimotor Rhythm
CWD: Choi-Williams Distribution
2LCF: Two-Layer Classification Framework
CSP: Common Spatial Pattern
LSTM: Long Short-Term Memory
CNN: Convolutional Neural Network
RCNN: Recurrent Convolutional Neural Network
PCA: Principal Component Analysis
PSD: Power Spectral Density
SNR: Signal-to-Noise Ratio
SPI: Scenario Program Interface

References

  1. Gannouni, S.; Alangari, N.; Mathkour, H.; Aboalsamh, H.; Belwafi, K. BCWB: A P300 brain-controlled web browser. Int. J. Semant. Web Inf. Syst. 2017, 13, 55–73.
  2. Mudgal, S.K.; Sharma, S.K.; Chaturvedi, J.; Sharma, A. Brain computer interface advancement in neurosciences: Applications and issues. Interdiscip. Neurosurg. 2020, 20, 100694.
  3. Pulliam, C.L.; Stanslaski, S.R.; Denison, T.J. Chapter 25—Industrial perspectives on brain-computer interface technology. In Brain-Computer Interfaces; Handbook of Clinical Neurology; Ramsey, N.F., del R. Millán, J., Eds.; Elsevier: Amsterdam, The Netherlands, 2020; Volume 168, pp. 341–352.
  4. Zhang, F.; Yu, H.; Jiang, J.; Wang, Z.; Qin, X. Brain–computer control interface design for virtual household appliances based on steady-state visually evoked potential recognition. Vis. Inform. 2020, 4, 1–7.
  5. AL-Quraishi, M.; Elamvazuthi, I.; Daud, S.; Parasuraman, S.; Borboni, A. EEG-Based Control for Upper and Lower Limb Exoskeletons and Prostheses: A Systematic Review. Sensors 2018, 18, 3342.
  6. Tayeb, Z.; Fedjaev, J.; Ghaboosi, N.; Richter, C.; Everding, L.; Qu, X.; Wu, Y.; Cheng, G.; Conradt, J. Validating Deep Neural Networks for Online Decoding of Motor Imagery Movements from EEG Signals. Sensors 2019, 19, 210.
  7. Jia, T.; Liu, K.; Lu, Y.; Liu, Y.; Li, C.; Ji, L.; Qian, C. Small-Dimension Feature Matrix Construction Method for Decoding Repetitive Finger Movements From Electroencephalogram Signals. IEEE Access 2020, 8, 56060–56071.
  8. Alazrai, R.; Alwanni, H.; Daoud, M.I. EEG-based BCI system for decoding finger movements within the same hand. Neurosci. Lett. 2019, 698, 113–120.
  9. Anam, K.; Nuh, M.; Al-Jumaily, A. Comparison of EEG Pattern Recognition of Motor Imagery for Finger Movement Classification. In Proceedings of the 2019 6th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), Bandung, Indonesia, 18–20 September 2019.
  10. Liao, K.; Xiao, R.; Gonzalez, J.; Ding, L. Decoding Individual Finger Movements from One Hand Using Human EEG Signals. PLoS ONE 2014, 9, e85192.
  11. Yong, X.; Li, Y.; Menon, C. The Use of an MEG/fMRI-Compatible Finger Motion Sensor in Detecting Different Finger Actions. Front. Bioeng. Biotechnol. 2016, 3.
  12. Quandt, F.; Reichert, C.; Hinrichs, H.; Heinze, H.; Knight, R.; Rieger, J. Single trial discrimination of individual finger movements on one hand: A combined MEG and EEG study. NeuroImage 2012, 59, 3316–3324.
  13. Xie, Z.; Schwartz, O.; Prasad, A. Decoding of finger trajectory from ECoG using deep learning. J. Neural Eng. 2018, 15, 036009.
  14. Branco, M.P.; Freudenburg, Z.V.; Aarnoutse, E.J.; Bleichner, M.G.; Vansteensel, M.J.; Ramsey, N.F. Decoding hand gestures from primary somatosensory cortex using high-density ECoG. NeuroImage 2017, 147, 130–142.
  15. Onaran, I.; Ince, N.F.; Cetin, A.E. Classification of multichannel ECoG related to individual finger movements with redundant spatial projections. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011.
  16. Lee, S.H.; Jin, S.H.; Jang, G.; Lee, Y.J.; An, J.; Shik, H.K. Cortical activation pattern for finger movement: A feasibility study towards a fNIRS based BCI. In Proceedings of the 2015 10th Asian Control Conference (ASCC), Kota Kinabalu, Malaysia, 31 May–3 June 2015.
  17. Soltanmoradi, M.A.; Azimirad, V.; Hajibabazadeh, M. Detecting finger movement through classification of electromyography signals for use in control of robots. In Proceedings of the 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 15–17 October 2014.
  18. Pfurtscheller, G.; da Silva, F.L. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857.
  19. Belwafi, K.; Romain, O.; Gannouni, S.; Ghaffari, F.; Djemal, R.; Ouni, B. An embedded implementation based on adaptive filter bank for brain-computer interface systems. J. Neurosci. Methods 2018, 305, 1–16.
  20. Edelman, B.J.; Baxter, B.; He, B. EEG Source Imaging Enhances the Decoding of Complex Right-Hand Motor Imagery Tasks. IEEE Trans. Biomed. Eng. 2016, 63, 4–14.
  21. Zapała, D.; Zabielska-Mendyk, E.; Augustynowicz, P.; Cudo, A.; Jaśkiewicz, M.; Szewczyk, M.; Kopiś, N.; Francuz, P. The effects of handedness on sensorimotor rhythm desynchronization and motor-imagery BCI control. Sci. Rep. 2020, 10.
  22. Belwafi, K.; Djemal, R.; Ghaffari, F.; Romain, O. An adaptive EEG filtering approach to maximize the classification accuracy in motor imagery. In Proceedings of the 2014 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), Orlando, FL, USA, 9–12 December 2014.
  23. Mitra, S.K. Digital Signal Processing; Wcb/McGraw-Hill: Boston, MA, USA, 2010.
  24. Lepage, K.Q.; Kramer, M.A.; Chu, C.J. A statistically robust EEG re-referencing procedure to mitigate reference effect. J. Neurosci. Methods 2014, 235, 101–116.
  25. Lotte, F.; Guan, C. Regularizing Common Spatial Patterns to Improve BCI Designs: Unified Theory and New Algorithms. IEEE Trans. Biomed. Eng. 2011, 58, 355–362.
  26. Pfurtscheller, G.; Neuper, C. Motor imagery and direct brain-computer communication. Proc. IEEE 2001, 89, 1123–1134.
Figure 1. General structure of the proposed system.
Figure 2. Relevant Channels for each finger.
Figure 3. Electrode weights obtained for each finger and each subject.
Table 1. Statistical testing per subject. “Raw Accuracy” is obtained without the proposed channel selection; the Accuracy, Precision, Recall, and F-Measure columns refer to the proposed method.

(a) s1 results
Finger    Raw Accuracy    Accuracy    Precision    Recall    F-Measure
Thumb     50.00           80.35       85.71        83.33     84.50
Index     60.70           80.35       90.32        77.77     83.58
Middle    57.14           83.92       86.48        88.88     87.67
Ring      55.35           85.71       91.17        86.11     88.57
Pinky     57.14           87.50       93.93        86.11     89.85

(b) s2 results
Finger    Raw Accuracy    Accuracy    Precision    Recall    F-Measure
Thumb     66.07           73.21       81.81        75.00     78.26
Index     58.18           85.45       93.54        82.85     87.87
Middle    61.81           89.09       91.42        91.42     91.42
Ring      56.36           83.63       96.42        77.14     85.71
Pinky     60.71           73.21       86.20        69.44     76.92

(c) s3 results
Finger    Raw Accuracy    Accuracy    Precision    Recall    F-Measure
Thumb     58.62           75.86       78.57        86.84     82.50
Index     53.44           79.31       82.50        86.84     84.61
Middle    55.17           89.65       88.09        97.36     92.50
Ring      53.44           84.48       87.17        89.47     88.31
Pinky     42.10           85.96       91.42        86.48     88.88

(d) s4 results
Finger    Raw Accuracy    Accuracy    Precision    Recall    F-Measure
Thumb     75.00           69.64       74.35        80.55     77.33
Index     60.71           85.71       85.00        94.44     89.47
Middle    50.00           83.92       81.39        97.22     88.60
Ring      56.36           72.72       72.72        91.42     81.01
Pinky     56.36           78.18       78.04        91.42     84.21

(e) s5 results
Finger    Raw Accuracy    Accuracy    Precision    Recall    F-Measure
Thumb     51.78           76.78       81.08        83.33     82.19
Index     60.71           83.92       82.92        94.44     88.31
Middle    58.92           76.78       75.55        94.44     83.95
Ring      67.85           78.57       76.08        97.22     85.36
Pinky     60.71           76.78       76.74        91.66     83.54

(f) Summary of the accuracy by subject
Finger     s1       s2       s3       s4       s5
Thumb      80.35    73.21    75.86    69.64    76.78
Index      80.35    85.45    79.31    85.71    83.92
Middle     83.92    89.09    89.65    83.92    76.78
Ring       85.71    83.63    84.48    72.72    78.57
Pinky      87.50    73.21    85.96    78.18    76.78
Average    83.56    80.91    83.52    78.03    78.56
Table 2. Comparison of the proposed system with other EEG finger decoding systems.

Studies            N of Fingers    Signal Processing Chain               N of Subjects    Accuracy (%)
[7]                2               Band-pass filter & ERD/ERS & SVM      10               ≈62.5
[8]                4               CWD & 2LCF                            18               43.5
[9]                5               RF & LDA & SVM & kNN                  4                54
[10]               5               PCA & PSD & SVM                       11               77
Proposed method    5               Band-pass filter & CAR & CSP & LDA    5                81