Article

Intelligent Classification Technique of Hand Motor Imagery Using EEG Beta Rebound Follow-Up Pattern

1 Center of Excellence in Biomedical Research on Advanced Integrated-on-Chips Neurotechnologies (CenBRAIN Neurotech), School of Engineering, Westlake University, Hangzhou 310024, China
2 Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou 310024, China
* Authors to whom correspondence should be addressed.
Biosensors 2022, 12(6), 384; https://doi.org/10.3390/bios12060384
Submission received: 20 April 2022 / Revised: 18 May 2022 / Accepted: 20 May 2022 / Published: 2 June 2022
(This article belongs to the Special Issue Intelligent Biosignal Processing in Wearable and Implantable Sensors)

Abstract

To apply EEG-based brain–machine interfaces during rehabilitation, it is necessary both to separate different tasks during motor imagery (MI) and to assimilate MI into motor execution (ME). Previous studies focused on classifying different MI tasks with complex algorithms. In this paper, we implement intelligent, straightforward, comprehensible, time-efficient, and channel-reduced methods to classify ME versus MI and left- versus right-hand MI. EEG of 30 healthy participants undertaking motor tasks was recorded to investigate two classification tasks. For the first task, we propose a “follow-up” pattern based on the beta rebound. This method achieves an average classification accuracy of 59.77% ± 11.95% and up to 89.47% for finger-crossing. Aside from time-domain information, we map EEG signals to feature space using extraction methods including statistics, wavelet coefficients, average power, sample entropy, and common spatial patterns. To evaluate their practicability, we adopt a support vector machine as an intelligent classifier model and sparse logistic regression as a feature selection technique, achieving 79.51% accuracy. Similar approaches are taken for the second classification task, reaching 75.22% accuracy. The classifiers we propose show high accuracy and intelligence, making our approach highly suitable for application to the rehabilitation of paralyzed limbs.

Graphical Abstract

1. Introduction

Motor imagery (MI) is a popular method developed to help patients undergoing post-stroke rehabilitation learn or improve specific motor functions [1]. It is a dynamic state in which patients experience sensations without any actual execution. Studies demonstrate that MI may enhance functional recovery of paralyzed limbs [2], since similar activation sequences occur in the motor cortex during both MI and actual motor execution (ME) [3]. A brain–machine interface (BMI) allows users to interact with the external world through their brain signals instead of their peripheral muscles [4]. Extensive research has been conducted to exploit BMIs for post-stroke rehabilitation, as they assist in the restoration of motional ability [5]. Cincotti et al. demonstrated that, compared with MI alone, rehabilitation training integrated with BMI neurofeedback engages motor areas more strongly, for example by enhancing neuroplasticity in affected regions [6]. Noninvasive electroencephalography (EEG) is a frequently used BMI modality, and one study demonstrated that the majority of stroke patients could use an EEG-based MI BMI [7]. One possible application is evaluating the restoration of physical functions: the assessments in common use to date are time-consuming and can be affected by evaluator subjectivity.
On the other hand, neurophysiology has revealed that EEG signals experience suppression or enhancement during both MI and ME in the mu and beta frequency bands, known as event-related desynchronization (ERD) and event-related synchronization (ERS), respectively. Various EEG-based MI BMIs have been developed to detect this phenomenon. The authors of [8] concluded that the beta rebound (beta ERS occurring shortly after movement) is solid and stable without training, promising fast and universal detection. Leeb et al. applied the beta rebound generated by foot MI as a feature to detect the user’s control intention [9]. Based on the beta rebound after foot MI, Müller-Putz et al. proposed a brain-switch with one-channel EEG [10]. Few studies touch on the beta rebound of hand MI and, to the best of our knowledge, no ME versus MI classification using the beta rebound has been reported. Diverse feature extraction methods have been proposed to classify left versus right MI [11]. The common spatial pattern (CSP) and its derivatives have been shown to yield good accuracy in subsequent classification tasks [12,13,14]. Wang et al. used SampEn to extract features from MI-EEG data and trained a classifier, proving its effectiveness [15]. Other techniques, including statistical, wavelet-based, and power-based ones, are popular in physiological signal processing. Rajdeep et al. extracted 26 features based on these techniques and performed left- versus right-hand movement classification [16]. These works have achieved competitive accuracies, but high-dimensional feature vectors can degrade classifier performance, which calls for feature selection to remove redundant features and retain relevant ones [17]. Gu et al. applied sparse logistic regression (SLR) and its derivatives to select features and to estimate their weight parameters for classification, improving the performance of foot MI classification and acquiring satisfactory results [18]. However, no prior publications were found applying SLR to hand MI classification.
Foot MI generates more observable signals and is, therefore, easier to classify [19], but hand MI cannot be overlooked given the hands' dexterity and indispensable role in daily life.
To assess restoration objectively, we investigated the difference between ME and MI, intending to assimilate MI into ME in EEG signals with neurofeedback. It is possible to retrain the brain toward becoming more capable of movement, which improves recovery. While lateral classification (left versus right hand) has achieved high accuracy in upper and lower limbs, few studies have investigated the difference between ME and MI [20,21]. Focusing on power in different frequency bands, Miller et al. confirmed that the spatial distribution of neuronal activity during MI mimics that during ME, with a magnitude of about 25% of that during ME [22]. A more detailed distinction should be drawn to ensure a stable detector. Moreover, existing studies of EEG-based MI BMI share the following limitations: (1) few studies have specified the movement or decoded different movements within the same limb [23]; (2) the multichannel EEG signals in these studies may reduce processing accuracy and speed, while optimal sets of channels are preferable from a practical point of view [24]; (3) little comparative analysis has been conducted to evaluate different feature extraction methods on an experimental data set in parallel to determine which ones are preferable [25]; and (4) they feed large quantities of feature vectors directly into classifiers, which severely limits classifier accuracy [18].
We built a dataset underlining both ME and MI involving delicate movements to address the above-described limitations. This dataset aimed to explore the feasibility of differentiating between ME versus MI and left- versus right-hand MI by optimizing the feature extraction and classification methods. We put forward a stable and straightforward detector of ME and MI based on the beta rebound, called the “follow-up” pattern. We also proposed corresponding methods to address the limitations mentioned above: (1) we reproduced motions that require the engagement of both hands, investigating their application to ME and MI classification; (2) we optimized the number and location of EEG channels to achieve high accuracy with a few channels of EEG-based MI BMI; (3) we adopted various approaches for extracting features and trained classifiers to validate their utility; and (4) we recognized useful features that improved classification performance with feature-selection techniques. The prepared dataset and analysis methods we propose can be combined with noninvasive brain stimulation (NIBS) techniques to induce plasticity during post-stroke rehabilitation [26].
The remainder of this paper is organized as follows: details of the experiments and the methods of feature selection are described in Section 2; Section 3 illustrates the “follow-up” pattern based on the beta rebound and presents the outcomes of different detection methods; Section 4 compares our results with related work; and the conclusions and future work are the subjects of Section 5.

2. Materials and Methods

2.1. Experiments

In our research, 30 healthy individuals (15 males, 15 females; aged 20–35 years, mean ± SD: 24.26 ± 3.46; 29 right-handed) volunteered. All participants provided written informed consent in accordance with the Declaration of Helsinki before the experiment, which was approved by the ethical committee of Westlake University, Hangzhou, China (approval ID: 20191023swan001). All participants received CNY 100 as an inconvenience allowance. Participants were required to make movements based on auditory stimuli, undertaking the following actions: finger tapping, holding a pen, opening a pen, crossing fingers, and moving the arm, as shown in Figure 1. The tasks were set to examine the feasibility (whether joints and hard tissues constrain the freedom of movement) and coordination (all fingers should work in coordination to serve a common purpose, i.e., participants place their hands flat on the table in a comfortable way, and each finger starts to orchestrate the required movement after the coordination stimuli) of both-hand motion. Each task included five trials for ME and five trials for MI. Each trial was followed by a 2 s rest time. The timing paradigm of a single trial is shown in Figure 2.

2.2. EEG System

The EEG system examined in this study comprised the Brain Products actiCHamp Plus (EEG signal amplifier) and actiCAP slim (active EEG electrodes) provided by Brain Products GmbH, Munich, Germany, as shown in Figure 3. Thirty-two active electrodes, including a reference electrode and a ground electrode, were introduced to the system. These electrodes can be placed onto three fabric caps (54–56 cm, 56–58 cm, or 58–60 cm), catering for participants’ head circumferences. A chin belt was attached to each cap to achieve better fixation and maintain the electrodes’ positions on the scalp. In total, 32 possible electrode positions arranged under the 10–20 international standard system were marked on each cap.
Before each experiment, a disinfectant wipe was applied to the electrodes. After each experiment, electrodes and caps were carefully cleaned of gel. These practices can effectively prevent crosstalk between channels induced by residual gel and enhance connectivity by removing dust and particles within the system.

2.3. Data Recording and Preprocessing

EEG signals were recorded with Ag/AgCl electrodes in a 32-channel cap arranged under a 10–20 international standard system (Brain Products, Inc, Gilching, Germany). The central frontal electrode (Fz) served as a reference to a common ground, and the impedance was controlled to be lower than 10 kΩ. The EEG data were recorded with a sampling rate of 1000 Hz. The montage used in our experiment is shown in Figure 4.
Preprocessing included the following procedures: removal of bad channels (channels that coupled noise or had irregular power spectra) or segments; re-referencing to a common average (the common average reference is the mean electrical activity measured across all scalp channels; re-referencing is conducted by subtracting it from each channel); band-pass filtering from 1 to 60 Hz and a 50 Hz notch filter (the notch filter removes mainline power interference, and the 1–60 Hz band contains the most useful information for our applications); independent component analysis (ICA); epoch extraction; and baseline correction. In the two sets (ME and MI) of preprocessed EEG data, a total of 812 epochs were generated. According to [27], the primary motor cortex (PMC) region, where channels C3, C4, and Cz are located, carries more informative signals for classification than other brain areas. We adopted these channels in subsequent analysis to shorten the experiment’s preparation time and to reduce the computational load, realizing a BMI that requires less input information. We attempted to classify ME and MI with a single channel, Cz, using EEG signals from 19 subjects (with good-quality Cz). Since the classification of left- versus right-hand MI requires more lateral information, 10 participants (with good-quality C3, C4, and Cz) were selected for this task.
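The filtering and re-referencing steps above can be sketched as follows. This is a minimal Python illustration (the study's own pipeline was not published as code, and it additionally includes bad-channel removal, ICA, epoching, and baseline correction); the function name `preprocess` and all parameter values are our own assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess(eeg, fs=1000.0):
    """Band-pass 1-60 Hz, notch-filter 50 Hz mains, and re-reference.

    eeg: (n_channels, n_samples) array sampled at fs Hz.
    Illustrative sketch only, not the authors' full pipeline.
    """
    # 4th-order Butterworth band-pass, 1-60 Hz, zero-phase via filtfilt
    b_bp, a_bp = butter(4, [1.0, 60.0], btype="bandpass", fs=fs)
    out = filtfilt(b_bp, a_bp, eeg, axis=-1)
    # Narrow notch at 50 Hz to suppress mainline power interference
    b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)
    out = filtfilt(b_n, a_n, out, axis=-1)
    # Common average reference: subtract the mean across channels
    return out - out.mean(axis=0, keepdims=True)
```

The zero-phase `filtfilt` is used so that filtering does not shift the latency of features such as the beta rebound.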

2.4. Event-Related Desynchronization/Synchronization Analysis

The definition of the ratio ERD/ERS can be formulated as:
ERD/ERS_i = (A_i − R) / R × 100%  (1)
where A_i is the average power of the i-th sample over all trials and R is the average power in the reference interval [28]. The value is defined as ERS when A_i is greater than R, and as ERD otherwise.
ERD/ERS values ranging from 13 to 40 Hz were computed to observe beta rebound. ERD/ERS values were considered significant with 95% confidence by adopting a bootstrap t-test.
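Formula (1) translates directly into a few lines of code. The sketch below (with the hypothetical helper name `erd_ers`) computes the relative power change per sample from trial-wise band power:

```python
import numpy as np

def erd_ers(power, ref_slice):
    """ERD/ERS per Formula (1).

    power: (n_trials, n_samples) band power for one channel.
    ref_slice: indices of the reference interval.
    Returns percent change per sample; positive values are ERS,
    negative values are ERD.
    """
    A = power.mean(axis=0)        # A_i: average power of sample i over trials
    R = A[ref_slice].mean()       # R: average power in the reference interval
    return (A - R) / R * 100.0
```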

2.5. Feature Extraction

Sample entropy (SampEn) evaluates the complexity and regularity of time-series data, measuring the unpredictability of fluctuations in physiological signals [29]. Let {x_t} denote the EEG time series, where T represents its length. To calculate SampEn, the template vector length m must be determined beforehand:
X_i = [x_i, …, x_{i+m−1}],  i = 1, 2, …, T − m + 1  (2)
The similarity tolerance r controls the number of vectors X_j such that:
N_m(i) = card{ X_j : dist_m(X_i, X_j) < r }  (3)
where dist_m(X_i, X_j) is defined as the largest absolute difference between corresponding scalar components (the Chebyshev distance).
B_m(r) = (1 / (T − m + 1)^2) · Σ_{i=1}^{T−m+1} N_m(i)  (4)
SampEn is then defined as the negative logarithm of B_{m+1}(r) / B_m(r).
Here, we computed the SampEn of the Cz, C3, and C4 channels from 10 participants, with template length m = 2, based on both raw EEG data (r = 1.0·SD, where SD denotes the standard deviation) and ERD/ERS data (r = 0.1·SD). These values were chosen by enumeration, examining their performance when training classifiers.
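A direct implementation of Equations (2)–(4) might look as follows. This sketch (function name `sample_entropy` and the `r_factor` parameterization are ours) follows the definitions above literally, including self-matches in the counts of Equation (3):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.1):
    """SampEn = -log(B_{m+1}(r) / B_m(r)), per Equations (2)-(4).

    r is set to r_factor times the standard deviation of the series,
    matching the r = 1.0*SD (raw EEG) and r = 0.1*SD (ERD/ERS) choices.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def B(mm):
        n = len(x) - mm + 1
        # Embed the series into overlapping template vectors of length mm
        X = np.array([x[i:i + mm] for i in range(n)])
        # Chebyshev distance between every pair of template vectors
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=-1)
        return np.sum(d < r) / n ** 2

    return -np.log(B(m + 1) / B(m))
```

A regular signal (e.g., a slow sinusoid) yields a lower SampEn than white noise, which is what makes the measure useful as a feature.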
The common spatial pattern (CSP) is an advanced algorithm based on principal component analysis (PCA), and it has been successfully applied to brain–computer interfaces [30]. CSP filters the EEG signals of two classes to make a clear distinction between them. The feature vectors f_i are defined by Equation (5):
f_i = log( var(Y_i) / Σ_{k=1}^{2} var(Y_k) ),  i = 1, 2  (5)
where var(·) represents the variance of a specific sequence and Y_i denotes the corresponding column of the CSP-filtered data.
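A minimal two-class CSP sketch via the generalized eigenvalue problem is shown below; this is one common formulation of CSP, not necessarily the exact variant used in the study, and all function names are our own:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    """Two most discriminative CSP spatial filters for two classes.

    trials_*: lists of (n_channels, n_samples) arrays. Solves the
    generalized eigenvalue problem C_a w = lambda (C_a + C_b) w; the
    eigenvectors at the extremes maximize the variance of one class
    while minimizing the other's.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    _, vecs = eigh(Ca, Ca + Cb)        # eigenvalues sorted ascending
    return vecs[:, [0, -1]].T           # first and last filters

def csp_features(trial, W):
    """Log-variance features of Equation (5) for one trial."""
    Y = W @ trial                       # spatially filtered signals
    v = np.var(Y, axis=1)
    return np.log(v / v.sum())
```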
Statistical feature vectors include the standard deviation of the raw signal and the means of the absolute values of the first and second differences of both the raw and the standardized signal.
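These five statistical features can be computed per channel as follows (an illustrative sketch; the function name is ours):

```python
import numpy as np

def statistical_features(x):
    """Statistical feature vector described above for one EEG channel."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()          # standardized signal
    return np.array([
        x.std(),                           # standard deviation of the raw signal
        np.mean(np.abs(np.diff(x))),       # mean |first difference|, raw
        np.mean(np.abs(np.diff(x, 2))),    # mean |second difference|, raw
        np.mean(np.abs(np.diff(z))),       # mean |first difference|, standardized
        np.mean(np.abs(np.diff(z, 2))),    # mean |second difference|, standardized
    ])
```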
We applied Daubechies mother wavelets of order 4 (db4) to analyze the raw EEG data, and the detailed coefficients at level 3 were used to extract features. The related feature vectors were wavelet root mean square ( R M S ), energy ( E N G ), and entropy ( E N T ) [31].
The average power within a specific frequency band was estimated from the power spectral density (PSD). The average band power is defined as the ratio of the power in a specific frequency band to the total power. We applied the Welch approach with a Hamming window to estimate the PSD. We performed PSD estimation on two rhythms, alpha (8–12 Hz) and beta (13–40 Hz).
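The band-power ratio can be sketched with `scipy.signal.welch`; the function name and the `nperseg` choice below are our own assumptions:

```python
import numpy as np
from scipy.signal import welch

def band_power_ratio(x, fs, band):
    """Ratio of power within `band` (Hz) to total power, from a Welch
    PSD estimate with a Hamming window."""
    f, psd = welch(x, fs=fs, window="hamming", nperseg=min(len(x), 1024))
    in_band = (f >= band[0]) & (f <= band[1])
    # The frequency grid is uniform, so summing PSD bins gives the ratio
    return psd[in_band].sum() / psd.sum()

# Example usage for one channel sampled at 1000 Hz:
# alpha = band_power_ratio(eeg_ch, 1000.0, (8, 12))
# beta  = band_power_ratio(eeg_ch, 1000.0, (13, 40))
```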
Details of the feature vectors applied to the classification of ME versus MI, and the classification of laterality in MI, are listed in Table 1 and Table 2, respectively.

2.6. Support Vector Machine Classifier

With statistical learning, a support vector machine (SVM) can tackle problems involving small training sets and nonlinear relationships in classification tasks [32]. An SVM optimizes a hypersurface that separates the different classes while enlarging the margin between them. The MATLAB function fitcsvm was applied to train and cross-validate SVM models for our classification tasks.
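As a rough Python analogue of the fitcsvm workflow (not the authors' MATLAB code; function name, kernel, and hyperparameters are our own assumptions), an SVM can be trained and cross-validated on the extracted feature vectors like this:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def train_svm(features, labels, folds=10):
    """Train an RBF-kernel SVM on feature rows and report k-fold
    cross-validated accuracy. Illustrative sketch only."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    scores = cross_val_score(clf, features, labels, cv=folds)
    clf.fit(features, labels)              # final fit on all data
    return clf, scores.mean()
```

In practice, hyperparameter optimization (as done in this work) would search over kernel functions and parameters such as C and gamma.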

2.7. Feature Selection

For neuroimaging data, where the training set is small while the feature dimensionality is large, standard logistic regression is not applicable. In sparse logistic regression (SLR), every weight parameter has its own adjustable variance, referred to as a relevance parameter, which controls the possible range of the corresponding weight parameter. The weight parameters are estimated as the marginal posterior mean, which can be computed by variational Bayesian approximation (SLR-VAR) or Laplace approximation (SLR-LAP). The L1-norm SLR with a Laplace approximation (L1-SLR-LAP) and its component-wise implementation (L1-SLR-COMP) were also investigated in this study [18].
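The sparsity idea can be illustrated with a plain L1-penalized logistic regression. To be clear, this is not the Bayesian SLR of [18]; it is a simpler stand-in that shows the same effect, namely that irrelevant feature weights are driven to exactly zero, leaving a sparse selected set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_features_l1(X, y, C=0.2):
    """Fit an L1-penalized logistic regression and return the indices
    of features whose weights survive (are nonzero). Illustrative
    analogue of SLR-style feature selection, not the method of [18]."""
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(X, y)
    selected = np.flatnonzero(clf.coef_[0])   # surviving feature indices
    return selected, clf
```

Smaller values of C impose stronger sparsity, analogous to how the relevance parameters in SLR prune uninformative dimensions.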

3. Results

3.1. “Follow-Up” Pattern

Beta rebound is a stable phenomenon that occurs several seconds after ME or MI. As shown in Figure 5, the beta rebound is the beta ERS (refer to Formula (1)) that occurs within 1 s after a stimulus (represented as blue lines). It can be observed in participants with little or no training. Taking advantage of this primitive and perceptible reaction, we proposed a method based on the beta rebound in the time-domain signals to discriminate between ME and MI that requires a light computational load and little pre-training. This time-domain “follow-up” pattern helps therapists gain information from the beta rebound in real time, evaluate the performance of paralyzed patients, and then guide and rectify their MI tasks. With proper training, the beta rebound can offer novel targets for therapeutic interventions [33].
Figure 5 demonstrates the difference between ME and MI in both the time and spatial domains. In the time domain, ME and MI differ in amplitude, time delay, and latency. Figure 5a (ME) and Figure 5c (MI) illustrate this with ERD/ERS time courses during the same finger-tapping movement (motion Tap: Right Finger 1), with dashed lines from five different individuals and bold red lines delineating the average time course across these subjects. Beta rebounds appear as peaks in these lines. “Stimulus” marks the time when subjects hear the auditory instructions. Compared with ME, the beta rebounds of MI have smaller amplitude, appear later after the stimulus, and last longer. In the spatial domain, ME and MI have different topographic distributions. Figure 5b (ME) and Figure 5d (MI) demonstrate this with topo-plots (topographic maps of EEG fields in a circular 2D view looking down at the top of the head) depicting the ERD/ERS distribution. These topo-plots are from subject S01 for motion Tap: Right Finger 1 at the time when the beta rebound is most remarkable (ME: 1.624 s; MI: 1.818 s). Black dots mark the locations of electrodes; ERS is shown in red and ERD in blue. During the MI task (Figure 5d), the beta rebound was constrained within the mid-central areas (channel Cz). In contrast, the rebound of ME (Figure 5b) had a much larger spatial extent, affecting adjacent electrodes (channels Cz, FC1, FC2, PC1, PC2, and P3). Cz and the surrounding channels are related to the sensorimotor cortex, which accounts for the peak in the mid-central areas. The other peak, at channel P3, may be attributed to the touch sensation function of the parietal lobe, which only occurs during ME. To conclude, in the time domain, there is a high probability that beta rebounds of lower intensity, higher latency, and longer duration indicate MI instead of ME; in the spatial domain, if beta rebounds mainly affect channel Cz, they most likely represent MI.
We computed the difference in ERD/ERS values between ME and MI by subtracting the signals of each motion recorded from each subject. The results of the subtracted signals during M4 (Figure 1), opening a pen, are illustrated by a pseudo-color map in Figure 6, with the x-axis representing post-stimulus time and the y-axis representing subjects. Each pixel indicates the intensity by color, where red denotes the beta rebound of ME and blue denotes the beta rebound of MI. As marked by black frames (as an example) in Figure 6, most participants’ data exhibit the “follow-up” pattern. The “follow-up” pattern implies that the beta rebound of ME occurs faster than that of MI, consistent with the difference described above in Figure 5. We marked all the peaks in the ERS series and counted all the “follow-up” phenomena across subjects and across motional tasks. The results are shown in Table 3 and Table 4. Table 3 defines the percentage as the ratio of motions that displayed “follow-up” patterns. Some subjects, e.g., S06 and S18, achieved high accuracy under this criterion, which reflects the variation across subjects: some subjects are better adapted to imagery tasks than others. Throughout all the motions listed in Table 4, opening a pen and finger-crossing were distinctive compared to the others; both are motions designated to examine coordination in movement. The motions that require both hands’ involvement and synchronization have greater potential to be applied in the evaluation of a rehabilitation process, as the beta rebound parameters (amplitude and time) of their ME and MI tasks can be distinguished more clearly.
Based on the findings mentioned above, we can conclude how to identify ME and MI with beta rebound at channel Cz in the time-domain: compared with ME, beta rebounds of MI have smaller amplitude, appear later after stimulus, and last longer.
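The peak-marking step behind this criterion can be sketched with `scipy.signal.find_peaks`. The helper below (name and thresholds are our own assumptions) locates the beta rebound in an ERD/ERS time course as the largest post-stimulus ERS peak; comparing the returned latency and amplitude between two recordings is the essence of the time-domain "follow-up" criterion:

```python
import numpy as np
from scipy.signal import find_peaks

def rebound_peak(ers, fs, stim_idx):
    """Locate the beta rebound: the largest ERS peak after the stimulus.

    ers: ERD/ERS time course (%) for one channel; fs: samples per second;
    stim_idx: sample index of the stimulus. Returns (latency_s, amplitude)
    relative to the stimulus, or None if no ERS peak is found.
    """
    post = ers[stim_idx:]
    peaks, props = find_peaks(post, height=0)   # only positive (ERS) peaks
    if len(peaks) == 0:
        return None
    best = peaks[np.argmax(props["peak_heights"])]
    return best / fs, post[best]
```

Under the "follow-up" criterion, an MI rebound is expected to be smaller and later than the ME rebound from the same subject and motion.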

3.2. ME versus MI Classification

We used the feature vectors in Table 1 to train the SVM and adopted hyperparameter optimization during training to search for the kernel functions and related parameters that induce the best performance. These procedures achieved a classification accuracy of 78.57%. We drew scatter plots and found that power-related features may perform better in EEG classification tasks. We selected those four power-based feature vectors to describe the data set and trained the SVM again. The overall accuracy improved slightly to 79.51%, while the feature dimension was reduced, which mitigates the computational load. This also indicates that excessively large feature vectors do not necessarily lead to higher accuracy in SVM classification tasks. Feature selection methods can be applied when training classification models, which motivated us to resort to SLR in the left- versus right-hand MI classification task, as described in the following paragraphs.

3.3. Left- versus Right-Hand Motor Imagery Classification

We adopted the features in Table 2 to train a classifier, which requires the SVM to operate in a higher-dimensional feature space. The accuracy was only 62%, even lower than when the sample entropy feature vectors were applied alone. This result indicated that redundant feature vectors in the SVM training data spoiled the overall performance.
We adopted different derivatives of SLR to select features and to calculate weights. The number of remaining features and the corresponding accuracies are shown in Table 5. Among all the models adopted, L1-SLR-LAP, which applies a Laplace approximation and the L1-norm in SLR learning, attained the best performance. The accuracy of L1-SLR-LAP is 75.22%, and the corresponding confusion matrix is displayed in Figure 7. Note that the values here are averages over 10-fold cross-validation. Higher accuracy was achieved for left-hand MI. Forty-two feature vectors remained after the selection procedure in L1-SLR-LAP. By checking their weights, we found that power features and SampEn carried distinctively large weights among the remaining vectors, indicating that they were the primary factors in this classification task.

3.4. Comparison and Analyses of Classification Accuracies

Previous studies of MI classification tasks have reported a range of classification accuracies based on different datasets, models, and techniques. Table 6 compares the classification results of left- versus right-hand MI on the proposed dataset and other datasets. It is important to note that the accuracy of our proposed method is obtained through group-level classification, while in the other works, classifiers are trained in a subject-specific manner. Group-level classification reduces training sessions and is more applicable to patients, as elucidated in Section 4. Using the same EEG channels and classifier models as ours, Malan et al. [34] suggested a novel feature selection algorithm, regularized neighborhood component analysis (RNCA), which outperformed other conventional feature selection techniques. The diverse parameters of RNCA increase its computational burden, while SLR is lighter. The dimension of the features in [35] was relatively low, so its accuracy is comparable without feature selection. We achieved a similar accuracy with fewer EEG channels, which reduces the experimental and computational workload. The accuracies in [36] are lower than in the other studies, which may suggest that SVM is preferable in such contexts.

4. Discussion

We applied a single neuroimaging modality, EEG, in the present study. EEG has high temporal resolution and can produce good performance in BMI [18]. Other modalities have been explored, e.g., functional magnetic resonance imaging (fMRI) [37], functional near-infrared spectroscopy (fNIRS) [38], magnetoencephalography (MEG) [39], and electrical impedance tomography (EIT) [40]. Due to their portability, non-invasiveness, and cost-effectiveness, EEG and fNIRS have an advantage in natural-environment applications [41]. In terms of classification accuracy, EEG-based BMI outperforms fNIRS-based BMI [24]. Recent progress in hybrid EEG–fNIRS BMIs demonstrates great potential, because data with complementary spatiotemporal resolution can exhibit synergistic effects, bringing insights into crucial brain processes and structures.
Most reported EEG-based BMI systems can be categorized into one of three paradigms: motor imagery (MI), event-related potential (ERP), and steady-state visually evoked potential (SSVEP). We adopted MI, although successful cases of other paradigms have been proposed, such as P300 ERP [42], SSVEP [43], spatial attention [44], selective attention [45], mental arithmetic [46], action observation [47], late positive potential (LPP) [48], etc. With no need for external stimuli, motor imagery tasks are self-paced, simple, and stable. Our results validate its utility in EEG-based BMI.
The “follow-up” pattern we proposed is based on the beta rebound. The mid-centrally located beta rebounds reveal electrophysiological correlates of synchronized “resetting” of overlapping brain networks. The occurrence of the beta rebound depends heavily on the type of MI. Our study found that movements involving more fingers lead to better results. This can probably be explained by the superposition effect of MI, i.e., the neural activities triggered by hand MI can be interpreted as the summation of the activities invoked by simple finger MIs, as validated in [49]. The variation is not limited to the upper limbs. According to [19], most subjects displayed beta ERS during foot MI, while tongue MI induced no beta rebound in any subject. Fortunately, even if a subject shows only a slight laterality difference, improved BMI control accuracy can be achieved through visual feedback [50].
It is common practice to extract features based on statistical properties, wavelet coefficients, and average power [16]. In this work, we compared the features generated by these principles as well as by SampEn and CSP. Our results show that power features and SampEn play a dominant role in the classification tasks. Other innovative extraction methods have been proposed to solve MI classification tasks; for example, functional brain networks are widely applied to extract extra features delineating the interactions between each pair of electrodes [51].
Despite its popularity, SVM is not the only classifier model that can succeed in MI-EEG classification tasks. A comparative analysis of five classifiers (SVM, k-NN, naïve Bayes, decision tree, and logistic regression) in EEG-based MI BMI was conducted in [52], concluding that SVM, logistic regression, and naïve Bayes outperformed the others in accuracy. Recently, with automatic end-to-end learning, deep learning (DL) has proven competent in this context, simplifying processing pipelines and thus achieving improved performance [53].
Instead of feeding large quantities of feature vectors directly into classifiers, the feature selection method SLR was applied to lower the dimensionality of the features in this work, with the intention of improving classification accuracy. Gu et al. applied a similar method to foot MI classification tasks, with the most remarkable accuracy of 75.33% achieved by SLR with variational approximation (SLR-VAR) [18]. Rejer et al. compared different methods of feature selection for left- and right-hand MI [54]. Feature selection may also help discover new patterns of brain behavior and suggest new explanations for neural pathways. The μ-rhythm has been suggested to reflect the translation of hearing an instruction into performing the required action, which is well in line with the feature selection results [55].
It is important to note that only a small portion of the channels was used in the subsequent analysis: a single channel (Cz) in ME versus MI classification and three channels (Cz, C3, and C4) in left- versus right-hand MI classification. These channels have been proven to induce better classification results [24,27]. Many previous EEG-based MI BMIs used a large number of EEG channels; for example, 32 EEG electrodes are used in [56], achieving a classification accuracy of 59.65%. We classified ME versus MI with 79.51% accuracy using one channel (Cz) and left- versus right-hand MI with 75.22% accuracy using three channels (Cz, C3, and C4). In general, we achieved better performance with less data, which alleviates the computational load and reduces experiment preparation time.
Most classifiers in EEG-based BMI studies are trained in a subject-specific manner, decoding intention from a specific patient based on that patient's own signal features [18,57]. This manner demands laborious training for subjects and repetitive signal processing to ensure solid results. Moreover, it is infeasible for physically disabled patients to provide such training data. Here, we trained classifiers with population-level features obtained from different subjects and gained competitive performance. This demonstrates excellent potential for simplified application, since real-time EEG signals can be acquired from patients without training and compared with the existing training sets.
Despite the group-level classification, our accuracy is still comparable. A possible merit lies in our carefully selected features for training the SVM. In the classification between ME and MI, we selected power-based features manually, which improved the accuracy slightly but reduced the feature dimension greatly. In the classification between left- and right-hand MI, we adopted SLR to abrogate redundant feature vectors, and the corresponding accuracy increased by 13.22%. Admittedly, we did not reach perfect accuracy, but this appears reasonable given that some untrained subjects may be unaffected by BMI protocols. The term “BMI illiteracy” was coined for this non-negligible portion of users, which is estimated at 15% to 30% [58]. This BMI illiteracy rate matches our classification results.
Our research explored the feasibility of EEG for evaluating post-stroke recovery. Previous work has cross-validated the efficacy of EEG signals against other assessments, such as motor function and activities of daily living (ADL) measures, Fugl–Meyer assessment (FMA) scores, and the modified Ashworth scale (MAS). Building on this evidence and our results, we further propose a prospective EEG-based assessment: therapists record EEG signals from patients during rehabilitative MI tasks, label them with classifiers trained on group-level training sets, and provide real-time feedback so that patients are aware of the similarity between their neural activities and the target ones.
Admittedly, the current study has some limitations. The experimental procedure could be modified to let subjects repeat MI within specific time slots; such a modification would not only facilitate analysis but also induce more detectable signals. Moreover, our analysis was based on sensor-level techniques, while the volume conduction effect calls for source-level analysis, which would map EEG signals onto cortical areas. Although the “follow-up” pattern we generalized already improves classification accuracy, more detailed characteristics could still be drawn from it.

5. Conclusions

EEG-based MI BMI has great potential for evaluating post-stroke rehabilitation. However, present assessments suffer from low efficiency and a lack of objectivity, and few related studies underline the difference between ME and MI. In this work, we proposed a dataset and corresponding analysis methods to classify both ME versus MI and left- versus right-hand MI tasks, which can induce plasticity during restoration. This study put forward a stable and straightforward detector of ME and MI based on the beta rebound, investigated the extracted feature vectors, and applied SVM with SLR to the classification. The conclusions are summarized as follows:
The “follow-up” pattern based on the beta rebound is a stable indicator of ME and MI. Compared with ME, the beta rebounds of MI have smaller amplitude, appear later after the stimulus, and last longer. The phenomenon is most significant at channel Cz. These characteristics define the “follow-up” pattern. Its occurrence is 59.77% ± 11.95% across all subjects, and motions involving more fingers generate better results (finger-crossing: 89.47%).
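The beta-rebound quantities behind this pattern (amplitude, post-stimulus latency) follow from the classic ERD/ERS computation: band-pass to the beta band, estimate power, and express it relative to a pre-stimulus baseline. The sketch below illustrates this on one synthetic Cz epoch; the sampling rate, the 13–30 Hz band edges, the smoothing window, and the synthetic burst are assumptions for demonstration, not the paper's exact parameters.

```python
# Minimal ERD/ERS sketch: band-pass a single-channel (Cz) epoch to the
# beta band, estimate power, and express it relative to a pre-stimulus
# baseline; then read off the latency of the beta rebound (ERS peak).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                                   # sampling rate (Hz), assumed
t = np.arange(-1.0, 4.0, 1.0 / fs)         # 1 s baseline + 4 s task window
rng = np.random.default_rng(1)
# Synthetic Cz epoch: noise plus a 20 Hz burst around 1.6 s (the "rebound").
epoch = rng.standard_normal(t.size)
epoch += 2.0 * np.sin(2 * np.pi * 20 * t) * np.exp(-((t - 1.6) ** 2) / 0.1)

# Beta band-pass (13-30 Hz), zero-phase to preserve latency.
b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
beta = filtfilt(b, a, epoch)

# Instantaneous power, lightly smoothed (~0.2 s moving average).
power = np.convolve(beta ** 2, np.ones(50) / 50, mode="same")
baseline = power[t < 0].mean()
ers = 100.0 * (power - baseline) / baseline   # % change; ERS > 0, ERD < 0

# Latency of the strongest beta rebound after the stimulus at t = 0.
post = t >= 0
latency = t[post][np.argmax(ers[post])]
print(f"peak beta rebound at {latency:.2f} s post-stimulus")
```

Repeating this per trial and per condition yields the amplitude, latency, and duration contrasts (ME earlier and larger, MI later and smaller) that constitute the “follow-up” pattern.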
The ME versus MI classification accuracy is 79.51% with power-based features and SVM. We extracted 13 features with statistical, wavelet-based, and power-based methods, and SVM yielded a classification accuracy of 78.57% with these feature vectors. After examining the support vectors, the features fed into the SVM were pruned to four power-based ones, and the accuracy increased to 79.51%.
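A reduced power-based pipeline of this kind can be sketched as follows. The band edges, the Welch settings, and the synthetic trials (ME given a stronger beta component than MI, mimicking the larger ME rebound) are illustrative assumptions rather than the study's configuration.

```python
# Sketch of power-based feature extraction plus an SVM classifier for
# ME vs. MI, in the spirit of the reduced four-feature setup.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 250
rng = np.random.default_rng(2)

def band_power(x, lo, hi):
    """Average power of x in the [lo, hi] Hz band (Welch PSD)."""
    f, psd = welch(x, fs=fs, nperseg=fs)
    return psd[(f >= lo) & (f <= hi)].mean()

def features(x):
    # Four power-based features: mu, low-beta, high-beta, broadband.
    return [band_power(x, 8, 12), band_power(x, 13, 20),
            band_power(x, 21, 30), band_power(x, 4, 40)]

# Synthetic single-channel (Cz) trials.
trials, labels = [], []
t = np.arange(0, 4, 1 / fs)
for k in range(60):
    is_me = k % 2 == 0
    amp = 1.5 if is_me else 0.5           # ME gets the stronger beta burst
    x = rng.standard_normal(t.size) + amp * np.sin(2 * np.pi * 20 * t)
    trials.append(features(x))
    labels.append(int(is_me))

X, y = np.array(trials), np.array(labels)
acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
print(f"ME-vs-MI cross-validated accuracy: {acc:.2f}")
```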
The left- versus right-hand MI classification accuracy is 75.22% with SVM and L1-SLR-LAP. We extracted 59 features with statistical, wavelet-based, power-based, SampEn, and CSP methods. Comparing the performance of different derivatives of SLR, we found that L1-SLR-LAP outperformed the others, with 42 feature vectors retained. Power-based features and SampEn displayed distinctive weights among the remaining vectors.
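Among these feature families, sample entropy admits a compact direct implementation. The sketch below uses the common defaults (m = 2, tolerance r = 0.2 × std), which are assumptions rather than the paper's documented parameters; the two test signals are synthetic.

```python
# Direct O(N^2) sample entropy (SampEn) sketch: the negative log of the
# conditional probability that sequences matching for m points also
# match for m + 1 points. Lower values indicate more regular signals.
import numpy as np

def sampen(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    n = x.size

    def count_matches(mm):
        # Embed x into vectors of length mm (same template count for both
        # lengths, per the standard definition) and count pairs whose
        # Chebyshev distance is within r; self-matches are excluded.
        emb = np.array([x[i:i + mm] for i in range(n - m)])
        c = 0
        for i in range(len(emb)):
            d = np.abs(emb[i + 1:] - emb[i]).max(axis=1)
            c += int((d <= r).sum())
        return c

    b_count, a_count = count_matches(m), count_matches(m + 1)
    return -np.log(a_count / b_count) if a_count > 0 else np.inf

rng = np.random.default_rng(3)
noise = rng.standard_normal(500)                   # irregular signal
sine = np.sin(2 * np.pi * np.arange(500) / 25)     # regular signal
print(sampen(noise), sampen(sine))                 # noise scores higher
```

The irregular noise signal yields a markedly larger SampEn than the periodic sine, which is why the measure separates MI trials with different degrees of rhythmic regularity.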
Therefore, this work demonstrates an innovative approach that can be used for evaluating the rehabilitation results of MI BMI with neurofeedback. In future work, we will focus on the back-end design of the system and explore the addition of NIBS as an adjunct therapy. BMI+NIBS interventions could inform patients and therapists about real-time MI performance and enhance rehabilitation with additional clinical gains.

Author Contributions

Conceptualization, J.W. and Y.-H.C.; methodology, J.W. and Y.-H.C.; software, J.W.; validation, J.W., Y.-H.C. and J.Y.; formal analysis, J.W.; investigation, J.W. and Y.-H.C.; resources, Y.-H.C. and M.S.; data curation, J.W.; writing—original draft preparation, J.W.; writing—review and editing, Y.-H.C., J.Y. and M.S.; visualization, J.W. and Y.-H.C.; supervision, M.S.; project administration, Y.-H.C.; funding acquisition, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Westlake University [041030080118] and Zhejiang Key R&D Program from Science and Technology Department Zhejiang Province [2021C03002].

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethical Committee of Westlake University, Hangzhou, China (approval ID: 20191023swan001).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available from the corresponding authors, Y.-H.C. and M.S., upon request. The data are not publicly available because they contain information that could compromise the privacy of research participants.

Acknowledgments

The authors acknowledge the support from Westlake University, Hangzhou, from Zhejiang Key R&D Program No. 2021C03002, and from the Key Laboratory of Child Development and Learning Science of the Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

  1. López, N.D.; Pereira, E.M.; Centeno, E.J.; Page, J.C.M. Motor imagery as a complementary technique for functional recovery after stroke: A systematic review. Top. Stroke Rehabil. 2019, 26, 576–587. [Google Scholar] [CrossRef] [PubMed]
  2. Carrasco, D.G.; Cantalapiedra, J.A. Effectiveness of motor imagery or mental practice in functional recovery after stroke: A systematic review. Neurología 2016, 31, 43–52. [Google Scholar] [CrossRef]
  3. Chen, M.; Lin, C.-H. What is in your hand influences your purchase intention: Effect of motor fluency on motor simulation. Curr. Psychol. 2021, 40, 3226–3234. [Google Scholar] [CrossRef]
  4. Deng, X.; Yu, Z.L.; Lin, C.; Gu, Z.; Li, Y. A bayesian shared control approach for wheelchair robot with brain machine interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 28, 328–338. [Google Scholar] [CrossRef]
  5. López-Larraz, E.; Sarasola-Sanz, A.; Irastorza-Landa, N.; Birbaumer, N.; Ramos-Murguialday, A. Brain-machine interfaces for rehabilitation in stroke: A review. NeuroRehabilitation 2018, 43, 77–97. [Google Scholar] [CrossRef] [Green Version]
  6. Cincotti, F.; Pichiorri, F.; Aricò, P.; Aloise, F.; Leotta, F.; Fallani, F.D.V.; Millán, J.D.R.; Molinari, M.; Mattia, D. EEG-based Brain-Computer Interface to support post-stroke motor rehabilitation of the upper limb. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 4112–4115. [Google Scholar]
  7. Ang, K.K.; Guan, C.T.; Chua, K.S.G.; Ang, B.T.; Kuah, C.W.K.; Wang, C.C.; Phua, K.S.; Chin, Z.Y.; Zhang, H.H. A Large Clinical Study on the Ability of Stroke Patients to Use an EEG-Based Motor Imagery Brain-Computer Interface. Clin. Eeg Neurosci. 2011, 42, 253–258. [Google Scholar] [CrossRef]
  8. Han, C.-H.; Müller, K.-R.; Hwang, H.-J. Brain-switches for asynchronous brain-computer interfaces: A systematic review. Electronics 2020, 9, 422. [Google Scholar] [CrossRef] [Green Version]
  9. Leeb, R.; Friedman, D.; Müller-Putz, G.R.; Scherer, R.; Slater, M.; Pfurtscheller, G. Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: A case study with a tetraplegic. Comput. Intell. Neurosci. 2007, 2007, 79642. [Google Scholar] [CrossRef] [Green Version]
  10. Müller-Putz, G.R.; Kaiser, V.; Solis-Escalante, T.; Pfurtscheller, G. Fast set-up asynchronous brain-switch based on detection of foot motor imagery in 1-channel EEG. Med. Biol. Eng. Comput. 2010, 48, 229–233. [Google Scholar] [CrossRef]
  11. Lee, S.-B.; Kim, H.-J.; Kim, H.; Jeong, J.-H.; Lee, S.-W.; Kim, D.-J. Comparative analysis of features extracted from EEG spatial, spectral and temporal domains for binary and multiclass motor imagery classification. Inf. Sci. 2019, 502, 190–200. [Google Scholar] [CrossRef]
  12. Yang, H.; Sakhavi, S.; Ang, K.K.; Guan, C. On the use of convolutional neural networks and augmented CSP features for multi-class motor imagery of EEG signals classification. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2620–2623. [Google Scholar]
  13. Park, S.H.; Lee, D.; Lee, S.G. Filter Bank Regularized Common Spatial Pattern Ensemble for Small Sample Motor Imagery Classification. IEEE Trans. Neural Syst. Rehabil. 2018, 26, 498–505. [Google Scholar] [CrossRef] [PubMed]
  14. Park, Y.; Chung, W. Frequency-Optimized Local Region Common Spatial Pattern Approach for Motor Imagery Classification. IEEE Trans. Neural Syst. Rehabil. 2019, 27, 1378–1388. [Google Scholar] [CrossRef] [PubMed]
  15. Wang, L.; Xu, G.; Yang, S.; Wang, J.; Guo, M.; Yan, W. Motor imagery BCI research based on sample entropy and SVM. In Proceedings of the Sixth International Conference on Electromagnetic Field Problems and Applications, Dalian, China, 19–21 June 2012; pp. 1–4. [Google Scholar]
  16. Chatterjee, R.; Bandyopadhyay, T. EEG based Motor Imagery Classification using SVM and MLP. In Proceedings of the 2nd International Conference on Computational Intelligence and Networks (CINE), Bhubaneswar, India, 11 January 2016; pp. 84–89. [Google Scholar]
  17. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-based brain-computer interfaces using motor-imagery: Techniques and challenges. Sensors 2019, 19, 1423. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Gu, L.; Yu, Z.; Ma, T.; Wang, H.; Li, Z.; Fan, H. EEG-based Classification of Lower Limb Motor Imagery with Brain Network Analysis. Neuroscience 2020, 436, 93–109. [Google Scholar] [CrossRef]
  19. Pfurtscheller, G.; Neuper, C.; Brunner, C.; Silva, F.L.D. Beta rebound after different types of motor imagery in man. Neurosci. Lett. 2005, 378, 156–159. [Google Scholar] [CrossRef]
  20. Chen, C.; Zhang, J.; Belkacem, A.N.; Zhang, S.; Xu, R.; Hao, B.; Gao, Q.; Shin, D.; Wang, C.; Ming, D. G-causality brain connectivity differences of finger movements between motor execution and motor imagery. J. Healthc. Eng. 2019, 2019, 5068283. [Google Scholar] [CrossRef] [Green Version]
  21. Kim, Y.K.; Park, E.; Lee, A.; Im, C.-H.; Kim, Y.-H. Changes in network connectivity during motor imagery and execution. PLoS ONE 2018, 13, e0190715. [Google Scholar] [CrossRef] [Green Version]
  22. Miller, K.J.; Schalk, G.; Fetz, E.E.; Nijs, M.D.; Ojemann, J.G.; Rao, R.P. Cortical activity during motor execution, motor imagery, and imagery-based online feedback. Proc. Natl. Acad. Sci. USA 2010, 107, 4430–4435. [Google Scholar] [CrossRef] [Green Version]
  23. Dai, M.; Zheng, D.; Liu, S.; Zhang, P. Transfer kernel common spatial patterns for motor imagery brain-computer interface classification. Comput. Math. Methods Med. 2018, 2018, 9871603. [Google Scholar] [CrossRef]
  24. Ge, S.; Yang, Q.; Wang, R.; Lin, P.; Gao, J.; Leng, Y.; Yang, Y.; Wang, H. A brain-computer interface based on a few-channel EEG-fNIRS bimodal system. IEEE Access 2017, 5, 208–218. [Google Scholar] [CrossRef]
  25. Herman, P.; Prasad, G.; McGinnity, T.M.; Coyle, D. Comparative analysis of spectral approaches to feature extraction for EEG-based motor imagery classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2008, 16, 317–326. [Google Scholar] [CrossRef] [PubMed]
  26. O’Brien, A.; Bertolucci, F.; Torrealba-Acosta, G.; Huerta, R.; Fregni, F.; Thibaut, A. Non-invasive brain stimulation for fine motor improvement after stroke: A meta-analysis. Eur. J. Neurol. 2018, 25, 1017–1026. [Google Scholar] [CrossRef] [PubMed]
  27. Erdoĝan, S.B.; Özsarfati, E.; Dilek, B.; Kadak, K.S.; Hanoĝlu, L.; Akın, A. Classification of motor imagery and execution signals with population-level feature sets: Implications for probe design in fNIRS based BCI. J. Neural Eng. 2019, 16, 026029. [Google Scholar] [CrossRef] [PubMed]
  28. Pfurtscheller, G.; Silva, F.H.L.D. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857. [Google Scholar] [CrossRef]
  29. Delgado-Bonal, A.; Marshak, A. Approximate entropy and sample entropy: A comprehensive tutorial. Entropy 2019, 21, 541. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Gubert, P.H.; Costa, M.H.; Silva, C.D.; Trofino-Neto, A. The performance impact of data augmentation in CSP-based motor-imagery systems for BCI applications. Biomed. Signal Process. Control 2020, 62, 102152. [Google Scholar] [CrossRef]
  31. Chatterjee, R.; Bandyopadhyay, T.; Sanyal, D.K.; Guha, D. Comparative analysis of feature extraction techniques in motor imagery EEG signal classification. In Proceedings of the First International Conference on Smart System, Innovations and Computing, Jaipur, India, 15–16 April 2017; pp. 73–83. [Google Scholar]
  32. Paul, J.K.; Iype, T.; Dileep, R.; Hagiwara, Y.; Koh, J.W.; Acharya, U.R. Characterization of fibromyalgia using sleep EEG signals with nonlinear dynamical features. Comput. Biol. Med. 2019, 111, 103331. [Google Scholar] [CrossRef]
  33. Espenhahn, S.; Rossiter, H.E.; Wijk, B.C.V.; Redman, N.; Rondina, J.M.; Diedrichsen, J.; Ward, N.S. Sensorimotor cortex beta oscillations reflect motor skill learning ability after stroke. Brain Commun. 2020, 2, fcaa161. [Google Scholar] [CrossRef]
  34. Malan, N.S.; Sharma, S. Feature selection using regularized neighbourhood component analysis to enhance the classification performance of motor imagery signals. Comput. Biol. Med. 2019, 107, 118–126. [Google Scholar] [CrossRef]
  35. Tang, Z.; Li, C.; Sun, S. Single-trial EEG classification of motor imagery using deep convolutional neural networks. Optik 2017, 130, 11–18. [Google Scholar] [CrossRef]
  36. Voinas, A.E.; Das, R.; Khan, M.A.; Brunner, I.; Puthusserypady, S. Motor Imagery EEG Signal Classification for Stroke Survivors Rehabilitation. In Proceedings of the 10th International Winter Conference on Brain-Computer Interface (BCI), Gangwon-do, Korea, 21–23 February 2022; pp. 1–5. [Google Scholar]
  37. Ge, S.; Liu, H.; Lin, P.; Gao, J.; Xiao, C.; Li, Z. Neural basis of action observation and understanding from first-and third-person perspectives: An fMRI study. Front. Behav. Neurosci. 2018, 12, 283. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Hong, K.-S.; Ghafoor, U.; Khan, M.J. Brain-machine interfaces using functional near-infrared spectroscopy: A review. Artif. Life Robot. 2020, 25, 204–218. [Google Scholar] [CrossRef]
  39. Fukuma, R.; Yanagisawa, T.; Yokoi, H.; Hirata, M.; Yoshimine, T.; Saitoh, Y.; Kamitani, Y.; Kishima, H. Training in use of brain–machine Interface-controlled robotic hand improves accuracy decoding two types of hand movements. Front. Neurosci. 2018, 12, 478. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Wu, Y.; Jiang, D.; Liu, X.; Bayford, R.; Demosthenous, A. A human–machine interface using electrical impedance tomography for hand prosthesis control. IEEE Trans. Biomed. Circuits Syst. 2018, 12, 1322–1333. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Uchitel, J.; Vidal-Rosas, E.E.; Cooper, R.J.; Zhao, H. Wearable, Integrated EEG–fNIRS Technologies: A Review. Sensors 2021, 21, 6106. [Google Scholar] [CrossRef]
  42. Jin, J.; Li, S.; Daly, I.; Miao, Y.; Liu, C.; Wang, X.; Cichocki, A. The study of generic model set for reducing calibration time in P300-based brain–computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 28, 3–12. [Google Scholar] [CrossRef]
  43. Ravi, A.; Beni, N.H.; Manuel, J.; Jiang, N. Comparing user-dependent and user-independent training of CNN for SSVEP BCI. J. Neural Eng. 2020, 17, 026028. [Google Scholar] [CrossRef]
  44. Nazari, M.R.; Nasrabadi, A.M.; Daliri, M.R. Single-trial decoding of motion direction during visual attention from local field potential signals. IEEE Access 2021, 9, 66450–66461. [Google Scholar] [CrossRef]
  45. Yao, L.; Sheng, X.; Mrachacz-Kersting, N.; Zhu, X.; Farina, D.; Jiang, N. Performance of brain–computer interfacing based on tactile selective sensation and motor imagery. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 26, 60–68. [Google Scholar] [CrossRef] [Green Version]
  46. Chin, Z.Y.; Zhang, X.; Wang, C.; Ang, K.K. EEG-based discrimination of different cognitive workload levels from mental arithmetic. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 17–21 July 2018; pp. 1984–1987. [Google Scholar]
  47. Ge, S.; Wang, P.; Liu, H.; Lin, P.; Gao, J.; Wang, R.; Iramina, K.; Zhang, Q.; Zheng, W. Neural activity and decoding of action observation using combined EEG and fNIRS measurement. Front. Hum. Neurosci. 2019, 13, 357. [Google Scholar] [CrossRef] [Green Version]
  48. Leng, Y.; Zhu, Y.; Ge, S.; Qian, X.; Zhang, J. Neural temporal dynamics of social exclusion elicited by averted gaze: An event-related potentials study. Front. Behav. Neurosci. 2018, 12, 21. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Lindig-León, C.; Bougrain, L. Comparison of sensorimotor rhythms in EEG signals during simple and combined motor imageries over the contra and ipsilateral hemispheres. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 3953–3956. [Google Scholar]
  50. Hashimoto, Y.; Ushiba, J.; Kimura, A.; Liu, M.; Tomita, Y. Change in brain activity through virtual reality-based brain-machine communication in a chronic tetraplegic subject with muscular dystrophy. BMC Neurosci. 2010, 11, 1–9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Ai, Q.; Chen, A.; Chen, K.; Liu, Q.; Zhou, T.; Xin, S.; Ji, Z. Feature extraction of four-class motor imagery EEG signals based on functional brain network. J. Neural Eng. 2019, 16, 026032. [Google Scholar] [CrossRef] [PubMed]
  52. Isa, N.M.; Amir, A.; Ilyas, M.; Razalli, M. Motor imagery classification in Brain computer interface (BCI) based on EEG signal by using machine learning technique. Bull. Electr. Eng. Inform. 2019, 8, 269–275. [Google Scholar] [CrossRef]
  53. Roy, Y.; Banville, H.; Albuquerque, I.; Gramfort, A.; Falk, T.H.; Faubert, J. Deep learning-based electroencephalography analysis: A systematic review. J. Neural Eng. 2019, 16, 051001. [Google Scholar] [CrossRef]
  54. Rejer, I. EEG feature selection for BCI based on motor imaginary task. Found. Comput. Decis. Sci. 2012, 37, 283. [Google Scholar] [CrossRef] [Green Version]
  55. Pineda, J.A. The functional significance of mu rhythms: Translating “seeing” and “hearing” into “doing”. Brain Res. Rev. 2005, 50, 57–68. [Google Scholar] [CrossRef]
  56. Yu, J.; Ang, K.K.; Guan, C.; Wang, C. A multimodal fNIRS and EEG-based BCI study on motor imagery and passive movement. In Proceedings of the 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; pp. 5–8. [Google Scholar]
  57. Selim, S.; Tantawi, M.M.; Shedeed, H.A.; Badr, A. A CSP\AM-BA-SVM Approach for Motor Imagery BCI System. IEEE Access 2018, 6, 49192–49208. [Google Scholar] [CrossRef]
  58. Vidaurre, C.; Blankertz, B. Towards a cure for BCI illiteracy. Brain Topogr. 2010, 23, 194–198. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Tasks in the experiment. M1–M3 were designed to examine feasibility, and M4–M5 to examine coordination. M1: move specific right fingers according to the auditory code; M2: move specific left fingers according to the auditory code; M3: make the gesture of holding a pen, ready to write; M4: unscrew the pen; M5: cross the fingers of both hands.
Figure 2. Timing paradigm of one trial: the duration of motor execution can be 15 s (tapping each finger for 3 s) or 4 s (other tasks); the endpoint of motor imagery depends on the participant’s self-regarded “completion”. The overall time course is estimated and denoted at the bottom. It can vary between subjects and tasks.
Figure 3. (a) The EEG system used in our study; (b) the recording scene: a participant follows the instructions shown on the screen while the EEG signals are recorded.
Figure 4. The 32-channel EEG recording montage used in our experiments. Channels C3, C4, and Cz are in the mid-central area, marked as red circles. REF denotes that Fz is the reference electrode.
Figure 5. “Follow-up” pattern based on the beta rebound works as an indicator of ME and MI in amplitude, latency, duration, and distribution. An example of the time courses and topo-plots of ME (parts (a,b)) and MI (parts (c,d)) event-related desynchronization/synchronization (ERD/ERS) during the motion Tap: Right Finger 1. The bold red lines are the average across five subjects, while the dashed lines are the individuals’ ERD/ERS time courses at channel Cz. “Stimulus” marks the time when subjects hear the auditory instructions. Topo-plots are from subject S01 for motion Tap: Right Finger 1 at the time when the beta rebound is the most remarkable (ME: 1.624 s; MI: 1.818 s). Black dots mark the locations of electrodes. ERS is in red while ERD is in blue. Note that parts (a,c) are based on a part of our whole dataset (26.32%) to make the time courses more explicit for demonstration.
Figure 6. Color map of the differences between ME and MI tasks during the motion, open a pen. Red blocks show ERS during ME, while blue blocks represent ERS during MI. In most cases, a “follow-up” pattern—a red block followed by a blue block—can be observed, marked by the black frames.
Figure 7. Confusion matrix of the “L1-SLR-LAP” classifier distinguishing left- and right-hand MI.
Table 1. Feature vectors classifying motor execution (ME) versus motor imagery (MI).
Feature Vectors | Size (No. of Trials × No. of Features)
Statistical Features | 532 × 6
Wavelet-based Features | 532 × 3
Power Features | 532 × 4
Total | 532 × 13
Table 2. Feature vectors classifying left versus right hand.
Feature Vectors | Size (No. of Trials × No. of Features)
Statistical Features | 100 × 18
Wavelet-based Features | 100 × 9
Power Features | 100 × 24
SampEn | 100 × 6
CSP | 100 × 2
Total | 100 × 59
Table 3. Percentage of “follow-up” pattern in subjects at Cz among all motions.
Subjects | Percentage (%) | Subjects | Percentage (%)
S01 | 50.00 | S17 | 57.14
S02 | 50.00 | S18 | 85.71
S05 | 57.14 | S19 | 42.86
S06 | 78.57 | S20 | 57.14
S07 | 42.86 | S22 | 57.14
S08 | 64.29 | S23 | 64.29
S09 | 50.00 | S27 | 64.29
S14 | 71.43 | S29 | 71.43
S15 | 71.43 | S30 | 50.00
S16 | 50.00 | Mean ± SD | 59.77 ± 11.95
Table 4. Percentage of “follow-up” pattern in motions at Cz among all subjects.
Motions | Percentage (%) | Motions | Percentage (%)
Tap: Right Finger 1 | 57.89 | Tap: Left Finger 4 | 57.89
Tap: Right Finger 2 | 36.84 | Tap: Left Finger 5 | 42.10
Tap: Right Finger 3 | 57.89 | Hold a Pen | 63.16
Tap: Right Finger 4 | 63.16 | Open a Pen | 84.21
Tap: Right Finger 5 | 52.63 | Finger-crossing | 89.47
Tap: Left Finger 1 | 68.42 | Arm Movement | 52.63
Tap: Left Finger 2 | 52.63 | Mean ± SD | 59.77 ± 13.58
Tap: Left Finger 3 | 57.89 |  |
Table 5. Accuracy of different classifiers used on our EEG data.
Models | Features Left | Accuracy
SVM | 59 | 62.00%
SLR-LAP | 2 | 57.78%
SLR-VAR | 9 | 50.22%
L1-SLR-LAP | 42 | 75.22%
L1-SLR-COMP | 35 | 58.67%
Table 6. Comparison of classification accuracies among different datasets and methods.
Authors | EEG Channels | Participants | Feature Extraction | Classifiers | Feature Selection | Average Accuracy
This work | 3 | 10 | Statistics, Wavelet Coefficients, Average Power, SampEn, CSP | SVM | L1-SLR-LAP | 75.2%
Malan et al., 2019 [34] | 3 | 10 | DTCWT | SVM | GA | 78.9%
 |  |  |  |  | PCA | 64.3%
 |  |  |  |  | ReliefF | 75.7%
 |  |  |  |  | RNCA | 80.7%
Tang et al., 2017 [35] | 28 | 2 | Power spectrum | SVM | - | 77.2%
Voinas et al., 2022 [36] | 16 | 6 | WPD+HOS | RF | - | 71.0%
 |  |  | CSP | RF | - | 66.0%
 |  |  | Filter Bank CSP | RF | - | 69.0%
CSP: common spatial pattern; DTCWT: dual-tree complex wavelet transform; GA: genetic algorithm; L1-SLR-LAP: L1-norm-SLR with a Laplace approximation; PCA: principal component analysis; RF: Random Forest; RNCA: regularized neighborhood component analysis; SampEn: sample entropy; SVM: support vector machine; WPD+HOS: wavelet packet decomposition combined with higher order statistics.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

