Article

The CSP-Based New Features Plus Non-Convex Log Sparse Feature Selection for Motor Imagery EEG Classification

1 School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin 541004, China
2 School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin 541004, China
3 School of Mathematics and Computational Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin 541004, China
4 School of Automation Science and Engineering, South China University of Technology, Guangzhou 510000, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(17), 4749; https://doi.org/10.3390/s20174749
Submission received: 3 July 2020 / Revised: 11 August 2020 / Accepted: 18 August 2020 / Published: 22 August 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

The common spatial pattern (CSP) is a very effective feature extraction method in motor imagery based brain computer interface (BCI), but its performance depends on the selection of the optimal frequency band. Although many approaches have been proposed to improve CSP, most of them suffer from high computational cost and long feature extraction time. To this end, three new feature extraction methods based on CSP and a new feature selection method based on non-convex log regularization are proposed in this paper. Firstly, EEG signals are spatially filtered by CSP, and then three new feature extraction methods are proposed, which we call CSP-Wavelet, CSP-WPD, and CSP-FB. For CSP-Wavelet and CSP-WPD, the discrete wavelet transform (DWT) or wavelet packet decomposition (WPD) is used to decompose the spatially filtered signals, and then the energy and standard deviation of the wavelet coefficients are extracted as features. For CSP-FB, the spatially filtered signals are filtered into multiple bands by a filter bank (FB), and then the logarithm of the variance of each band is extracted as a feature. Secondly, a sparse optimization method regularized with a non-convex log function, which we call LOG, is proposed for feature selection, and an optimization algorithm for LOG is given. Finally, ensemble learning is used for secondary feature selection and classification model construction. Combining the feature extraction and feature selection methods, a total of three new EEG decoding methods are obtained, namely CSP-Wavelet+LOG, CSP-WPD+LOG, and CSP-FB+LOG. Four public motor imagery datasets are used to verify the performance of the proposed methods. Compared to existing methods, the proposed methods achieve the highest average classification accuracies of 88.86%, 83.40%, 81.53%, and 80.83% in datasets 1–4, respectively, and the feature extraction time of CSP-FB is the shortest. The experimental results show that the proposed methods can effectively improve the classification accuracy and reduce the feature extraction time. With comprehensive consideration of classification accuracy and feature extraction time, CSP-FB+LOG has the best performance and can be used for real-time BCI systems.

1. Introduction

The brain computer interface (BCI) converts brain signals into control commands for external devices, establishing a new channel for humans to interact directly with the external environment [1]. This technique is particularly useful for patients with motor disability and upper body paralysis [2]. BCI can also be used by healthy people, for example for games or robot control [3]. Among the various brain signals, the scalp electroencephalogram (EEG) is easy to obtain; with low cost and high temporal resolution, EEG is widely used in BCI [4]. Motor imagery is a spontaneously generated EEG signal that does not require external stimulation, making it particularly suitable for patient rehabilitation training and motor control. However, the EEG signal is very weak, with a low signal-to-noise ratio and spatial blurring [5], so it is very difficult to extract stable and discriminative features. Therefore, feature extraction has always been a hotspot in the study of motor imagery based BCI. In addition, feature selection can reduce the feature dimension and noise interference, making the selected features more stable and discriminative. Therefore, research on feature selection is also very important.
Commonly used feature extraction methods include the autoregressive model [6], wavelet features [7], band power [8], and the common spatial pattern (CSP) [9]. CSP can effectively extract features of event-related synchronization (ERS) and event-related desynchronization (ERD) in motor imagery signals, so it has been widely used in BCI [10]. However, the performance of CSP depends to a large extent on the selection of the filtering frequency band, and the optimal frequency band is typically subject-specific, which makes it difficult to select manually [11]. There is a large body of research on frequency band selection, which can be divided into four categories. The first type of method combines CSP with time-frequency analysis. Based on orthogonal empirical mode decomposition (OEMD), an FIR filter, and the CSP algorithm, Li et al. [12] proposed a novel feature extraction method. Lin et al. [13] used the wavelet-CSP algorithm to recognize driving actions. Robinson et al. [14] used the wavelet-CSP algorithm to classify fast and slow hand movements. Feng et al. [15] proposed a feature extraction algorithm based on CSP and wavelet packets for motor imagery EEG signals, and Yang et al. [16] proposed subject-based feature extraction using the Fisher WPD-CSP method. In the second type of method, the spatial and spectral filters are optimized simultaneously. For example, the common spatio-spectral pattern algorithm (CSSP) was proposed by Lemm et al. [17], the common sparse spectral spatial pattern algorithm (CSSSP) was proposed by Dornhege et al. [18], and a new discriminant filter bank common spatial pattern (DFBCSP) was proposed by Higashi et al. [19]. In the third type of method, the original EEG signals are filtered into multiple frequency bands, CSP features are extracted in each band, and finally the features of the optimal frequency bands are selected for classification. There are many research works in this area, such as SBCSP [20], FBCSP [11], DFBCSP [21], SWDCSP [22], SFBCSP [23], and SBLFB [24]. In the fourth type of method, an intelligent optimization method is used to select the optimal frequency band. The multiple fixed frequency bands used in the third type are determined by human subjective experience, so the obtained frequency bands may not be optimal, whereas an intelligent optimization algorithm can select a frequency band of any length. Wei et al. [25] used binary particle swarm optimization for frequency band selection in motor imagery-based brain-computer interfaces. Kumar et al. [26] proposed three methods to optimize the temporal filter parameters, including particle swarm optimization (PSO), the genetic algorithm (GA), and the artificial bee colony (ABC). Rivero et al. [27] used genetic algorithms and k-nearest neighbors for automatic frequency band selection. The first type of method uses time-frequency analysis to obtain frequency information; it needs to decompose the EEG signals of each channel, which requires a large amount of calculation and is time-consuming, especially for wavelet packet decomposition. The second type is difficult to solve and prone to local solutions. In the third type, the EEG signals are filtered into multiple sub-bands, which is very computationally intensive. The disadvantage of the fourth type is that it requires a long time for model training. Recently, the application of deep learning to motor imagery classification has become more and more widespread [28]. Tang et al. [29] used conditional empirical mode decomposition (CEMD) and a one-dimensional multi-scale convolutional neural network (1DMSCNN) to recognize motor imagery EEG signals. Cheng et al. [30] classified EEG emotions using a deep forest. However, features extracted by deep learning are abstract and difficult to interpret [31]. In addition, compared with traditional machine learning methods, deep learning shows no obvious advantage [32].
The existing feature selection methods are mainly divided into three categories: filter, wrapper, and embedded [33]. The filter method uses independent evaluation criteria, so its feature selection process is independent of the subsequent classifier. Koprinska et al. [34] evaluated five feature selection methods for the brain computer interface, including information gain ranking, correlation-based feature selection, relief, consistency-based feature selection, and 1R ranking; experimental results showed that the top three feature selectors in terms of classification accuracy were correlation-based feature selection, information gain, and 1R ranking. Mutual information and its one-versus-rest multi-class extension were used to select optimal spatial-temporal features in [35]. Li et al. [36] combined the Fisher score and a classifier-dependent structure to implement feature selection: based on a descending sort of all Fisher score values, wrapper models with a support vector machine (SVM) and a graph regularized extreme learning machine (GELM) were applied, and a 10-fold cross-validation scheme was used to select the generalized features based on the training set. Mehmood et al. [37] selected the optimal EEG features using a balanced one-way ANOVA after calculating the Hjorth parameters for different frequency ranges; features selected by this statistical method outperformed univariate and multivariate features, and the optimal features were further processed for emotion classification using SVM, k-nearest neighbor (k-NN), linear discriminant analysis (LDA), naive Bayes, random forest, deep learning, and four ensemble methods (bagging, boosting, stacking, and voting). The maximum of the average distance between events and non-events was used to select optimal EEG features in [38]. The filter method has certain advantages, such as low computational cost, but it does not consider the correlation between features and is independent of the classifier, so the classification accuracy is not high. The wrapper method uses the performance of the classifier as the evaluation criterion for feature selection. An efficient feature selection method was proposed in [39]: least angle regression (LARS) was used to properly rank each feature, and then an efficient leave-one-out (LOO) estimation based on the PRESS statistic was used to choose the most relevant features. In [40], the genetic algorithm was used to select EEG signal features, with the fitness function being the EEG classification error calculated using an LDA classifier. Rakshit et al. [41] employed the ABC clustering algorithm to reduce the features for motor imagery EEG data. Baig et al. [42] proposed a new hybrid method to select features: a differential evolution (DE) optimization algorithm was used to search the feature space to generate the optimal feature subset, with performance evaluated by an SVM classifier. Liu et al. [43] proposed a method combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG, where the learning automata were used as a parameter optimization tool to avoid local optima. The wrapper method needs to train and test the classifier when evaluating each candidate feature subset, which is computationally expensive and tends to overfit. The embedded method integrates feature selection with the training process of the classifier, performing feature selection and classification simultaneously.
Therefore, the embedded feature selection method has been widely used in recent years. Miao et al. [44] used LASSO to select the important space-frequency-time feature components of motor imagery. The minimum-redundancy maximum-relevance (mRMR) criterion and LASSO were used for feature selection in [45]: in both feature selection methods, the first three features were selected, and then the features common to mRMR and LASSO regularization were selected to train the classification model. Zhang et al. [46] proposed a novel algorithm, namely the temporally constrained sparse group spatial pattern (TSGSP), which was modeled by combining the sparse group LASSO and fused LASSO penalties; the features of different filter band and time window combinations were optimized and selected. Wang et al. [47] used the sparse group LASSO to simultaneously perform feature selection and channel selection on motor imagery signals. Jiao et al. [48] proposed a sparse group LASSO representation model for transfer learning, in which the group LASSO selected subjects and LASSO selected sample data. The above sparse optimization methods are convex optimization models. Although they have achieved good results, many applications have shown that non-convex sparse optimization methods can obtain better performance [49]. For example, LASSO has a bias problem, which results in significantly biased estimates, and it cannot achieve reliable recovery with the fewest observations [50].
Aiming to resolve the heavy computation and long processing time of the Wavelet-CSP [13,14], WPD-CSP [15,16], and FBCSP [11] methods, we propose three new feature extraction methods, namely CSP-Wavelet, CSP-WPD, and CSP-FB. Firstly, the original EEG signals are pre-processed, including time window selection and band-pass filtering. Then, the CSP transform is performed. For CSP-Wavelet, the discrete wavelet transform (DWT) is used to decompose the spatially filtered signals, and then the energy and standard deviation of the wavelet coefficients are extracted as features. For CSP-WPD, wavelet packet decomposition (WPD) is used to decompose the spatially filtered signals; similar to CSP-Wavelet, the energy and standard deviation of the wavelet coefficients are extracted as features. For CSP-FB, the spatially filtered signals are filtered into multiple frequency bands by a filter bank (FB), and then the logarithm of the variance of each band is extracted as a feature. To solve the bias problem of LASSO, a new feature selection method is proposed in which a non-convex function is used to sparsely constrain the feature weights. Since the non-convex function is a log function, we call this method LOG. In addition, in order to further optimize feature selection and enhance the robustness of the classification model, an ensemble learning method is proposed for secondary feature selection and the construction of multiple classification models. Fisher linear discriminant analysis (FLDA) is used for classification. Combining the feature extraction and feature selection methods, we obtain three EEG decoding methods, namely CSP-Wavelet+LOG, CSP-WPD+LOG, and CSP-FB+LOG. Experimental results show that the classification performance of the three newly proposed methods is better than that of the CSP, Wavelet-CSP, WPD-CSP, SFBCSP, and SBLFB methods. In terms of feature extraction time, the proposed methods require much less time than the Wavelet-CSP, WPD-CSP, SFBCSP, and SBLFB methods.
The main contributions of this paper include three aspects. Firstly, we propose three new feature extraction methods based on CSP. These three methods can effectively improve the classification performance of CSP while reducing the feature extraction time. Secondly, we propose a new feature selection method. This method is a non-convex sparse optimization method, which can effectively solve the bias problem of LASSO and select more discriminative features. Thirdly, we use ensemble learning for secondary feature selection and classification model construction, which makes the EEG decoding method more robust and stable.
The content of this paper is organized as follows. Section 2 introduces the experimental data, the traditional CSP feature extraction method, the three new feature extraction methods, the new feature selection method, and the secondary feature selection and classification model construction using ensemble learning. The experimental results are shown in Section 3. Section 4 further discusses and analyzes the experimental results. The conclusion is provided in Section 5.

2. Materials and Methods

2.1. EEG Data Description

Four public motor imagery EEG datasets are briefly described as follows. For detailed information, please refer to the related literature or websites.
Dataset 1: data set I of BCI competition IV (2008) [51]. This dataset has a total of 59 channels with a sampling rate of 100 Hz. There are three types of motor imagery tasks: left hand, right hand, and right foot. Each of the seven subjects (1a, 1b, 1c, 1d, 1e, 1f, 1g) selected two of these tasks to perform. In this paper, the calibration data of this dataset are used for classification, which include two runs with 100 single trials each. The first run was selected as the training set and the second run as the test set. For detailed information, please refer to the following website: http://www.bbci.de/competition/IV/.
Dataset 2: data set IIa of BCI competition IV (2008) [52]. This dataset has a total of 22 channels with a sampling rate of 250 Hz. Nine subjects (A01, A02, …, A09) performed four types of motor imagery tasks: left hand, right hand, foot, and tongue. There are two sessions in this dataset, and each session consisted of 6 runs with 48 trials (12 trials per class) per run. In this paper, the first session was selected as the training set and the second session as the test set. The training and test sets of each subject were 72 trials. Following the practice of reference [53], the four types of tasks are arranged and combined to obtain multiple binary classification problems, that is, $C_4^2 = 6$ groups of binary classification. Therefore, a total of 9 × 6 = 54 data subsets are obtained. The left hand, right hand, foot, and tongue motor imagery tasks are represented by the letters L, R, F, and T, respectively; for example, A01-LR indicates that subject A01 performed the left hand and right hand motor imagery tasks. For additional information, please refer to the following website: http://www.bbci.de/competition/IV/.
Dataset 3: data set IIb of BCI competition IV (2008) [24]. This dataset has a total of 3 channels with a sampling rate of 250 Hz. Nine subjects (B01, B02, …, B09) performed two types of motor imagery tasks: left hand and right hand. There are five sessions in this dataset; however, only the third training session (B0103T, B0203T, …, B0903T) is used in this paper [24]. This session consists of 160 trials, half for each class; 80 trials are used for the training set, and the other 80 trials are used for the test set. For additional information, please refer to the following website: http://www.bbci.de/competition/IV/.
Dataset 4: data set provided by David Steyrl (2016) [54]. This dataset has a total of 15 channels with a sampling rate of 512 Hz. Fourteen subjects performed two types of motor imagery tasks: right hand and foot. The data of each subject were divided into two parts: the first part (runs 1–5) was used for the training set, whereas the second part (runs 6–8) was used for the test set, and each run consisted of 20 trials (10 trials per class). Therefore, the training and test sets contain 100 and 60 trials, respectively. The original signals are downsampled to a sampling rate of 256 Hz. For more information, please refer to the following website: http://bnci-horizon-2020.eu/database/data-sets.
All datasets are scalp EEG signals, which are recorded by multiple electrode sensors placed on the scalp. Figure 1 shows the distribution of electrodes on the scalp for the four datasets. We focus on the signal processing and pattern recognition of electrode sensor signals in this paper.

2.2. The Processing Flow of the Proposed Method

Figure 2 is a flowchart of the overall processing of the proposed method. It mainly includes preprocessing, CSP transformation, feature extraction, feature selection, and classification. Each part will be discussed in detail in the following content.

2.3. Data Preprocessing

(1)
A 6th order Butterworth filter is used to perform 8–30 Hz band-pass filtering on the EEG signals of each channel, which filters out the EEG components that are not related to motor imagery. Butterworth filters are often used for EEG band-pass filtering [53]; we follow the practice of most of the literature. Motor imagery causes ERS and ERD phenomena, that is, power changes in specific frequency bands of EEG signals, specifically the mu (8–12 Hz) and beta (18–26 Hz) rhythms [2]. Therefore, an 8–30 Hz band-pass filter is usually used to filter the motor imagery signals [55].
(2)
Extracting single-trial data. The time window of dataset 1 is 0.5–3.5 s, and that of the other datasets is 0.5–2.5 s, where 0 s is the time when the motor imagery task starts. The time window of datasets 2–4 differs from that of dataset 1 because the sampling rate of datasets 2–4 is relatively high; choosing the 0.5–2.5 s window reduces the amount of data and thus the amount of calculation. A minimal implementation sketch of this preprocessing is given below.
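For illustration, the following sketch shows how this preprocessing could be implemented in Python with NumPy and SciPy; the function names, the exact filter-design call, and the cue-marker format are our assumptions, not the authors' code.

```python
# Preprocessing sketch: 8-30 Hz Butterworth band-pass filtering and
# single-trial extraction relative to cue onsets (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, low=8.0, high=30.0, order=6):
    """Band-pass filter an EEG array of shape (channels, samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, eeg, axis=1)

def extract_trials(eeg, fs, cue_samples, t_start=0.5, t_end=2.5):
    """Cut single trials; cue_samples holds the cue onsets in samples."""
    s0, s1 = int(t_start * fs), int(t_end * fs)
    return np.stack([eeg[:, c + s0:c + s1] for c in cue_samples])
```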

2.4. Feature Extraction

2.4.1. CSP Transformation

For the binary classification problem, CSP looks for a set of spatial filters that maximize the variance of the band-pass filtered EEG signals of one class while minimizing that of the other class. The spatial filter $w$ is calculated by simultaneous diagonalization of the sample covariance matrices of both classes, as follows:
$J(w) = \dfrac{w^T \bar{C}_1 w}{w^T \bar{C}_2 w}$   (1)
where $T$ denotes the transpose, and $\bar{C}_1$ and $\bar{C}_2$ represent the average covariance matrices of the two types of tasks, respectively, which are defined as follows:
$\bar{C}_k = \dfrac{1}{N_k} \sum_{n=1}^{N_k} \dfrac{D_{(k,n)} D_{(k,n)}^T}{\operatorname{trace}\left(D_{(k,n)} D_{(k,n)}^T\right)}, \quad k = 1, 2$   (2)
where $\operatorname{trace}(\cdot)$ denotes the matrix trace, $N_k$ represents the number of samples of the $k$th task, that is, the number of single trials, and $D_{(k,n)} \in \mathbb{R}^{C \times K}$ represents the $n$th trial of the $k$th task, where $C$ is the total number of EEG channels and $K$ is the number of samples per channel.
Formula (1) can be transformed into the following generalized eigenvalue problem [55].
$\bar{C}_2^{-1} \bar{C}_1 w = \lambda w$   (3)
The spatial filters are then the eigenvectors of $M = \bar{C}_2^{-1}\bar{C}_1$. The eigenvectors of $M$ are arranged in descending order of their eigenvalues to obtain $\tilde{M}$, and the first $m$ columns and the last $m$ columns of $\tilde{M}$ are usually taken as the final spatial filters, denoted as $W$. In all experiments in this paper, $m$ is set to 3. For single-trial data $D$, the spatially projected signal is:
$Z = W^T D$   (4)
The traditional CSP feature extraction method extracts the logarithm of the variances of spatially filtered signals as features, details as follows:
$f_p = \log\left(\dfrac{\operatorname{var}(Z_p)}{\sum_{i=1}^{2m}\operatorname{var}(Z_i)}\right), \quad p = 1, 2, \dots, 2m$   (5)
where $\operatorname{var}(\cdot)$ denotes the variance. Finally, the feature vector of the single-trial data is obtained by computing formula (5), that is, $x = [f_1, f_2, \dots, f_{2m}]$.
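The following is a minimal sketch of the CSP computation and the log-variance features of formulas (1)–(5), assuming trials are stored as (channels × samples) NumPy arrays; the helper names are hypothetical, and the generalized eigenvalue problem is solved in the equivalent form $\bar{C}_1 w = \lambda(\bar{C}_1 + \bar{C}_2)w$, which yields the same eigenvectors (in the same ordering) as formula (3).

```python
# CSP sketch following Eqs. (1)-(5); illustrative, not the authors' code.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_1, trials_2, m=3):
    """trials_k: (n_trials, channels, samples). Returns W with 2m filters as columns."""
    def avg_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]   # Eq. (2)
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(trials_1), avg_cov(trials_2)
    vals, vecs = eigh(C1, C1 + C2)            # same eigenvectors as Eq. (3)
    vecs = vecs[:, np.argsort(vals)[::-1]]    # descending eigenvalue order
    return np.hstack([vecs[:, :m], vecs[:, -m:]])   # first and last m filters

def csp_logvar_features(trial, W):
    Z = W.T @ trial                           # Eq. (4)
    v = np.var(Z, axis=1)
    return np.log(v / v.sum())                # Eq. (5)
```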

2.4.2. New Feature Extraction Methods

CSP-Wavelet

After spatial filtering, we obtain $Z$ as in Section 2.4.1. The discrete wavelet transform is performed on each channel of $Z$. The derivation of the wavelet decomposition formula is described in detail in [14]; interested readers can refer to that work. After the wavelet decomposition, the frequency sub-bands related to motor imagery are selected, and the energy and standard deviation of the wavelet coefficients of the selected sub-bands are extracted as features. The wavelet basis is db4. To select sub-bands related to motor imagery, the number of wavelet decomposition layers must be chosen according to the sampling rate of the dataset. For dataset 1 (sampling rate 100 Hz), the number of decomposition layers is 3 in this paper. For datasets 2–4 (sampling rates of 250 Hz and 256 Hz), the number of decomposition layers is 4. The selection of the number of decomposition layers is discussed in detail in the discussion section. Figure 3 shows the process of wavelet decomposition for the different sampling rates. The sampling rates of 256 Hz and 250 Hz are very close, so we only consider the decomposition process for 250 Hz, and the selected sub-bands of 256 Hz and 250 Hz are the same. Wavelet coefficients of sub-bands with frequencies in the range of 8–30 Hz are selected and used for feature extraction. The selected sub-bands are marked by the red dotted frames in Figure 3.
The energy of the wavelet coefficients of the selected sub-bands is calculated as follows:
$e_i = \sum_{j=1}^{N} |D_{ij}|^2, \quad i = 1, 2, \dots, B$   (6)
where $B$ represents the number of selected sub-bands, $N$ represents the number of wavelet coefficients, and $D_{ij}$ represents the $j$th wavelet coefficient of the $i$th sub-band.
The standard deviation of the wavelet coefficients of the selected sub-bands is calculated as follows:
$s_i = \left(\dfrac{1}{N-1}\sum_{j=1}^{N}\left(D_{ij} - \mu_i\right)^2\right)^{1/2}, \quad i = 1, 2, \dots, B$   (7)
where the meaning of $B$, $N$, and $D_{ij}$ is consistent with formula (6), and $\mu_i$ represents the average value of the wavelet coefficients of the $i$th sub-band. Finally, the feature vector of the CSP-Wavelet feature extraction method is as follows:
$x_{DWT} = [\underbrace{e_1^1, s_1^1, e_2^1, s_2^1, \dots, e_B^1, s_B^1}_{\text{channel } 1};\ \underbrace{e_1^2, s_1^2, e_2^2, s_2^2, \dots, e_B^2, s_B^2}_{\text{channel } 2};\ \dots;\ \underbrace{e_1^{2m}, s_1^{2m}, e_2^{2m}, s_2^{2m}, \dots, e_B^{2m}, s_B^{2m}}_{\text{channel } 2m}]$   (8)
where $e_i^c$ and $s_i^c$ represent the energy and standard deviation of the $i$th sub-band of the $c$th channel of $Z$, respectively.
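A minimal sketch of the CSP-Wavelet features of formulas (6)–(8), using PyWavelets, might look as follows; the indices of the sub-bands kept for the 8–30 Hz range are illustrative assumptions and should be matched to Figure 3.

```python
# CSP-Wavelet feature sketch (db4, 3 levels as used for the 100 Hz dataset).
import numpy as np
import pywt

def csp_wavelet_features(Z, wavelet="db4", level=3, keep=(1, 2)):
    """Z: (2m, samples) spatially filtered signals.
    keep: indices into the coefficient list [cA_L, cD_L, ..., cD_1] of the
    sub-bands whose frequency range falls inside 8-30 Hz (assumed here)."""
    feats = []
    for ch in Z:                                  # one channel of Z at a time
        coeffs = pywt.wavedec(ch, wavelet, level=level)
        for i in keep:
            d = coeffs[i]
            feats.append(np.sum(np.abs(d) ** 2))  # energy, Eq. (6)
            feats.append(np.std(d, ddof=1))       # standard deviation, Eq. (7)
    return np.asarray(feats)                      # feature vector, Eq. (8)
```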

CSP-WPD

Similar to CSP-Wavelet, wavelet packet decomposition is performed on each channel of $Z$. The derivation of the wavelet packet decomposition formula is described in detail in [56]; interested readers can refer to that work. The energy and standard deviation of the wavelet coefficients of the selected sub-bands are extracted as features. The wavelet basis and the number of decomposition layers are the same as for CSP-Wavelet. Figure 4 shows the process of wavelet packet decomposition for the different sampling rates. Wavelet coefficients of sub-bands with frequencies in the range of 8–30 Hz are selected and used for feature extraction. As with CSP-Wavelet, the selected sub-bands of 256 Hz and 250 Hz are the same. The selected sub-bands are marked by the red dotted frames in Figure 4.
The calculations of the energy and standard deviation are the same as for CSP-Wavelet, so the final feature vector takes a similar form:
$x_{WPD} = [\underbrace{e_1^1, s_1^1, e_2^1, s_2^1, \dots, e_B^1, s_B^1}_{\text{channel } 1};\ \underbrace{e_1^2, s_1^2, e_2^2, s_2^2, \dots, e_B^2, s_B^2}_{\text{channel } 2};\ \dots;\ \underbrace{e_1^{2m}, s_1^{2m}, e_2^{2m}, s_2^{2m}, \dots, e_B^{2m}, s_B^{2m}}_{\text{channel } 2m}]$   (9)
where B represents the number of the selected sub-bands of CSP-WPD.
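Analogously, a sketch of the CSP-WPD features using PyWavelets' wavelet packet decomposition could look as follows; the rule used to pick leaf nodes overlapping 8–30 Hz is an assumption and should be matched to the sub-bands marked in Figure 4.

```python
# CSP-WPD feature sketch (db4, 4 levels as used for the 250/256 Hz datasets).
import numpy as np
import pywt

def csp_wpd_features(Z, fs, wavelet="db4", level=4, band=(8.0, 30.0)):
    """Z: (2m, samples). Keeps leaf nodes whose nominal range overlaps 8-30 Hz."""
    feats = []
    width = (fs / 2) / 2 ** level                     # bandwidth of each leaf node
    for ch in Z:
        wp = pywt.WaveletPacket(data=ch, wavelet=wavelet, maxlevel=level)
        for k, node in enumerate(wp.get_level(level, order="freq")):
            lo, hi = k * width, (k + 1) * width       # nominal frequency range
            if hi > band[0] and lo < band[1]:         # overlaps the 8-30 Hz band
                d = node.data
                feats.append(np.sum(np.abs(d) ** 2))  # energy
                feats.append(np.std(d, ddof=1))       # standard deviation
    return np.asarray(feats)
```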

CSP-FB

After spatial filtering, the signal of each channel of $Z$ is filtered into 10 sub-bands with a bandwidth of 4 Hz and an overlap of 2 Hz in the range of 8–30 Hz. A 6th order Butterworth filter is used. Then, the logarithm of the variance of each sub-band is extracted as a feature. The final feature vector is
$x_{FB} = [\underbrace{f_1^1, f_2^1, \dots, f_{2m}^1}_{\text{filter band } 1};\ \underbrace{f_1^2, f_2^2, \dots, f_{2m}^2}_{\text{filter band } 2};\ \dots;\ \underbrace{f_1^B, f_2^B, \dots, f_{2m}^B}_{\text{filter band } B}]$   (10)
where B represents the number of the filter bands.
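A minimal sketch of the CSP-FB features of formula (10) is given below; the filter-design details and the choice of whether to normalize the variances per band are our assumptions.

```python
# CSP-FB feature sketch: ten overlapping 4 Hz bands (8-12, 10-14, ..., 26-30 Hz)
# applied to the spatially filtered signals, then log-variance per band.
import numpy as np
from scipy.signal import butter, filtfilt

def csp_fb_features(Z, fs, order=6):
    """Z: (2m, samples). Returns a feature vector of length 10 * 2m."""
    bands = [(8 + 2 * k, 12 + 2 * k) for k in range(10)]   # 8-12, ..., 26-30 Hz
    feats = []
    for lo, hi in bands:
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        Zb = filtfilt(b, a, Z, axis=1)
        v = np.var(Zb, axis=1)
        feats.extend(np.log(v))    # log of variance per channel of this band
    return np.asarray(feats)
```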

2.5. Feature Selection

After feature extraction, we obtain a sample feature matrix $X = (x_1, x_2, \dots, x_N)^T$, where $X \in \mathbb{R}^{N \times P}$, $N$ is the total number of feature samples, $P$ is the dimension of a feature sample, and $x_i \in \mathbb{R}^P,\ i \in \{1, 2, \dots, N\}$ represents the $i$th feature sample (feature vector). Depending on the feature extraction method, $x$ can be $x_{DWT}$, $x_{WPD}$, or $x_{FB}$.
The feature vector obtained by the feature extraction method usually contains redundant information. The redundant features not only increase the complexity of the classification model and model training time, but also easily lead to overfitting. Therefore, feature selection is required to remove redundant features and improve the classification accuracy. LASSO [57] is often used for feature selection, and its mathematical model is as follows:
$\min_w \dfrac{1}{2}\|y - Xw\|_2^2 + \lambda \|w\|_1$   (11)
where $\lambda > 0$ is the regularization parameter, $w$ is the feature weight vector, $\|w\|_1 = \sum_{i=1}^{P}|w_i|$, $w_i$ is the $i$th element of $w$, and $y = (y_1, y_2, \dots, y_N)^T$ are the sample labels with $y_i \in \{-1, 1\}$. Although LASSO is widely used and works well, the features selected by LASSO are usually too sparse and scattered throughout the feature space. When the feature dimension is much larger than the sample size, which is very common in EEG signal decoding, the selection results are unstable [58]. In addition, LASSO has a bias problem, which results in significantly biased estimates, and it cannot achieve reliable recovery with the fewest observations [59].
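LASSO in this paper is implemented with the SLEP toolbox; purely for illustration, an equivalent selection step can be sketched with scikit-learn's Lasso (note that scikit-learn scales the data-fit term by 1/(2N), so its alpha corresponds to λ/N in formula (11)).

```python
# Illustrative LASSO-based feature selection (scikit-learn, not the SLEP
# toolbox used in the paper). X: (N, P) feature matrix, y: labels in {-1, +1}.
import numpy as np
from sklearn.linear_model import Lasso

def lasso_select(X, y, lam):
    # scikit-learn minimizes (1/(2N))||y - Xw||^2 + alpha*||w||_1,
    # so alpha = lam / N matches the objective in Eq. (11).
    model = Lasso(alpha=lam / X.shape[0], fit_intercept=False, max_iter=10000)
    model.fit(X, y)
    w = model.coef_
    return np.flatnonzero(w != 0), w   # indices of selected features and weights
```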
In order to ameliorate the bias problem of LASSO, we propose a non-convex sparse optimization method for feature selection. The mathematical model can be described as follows:
$\min_w \dfrac{1}{2}\|y - Xw\|_2^2 + \lambda \log\left(1 + \dfrac{\|w\|_1}{a}\right)$   (12)
where $a > 0$ is the scale parameter; $a$ is set to 0.001 in this paper. This concave log function encourages sparsity better than the $\ell_1$-norm and penalizes the elements non-uniformly [60]. Many efficient algorithms have been proposed for solving the minimization problem (12), such as proximal algorithms [61] or the alternating direction method of multipliers [62]. Proximal splitting algorithms, including iterative shrinkage thresholding [63], are popular methods for solving (11) and (12). Proximal gradient methods have several advantages over other methods: they handle functions that are non-convex, or non-smooth and convex; they have simple forms that are easy to derive and implement; and, in particular, they can be used for large-scale problems.
In this paper, we therefore use iterative log thresholding [64] to solve (12). It has only two basic steps, which are iterated until convergence: (i) Gradient step. Define an intermediate point $v^t$ at the $t$th step by taking a gradient step with respect to the differentiable term:
$v^t = w^t - \dfrac{1}{\gamma} X^T\left(X w^t - y\right)$   (13)
(ii) Proximal operator step. Evaluate the proximal operator of the non-convex log function at the intermediate point $v^t$:
$w^{t+1} = \operatorname{prox}_{\gamma\log}(v^t) = \operatorname{prox}_{\gamma\log}\left(w^t - \dfrac{1}{\gamma} X^T\left(X w^t - y\right)\right)$   (14)
where $\gamma = \|X^T X\|_2$ and $\operatorname{prox}_{\gamma\log}(v^t)$ is the proximal operator of the log regularization function, which is defined as
$w^{t+1} = \operatorname{prox}_{\gamma\log}(v^t) = \arg\min_w\ \lambda \log\left(1 + \dfrac{\|w\|_1}{a}\right) + \dfrac{\gamma}{2}\|w - v^t\|_2^2$   (15)
From [65], we know that (15) has an explicit solution. Therefore, $w^{t+1}$ is given by the log function's proximal operator:
$w^{t+1} = \dfrac{\operatorname{sign}(v^t)}{2}\left(|v^t| - a + \sqrt{\left(a - |v^t|\right)^2 + 4\max\left(a|v^t| - \lambda/\gamma,\ 0\right)}\right)$   (16)
where $\operatorname{sign}(v^t)$ is the algebraic sign of $v^t$. Hence, the detailed iteration steps for solving (12) can be expressed as
$\begin{cases} v^t = w^t - \dfrac{1}{\gamma} X^T\left(X w^t - y\right) \\ w^{t+1} = \dfrac{\operatorname{sign}(v^t)}{2}\left(|v^t| - a + \sqrt{\left(a - |v^t|\right)^2 + 4\max\left(a|v^t| - \lambda/\gamma,\ 0\right)}\right) \end{cases}$   (17)
We can see that the iteration scheme (17) is easy to implement and only involves matrix-vector multiplications; moreover, every step has a closed-form solution, so it is suitable for large-scale problems. Finally, the convergence of (17) is established in [64], and we refer interested readers to [64] for more details.
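A minimal sketch of the iterative log thresholding scheme (17) is given below; the stopping rule and iteration limit are our assumptions, not part of the original algorithm description.

```python
# Iterative log thresholding sketch for the LOG model (12), following Eq. (17).
import numpy as np

def log_feature_selection(X, y, lam, a=1e-3, n_iter=500, tol=1e-6):
    """X: (N, P) feature matrix, y: (N,) labels in {-1, +1}. Returns weights w."""
    gamma = np.linalg.norm(X.T @ X, 2)         # spectral norm, step-size constant
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        v = w - (X.T @ (X @ w - y)) / gamma    # gradient step, Eq. (13)
        # proximal operator of the log penalty, Eq. (16)
        w_new = np.sign(v) / 2 * (
            np.abs(v) - a
            + np.sqrt((a - np.abs(v)) ** 2
                      + 4 * np.maximum(a * np.abs(v) - lam / gamma, 0.0))
        )
        if np.linalg.norm(w_new - w) < tol * max(np.linalg.norm(w), 1.0):
            w = w_new
            break
        w = w_new
    return w
```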

2.6. Secondary Feature Selection and Classification Model Construction

In order to select more effective features and construct a more robust classification model, we propose an ensemble learning method for secondary feature selection and the construction of multiple classification models. The overall processing flow is shown in Figure 5, where $|w|$ represents the absolute value of $w$. From Section 2.5, we obtain feature weights after feature selection is performed by the LASSO or LOG method. We further select features by setting a series of weight thresholds. The candidate threshold parameters are [0, 0.1, ..., 0.8]. Features whose absolute weight is larger than the set threshold are selected.
During the training phase, different thresholds yield different feature subsets, and each feature subset is used to train a different classification model. In this paper, we use FLDA as the classifier, so we obtain multiple FLDA classification models. During the test phase, we use the same rules as in the training phase to select the feature subsets, and then use the classification models obtained in the training phase for classification. Because there are multiple classification models, we obtain multiple classification accuracies and take their maximum as the final classification accuracy. It is worth noting that if a feature subset is empty, the corresponding classification accuracy is directly set to 0.
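A sketch of this secondary feature selection and ensemble procedure is given below, using scikit-learn's linear discriminant analysis as a stand-in for the FLDA classifier; the helper names are hypothetical.

```python
# Secondary feature selection and ensemble of LDA models (illustrative sketch).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

THRESHOLDS = np.arange(0.0, 0.81, 0.1)   # candidate weight thresholds [0, 0.1, ..., 0.8]

def train_ensemble(X_train, y_train, w):
    """w: feature weights from LASSO/LOG. Returns a list of (feature indices, model)."""
    models = []
    for th in THRESHOLDS:
        idx = np.flatnonzero(np.abs(w) > th)
        if idx.size == 0:
            models.append((idx, None))    # empty subset -> accuracy later set to 0
            continue
        clf = LinearDiscriminantAnalysis().fit(X_train[:, idx], y_train)
        models.append((idx, clf))
    return models

def test_ensemble(models, X_test, y_test):
    """Returns the maximum accuracy over the trained models, as described above."""
    accs = [0.0 if clf is None else clf.score(X_test[:, idx], y_test)
            for idx, clf in models]
    return max(accs)
```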
Traditional machine learning methods use cross-validation to select the optimal feature subset and classification model in the training phase, and then use the obtained classification model for classification in the testing phase. In contrast, our method trains multiple models and selects the maximum accuracy among them as the final accuracy. This is where our work differs from previous research. EEG signals have strong randomness and non-stationarity, and are also easily affected by the surrounding environment and noise during the collection process. The optimal feature subset and classification model selected in the training phase may not be optimal on the test set when interfered with by noise. Different data samples suffer from different interferences, so they may obtain the best classification results with different feature subsets and classification models. We choose the maximum value over multiple classification models as the final accuracy, which provides a certain degree of resistance to interference and can increase the stability and robustness of the EEG decoding model.

3. Results

3.1. Compared Methods and Parameter Settings

In this paper, we use classification accuracy as the evaluation criterion. The classification accuracy is equal to the number of correctly classified trials divided by the total number of test trials. FLDA is used for classification. For all methods except the SFBCSP and SBLFB methods, the original EEG signals are filtered by an 8–30 Hz band-pass filter. The compared methods are listed in Table 1, and the parameter settings are introduced below.
The regularization parameters of LASSO and LOG are selected using 10-fold cross-validation and a grid search. The candidate set of hyperparameters is $\lambda \in \{2^{-5}, 2^{-4.8}, \dots, 2^{4.8}, 2^{5}\}$. LASSO was implemented using the SLEP toolbox [66].

3.2. Experimental Results and Analysis

Table 2 shows the classification accuracy of each subject in dataset 1 for each method. Except for CSP, the three proposed methods are significantly better than the compared methods. The CSP-FB+LOG method achieves the highest average classification accuracy among the three proposed methods, as well as the highest classification accuracy for multiple subjects. The four CSP improvement methods (Wavelet-CSP, WPD-CSP, SFBCSP, and SBLFB) have lower average accuracy than traditional CSP.
There are 6 types of binary classification tasks in dataset 2 and a total of 54 subject-tasks. Table 3 shows the classification accuracy of various subjects in dataset 2 for each method. The three proposed methods are significantly better than the compared methods. The CSP-Wavelet+LOG method has achieved the highest average classification accuracy, followed by CSP-WPD+LOG and CSP-FB+LOG. The Wavelet-CSP and WPD-CSP methods are slightly better than CSP, but the SFBCSP and SBLFB methods are lower than CSP.
Table 4 shows the classification accuracy of each subject in dataset 3 for each method. The CSP-FB+LOG method is significantly better than the compared methods. CSP-WPD+LOG and CSP achieve the same average classification accuracy, and CSP-Wavelet+LOG is slightly lower than CSP. The other methods are lower than CSP.
Table 5 shows the classification accuracy of various subjects in dataset 4 for each method. The three proposed methods are significantly better than the compared methods. The CSP-WPD+LOG method has achieved the highest average classification accuracy among the three proposed methods, and the highest classification accuracy in multiple subjects. The Wavelet-CSP and WPD-CSP methods are better than CSP, but the SFBCSP and SBLFB methods are still lower than CSP.
In order to better demonstrate the superiority of the proposed methods, Figure 6 shows the classification accuracy of all methods in each subject. The red circle represents the classification accuracy of dataset 1 (seven subjects). The blue box represents the classification accuracy of dataset 2 (54 subjects). The cyan asterisk represents the classification accuracy of dataset 3 (nine subjects). The magenta triangle represents the classification accuracy of dataset 4 (14 subjects). Points above the diagonal indicate that the proposed methods are superior to the compared methods. From Figure 6, it can be seen that most of the points are above the diagonal, illustrating the superiority of the proposed methods.
In order to show the overall classification effect of the proposed methods more intuitively, Figure 7 shows the average classification accuracy of each dataset and the total average classification accuracy of all data. From Figure 7, it can be seen that the proposed methods are significantly better than other methods. For all data, the average classification accuracy and standard deviation obtained by the CSP, Wavelet-CSP, WPD-CSP, SFBCSP, SBLFB, CSP-Wavelet+LOG, CSP-WPD+LOG, and CSP-FB+LOG methods are: 79.58 ± 14.47, 79.68 ± 14.98, 79.36 ± 14.17, 74.97 ± 13.42, 75.30 ± 13.58, 82.25 ± 13.57, 82.38 ± 13.77 and 82.3 ± 13.61, respectively. The CSP-WPD+LOG method achieves the highest average classification accuracy in all data. The CSP-Wavelet+LOG and CSP-FB+LOG are slightly lower than CSP-WPD+LOG. The Wavelet-CSP and WPD-CSP methods are slightly better than CSP. The SFBCSP and SBLFB methods are always lower than CSP in each dataset and all data.
In order to study the effectiveness of secondary feature selection, Table 6 and Table 7 show the classification results with and without secondary feature selection, using LASSO and LOG for feature selection, respectively. It can be seen from Table 6 and Table 7 that, regardless of the feature extraction method, secondary feature selection achieves better results. In particular, for the CSP-FB feature extraction method, the overall average classification accuracy is improved by 3.91% for LASSO and 3.62% for LOG.
Comparing Table 6 and Table 7, we can analyze the performance of LASSO and LOG. First, consider the case without secondary feature selection: for all data, except for the CSP-WPD feature extraction method, the average classification accuracy of LOG is higher than that of LASSO. With secondary feature selection, LOG is better than LASSO. Regardless of whether secondary feature selection is performed, LOG is better than LASSO for the CSP-Wavelet and CSP-FB feature extraction methods. In summary, LOG is superior to LASSO.
In addition to LASSO, our method is also compared with three other feature selection methods, and the corresponding results are shown in Table 8. The Fisher score (F-score) [36] combined with an FLDA classifier constitutes a wrapper feature selection method; the optimal feature subset is selected using 10-fold cross-validation. The genetic algorithm (GA) and binary particle swarm optimization (BPSO) are described in [67], and the parameter settings and classifier of these two methods are consistent with [67]. After feature selection, FLDA is used for classification. For dataset 1, when CSP-WPD is used for feature extraction, the average classification accuracy of LOG is slightly lower than that of LASSO. For dataset 2, when CSP-FB is used for feature extraction, the average classification accuracy of LOG is slightly lower than that of LASSO. In the other cases, LOG is better than LASSO. It can be seen from Table 8 that LOG achieves the best classification performance, which is significantly better than the other feature selection methods. In addition, F-score is better than GA and BPSO.
In order to study the effect of different classifiers on the performance of LOG, Table 9 shows the classification results of LOG using three classifiers. SVM is implemented using LIBSVM toolbox [68]. SVM uses the linear kernel function, and the parameter settings of SVM are set according to the toolbox default settings. Bayesian linear discriminant analysis (BLDA) [69] is an improvement of FLDA. The BLDA model parameters are automatically estimated from the training data. For CSP-Wavelet+LOG and CSP-WPD+LOG, the average accuracy (for all data) of FLDA is higher than that of SVM and BLDA. For CSP-FB+LOG, the average accuracy of FLDA is almost the same as that of SVM and BLDA. In general, FLDA is better than SVM and BLDA. Therefore, when selecting a classifier, FLDA is a better choice for the proposed methods in this paper.
In order to more comprehensively evaluate the effectiveness of our method, Table 10, Table 11 and Table 12 respectively show the classification results of our method and other existing methods in the three BCI data sets.
Table 10 shows the classification accuracy of the proposed methods and other recent methods for BCI Competition IV Dataset I. It can be seen from Table 10 that CSP-FB+LOG is second only to the method of [73]. The average classification accuracy of CSP-WPD+LOG is lower than that of [73,74]. CSP-Wavelet+LOG achieves moderate performance. Reference [73] proposed a novel feature extraction method in which hybrid features of brain function are extracted based on a bilevel network. A minimum spanning tree (MST) based on EEG signal nodes under different motor imagery is constructed as the first network layer to solve the global network connectivity problem. In addition, the regional network under different movement patterns is constructed as the second network layer to determine the network characteristics, which is consistent with the correspondence between limb movement patterns and the cerebral cortex in neurophysiology. Although [73] achieves better results, it relies on stronger priors regarding both the frequency bands and the EEG electrodes used to perform the classification. Our method does not require any prior information.
Table 11 shows the classification accuracy of the proposed methods and other recent methods for BCI Competition IV Dataset IIa. It can be seen from Table 11 that CSP-Wavelet+LOG and CSP-WPD+LOG are second only to [80]. CSP-FB+LOG is slightly lower than [76,77]. Although [80] achieves good classification results, it relies on data from other subjects, whereas our method only uses data from the subject themselves. Therefore, our method is more independent.
Table 12 shows the classification accuracy of the proposed methods and other recent methods for BCI Competition IV Dataset IIb. It can be seen from Table 12 that CSP-FB+LOG is second only to [30]. CSP-WPD+LOG is lower than [30,84]. The performance of CSP-Wavelet+LOG is relatively poor. Reference [30] uses the EMD time-frequency analysis method for feature extraction and a CNN for classification. Compared with [30], our method has certain advantages when the feature extraction time and the complexity of the model are considered.
In summary, although our methods do not achieve the best classification accuracy on every dataset, they are better than most existing methods. In addition, our methods have certain advantages in feature extraction time and classification model complexity. Our methods use FLDA for classification, so the complexity of the classification model is relatively low. The feature extraction time is discussed in detail in the discussion section.

4. Discussion

When the CSP-Wavelet and CSP-WPD methods are used for feature extraction, the number of wavelet decomposition layers has a considerable impact on the classification accuracy. The selection of the number of layers considers two factors, namely the frequency resolution and the decomposition time. For a dataset with a sampling rate of 100 Hz, when the number of decomposition layers is less than or equal to 2, the frequency resolution is too low: it is impossible to correctly distinguish the frequency bands related to motor imagery, which is not conducive to extracting discriminative information. When the number of decomposition layers is greater than or equal to 5, the frequency resolution is too high, the extracted features are easily affected by noise, and the decomposition time also increases significantly. Therefore, for the dataset with a sampling rate of 100 Hz (dataset 1), only 3 and 4 decomposition layers are considered in this paper. Similarly, for the datasets with a sampling rate of 250 Hz or 256 Hz (datasets 2–4), we only consider 4 and 5 decomposition layers.
Table 13 and Table 14 show the classification results of the CSP-Wavelet and CSP-WPD methods using different numbers of decomposition layers, respectively. We first discuss the CSP-Wavelet method. In Table 13, when the sampling rate of the dataset is 100 Hz, L1 = 3 and L2 = 4; when the sampling rate is 250 Hz or 256 Hz, L1 = 4 and L2 = 5. It can be seen from Table 13 that the classification accuracy with the smaller number of decomposition layers is usually greater than that with the larger number. Even in the cases where the larger number of decomposition layers gives slightly better classification accuracy, we still choose the smaller number of layers after considering the decomposition time. In Table 14, similar results are obtained for the CSP-WPD method. Therefore, 3 decomposition layers are selected in this paper for the dataset with a 100 Hz sampling rate, and 4 layers for the datasets with a 250 Hz or 256 Hz sampling rate.
In Table 13 and Table 14, we studied not only the influence of the number of decomposition layers on the classification results, but also the influence of sub-band selection. As can be seen from Table 13 and Table 14, in most cases, sub-band selection helps to improve the classification accuracy. Manually excluding sub-bands that are obviously unrelated to the motor imagery tasks removes redundant information and reduces noise interference, while also reducing the feature dimension and model complexity. Therefore, sub-band selection can improve the classification accuracy. It is worth pointing out that, when sub-bands are selected for CSP-Wavelet, the number of decomposition layers has no effect on the classification accuracy. The reason is that, when the number of decomposition layers is greater than or equal to three (100 Hz sampling rate) or greater than or equal to four (250 Hz or 256 Hz sampling rate), the selected sub-bands are the same.
LASSO has been widely used in EEG feature selection. However, LASSO is a biased estimator of the $\ell_0$-norm that regularizes the feature weights with the $\ell_1$-norm. The feature weights obtained by LASSO deviate from the true values and are too sparse. Non-convex regularization can alleviate the bias problem of the $\ell_1$-norm [50]. Therefore, the LOG method proposed in this paper can improve the classification accuracy. To illustrate the problem more intuitively, for subject A01 with left hand vs. right hand motor imagery tasks, Figure 8 shows the feature weights obtained by LASSO and LOG, where the features are extracted by CSP-FB. The lower part of Figure 8 shows the feature weights obtained by performing secondary feature selection on the weights produced by LASSO and LOG. A total of six channels of signals are retained after CSP filtering. Feature indices 1–10 in Figure 8 correspond to the features of the first channel signal after filtering by the 8–12 Hz, 10–14 Hz, ..., 26–30 Hz band-pass filters. Feature indices 11–20 correspond to the features of the second channel signal after filtering by the same band-pass filters. The other feature indices can be deduced by analogy. It can be seen from Figure 8 that the features selected by the LOG method include features of the first, second, fifth, and sixth channel signals, while LASSO only selects features of the second and sixth channel signals. Therefore, the features selected by the LOG method contain more information and are more discriminative (according to the CSP principle, the first and last m channel signals are the most discriminative). In summary, the features selected by LASSO are too few, and at the same time they are not discriminative enough.
In the classification results of all datasets, the SFBCSP and SBLFB methods perform relatively poorly. We used the ensemble learning method proposed in this paper to optimize these two methods, and the results are shown in Figure 9. Although the classification accuracies of the SFBCSP and SBLFB methods are effectively improved, the results are still not good compared with the other methods. There may be two reasons why the SFBCSP and SBLFB methods do not achieve good classification results in this paper. On the one hand, the datasets used in this paper are different from those of the compared methods, and the SFBCSP and SBLFB methods may not be applicable to the new datasets. On the other hand, although we have tried to reproduce the SFBCSP and SBLFB methods as described by their authors, some data processing steps and details may not have been handled properly. It is worth noting that the performance of the algorithms reproduced in this paper is similar to that reported in [26]. Specifically, the SFBCSP and SBLFB methods do not perform well on dataset 1.
Table 15 shows the feature extraction time on the training set for each method. Three subjects were used for the experiment, namely 1a, A01, and S01. These three subjects come from three different datasets with different sampling rates and channel numbers. The feature extraction processes of SFBCSP and SBLFB are the same, so their feature extraction times are the same. Comparing the three proposed methods with the existing methods (CSP-Wavelet vs. Wavelet-CSP, CSP-WPD vs. WPD-CSP, and CSP-FB vs. SFBCSP), the feature extraction time is significantly reduced. Among the three newly proposed methods, CSP-FB requires the least time. Although CSP-FB takes longer than CSP, it can still be used in real-time BCI.
The three proposed methods in this paper do not consider the selection of time window during feature extraction. The correct selection of time window can effectively improve the classification accuracy, which has been verified in many existing works, such as literature [44,46]. Therefore, in future work, we will consider integrating the selection of time windows into the proposed methods to further improve classification performance. In addition, the feature selection method proposed in this paper uses cross-validation to obtain model parameters. The model training is cumbersome and time-consuming. On the other hand, the parameters obtained by cross-validation are not necessarily optimal, especially in the case of small samples [85]. Implementing LASSO and LOG under the Bayesian framework [86] to avoid tedious cross-validation will further improve the performance of the proposed methods.

5. Conclusions

In this paper, we have proposed three new feature extraction methods and one feature selection method. FLDA is used for classification. Combining the feature extraction and feature selection methods, we obtain three new EEG decoding methods, namely CSP-Wavelet+LOG, CSP-WPD+LOG, and CSP-FB+LOG. The classification performance of the proposed methods is better than that of the existing methods. CSP-WPD+LOG achieves the highest total average classification accuracy among the three new methods, but its feature extraction time is the longest. The classification accuracy of CSP-Wavelet+LOG and CSP-FB+LOG is slightly lower than that of CSP-WPD+LOG, but these two methods have a large advantage in feature extraction time, especially CSP-FB+LOG, which can be used in real-time brain-computer interfaces.
In future work, we will continue to optimize the proposed method. For example, we can improve the selection of the optimal filtering frequency band and time window, and the selection method of model parameters. In addition, multi-classification expansion will also be part of our future research content.

Author Contributions

S.Z. and Z.Z. contributed to the study concept and design. B.Z. performed algorithm optimization. B.F., T.Y. and Z.L. conducted the collection, analysis, and interpretation of data. S.Z. and Z.Z. worked on the preparation of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by National Natural Science Foundation of China (No. 61967004, 11901137, and 81960324), Natural Science Foundation of Guangxi Province (No. 2018GXNSFBA281023), Guangxi Key Laboratory of Automatic Testing Technology and Instruments (No. YQ20113, YQ19209 and YQ18107); Guangxi Key Laboratory of Cryptography and Information Security (No. GCIS201927); Innovation Project of Guet Graduate Education (No. 2019YCXB03).

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. McFarland, D.J.; Wolpaw, J.R. Brain-computer interfaces for communication and control. Commun. ACM 2011, 54, 60–66.
  2. Lazarou, I.; Nikolopoulos, S.; Petrantonakis, P.C.; Kompatsiaris, I.; Tsolaki, M. EEG-based brain–computer interfaces for communication and rehabilitation of people with motor impairment: A novel approach of the 21st century. Front. Hum. Neurosci. 2018, 12, 14.
  3. Xu, B.; Li, W.; He, X.; Wei, Z.; Zhang, D.; Wu, C.; Song, A. Motor Imagery Based Continuous Teleoperation Robot Control with Tactile Feedback. Electronics 2020, 9, 174.
  4. Qi, F.; Wu, W.; Yu, Z.L.; Gu, Z.; Wen, Z.; Yu, T.; Li, Y. Spatiotemporal-filtering-based channel selection for single-trial EEG classification. IEEE Trans. Cybern. 2020.
  5. Daly, I.; Scherer, R.; Billinger, M.; Müller-Putz, G. FORCe: Fully online and automated artifact removal for brain-computer interfacing. IEEE Trans. Neural Syst. Rehab. Eng. 2014, 23, 725–736.
  6. Zhang, Y.; Liu, B.; Ji, X.; Huang, D. Classification of EEG signals based on autoregressive model and wavelet packet decomposition. Neural Process. Lett. 2017, 45, 365–378.
  7. Kevric, J.; Subasi, A. Comparison of signal decomposition methods in classification of EEG signals for motor-imagery BCI system. Biomed. Signal Process. Control 2017, 31, 398–406.
  8. Brodu, N.; Lotte, F.; Lécuyer, A. Comparative study of band-power extraction techniques for motor imagery classification. In Proceedings of the 2011 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), Paris, France, 11–15 April 2011; pp. 1–6.
  9. Mishuhina, V.; Jiang, X. Feature weighting and regularization of common spatial patterns in EEG-based motor imagery BCI. IEEE Signal Process. Lett. 2018, 25, 783–787.
  10. Blankertz, B.; Tomioka, R.; Lemm, S.; Kawanabe, M.; Muller, K.R. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Process. Mag. 2007, 25, 41–56.
  11. Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter bank common spatial pattern (FBCSP) in brain-computer interface. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 2390–2397.
  12. Mingai, L.; Shuoda, G.; Jinfu, Y.; Yanjun, S. A novel EEG feature extraction method based on OEMD and CSP algorithm. J. Intell. Fuzzy Syst. 2016, 30, 2971–2983.
  13. Lin, J.; Liu, S.; Huang, G.; Zhang, Z.; Huang, K. The recognition of driving action based on EEG signals using wavelet-CSP algorithm. In Proceedings of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp. 1–5.
  14. Robinson, N.; Vinod, A.P.; Ang, K.K.; Tee, K.P.; Guan, C.T. EEG-based classification of fast and slow hand movements using wavelet-CSP algorithm. IEEE Trans. Biomed. Eng. 2013, 60, 2123–2132.
  15. Feng, G.; Hao, L.; Nuo, G. Feature Extraction Algorithm based on CSP and Wavelet Packet for Motor Imagery EEG signals. In Proceedings of the 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP), Wuxi, China, 19–21 July 2019; pp. 798–802.
  16. Yang, B.; Li, H.; Wang, Q.; Zhang, Y. Subject-based feature extraction by using fisher WPD-CSP in brain–computer interfaces. Comput. Methods Program. Biomed. 2016, 129, 21–28.
  17. Lemm, S.; Blankertz, B.; Curio, G.; Muller, K.R. Spatio-spectral filters for improving the classification of single trial EEG. IEEE Trans. Biomed. Eng. 2005, 52, 1541–1548.
  18. Dornhege, G.; Blankertz, B.; Krauledat, M.; Losch, F.; Curio, G.; Muller, K.R. Combined optimization of spatial and temporal filters for improving brain-computer interfacing. IEEE Trans. Biomed. Eng. 2006, 53, 2274–2281.
  19. Higashi, H.; Tanaka, T. Simultaneous design of FIR filter banks and spatial patterns for EEG signal classification. IEEE Trans. Biomed. Eng. 2012, 60, 1100–1110.
  20. Novi, Q.; Guan, C.; Dat, T.H.; Xue, P. Sub-band common spatial pattern (SBCSP) for brain-computer interface. In Proceedings of the 2007 3rd International IEEE/EMBS Conference on Neural Engineering, Kohala Coast, HI, USA, 2–5 May 2007; pp. 204–207.
  21. Thomas, K.P.; Guan, C.; Lau, C.T.; Vinod, A.P.; Ang, K.K. A new discriminative common spatial pattern method for motor imagery brain–computer interfaces. IEEE Trans. Biomed. Eng. 2009, 56, 2730–2733.
  22. Sun, G.; Hu, J.; Wu, G. A novel frequency band selection method for common spatial pattern in motor imagery based brain computer interface. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–6.
  23. Zhang, Y.; Zhou, G.; Jin, J.; Wang, X.; Cichocki, A. Optimizing spatial patterns with sparse filter bands for motor-imagery based brain-computer interface. J. Neurosci. Methods 2015, 255, 85–91.
  24. Zhang, Y.; Wang, Y.; Jin, J.; Wang, X. Sparse Bayesian Learning for Obtaining Sparsity of EEG Frequency Bands Based Feature Vectors in Motor Imagery Classification. Int. J. Neural Syst. 2017, 27, 537–552.
  25. Wei, Q.; Wei, Z. Binary particle swarm optimization for frequency band selection in motor imagery based brain-computer interfaces. Biomed. Mat. Eng. 2015, 26, S1523–S1532.
  26. Kumar, S.; Sharma, A. A new parameter tuning approach for enhanced motor imagery EEG signal classification. Med. Biol. Eng. Comput. 2018, 56, 1861–1874.
  27. Rivero, D.; Guo, L.; Seoane, J.A.; Dorado, J. Using genetic algorithms and k-nearest neighbour for automatic frequency band selection for signal classification. IET Signal Process. 2012, 6, 186–194. [Google Scholar] [CrossRef]
  28. Dai, M.; Zheng, D.; Na, R.; Wang, S.; Zhang, S. EEG classification of motor imagery using a novel deep learning framework. Sensors 2019, 19, 551. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Tang, X.; Li, W.; Li, X.; Ma, W.; Dang, X. Motor imagery EEG recognition based on conditional optimization empirical mode decomposition and multi-scale convolutional Neural network. Expert Syst. Appl. 2020, 149, 113285. [Google Scholar] [CrossRef]
  30. Cheng, J.; Chen, M.; Li, C.; Liu, Y.; Song, R.; Liu, A.; Chen, X. Emotion Recognition from Multi-Channel EEG via Deep Forest. IEEE J. Biomed. Health Inf. 2020. [Google Scholar] [CrossRef] [PubMed]
  31. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-based brain-computer interfaces using motor-imagery: Techniques and challenges. Sensors 2019, 19, 1423. [Google Scholar] [CrossRef] [Green Version]
  32. Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F. A review of classification algorithms for EEG-based brain–computer interfaces: A 10 year update. J. Neural Eng. 2018, 15, 031005. [Google Scholar] [CrossRef] [Green Version]
  33. Saeys, Y.; Inza, I.; Larrañaga, P. A review of feature selection techniques in bioinformatics. Bioinformatics 2007, 23, 2507–2517. [Google Scholar] [CrossRef] [Green Version]
  34. Koprinska, I. Feature selection for brain-computer interfaces. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining; Springer: Berlin/Heidelberg, Germany, 2009; pp. 106–117. [Google Scholar]
  35. Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Mutual information-based selection of optimal spatial–temporal patterns for single-trial EEG-based BCIs. Pattern Recognit. 2012, 45, 2137–2144. [Google Scholar] [CrossRef]
  36. Li, P.; Liu, H.; Si, Y.; Li, C.; Li, F.; Zhu, X.; Huang, X.; Zeng, Y.; Yao, D.; Zhang, P.; et al. EEG based emotion recognition by combining functional connectivity network and local activations. IEEE Trans. Biomed. Eng. 2019, 66, 2869–2881. [Google Scholar] [CrossRef]
  37. Mehmood, R.M.; Du, R.; Lee, H.J. Optimal feature selection and deep learning ensembles method for emotion recognition from human brain EEG sensors. IEEE Access 2017, 5, 14797–14806. [Google Scholar] [CrossRef]
  38. LaRocco, J.; Innes, C.R.; Bones, P.J.; Weddell, S.; Jones, R.D. Optimal EEG feature selection from average distance between events and non-events. In Proceedings of the 2014 36th Annual International Conference of the IEEE Eng. in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 2641–2644. [Google Scholar]
  39. Rodríguez-Bermúdez, G.; García-Laencina, P.J.; Roca-González, J.; Roca-Dorda, J. Efficient feature selection and linear discrimination of EEG signals. Neurocomputing 2013, 115, 161–165. [Google Scholar] [CrossRef]
  40. Majkowski, A.; Kołodziej, M.; Zapała, D.; Tarnowski, P.; Francuz, P.; Rak, R.J.; Oskwarek, Ł. Selection of EEG signal features for ERD/ERS classification using genetic algorithms. In Proceedings of the 2017 18th International Conference on Computational Problems of Electrical Engineering (CPEE), New York, NY, USA, 11–13 September 2017; pp. 1–4. [Google Scholar]
  41. Rakshit, P.; Bhattacharyya, S.; Konar, A.; Khasnobish, A.; Tibarewala, D.N.; Janarthanan, R. Artificial bee colony based feature selection for motor imagery EEG data. In Proceedings of the Seventh International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA 2012); Springer: Gwalior, India, 2013; pp. 127–138. [Google Scholar]
  42. Baig, M.Z.; Aslam, N.; Shum, H.P.; Zhang, L. Differential evolution algorithm as a tool for optimal feature subset selection in motor imagery EEG. Expert Syst. Appl. 2017, 90, 184–195. [Google Scholar] [CrossRef]
  43. Liu, A.; Chen, K.; Liu, Q.; Ai, Q.; Xie, Y.; Chen, A. Feature selection for motor imagery EEG classification based on firefly algorithm and learning automata. Sensors 2017, 17, 2576. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Miao, M.; Zeng, H.; Wang, A.; Zhao, C.; Liu, F. Discriminative spatial-frequency-temporal feature extraction and classification of motor imagery EEG: A sparse regression and Weighted Naïve Bayesian Classifier-based approach. J. Neurosci. Methods 2017, 278, 13–24. [Google Scholar] [CrossRef]
  45. Sreeja, S.R.; Rabha, J.; Nagarjuna, K.Y.; Samanta, D.; Mitra, P.; Sarma, M. Motor imagery EEG signal processing and classification using machine learning approach. In Proceedings of the 2017 International Conference on New Trends in Computing Sciences (ICTCS), Amman, Jordan, 9–11 October 2017; pp. 61–66. [Google Scholar]
  46. Zhang, Y.; Nam, C.S.; Zhou, G.; Jin, J.; Wang, X.; Cichocki, A. Temporally constrained sparse group spatial patterns for motor imagery BCI. IEEE Trans. Cybern. 2018, 49, 3322–3332. [Google Scholar] [CrossRef]
  47. Wang, J.J.; Xue, F.; Li, H. Simultaneous channel and feature selection of fused EEG features based on sparse group lasso. BioMed Res. Int. 2015, 2015. [Google Scholar] [CrossRef] [Green Version]
  48. Jiao, Y.; Zhang, Y.; Chen, X.; Yin, E.; Jin, J.; Wang, X.; Cichocki, A. Sparse group representation model for motor imagery EEG classification. IEEE J. Biomed. Health Inf. 2018, 23, 631–641. [Google Scholar] [CrossRef]
  49. Jain, P.; Kar, P. Non-convex Optimization for Machine Learning. Found. Trends Mach. Learn. 2017, 10, 142–336. [Google Scholar] [CrossRef] [Green Version]
  50. Wen, F.; Chu, L.; Liu, P.; Qiu, R.C. A survey on nonconvex regularization-based sparse and low-rank recovery in signal processing, statistics, and machine learning. IEEE Access 2018, 6, 69883–69906. [Google Scholar] [CrossRef]
  51. Blankertz, B.; Dornhege, G.; Krauledat, M.; Müller, K.R.; Curio, G. The non-invasive Berlin brain–computer interface: Fast acquisition of effective performance in untrained subjects. NeuroImage 2007, 37, 539–550. [Google Scholar] [CrossRef]
  52. Naeem, M.; Brunner, C.; Leeb, R.; Graimann, B.; Pfurtscheller, G. Seperability of four-class motor imagery data using independent components analysis. J. Neural Eng. 2006, 3, 208–216. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Qi, F.; Li, Y.; Wu, W. RSTFC: A novel algorithm for spatio-temporal filtering and classification of single-trial EEG. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 3070–3082. [Google Scholar] [CrossRef] [PubMed]
  54. Steyrl, D.; Scherer, R.; Faller, J.; Müller-Putz, G.R. Random forests in non-invasive sensorimotor rhythm brain-computer interfaces: A practical and convenient non-linear classifier. Biomed. Eng. Biomed. Tech. 2016, 61, 77–86. [Google Scholar] [CrossRef] [PubMed]
  55. Lotte, F.; Guan, C. Regularizing common spatial patterns to improve BCI designs: Unified theory and new algorithms. IEEE Trans. Biomed. Eng. 2010, 58, 355–362. [Google Scholar] [CrossRef] [Green Version]
  56. Ting, W.; Guo-Zheng, Y.; Bang-Hua, Y.; Hong, S. EEG feature extraction based on wavelet packet decomposition for brain computer interface. Measurement 2008, 41, 618–625. [Google Scholar] [CrossRef]
  57. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
  58. Wen, Z.; Yu, T.; Yu, Z.; Li, Y. Grouped sparse Bayesian learning for voxel selection in multivoxel pattern analysis of fMRI data. NeuroImage 2019, 184, 417–430. [Google Scholar] [CrossRef]
  59. Meinshausen, N.; Yu, B. Lasso-type recovery of sparse representations for high-dimensional data. Ann. Stat. 2009, 37, 246–270. [Google Scholar] [CrossRef]
  60. Zhang, Z.G.; Zhu, Z. A TV-log nonconvex approach for image deblurring with impulsive noise. Signal Process 2020, 174, 107631. [Google Scholar] [CrossRef]
  61. Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 127–239. [Google Scholar] [CrossRef]
  62. Boyd, S.; Parikh, N.; Chu, E. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  63. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  64. Malioutov, D.; Aravkin, A. Iterative log thresholding. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 7198–7202. [Google Scholar]
  65. Polson, N.G.; Scott, J.G.; Willard, B.T. Proximal algorithms in statistics and machine learning. Stat. Sci. 2015, 30, 559–581. [Google Scholar] [CrossRef]
  66. Liu, J.; Ji, S.; Ye, J. SLEP: Sparse learning with efficient projections. Ariz. State Univ. 2009, 6, 7. [Google Scholar]
  67. Too, J.; Abdullah, A.R.; Mohd Saad, N. A new co-evolution binary particle swarm optimization with multiple inertia weight strategy for feature selection. Informatics 2019, 6, 21. [Google Scholar] [CrossRef] [Green Version]
  68. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST) 2011, 2, 1–27. [Google Scholar] [CrossRef]
  69. Hoffmann, U.; Vesin, J.M.; Ebrahimi, T.; Diserens, K. An efficient P300-based brain–computer interface for disabled subjects. J. Neurosci. Methods 2008, 167, 115–125. [Google Scholar] [CrossRef] [Green Version]
  70. Dai, Y.; Zhang, X.; Chen, Z.; Xu, X. Classification of electroencephalogram signals using wavelet-CSP and projection extreme learning machine. Rev. Sci. Instrum. 2018, 89, 074302. [Google Scholar] [CrossRef]
  71. Park, Y.; Chung, W. Frequency-optimized local region common spatial pattern approach for motor imagery classification. IEEE Trans. Neural Syst. Rehab. Eng. 2019, 27, 1378–1388. [Google Scholar] [CrossRef]
  72. Kumar, S.; Sharma, A.; Tsunoda, T. Brain wave classification using long short-term memory network based OPTICAL predictor. Sci. Rep. 2019, 9, 1–13. [Google Scholar] [CrossRef] [Green Version]
  73. Luo, Z.; Lu, X.; Xi, X. EEG Feature Extraction Based on a Bilevel Network: Minimum Spanning Tree and Regional Network. Electronics 2020, 9, 203. [Google Scholar] [CrossRef] [Green Version]
  74. Fu, R.; Han, M.; Tian, Y.; Shi, P. Improvement Motor Imagery EEG Classification based on sparse Common Spatial Pattern and Regularized Discriminant Analysis. J. Neurosci. Methods 2020, 343, 108833. [Google Scholar] [CrossRef] [PubMed]
  75. Luo, T.; Chao, F. Exploring spatial-frequency-sequential relationships for motor imagery classification with recurrent neural network. BMC Bioinf. 2018, 19, 344. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  76. Selim, S.; Tantawi, M.M.; Shedeed, H.A.; Badr, A. A CSP\AM-BA-SVM Approach for Motor Imagery BCI System. IEEE Access 2018, 6, 49192–49208. [Google Scholar] [CrossRef]
  77. Belwafi, K.; Romain, O.; Gannouni, S.; Ghaffari, F.; Djemal, R.; Ouni, B. An embedded implementation based on adaptive filter bank for brain–computer interface systems. J. Neurosci. Methods 2018, 305, 1–16. [Google Scholar] [CrossRef]
  78. Xu, Y.; Hua, J.; Zhang, H.; Hu, R.; Huang, X.; Liu, J.; Guo, F. Improved Transductive Support Vector Machine for a Small Labelled Set in Motor Imagery-Based Brain-Computer Interface. Comput. Intell. Neurosci. 2019, 2019, 2087132. [Google Scholar] [CrossRef] [Green Version]
  79. Dong, E.; Zhou, K.; Tong, J.; Du, S. A novel hybrid kernel function relevance vector machine for multi-task motor imagery EEG classification. Biomed. Signal Process. Control 2020, 60, 101991. [Google Scholar] [CrossRef]
  80. Wang, B.; Wong, C.M.; Kang, Z.; Liu, F.; Shui, C.; Wan, F.; Chen, C.P. Common Spatial Pattern Reformulated for Regularizations in Brain-Computer Interfaces. IEEE Trans. Cybern. 2020. [Google Scholar] [CrossRef]
  81. Yu, Z.; Ma, T.; Fang, N.; Wang, H.; Li, Z.; Fan, H. Local temporal common spatial patterns modulated with phase locking value. Biomed. Signal Process. Control 2020, 59, 101882. [Google Scholar] [CrossRef]
  82. Chu, Y.; Zhao, X.; Zou, Y.; Xu, W.; Han, J.; Zhao, Y. A decoding scheme for incomplete motor imagery EEG with deep belief network. Front. Neurosci. 2018, 12, 680. [Google Scholar] [CrossRef]
  83. Ha, K.W.; Jeong, J.W. Motor imagery EEG classification using capsule networks. Sensors 2019, 19, 2854. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Molla, M.K.I.; Al Shiam, A.; Islam, M.R.; Tanaka, T. Discriminative Feature Selection-Based Motor Imagery Classification Using EEG Signal. IEEE Access 2020. [Google Scholar] [CrossRef]
  85. Lu, H.; Eng, H.L.; Guan, C.; Plataniotis, K.N.; Venetsanopoulos, A.N. Regularized Common Spatial Pattern with Aggregation for EEG Classification in Small-Sample Setting. IEEE Trans. Biomed. Eng. 2010, 57, 2936–2946. [Google Scholar] [PubMed]
  86. Zhang, Y.; Zhou, G.; Jin, J.; Zhao, Q.; Wang, X.; Cichocki, A. Sparse Bayesian classification of EEG for brain–computer interface. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 2256–2267. [Google Scholar] [CrossRef]
Figure 1. Distribution of electrodes on the scalp for all datasets: (a) dataset 1; (b) dataset 2; (c) dataset 3; (d) dataset 4.
Figure 2. The processing flow of the proposed method.
Figure 3. Wavelet decomposition with different sampling rates. (a) The sampling rate is 100 Hz and the number of decomposition layers is 3. (b) The sampling rate is 250 Hz and the number of decomposition layers is 4.
Figure 4. Wavelet packet decomposition with different sampling rates. (a) The sampling rate is 100 Hz and the number of decomposition layers is 3. (b) The sampling rate is 250 Hz and the number of decomposition layers is 4.
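To make the sub-band structure in Figures 3 and 4 concrete, the short sketch below (our own illustration, assuming the PyWavelets package and the db4 wavelet base listed in Table 1, not the authors' code) prints the approximate frequency range covered by each DWT sub-band for the two sampling rates shown in Figure 3.

```python
# Illustrative only: approximate DWT sub-band edges for the settings in Figure 3.
# Assumes PyWavelets (pywt) is installed; db4 is the wavelet base listed in Table 1.
import numpy as np
import pywt

def dwt_band_edges(fs, levels):
    """Approximate (name, f_low, f_high) for sub-bands A_levels, D_levels, ..., D_1."""
    bands = [(f"A{levels}", 0.0, fs / 2 ** (levels + 1))]
    for j in range(levels, 0, -1):
        bands.append((f"D{j}", fs / 2 ** (j + 1), fs / 2 ** j))
    return bands

if __name__ == "__main__":
    for fs, levels in [(100, 3), (250, 4)]:  # the two cases shown in Figure 3
        print(f"fs = {fs} Hz, {levels} decomposition layers")
        for name, lo, hi in dwt_band_edges(fs, levels):
            print(f"  {name}: {lo:.2f}-{hi:.2f} Hz")
        # The decomposition itself returns [cA_levels, cD_levels, ..., cD_1];
        # CSP-Wavelet-style features would be, e.g., the energy and standard
        # deviation of each coefficient vector.
        x = np.random.default_rng(0).normal(size=fs * 4)   # 4 s placeholder signal
        coeffs = pywt.wavedec(x, "db4", level=levels)
        energies = [float(np.sum(c ** 2)) for c in coeffs]
```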
Figure 5. The processing flow of secondary feature selection and classification.
Figure 6. Classification accuracy comparison (all data).
Figure 7. Average classification accuracy comparison (all data).
Figure 8. Feature selection by LASSO and LOG for subject A01 with motor imagery tasks of left hand vs. right hand.
Figure 9. SFBCSP and SBLFB optimized with secondary feature selection (all data).
Table 1. Compared methods.
Methods | Algorithm Composition and Processing Flow
CSP | Band-pass filtered EEG signals are spatially filtered by CSP, and the logarithms of the variances of the spatially filtered signals are extracted as features [10].
Wavelet-CSP | The EEG signals of each channel are decomposed using DWT with the db4 wavelet base. The number of decomposition layers is 3 for dataset 1 and 4 for the other datasets. The sub-bands related to motor imagery are used to reconstruct new channels, and then feature extraction is performed using CSP [13].
WPD-CSP | The EEG signals of each channel are decomposed using WPD with the db4 wavelet base. The number of decomposition layers is 3 for dataset 1 and 4 for the other datasets. The sub-bands related to motor imagery are used to reconstruct new channels, and then feature extraction is performed using CSP [15].
SFBCSP | The original EEG signals are filtered into 17 sub-bands, and features are extracted from each sub-band using CSP. The filter bandwidth is 4 Hz with a 2 Hz overlap in the range of 4–40 Hz. The sub-band features are selected by LASSO [23].
SBLFB | The original EEG signals are filtered into 17 sub-bands, and features are extracted from each sub-band using CSP. The filter bandwidth is 4 Hz with a 2 Hz overlap in the range of 4–40 Hz. The sub-band features are selected by sparse Bayesian learning [24].
CSP-Wavelet+LASSO | After band-pass filtering, features are extracted using CSP-Wavelet and selected by LASSO. Ensemble learning is used for secondary feature selection and classification model construction.
CSP-WPD+LASSO | After band-pass filtering, features are extracted using CSP-WPD and selected by LASSO. Ensemble learning is used for secondary feature selection and classification model construction.
CSP-FB+LASSO | After band-pass filtering, features are extracted using CSP-FB and selected by LASSO. Ensemble learning is used for secondary feature selection and classification model construction.
CSP-Wavelet+LOG | After band-pass filtering, features are extracted using CSP-Wavelet and selected by LOG. Ensemble learning is used for secondary feature selection and classification model construction.
CSP-WPD+LOG | After band-pass filtering, features are extracted using CSP-WPD and selected by LOG. Ensemble learning is used for secondary feature selection and classification model construction.
CSP-FB+LOG | After band-pass filtering, features are extracted using CSP-FB and selected by LOG. Ensemble learning is used for secondary feature selection and classification model construction.
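To make the pipelines in Table 1 concrete, the following minimal NumPy/SciPy sketch illustrates the basic CSP step they all share: learning spatial filters from two classes of band-pass filtered trials and reducing each trial to log-variance features [10]. It is our own illustrative code under simplified assumptions (lists of (channels x samples) arrays, a fixed number of filter pairs), not the authors' implementation.

```python
# Minimal CSP log-variance sketch (illustrative, not the authors' implementation).
# trials_a / trials_b: lists of band-pass filtered single trials, shape (channels, samples).
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, m=3):
    """Learn 2*m spatial filters from the two classes (rows of the returned matrix)."""
    mean_cov = lambda trials: np.mean([np.cov(x) for x in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem Ca w = lambda (Ca + Cb) w; the extreme
    # eigenvalues give the most discriminative filters for the two classes.
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    W = eigvecs[:, np.argsort(eigvals)].T
    return np.vstack([W[:m], W[-m:]])

def csp_features(trial, W):
    """Normalized log-variance features of one spatially filtered trial."""
    z = W @ trial
    v = np.var(z, axis=1)
    return np.log(v / v.sum())

# Usage sketch with placeholder data (50 trials per class, 22 channels, 750 samples):
rng = np.random.default_rng(0)
trials_a = [rng.normal(size=(22, 750)) for _ in range(50)]
trials_b = [rng.normal(size=(22, 750)) for _ in range(50)]
W = csp_filters(trials_a, trials_b)
X = np.array([csp_features(t, W) for t in trials_a + trials_b])   # feature matrix
```

In the filter-bank and wavelet variants of Table 1, this computation is simply repeated per sub-band, and the concatenated features are then passed to the LASSO or LOG selector.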
Table 2. Classification accuracy (Dataset 1).
Subjects | CSP | Wavelet-CSP | WPD-CSP | SFBCSP | SBLFB | CSP-Wavelet+LOG | CSP-WPD+LOG | CSP-FB+LOG
1a | 77.00 | 61.00 | 59.00 | 63.00 | 61.00 | 67.00 | 64.00 | 73.00
1b | 67.00 | 63.00 | 64.00 | 54.00 | 53.00 | 69.00 | 69.00 | 75.00
1c | 74.00 | 80.00 | 81.00 | 58.00 | 58.00 | 76.00 | 80.00 | 87.00
1d | 92.00 | 93.00 | 92.00 | 90.00 | 89.00 | 97.00 | 95.00 | 98.00
1e | 97.00 | 97.00 | 97.00 | 96.00 | 96.00 | 97.00 | 100.00 | 99.00
1f | 89.00 | 86.00 | 86.00 | 82.00 | 80.00 | 90.00 | 93.00 | 93.00
1g | 94.00 | 95.00 | 72.00 | 80.00 | 80.00 | 94.00 | 94.00 | 97.00
Mean ± Std | 84.29 ± 10.66 | 82.14 ± 13.82 | 78.71 ± 13.20 | 74.71 ± 15.18 | 73.86 ± 15.34 | 84.29 ± 12.26 | 85.00 ± 13.05 | 88.86 ± 10.12
Table 3. Classification accuracy (Dataset 2).
Subjects | CSP | Wavelet-CSP | WPD-CSP | SFBCSP | SBLFB | CSP-Wavelet+LOG | CSP-WPD+LOG | CSP-FB+LOG
A01-LR | 89.58 | 88.19 | 90.28 | 76.39 | 79.17 | 93.06 | 91.67 | 90.28
A02-LR | 56.25 | 51.39 | 54.17 | 52.78 | 52.78 | 61.81 | 55.56 | 61.81
A03-LR | 96.53 | 93.06 | 93.75 | 87.5 | 88.89 | 95.83 | 97.22 | 97.92
A04-LR | 71.53 | 66.67 | 64.58 | 63.89 | 63.19 | 72.92 | 72.22 | 69.44
A05-LR | 52.08 | 50 | 54.86 | 81.94 | 81.25 | 58.33 | 62.5 | 52.78
A06-LR | 70.14 | 61.81 | 70.83 | 57.64 | 59.03 | 68.06 | 68.06 | 69.44
A07-LR | 81.94 | 82.64 | 85.42 | 77.78 | 82.64 | 81.25 | 79.17 | 77.08
A08-LR | 93.06 | 93.75 | 94.44 | 88.19 | 90.28 | 95.14 | 95.14 | 95.14
A09-LR | 89.58 | 90.28 | 90.28 | 85.42 | 84.03 | 93.06 | 93.06 | 90.97
A01-LF | 95.83 | 95.14 | 98.61 | 89.58 | 90.28 | 99.31 | 99.31 | 97.22
A02-LF | 71.53 | 68.06 | 72.22 | 70.83 | 73.61 | 69.44 | 67.36 | 75.69
A03-LF | 94.44 | 95.83 | 95.14 | 90.28 | 90.97 | 95.83 | 95.14 | 95.83
A04-LF | 78.47 | 83.33 | 84.72 | 81.25 | 80.56 | 81.94 | 88.89 | 85.42
A05-LF | 62.5 | 68.75 | 54.86 | 53.47 | 54.17 | 71.53 | 70.83 | 63.89
A06-LF | 66.67 | 68.06 | 72.22 | 64.58 | 65.28 | 70.83 | 68.75 | 68.75
A07-LF | 98.61 | 99.31 | 95.83 | 92.36 | 94.44 | 99.31 | 99.31 | 100
A08-LF | 75.69 | 83.33 | 75.69 | 77.08 | 78.47 | 86.11 | 79.86 | 84.72
A09-LF | 95.83 | 93.06 | 93.75 | 93.75 | 93.06 | 97.22 | 97.22 | 93.75
A01-LT | 95.14 | 97.92 | 95.14 | 88.19 | 86.81 | 99.31 | 98.61 | 94.44
A02-LT | 65.28 | 61.81 | 64.58 | 61.11 | 62.5 | 69.44 | 67.36 | 63.19
A03-LT | 94.44 | 96.53 | 96.53 | 83.33 | 85.42 | 95.83 | 96.53 | 95.83
A04-LT | 87.5 | 89.58 | 88.19 | 76.39 | 77.78 | 88.89 | 88.19 | 89.58
A05-LT | 71.53 | 67.36 | 69.44 | 68.06 | 70.83 | 70.14 | 70.83 | 72.92
A06-LT | 70.14 | 65.97 | 71.53 | 66.67 | 68.75 | 70.14 | 70.14 | 76.39
A07-LT | 98.61 | 97.22 | 95.14 | 88.89 | 88.89 | 95.14 | 96.53 | 96.53
A08-LT | 91.67 | 89.58 | 90.97 | 80.56 | 84.72 | 93.75 | 93.75 | 88.19
A09-LT | 95.83 | 95.83 | 96.53 | 95.83 | 95.83 | 97.92 | 97.92 | 97.92
A01-RF | 93.06 | 96.53 | 96.53 | 77.08 | 78.47 | 98.61 | 98.61 | 98.61
A02-RF | 79.86 | 68.06 | 72.92 | 67.36 | 67.36 | 81.94 | 81.25 | 77.08
A03-RF | 93.06 | 95.14 | 95.14 | 84.03 | 84.72 | 93.75 | 95.83 | 97.22
A04-RF | 89.58 | 92.36 | 86.11 | 81.94 | 79.86 | 90.28 | 93.75 | 90.28
A05-RF | 52.78 | 55.56 | 58.33 | 59.72 | 58.33 | 64.58 | 67.36 | 63.89
A06-RF | 61.81 | 66.67 | 67.36 | 71.53 | 67.36 | 66.67 | 66.67 | 64.58
A07-RF | 97.22 | 100 | 98.61 | 97.22 | 98.61 | 100 | 97.92 | 98.61
A08-RF | 79.86 | 77.08 | 71.53 | 73.61 | 76.39 | 77.08 | 74.31 | 81.25
A09-RF | 83.33 | 84.03 | 85.42 | 74.31 | 75 | 86.81 | 86.81 | 87.5
A01-RT | 98.61 | 100 | 100 | 87.5 | 85.42 | 100 | 100 | 99.31
A02-RT | 67.36 | 66.67 | 65.28 | 61.81 | 60.42 | 65.28 | 65.28 | 70.83
A03-RT | 90.97 | 96.53 | 95.83 | 90.97 | 89.58 | 96.53 | 96.53 | 96.53
A04-RT | 85.42 | 86.11 | 84.72 | 81.94 | 83.33 | 85.42 | 86.81 | 90.28
A05-RT | 57.64 | 57.64 | 62.5 | 81.25 | 80.56 | 73.61 | 74.31 | 63.89
A06-RT | 65.28 | 68.75 | 64.58 | 56.94 | 58.33 | 73.61 | 74.31 | 67.36
A07-RT | 97.92 | 95.83 | 97.92 | 95.14 | 97.22 | 99.31 | 97.92 | 97.92
A08-RT | 90.28 | 89.58 | 88.89 | 78.47 | 79.86 | 89.58 | 91.67 | 88.89
A09-RT | 90.28 | 86.11 | 84.03 | 78.47 | 82.64 | 93.75 | 93.06 | 80.56
A01-FT | 68.75 | 75.69 | 73.61 | 71.53 | 72.22 | 74.31 | 72.22 | 74.31
A02-FT | 70.14 | 66.67 | 75 | 63.89 | 63.89 | 78.47 | 75 | 70.14
A03-FT | 69.44 | 80.56 | 78.47 | 72.92 | 70.83 | 75.69 | 75.69 | 87.5
A07-FT | 59.72 | 72.22 | 69.44 | 75 | 77.78 | 71.53 | 61.11 | 70.83
A08-FT | 60.42 | 55.56 | 61.11 | 50.69 | 50.69 | 65.28 | 59.72 | 54.86
A09-FT | 68.75 | 72.22 | 70.14 | 63.19 | 65.97 | 69.44 | 71.53 | 70.14
A07-FT | 80.56 | 78.47 | 80.56 | 75 | 74.31 | 83.33 | 84.72 | 78.47
A08-FT | 83.33 | 79.17 | 82.64 | 77.08 | 79.86 | 82.64 | 86.81 | 84.72
A09-FT | 93.75 | 90.28 | 90.28 | 72.92 | 72.22 | 94.44 | 94.44 | 85.42
Mean ± Std | 80.36 ± 3.98 | 80.52 ± 4.36 | 80.86 ± 3.54 | 76.21 ± 1.80 | 76.94 ± 11.94 | 83.40 ± 12.47 | 83.05 ± 3.15 | 82.15 ± 3.16
Table 4. Classification accuracy (Dataset 3).
Subjects | CSP | Wavelet-CSP | WPD-CSP | SFBCSP | SBLFB | CSP-Wavelet+LOG | CSP-WPD+LOG | CSP-FB+LOG
B01 | 77.5 | 78.75 | 76.25 | 71.25 | 78.75 | 78.75 | 81.25 | 88.75
B02 | 60 | 52.5 | 56.25 | 46.25 | 45 | 56.25 | 57.5 | 52.5
B03 | 46.25 | 43.75 | 45 | 47.5 | 48.75 | 45 | 45 | 48.75
B04 | 98.75 | 98.75 | 98.75 | 100 | 98.75 | 96.25 | 96.25 | 98.75
B05 | 88.75 | 91.25 | 91.25 | 83.75 | 76.25 | 91.25 | 91.25 | 88.75
B06 | 81.25 | 81.25 | 75 | 67.5 | 63.75 | 77.5 | 80 | 90
B07 | 81.25 | 78.75 | 78.75 | 85 | 83.75 | 85 | 86.25 | 90
B08 | 93.75 | 93.75 | 93.75 | 75 | 71.25 | 92.5 | 92.5 | 92.5
B09 | 81.25 | 81.25 | 73.75 | 65 | 68.75 | 81.25 | 78.75 | 83.75
Mean ± Std | 78.75 ± 15.47 | 77.78 ± 17.32 | 76.53 ± 16.42 | 71.25 ± 16.43 | 70.56 ± 15.8 | 78.19 ± 16.13 | 78.75 ± 16.01 | 81.53 ± 16.95
Table 5. Classification accuracy (Dataset 4).
Subjects | CSP | Wavelet-CSP | WPD-CSP | SFBCSP | SBLFB | CSP-Wavelet+LOG | CSP-WPD+LOG | CSP-FB+LOG
S01 | 56.67 | 58.33 | 65.00 | 63.33 | 66.67 | 68.33 | 70.00 | 66.67
S02 | 85.00 | 90.00 | 86.67 | 76.67 | 78.33 | 90.00 | 93.33 | 90.00
S03 | 100.00 | 98.33 | 100.00 | 98.33 | 98.33 | 100.00 | 100.00 | 100.00
S04 | 85.00 | 85.00 | 85.00 | 76.67 | 83.33 | 91.67 | 91.67 | 86.67
S05 | 60.00 | 58.33 | 60.00 | 50.00 | 50.00 | 63.33 | 63.33 | 71.67
S06 | 66.67 | 75.00 | 66.67 | 60.00 | 56.67 | 76.67 | 75.00 | 81.67
S07 | 90.00 | 93.33 | 88.33 | 93.33 | 91.67 | 91.67 | 91.67 | 85.00
S08 | 81.67 | 83.33 | 86.67 | 81.67 | 81.67 | 88.33 | 90.00 | 91.67
S09 | 98.33 | 98.33 | 98.33 | 95.00 | 93.33 | 100.00 | 100.00 | 98.33
S10 | 53.33 | 55.00 | 60.00 | 51.67 | 55.00 | 53.33 | 60.00 | 65.00
S11 | 76.67 | 80.00 | 71.67 | 80.00 | 80.00 | 81.67 | 81.67 | 80.00
S12 | 81.67 | 75.00 | 75.00 | 71.67 | 70.00 | 88.33 | 90.00 | 88.33
S13 | 60.00 | 50.00 | 55.00 | 61.67 | 63.33 | 58.33 | 61.67 | 60.00
S14 | 51.67 | 70.00 | 61.67 | 58.33 | 50.00 | 60.00 | 63.33 | 56.67
Mean ± Std | 74.76 ± 15.93 | 76.43 ± 15.63 | 75.72 ± 14.44 | 72.74 ± 15.31 | 72.74 ± 15.70 | 79.40 ± 15.39 | 80.83 ± 14.29 | 80.12 ± 13.44
Table 6. The effect of secondary feature selection on average classification accuracy of various datasets (the LASSO method).
Datasets | Without Secondary Feature Selection | With Secondary Feature Selection
 | CSP-Wavelet+LASSO | CSP-WPD+LASSO | CSP-FB+LASSO | CSP-Wavelet+LASSO | CSP-WPD+LASSO | CSP-FB+LASSO
dataset 1 | 80.71 | 81.86 | 78.29 | 82.71 | 85.43 | 86
dataset 2 | 80.53 | 80.95 | 79.51 | 82.69 | 82.91 | 82.57
dataset 3 | 76.94 | 75.14 | 77.36 | 77.64 | 75.97 | 80.97
dataset 4 | 76.55 | 76.79 | 73.33 | 78.57 | 79.17 | 78.81
All data | 79.5 | 79.71 | 78.15 | 81.46 | 81.75 | 82.06
Table 7. The effect of secondary feature selection on average classification accuracy of various datasets (the LOG method).
Datasets | Without Secondary Feature Selection | With Secondary Feature Selection
 | CSP-Wavelet+LOG | CSP-WPD+LOG | CSP-FB+LOG | CSP-Wavelet+LOG | CSP-WPD+LOG | CSP-FB+LOG
dataset 1 | 80.71 | 81.57 | 81.71 | 84.29 | 85 | 88.86
dataset 2 | 80.63 | 80.4 | 79.22 | 83.4 | 83.05 | 82.15
dataset 3 | 75.97 | 76.11 | 79.17 | 78.19 | 78.75 | 81.53
dataset 4 | 77.38 | 77.86 | 74.76 | 79.4 | 80.83 | 80.12
All data | 79.6 | 79.62 | 78.68 | 82.25 | 82.38 | 82.3
Table 8. The effect of different feature selection methods on average classification accuracy of various datasets.
Feature Extraction | Feature Selection | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 | All Data
CSP-Wavelet | F-score | 82.86 | 80.79 | 75.83 | 76.79 | 79.76
 | GA | 79.29 | 80.09 | 75.97 | 75.12 | 78.76
 | BPSO | 81.29 | 80.17 | 78.47 | 76.43 | 79.46
 | LASSO | 82.71 | 82.69 | 77.64 | 78.57 | 81.46
 | LOG | 84.29 | 83.4 | 78.19 | 79.4 | 82.25
CSP-WPD | F-score | 81.57 | 80.56 | 75.56 | 77.97 | 79.67
 | GA | 81.57 | 80.59 | 74.72 | 77.14 | 79.47
 | BPSO | 81.14 | 80.08 | 74.86 | 75.59 | 78.86
 | LASSO | 85.43 | 82.91 | 75.97 | 79.17 | 81.75
 | LOG | 85 | 83.05 | 78.75 | 80.83 | 82.38
CSP-FB | F-score | 82.29 | 80.74 | 78.75 | 77.26 | 80.07
 | GA | 81.86 | 80.03 | 78.75 | 74.05 | 79.05
 | BPSO | 81.86 | 79.49 | 79.17 | 73.81 | 78.7
 | LASSO | 86 | 82.57 | 80.97 | 78.81 | 82.06
 | LOG | 88.86 | 82.15 | 81.53 | 80.12 | 82.3
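For reference, the LASSO rows of Table 8 correspond to sparse selection of the concatenated sub-band features [23,57]. The sketch below is a rough scikit-learn based illustration of that step under our own assumptions (regression of the ±1 labels on standardized features, a hand-picked penalty weight); it is not the authors' code, and the LOG selector replaces the l1 penalty with the non-convex log penalty described in the methods section, which is not reproduced here.

```python
# Illustrative LASSO-based feature selection (assumes scikit-learn is available).
# X: (trials x features) matrix of sub-band CSP features; y: labels in {-1, +1}.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def lasso_select(X, y, alpha=0.01):
    """Keep the features given non-zero weight by an l1-penalized regression of y on X."""
    Xs = StandardScaler().fit_transform(X)
    model = Lasso(alpha=alpha, max_iter=10000).fit(Xs, y)
    return np.flatnonzero(model.coef_), model.coef_

# Placeholder example: 120 trials, 17 sub-bands x 6 CSP features = 102 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 102))
y = np.where(rng.normal(size=120) > 0, 1.0, -1.0)
kept, weights = lasso_select(X, y)
print(f"{kept.size} of {X.shape[1]} features kept")
```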
Table 9. The effect of different classifiers on average classification accuracy of various datasets.
Datasets | CSP-Wavelet+LOG | CSP-WPD+LOG | CSP-FB+LOG
 | SVM | BLDA | FLDA | SVM | BLDA | FLDA | SVM | BLDA | FLDA
dataset 1 | 85 | 83.14 | 84.29 | 84.29 | 84.29 | 85 | 89.14 | 89.29 | 88.86
dataset 2 | 82.11 | 82.99 | 83.4 | 82.82 | 82.91 | 83.05 | 82.54 | 82.64 | 82.15
dataset 3 | 78.47 | 78.33 | 78.19 | 78.75 | 80.28 | 78.75 | 80.83 | 80.28 | 81.53
dataset 4 | 76.67 | 79.52 | 79.4 | 77.02 | 79.52 | 80.83 | 79.52 | 79.4 | 80.12
All data | 81.06 | 81.92 | 82.25 | 81.54 | 82.18 | 82.38 | 82.4 | 82.4 | 82.3
Table 10. Classification accuracy of the proposed methods and other recent methods for BCI Competition IV Dataset I.
Methods | 1a | 1b | 1c | 1d | 1e | 1f | 1g | Mean ± Std
PELM [70] (2018) | 79.00 | 56.50 | 59.50 | 73.00 | 71.50 | 64.50 | 85.00 | 70.00 ± 10.33
LRFCSP [71] (2019) | 87.40 | 70.00 | 67.40 | 92.90 | 93.40 | 88.80 | 93.20 | 84.70 ± 11.22
OPTICAL [72] (2019) | 87.32 | 61.67 | 71.83 | 88.17 | 89.00 | 85.83 | 93.83 | 82.53 ± 8.17
BF [73] (2020) | 86.24 | 88.31 | 92.89 | 89.51 | 90.92 | 88.46 | 90.16 | 89.50 ± 2.12
SCSP-RDA [74] (2020) | 97.00 | 96.00 | 72.50 | 75.00 | 78.50 | 96.00 | 95.50 | 87.21 ± 11.26
CSP-Wavelet+LOG | 67 | 69 | 76 | 97 | 97 | 90 | 94 | 84.29 ± 12.26
CSP-WPD+LOG | 64 | 69 | 80 | 95 | 100 | 93 | 94 | 85.00 ± 13.05
CSP-FB+LOG | 73 | 75 | 87 | 98 | 99 | 93 | 97 | 88.86 ± 10.12
Table 11. Classification accuracy of the proposed methods and other recent methods for BCI Competition IV Dataset IIa.
Methods | A01 | A02 | A03 | A04 | A05 | A06 | A07 | A08 | A09 | Mean ± Std
GRU-RNN [75] (2018) | 84.82 | 65.32 | 83.54 | 67.67 | 64.00 | 70.87 | 84.96 | 71.95 | 68.90 | 73.56 ± 4.38
CSP\AM-BA-SVM [76] (2018) | 90.56 | 66.32 | 91.99 | 70.28 | 68.53 | 55.75 | 90.63 | 87.80 | 85.07 | 78.55 ± 13.40
ERDSA [77] (2018) | 86.81 | 63.89 | 94.44 | 68.75 | 56.25 | 69.44 | 78.47 | 97.91 | 93.75 | 78.86 ± 15.07
IST-TSVM [78] (2019) | 80.14 | 51.55 | 95.54 | 53.60 | 51.65 | 56.83 | 56.58 | 93.42 | 92.66 | 70.22 ± 19.74
CA+PSR+CSP [79] (2020) | 80.00 | 65.36 | 87.14 | 67.50 | 55.54 | 50.18 | 91.79 | 84.11 | 87.86 | 74.39 ± 15.18
MTFL [80] (2020) | 91.67 | 63.19 | 95.14 | 72.22 | 64.58 | 68.06 | 79.17 | 97.92 | 92.37 | 80.48 ± 13.97
p-LTCSP [81] (2020) | 82.60 | 70.23 | 70.23 | 55.15 | 54.36 | 60.14 | 73.38 | 85.29 | 74.62 | 69.56 ± 11.10
CSP-Wavelet+LOG | 93.06 | 61.81 | 95.83 | 72.92 | 58.33 | 68.06 | 81.25 | 95.14 | 93.06 | 79.91 ± 15.06
CSP-WPD+LOG | 91.67 | 55.56 | 97.22 | 72.22 | 62.50 | 68.06 | 79.17 | 95.14 | 93.06 | 79.40 ± 15.56
CSP-FB+LOG | 90.28 | 61.81 | 97.92 | 69.44 | 52.78 | 69.44 | 77.08 | 95.14 | 90.97 | 78.32 ± 16.02
Table 12. Classification accuracy of the proposed methods and other recent methods for BCI Competition IV Dataset IIb.
Methods | B01 | B02 | B03 | B04 | B05 | B06 | B07 | B08 | B09 | Mean ± Std
DBN [82] (2018) | 70.38 | 70.34 | 71.20 | 71.24 | 71.21 | 70.52 | 70.79 | 70.49 | 70.32 | 70.72 ± 0.40
CapsNet [83] (2019) | 78.75 | 55.71 | 55.00 | 95.93 | 83.12 | 83.43 | 75.62 | 91.25 | 87.18 | 78.44 ± 14.44
SGRM [48] (2019) | 77.30 | 59.10 | 51.50 | 97.00 | 87.40 | 72.50 | 86.70 | 84.70 | 85.60 | 78.00 ± 2.30
NCFS [84] (2020) | 79.25 | 63.48 | 56.65 | 99.28 | 88.67 | 79.96 | 88.76 | 92.66 | 84.95 | 81.52 ± 13.72
CEMD [30] (2020) | 80.56 | 65.44 | 65.97 | 99.32 | 89.19 | 86.11 | 81.25 | 88.82 | 86.81 | 82.61 ± 11.00
CSP-Wavelet+LOG | 78.75 | 56.25 | 45 | 96.25 | 91.25 | 77.5 | 85 | 92.5 | 81.25 | 78.19 ± 16.13
CSP-WPD+LOG | 81.25 | 57.5 | 45 | 96.25 | 91.25 | 80 | 86.25 | 92.5 | 78.75 | 78.75 ± 16.01
CSP-FB+LOG | 88.75 | 52.5 | 48.75 | 98.75 | 88.75 | 90 | 90 | 92.5 | 83.75 | 81.53 ± 16.95
Table 13. The effect of wavelet decomposition layers and sub-band selection on average classification accuracy of various datasets (the CSP-Wavelet method).
Dataset | Without Sub-Bands Selected | With Sub-Bands Selected
 | CSP-Wavelet+LOG (L1) | CSP-Wavelet+LOG (L2) | CSP-Wavelet+LOG (L1) | CSP-Wavelet+LOG (L2)
dataset 1 | 83 | 82.86 | 84.29 | 84.29
dataset 2 | 83.03 | 82.96 | 83.4 | 83.4
dataset 3 | 79.72 | 79.44 | 78.19 | 78.19
dataset 4 | 80.12 | 80.48 | 79.4 | 79.4
All data | 82.19 | 82.16 | 82.25 | 82.25
Table 14. The effect of wavelet decomposition layers and sub-band selection on average classification accuracy of various datasets (the CSP-WPD method).
Dataset | Without Sub-Bands Selected | With Sub-Bands Selected
 | CSP-WPD+LOG (L1) | CSP-WPD+LOG (L2) | CSP-WPD+LOG (L1) | CSP-WPD+LOG (L2)
dataset 1 | 84.14 | 83.86 | 85 | 84.71
dataset 2 | 82.65 | 80.95 | 83.05 | 82.51
dataset 3 | 78.33 | 76.81 | 78.75 | 77.36
dataset 4 | 77.62 | 77.86 | 80.83 | 80.48
All data | 81.47 | 80.24 | 82.38 | 81.8
Table 15. Feature extraction time of training set for each method (Unit: second).
Subject | CSP | Wavelet-CSP | WPD-CSP | SFBCSP (SBLFB) | CSP-Wavelet | CSP-WPD | CSP-FB
1a | 0.145 | 11.702 | 60.797 | 1.037 | 2.109 | 8.299 | 0.543
A01 | 0.126 | 8.420 | 36.241 | 0.850 | 3.648 | 19.525 | 0.809
S01 | 0.096 | 3.286 | 17.283 | 0.646 | 2.650 | 13.636 | 0.492
