Article

EEG-Based Brain-Computer Interface for Decoding Motor Imagery Tasks within the Same Hand Using Choi-Williams Time-Frequency Distribution

1 Department of Computer Engineering, School of Electrical Engineering and Information Technology, German Jordanian University, Amman 11180, Jordan
2 Faculty of Engineering, University of Freiburg, Freiburg 79098, Germany
3 Department of Biomedical Engineering, School of Applied Medical Sciences, German Jordanian University, Amman 11180, Jordan
* Author to whom correspondence should be addressed.
Sensors 2017, 17(9), 1937; https://doi.org/10.3390/s17091937
Submission received: 21 July 2017 / Revised: 16 August 2017 / Accepted: 21 August 2017 / Published: 23 August 2017
(This article belongs to the Special Issue Biomedical Sensors and Systems 2017)

Abstract

This paper presents an EEG-based brain-computer interface system for classifying eleven motor imagery (MI) tasks within the same hand. The proposed system utilizes the Choi-Williams time-frequency distribution (CWD) to construct a time-frequency representation (TFR) of the EEG signals. The constructed TFR is used to extract five categories of time-frequency features (TFFs). The TFFs are processed using a hierarchical classification model to identify the MI task encapsulated within the EEG signals. To evaluate the performance of the proposed approach, EEG data were recorded for eighteen intact subjects and four amputated subjects while they imagined performing each of the eleven hand MI tasks. Two performance evaluation analyses, namely channel- and TFF-based analyses, are conducted to identify the best subset of EEG channels and the TFF category, respectively, that enable the highest classification accuracy between the MI tasks. In each evaluation analysis, the hierarchical classification model is trained using two training procedures, namely subject-dependent and subject-independent procedures. These two training procedures quantify the capability of the proposed approach to capture both intra- and inter-personal variations in the EEG signals for different MI tasks within the same hand. The results demonstrate the efficacy of the approach for classifying the MI tasks within the same hand. In particular, the classification accuracies obtained for the intact and amputated subjects are as high as 88.8% and 90.2%, respectively, for the subject-dependent training procedure, and 80.8% and 87.8%, respectively, for the subject-independent training procedure. These results suggest the feasibility of applying the proposed approach to control dexterous prosthetic hands, which can be of great benefit for individuals suffering from hand amputations.

1. Introduction

Nowadays, many individuals suffer from hand motor impairments due to strokes, hand amputations, and spinal cord injuries. Developing a system that can recover a significant part of the lost or disabled hand functionality is crucial to improving the quality of life of these individuals. Recently, we have witnessed substantial advancements in the design and development of wearable assistive devices, such as robotic prosthetic hands and exoskeletal orthotic hands. These assistive devices can be of great benefit for individuals who are cognitively intact but suffer from motor impairments. In particular, an individual with an amputated hand can utilize a prosthetic hand to recover part of the missing hand functionality [1]. Moreover, an individual who has suffered a stroke can utilize an exoskeletal orthotic hand to support his/her disabled hand [1]. In this vein, brain-computer interface (BCI) systems have been employed to provide alternative non-muscular communication pathways that assist people suffering from motor disabilities or living with lost limbs to interact with their surroundings [1,2,3].
BCI systems translate the neural signals of the human brain into control commands for peripheral and assistive devices, which in turn can improve the communication capabilities of individuals who suffer from severe motor impairments. Several noninvasive neuroimaging modalities have been utilized in BCI systems, such as functional magnetic resonance imaging (fMRI) [4], electroencephalography (EEG) [5,6,7,8,9], and positron emission tomography (PET) [10,11]. Among these different neuroimaging modalities, EEG is considered the most commonly used modality in BCI systems. This can be attributed to several factors, such as its high temporal resolution, relatively low cost, and high portability [12,13,14]. EEG provides a measure of the electrical potentials generated at various locations of the brain in response to the execution or imagination of different movements [15].
Over the past two decades, motor imagery (MI) has been used to design EEG-based BCI systems that enable individuals with motor impairments to control various assistive devices, such as wheelchairs [16,17], prosthetic devices [18,19,20], and computers [1,21]. In fact, an MI task can be defined as a mental process in which an individual imagines himself/herself performing a specific action without real activation of the muscles [22]. During MI tasks, various regions in the brain are activated, such as the primary motor cortex (M1), the primary and secondary sensory areas, the pre-frontal areas, the superior and inferior parietal lobules, and the dorsal and ventral pre-motor cortices [15]. Therefore, the development of BCI systems that can effectively analyze brain signals and discriminate between different MI tasks to control neural prosthetic devices has the potential to enhance the quality of life for people with severe motor disabilities.
The literature reveals that the vast majority of existing MI EEG-based BCI systems have focused on differentiating between MI tasks that are associated with four different body parts [23,24,25,26,27], namely the feet, left hand, right hand, and tongue. Despite the relatively high classification accuracies attained for classifying MI tasks performed by different body parts, the discrimination between MI tasks within the same hand is considered challenging [6,7,8,9]. This can be attributed to three limitations associated with the EEG signals. First, the low spatial resolution of the EEG signals constrains the ability to discriminate between MI tasks of the same hand that activate similar and close areas in the brain [6]. In fact, this limitation becomes more pronounced when the MI tasks are associated with the same joint in the hand, such as wrist movements. Second, due to the volume conduction effect [28], EEG signals have a limited signal-to-noise ratio [6]. This in turn can drastically reduce the ability to discriminate between EEG signals of different dexterous MI tasks within the same hand, such as finger- and wrist-related tasks. Third, the spectral characteristics of the EEG signals are time-varying, or non-stationary. The non-stationary characteristics of EEG signals introduce large intra-trial variations for each subject and inter-personal variations between subjects, which increase the difficulty of discriminating between the EEG signals of MI tasks within the same hand. Therefore, traditional time-domain and frequency-domain representations, which employ the time-invariance assumption, are considered inadequate to represent EEG signals [29,30,31,32].
Recently, a few studies have utilized EEG signals to discriminate between flexion/extension movements of the fingers [6,33] as well as several wrist movements [5,9], including flexion, extension, supination, and pronation. The promising results reported in these studies demonstrate the possibility of utilizing EEG signals to discriminate between MI tasks within the same hand. Nonetheless, these studies have been conducted using EEG signals acquired from intact subjects, without exploring the capability of classifying MI tasks within the same hand using EEG signals acquired from individuals with hand amputations. Moreover, these studies, which explored a limited number of movements, have focused on decoding finger movements or wrist movements without attempting to discriminate between MI tasks associated with different parts of the hand.
The aim of the current study is to contribute to the ongoing research in the field of EEG signal analysis by introducing an EEG-based BCI system that employs an extracted set of time-frequency features (TFFs) to discriminate between eleven MI tasks within the same hand. In fact, we hypothesize that the use of a time-frequency distribution (TFD) as a joint time-frequency representation of EEG signals enables the extraction of salient TFFs that carry discriminative information about different MI tasks within the same hand. The MI tasks considered in the present study range from basic wrist and finger tasks, such as flexion/extension tasks, to complex hand tasks, such as functional grasping tasks. This diverse set of MI tasks makes the problem of classifying the MI tasks challenging, due to the substantial inter- and intra-personal variations of the EEG signals associated with different MI tasks.
In order to discriminate between the EEG signals associated with the eleven MI tasks, a sliding window approach is employed to decompose each EEG signal into overlapping segments. Then, we utilize the Choi-Williams TFD (CWD) to construct a time-frequency representation of the EEG segments that can describe the dynamic changes in the EEG signals during different MI tasks. Using the time-frequency representation, we extract five categories of TFFs, namely the log-amplitude-based, amplitude-based, statistical-based, spectral-based, and spectral entropy-based categories. The extracted TFFs are utilized to construct a hierarchical classification model that classifies each EEG segment into one of the eleven MI tasks considered in this study. Specifically, the hierarchical classification model consists of four layers. The first layer classifies the EEG segments into rest or movement segments. In the second layer, the EEG segments that were identified as movement segments at the first layer are further classified into functional movements or basic wrist and finger movements. The third layer classifies the EEG segments that comprise functional movements into small diameter grasp, lateral grasp, and extension-type grasp, and the EEG segments that comprise basic wrist and finger movements into wrist-related movements and finger-related movements. Finally, the fourth layer classifies the EEG segments that comprise wrist-related movements into wrist flexion/extension and wrist ulnar/radial deviation, and the EEG segments that comprise finger-related movements into index flexion/extension, middle flexion/extension, ring flexion/extension, little flexion/extension, and thumb flexion/extension.
In order to evaluate the performance of the proposed approach, we have recorded EEG data for both intact and amputated subjects while they imagined performing the eleven hand MI tasks. Two performance evaluation analyses are conducted to evaluate the performance of the proposed TFD-based approach in identifying the eleven different MI tasks: the channel-based performance evaluation analysis and the TFF-based performance evaluation analysis. These performance evaluation analyses quantify the effect of the utilized EEG channel locations and TFFs on the capability of the proposed system to decode different MI tasks within the same hand. Furthermore, within each evaluation analysis, the hierarchical classification model is trained using two different procedures, namely the subject-dependent and subject-independent training procedures. These two training procedures measure the ability of the proposed TFD-based system to capture both intra- and inter-personal variations of the EEG signals for different MI tasks within the same hand. To the best of our knowledge, this is the first study that explores the use of TFD for classifying MI tasks within the same hand for both intact and amputated subjects.
The remainder of this paper is organized as follows: In Section 2, we describe the experimental procedure, the proposed TFD-based features, the classification model, and the evaluation procedures. The experimental results and discussion are presented in Section 3 and Section 4, respectively. Finally, the conclusion is provided in Section 5.

2. Materials and Methods

2.1. Subjects

The EEG dataset employed in the current study is composed of two databases, namely DB1 and DB2. DB1 includes EEG signals acquired from eighteen intact subjects (6 females and 12 males; 4 left-handed and 14 right-handed) who volunteered to participate in the experiments. The mean ± standard deviation age of the subjects was 21.2 ± 2.9 years. Furthermore, the subjects did not have any known neurological or neuromuscular disorders. In DB2, four male subjects with upper limb amputation participated in the experiments. The mean ± standard deviation age of the subjects was 28.5 ± 6.2 years. Table 1 provides characterization information about the amputations of the subjects recruited in DB2. Before data acquisition, the experimental procedure of our study was explained in detail to each subject, and signed consent forms were collected from all subjects. The participants had the chance to withdraw from the study at any time during the experimental procedure. Moreover, the experimental procedure was reviewed and approved by the Research Ethics Committee at the German Jordanian University.

2.2. Experimental Procedure

The experimental procedure adopted in the current study is similar to the experimental procedures employed in several previous studies related to EEG-based MI task classification, such as [8,34,35,36]. In particular, each subject was seated on a comfortable upright chair at a distance of approximately 0.5 m from a computer monitor placed on top of a desk. During the experiments, the subjects were asked to comfortably rest their arms on the desk. Then, each subject was asked to imagine performing different hand tasks according to the visual cues displayed on the computer monitor. The visual cues associated with the hand tasks are shown in Figure 1. In this work, we consider three sets of hand motor imagery tasks (HMITs), namely set 1 (see Figure 1a), set 2 (see Figure 1b), and set 3 (see Figure 1c). Specifically, set 1 includes the rest configuration of the hand, which we denote as A1. Set 2 comprises grasping and functional movements of the hand, including the small diameter grasp (A2), lateral grasp (A3), and extension-type grasp (A4). Finally, set 3 contains basic movements of the wrist and the fingers, including wrist ulnar/radial deviation (A5), wrist flexion/extension (A6), index finger flexion/extension (A7), middle finger flexion/extension (A8), ring finger flexion/extension (A9), little finger flexion/extension (A10), and thumb flexion/extension (A11). The eleven HMITs comprised in the aforementioned three sets were selected to cover a wide range of the hand movements that are involved in activities of daily living (ADL) [35].
The experimental procedure consists of a training phase and a recording phase. In the training phase, each subject was asked to watch a set of videos displaying each of the movements depicted in Figure 1. Then, the subjects were asked to practice imagining themselves performing the displayed movements in order to become familiar with the experiment. During the recording phase, each subject was asked to relax his/her arms on the desk. Then, a visual cue was displayed on the computer monitor in front of the subject for 3 s. After that, the visual cue disappeared and a black screen was displayed on the monitor. The subject was asked to close his/her eyes when the screen turned black, and to imagine performing the movement that was specified by the visual cue until the experimenter prompted him/her that the recording was over. For DB1, each subject was asked to imagine performing the eleven HMITs using his/her right hand. However, for DB2, the subjects were asked to imagine performing each movement using the missing limb. The duration of the recorded EEG signals of the HMITs varies according to the complexity of the movement being imagined, as depicted in Figure 2. In particular, for the movements in set 1 and set 3, the duration of each trial is equal to 10 s. For the extension-type grasp movement in set 2, the duration of each trial is equal to 12 s. Finally, for the small diameter grasp and lateral grasp movements in set 2, the duration of each trial is equal to 14 s. The average duration of the experiment for each subject was approximately 1.5 h. This time includes the subject preparation and the recording of 7 trials for each of the hand movements depicted in Figure 1.

2.3. EEG Data Acquisition and Preprocessing

Raw EEG data were acquired using the Biosemi ActiveTwo EEG recording system (Biosemi B.V., Amsterdam, The Netherlands). The Biosemi ActiveTwo system employs the 10–20 international EEG electrode placement system to localize 16 Ag/AgCl electrodes at the following locations: Fp1, Fp2, C3, C4, Cz, F3, F4, Fz, T7, T8, O1, O2, Oz, P3, P4, and Pz, referenced to the common mode sense (CMS)/driven right leg (DRL) electrodes at the C1/C2 locations for noise cancelation (see Figure 3). In this study, we consider four different groups of electrodes that cover different motor cortex-related regions in the brain [9,36,37]. Table 2 shows the electrodes included within each group.
The EEG signals were acquired at a sampling frequency of 2048 Hz. The acquired signals were filtered using a band-pass filter with a bandwidth of 0.5–35 Hz to reduce low-frequency noise [8,34] and to ensure that the mu and beta rhythms, which are necessary for classifying EEG signals related to MI tasks, are within the bandwidth of the filtered EEG signals [8]. The filtered EEG signals were downsampled to 256 Hz to reduce the processing and storage requirements. In addition, the EEGLAB toolbox [38] was utilized to remove the muscular and ocular artifacts from the acquired EEG signals using the automatic artifact rejection (AAR) toolbox [39].
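As an illustration of this preprocessing pipeline, the following Python sketch applies the band-pass filtering and downsampling steps. The fourth-order Butterworth design and the SciPy routines are our own assumptions; the paper specifies only the 0.5–35 Hz passband and the 256 Hz target rate, and the EEGLAB/AAR artifact removal is a separate step not shown here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess_eeg(raw, fs_in=2048, fs_out=256, band=(0.5, 35.0)):
    """Band-pass filter and downsample one EEG channel.

    raw: 1-D array sampled at fs_in (Hz). The 4th-order Butterworth
    filter and zero-phase filtering are illustrative choices; only
    the passband and target rate come from the paper.
    """
    b, a = butter(4, band, btype="bandpass", fs=fs_in)
    filtered = filtfilt(b, a, raw)       # zero-phase band-pass filtering
    factor = fs_in // fs_out             # 2048 / 256 = 8
    return decimate(filtered, factor, zero_phase=True)
```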

2.4. Time-Frequency Representation of EEG Signals

The non-stationary nature of the EEG signals implies that the frequency contents of the EEG signals are rapidly changing over time [40]. This imposes the requirement of employing a time-frequency representation in order to effectively analyze the EEG signals. Indeed, recent studies on detecting seizure activities in EEG signals have indicated that utilizing joint time-frequency representations of EEG signals can significantly outperform traditional time-domain or frequency-domain representations [41,42]. This can be attributed to the fact that several key features of the EEG signals cannot be fully captured within either the time domain or the frequency domain alone. Hence, the use of a joint time-frequency representation has the potential to provide more discriminative features of EEG signals and can enhance the classification accuracy of MI tasks within the same hand.
In this study, we propose a time-frequency representation for analyzing EEG signals that is based on computing the time-frequency distribution (TFD) of the EEG signals. Specifically, the TFD can be viewed as a transformation that maps the EEG signals from the one-dimensional time-domain into a two-dimensional time-frequency plane (TFP), which allows capturing the spectral changes in the EEG signals occurring over time [43]. In order to compute the TFD of the acquired EEG signals, we segment the EEG signals of each channel using a sliding window of size W = 256 samples and an overlap size of O = 128 samples. Each EEG segment is transformed into its analytic form to enhance the resolution of the TFP representation [44,45,46]. Specifically, the analytic signal of a real EEG segment x(t) can be defined as follows [43]:

$$ s_x(t) = x(t) + j\,\mathrm{HT}\{x(t)\}, $$

where s_x(t) is the analytic signal of x(t) and HT{·} is the Hilbert transform [47]. The time-frequency representation of the segment x(t) is carried out by computing the TFD of the analytic signal s_x(t). In this vein, Cohen [44,45] provided a general formula to compute the TFD of an analytic signal, which can be applied to various types of distributions. In particular, the TFD of the analytic signal s_x(t) can be computed as follows:

$$ \Gamma_s(t,f) = \iint \mathrm{AF}_s(\phi,\tau)\,\psi(\phi,\tau)\, e^{-j2\pi f\tau - j2\pi t\phi}\, d\tau\, d\phi, $$

where Γ_s(t,f) is the TFD of the analytic signal s_x(t) and AF_s(φ,τ) is the ambiguity function of s_x(t). The ambiguity function AF_s(φ,τ) is defined as the Fourier transform of the auto-correlation function of s_x(t), which can be expressed as follows [44,45]:

$$ \mathrm{AF}_s(\phi,\tau) = \int s_x\!\left(t + \frac{\tau}{2}\right) s_x^{*}\!\left(t - \frac{\tau}{2}\right) e^{\,j2\pi\phi t}\, dt, $$

where s_x^*(·) is the complex conjugate of s_x(·). In Equation (2), ψ(φ,τ) is the smoothing kernel function that defines the type of the TFD. In fact, various kernel functions can be employed to compute TFDs, where the design of these kernels depends on the information to be extracted from the TFP, the resolution in both the time and frequency domains, and the ability to suppress the cross-terms generated from the bi-linearity of the TFDs [48]. When the kernel function is defined as ψ(φ,τ) = 1, the generated TFD is called the Wigner-Ville distribution (WVD) [49]. The WVD is a quadratic TFD that produces prevalent interference terms in the TFP, which are usually called cross-terms. The existence of cross-terms in the generated TFP increases the difficulty of interpreting the energy distribution in the TFP as a function of both time and frequency [43]. Therefore, in this study, we utilize the Choi-Williams distribution (CWD) [50] in order to minimize the cross-terms in the TFP. Unlike the WVD, the CWD employs an exponential kernel function to suppress the cross-term artifacts while maintaining a good resolution in the TFP [40,43]. The kernel function of the CWD can be expressed as follows [50]:

$$ \psi(\phi,\tau) = \exp\!\left(-\frac{\phi^2\tau^2}{\gamma^2}\right), $$
where γ > 0 is a parameter that controls the suppression of the cross-terms, and its value is experimentally selected to be 0.5. Figure 4 shows the time-frequency representations computed for three EEG segments that represent three HMITs, namely rest, wrist flexion/extension, and lateral grasp. These time-frequency representations demonstrate the effect of utilizing the CWD on reducing the cross-terms in comparison with the WVD, which in turn enables more distinguishable TFPs for differentiating HMITs. The dimensionality of the constructed time-frequency representation for each EEG segment is equal to W × N, where W and N represent the number of time-domain samples of s_x(t) and the number of frequency-domain samples, respectively. In this study, we have only used the CWD to compute the time-frequency representation of the EEG segments. In fact, the computation of the CWD is carried out using the HOSA toolbox [51], where the values of W and N are set to 256 and 512, respectively.
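For readers who want to experiment with the construction, the following Python sketch mirrors the pipeline above (analytic signal, ambiguity function, exponential kernel, transform back to the TFP). It is a rough, unwindowed O(W²) implementation with simplified axis scaling, not a reproduction of the HOSA toolbox routine used in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def choi_williams(x, gamma=0.5, n_freq=512):
    """Simplified Choi-Williams distribution of a real EEG segment x.

    Returns a (len(x) x n_freq) time-frequency matrix. Windowing,
    normalization, and axis scaling are simplified assumptions.
    """
    s = hilbert(np.asarray(x, dtype=float))      # analytic signal s_x(t)
    W = len(s)

    # Instantaneous autocorrelation R[t, tau] = s(t+tau) s*(t-tau),
    # with the lag axis stored in FFT order (column index = tau mod n_freq).
    R = np.zeros((W, n_freq), dtype=complex)
    for m in range(-(W // 2), W // 2):
        t = np.arange(abs(m), W - abs(m))
        R[t, m % n_freq] = s[t + m] * np.conj(s[t - m])

    # Ambiguity domain: FFT along time yields the doppler axis phi.
    AF = np.fft.fft(R, axis=0)
    phi = np.fft.fftfreq(W)[:, None]                       # normalized doppler
    tau = np.fft.fftfreq(n_freq, d=1.0 / n_freq)[None, :]  # signed lag (samples)
    kernel = np.exp(-(phi * tau) ** 2 / gamma ** 2)        # psi(phi, tau)

    # Smooth in the ambiguity domain, return to time-lag, then FFT
    # along the lag axis to obtain the time-frequency plane.
    R_smoothed = np.fft.ifft(AF * kernel, axis=0)
    return np.fft.fft(R_smoothed, axis=1).real
```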

2.5. Time-Frequency Features

The constructed CWD-based time-frequency representation of each EEG segment consists of 256 × 512 points. Therefore, to reduce the dimensionality of the constructed time-frequency representation, we extract a set of 12 time-frequency features (TFFs) from the CWD of each EEG segment. In this study, we group the extracted TFFs into five different categories, namely the log-amplitude-based category (C1), the amplitude-based category (C2), the statistical-based category (C3), the spectral-based category (C4), and the spectral entropy-based category (C5). These categories are described as follows:

2.5.1. Log-Amplitude-Based Category

In this category, we adopt and extend the concept of moment-related features presented in [34], in which MI tasks associated with different limbs were classified by computing spectral moment-related features extracted from the bispectrum of the EEG signals. Among the different spectral moment-related features, the sum of the logarithmic amplitudes of the bispectrum achieved promising classification results [34]. Thus, in this study, we compute a TFF (TF1) that quantifies the sum of the logarithmic amplitudes of the CWD of an EEG segment. The feature TF1 is defined as follows:

$$ TF_1 = \sum_{t=1}^{W} \sum_{f=1}^{N} \log\left(\left|\Gamma_s(t,f)\right|\right), $$

where Γ_s(t,f) is the CWD of the analytic signal s_x(t).

2.5.2. Amplitude-Based Category

In this category, we utilize the amplitudes of the points in the CWD to classify the EEG segments. In particular, we adopt three amplitude-based TFFs [42,52,53,54], including the median absolute deviation of the CWD (TF2), the root mean square value of the CWD (TF3), and the inter-quartile range of the CWD (TF4). The features TF2, TF3, and TF4 can be expressed as follows [42]:

$$ TF_2 = \frac{1}{WN} \sum_{t=1}^{W} \sum_{f=1}^{N} \left| \Gamma_s(t,f) - \frac{1}{WN} \sum_{t=1}^{W} \sum_{f=1}^{N} \Gamma_s(t,f) \right|. $$

$$ TF_3 = \sqrt{\frac{1}{WN} \sum_{t=1}^{W} \sum_{f=1}^{N} \Gamma_s^{2}(t,f)}. $$

$$ TF_4 = \frac{1}{N} \sum_{f=1}^{N} \left[ \Gamma_s\!\left(\frac{3(W+1)}{4}, f\right) - \Gamma_s\!\left(\frac{W+1}{4}, f\right) \right]. $$

2.5.3. Statistical-Based Category

This category of features consists of the mean (TF5), variance (TF6), skewness (TF7), and kurtosis (TF8) of the CWD computed for each EEG segment [42,43,48]. These features can be defined as follows:

$$ TF_5 = \frac{1}{WN} \sum_{t=1}^{W} \sum_{f=1}^{N} \Gamma_s(t,f). $$

$$ TF_6 = \frac{1}{WN} \sum_{t=1}^{W} \sum_{f=1}^{N} \left( \Gamma_s(t,f) - TF_5 \right)^{2}. $$

$$ TF_7 = \frac{1}{WN\,(TF_6)^{3/2}} \sum_{t=1}^{W} \sum_{f=1}^{N} \left( \Gamma_s(t,f) - TF_5 \right)^{3}. $$

$$ TF_8 = \frac{1}{WN\,(TF_6)^{2}} \sum_{t=1}^{W} \sum_{f=1}^{N} \left( \Gamma_s(t,f) - TF_5 \right)^{4}. $$

2.5.4. Spectral-Based Category

The features in this category are based on adapting some of the frequency-domain spectral features of the EEG signals to the time-frequency domain. In particular, we employ two spectral-based TFFs [31,43,48], namely the flatness of the CWD (TF9) and the flux of the CWD (TF10). These two TFFs are the time-frequency extensions of the spectral flatness and spectral flux in the frequency domain [42,52,55]. The use of spectral-based TFFs enables the quantification of several spectral properties of the EEG signals, which can be used to classify different HMITs. In particular, the flatness of the CWD provides a measure that describes the uniformity of the distribution of the signal energy in the TFP [48]. Moreover, the flux of the CWD quantifies the rate of change of the signal energy in the TFP [48]. In this study, the features TF9 and TF10 are defined as follows [48]:

$$ TF_9 = \frac{\left( \prod_{t=1}^{W} \prod_{f=1}^{N} \left|\Gamma_s(t,f)\right| \right)^{1/WN}}{\frac{1}{WN} \sum_{t=1}^{W} \sum_{f=1}^{N} \left|\Gamma_s(t,f)\right|}. $$

$$ TF_{10} = \sum_{t=1}^{W-l} \sum_{f=1}^{N-k} \left| \Gamma_s(t+l, f+k) - \Gamma_s(t,f) \right|, \quad l = k = 1. $$

2.5.5. Spectral Entropy-Based Category

This category comprises two TFFs, namely the normalized Renyi entropy of the CWD (TF11) and the energy concentration of the CWD (TF12) [31]. The normalized Renyi entropy of the CWD measures the regularity of the distribution of the signal energy in the TFP. In fact, EEG signals that have a uniformly distributed energy in the TFP tend to have larger values of TF11, while signals that have energy concentrated within specific regions in the TFP tend to have smaller values of TF11 [43,56,57]. The energy concentration of the CWD measures the spread of the energy in the TFP. Specifically, EEG signals that have broadly distributed energy across the TFP tend to have larger values of TF12, while signals that have energy concentrated within specific areas in the TFP tend to have smaller values of TF12 [58]. In this study, the features TF11 and TF12 are defined as follows:

$$ TF_{11} = -\frac{1}{2} \log_2 \sum_{t=1}^{W} \sum_{f=1}^{N} \left( \frac{\Gamma_s(t,f)}{WN \, TF_5} \right)^{3}. $$

$$ TF_{12} = \left( \sum_{t=1}^{W} \sum_{f=1}^{N} \left| \Gamma_s(t,f) \right|^{1/2} \right)^{2}. $$
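To make the twelve features concrete, the sketch below computes TF1–TF12 from a CWD matrix. It follows the definitions above; where the published formulas are ambiguous after typesetting (notably the quartile indexing in TF4, the Renyi order in TF11, and the concentration measure in TF12), the code reflects our reading and should be treated as an interpretation rather than the authors' exact implementation.

```python
import numpy as np

def extract_tffs(G):
    """Compute TF1-TF12 from a CWD matrix G of shape (W, N)."""
    W, N = G.shape
    A = np.abs(G) + 1e-12                      # avoid log(0) and division by 0

    tf1 = np.sum(np.log(A))                    # TF1: sum of log-amplitudes
    tf5 = G.mean()                             # TF5: mean
    tf2 = np.mean(np.abs(G - tf5))             # TF2: mean absolute deviation
    tf3 = np.sqrt(np.mean(G ** 2))             # TF3: root mean square
    Gs = np.sort(G, axis=0)                    # sort along time per frequency bin
    q3 = Gs[int(3 * (W + 1) / 4) - 1, :]       # upper-quartile row (our indexing)
    q1 = Gs[int((W + 1) / 4) - 1, :]           # lower-quartile row
    tf4 = np.mean(q3 - q1)                     # TF4: inter-quartile range
    tf6 = np.mean((G - tf5) ** 2)              # TF6: variance
    tf7 = np.mean((G - tf5) ** 3) / tf6 ** 1.5 # TF7: skewness
    tf8 = np.mean((G - tf5) ** 4) / tf6 ** 2   # TF8: kurtosis
    tf9 = np.exp(np.mean(np.log(A))) / A.mean()        # TF9: flatness (geo/arith mean)
    tf10 = np.sum(np.abs(G[1:, 1:] - G[:-1, :-1]))     # TF10: flux with l = k = 1
    p = A / A.sum()                                    # normalized energy map
    tf11 = -0.5 * np.log2(np.sum(p ** 3))              # TF11: Renyi entropy (order 3 assumed)
    tf12 = np.sum(np.sqrt(A)) ** 2                     # TF12: energy concentration
    return np.array([tf1, tf2, tf3, tf4, tf5, tf6,
                     tf7, tf8, tf9, tf10, tf11, tf12])
```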

2.6. Classification of HMITs

As indicated by Edelman et al. [9], EEG signals are characterized by low spatial resolution in the motor cortex regions. Hence, classifying EEG segments that encapsulate different MI tasks is considered challenging, particularly when these tasks are within the same hand. Another challenge is the variability in the duration of the HMITs, where the length of each MI task depends on the complexity of the movement being imagined. Hence, the number of samples associated with different HMITs can vary significantly, which leads to unbalanced data samples across the different HMITs. Therefore, the direct application of a multi-class classifier for classifying the EEG segments into different MI movements within the same hand might lead to limited recognition accuracy [59,60].
To address this limitation, we propose a four-layer hierarchical classification model to classify each EEG segment into one of the eleven HMITs considered in our study. The four layers in our classification model convert the original complex classification task (i.e., classifying an EEG segment into one of the eleven HMITs) into a sequence of simpler classification tasks that are performed at each layer. In particular, the first layer consists of a classification node, namely CN1, that classifies EEG segments into rest segments (A1) and movement segments (IC1), where movement segments are EEG segments that can comprise HMITs from set 2 or set 3 in our collected dataset. Then, the EEG segments of class IC1 are passed on to the second layer to identify whether the movement in each EEG segment belongs to set 2 or set 3. Specifically, the second layer consists of a classification node, denoted as CN2, that classifies each EEG segment of class IC1 into a movement segment that comprises an HMIT from set 2 (IC2) or a movement segment that comprises an HMIT from set 3 (IC3). Then, the EEG segments of classes IC2 and IC3 are passed on to layer 3, which consists of two classification nodes, namely CN3 and CN4. At the third layer, CN3 classifies the EEG segments of class IC2 into one of the three HMITs comprised in set 2, namely small diameter grasping (A2), lateral grasping (A3), and extension-type grasp (A4). Similarly, CN4 classifies EEG segments of class IC3 into wrist-related HMITs (IC4) or finger-related HMITs (IC5). Finally, the EEG segments of classes IC4 and IC5 are passed on to layer 4, which consists of two classification nodes, namely CN5 and CN6. At the fourth layer, CN5 classifies the EEG segments of class IC4 into wrist ulnar/radial deviation (A5) and wrist flexion/extension (A6). Similarly, CN6 classifies the EEG segments of class IC5 into one of the five finger-related HMITs comprised in set 3, namely index finger flexion/extension (A7), middle finger flexion/extension (A8), ring finger flexion/extension (A9), little finger flexion/extension (A10), and thumb flexion/extension (A11). Figure 5 provides a schematic diagram of the proposed four-layer hierarchical classification model.
In this study, the input to the hierarchical classification model is a feature vector that consists of the TFFs extracted from the CWD computed for an EEG segment. Moreover, the classification nodes in the four layers are implemented using support vector machine (SVM) classifiers with a radial basis function (RBF) kernel [61,62]. Previous studies have shown that utilizing SVM classifiers with an RBF kernel can be more effective than generative models for supervised learning problems [63]. Moreover, using the SVM classifier with an RBF kernel can achieve better performance and generalization compared with other state-of-the-art classifiers, such as Naive Bayes, k-nearest neighbors (k-NN), and neural networks [31,64]. Therefore, we realize the classification nodes CN1, CN2, CN4, and CN5 using binary SVM classifiers. The classification nodes CN3 and CN6 are realized using multi-class SVM classifiers. The multi-class SVM classifiers are implemented using a one-against-one scheme [65,66], in which we construct n(n − 1)/2 binary SVM classifiers for each classification node, where n is the number of classes. In particular, for CN3, the number of classes is three, including A2, A3, and A4, whereas the number of classes for CN6 is five, including A7, A8, A9, A10, and A11.
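To make the routing concrete, the following is a minimal sketch of the four-layer model using scikit-learn SVCs. The label strings, the dictionary layout, and the assumption that every node consumes the same TFF vector are our own simplifications; conveniently, scikit-learn's SVC also implements its multi-class mode with the same one-against-one scheme described above.

```python
from sklearn.svm import SVC

# One RBF-kernel SVM per classification node (CN1-CN6). Each node is
# assumed to be trained only on the subset of segments that reaches it
# (e.g., CN2 only on movement segments).
nodes = {name: SVC(kernel="rbf") for name in
         ["CN1", "CN2", "CN3", "CN4", "CN5", "CN6"]}

def predict_hmit(x, nodes):
    """Route one TFF vector x through the four layers.

    Labels: 'A1'..'A11' for tasks, 'IC1'..'IC5' for intermediate classes.
    """
    x = x.reshape(1, -1)
    if nodes["CN1"].predict(x)[0] == "A1":     # layer 1: rest vs movement
        return "A1"
    if nodes["CN2"].predict(x)[0] == "IC2":    # layer 2: set 2 vs set 3
        return nodes["CN3"].predict(x)[0]      # layer 3: A2 / A3 / A4
    if nodes["CN4"].predict(x)[0] == "IC4":    # layer 3: wrist vs finger
        return nodes["CN5"].predict(x)[0]      # layer 4: A5 / A6
    return nodes["CN6"].predict(x)[0]          # layer 4: A7-A11
```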
The performance of the SVM classifier with an RBF kernel depends on the selected values of the RBF kernel parameter (σ) and the regularization parameter (C > 0). To tune these two parameters, we perform a grid-based search [66] along two directions to determine the values of σ and C for each classification node. In the first direction, we vary the value of the parameter σ, while in the second direction we vary the value of the parameter C. Then, the best SVM model is selected such that its parameters maximize the average classification accuracy.
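A sketch of this two-directional search using a standard grid search is shown below. The searched ranges are illustrative, as the paper does not report them; note also that scikit-learn parameterizes the RBF kernel by gamma = 1/(2σ²) rather than by σ directly.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Illustrative grids; the actual searched ranges are not reported.
sigmas = 2.0 ** np.arange(-4, 5)
param_grid = {
    "C": 2.0 ** np.arange(-2, 9),          # regularization direction
    "gamma": 1.0 / (2.0 * sigmas ** 2),    # RBF direction (gamma = 1 / (2 sigma^2))
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10,
                      scoring="accuracy")   # keep the accuracy-maximizing model
# search.fit(X_train, y_train); best_node = search.best_estimator_
```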

2.7. Performance Evaluation Procedures and Metrics

The acquired EEG signals of the intact subjects (DB1) and the amputated subjects (DB2) are used to perform two types of performance evaluation analyses, namely the channel-based analysis and the TFF-based analysis. In the channel-based analysis, we study the effect of selecting different groups of EEG channels on the accuracy of classifying HMITs. In particular, we evaluate the performance of our proposed approach based on its ability to classify the feature vectors extracted from the EEG channels comprised within each of the four channel groups (G1, G2, G3, and G4) presented in Table 2. For the TFF-based analysis, we explore the effect of using each of the five categories of TFFs, namely C1, C2, C3, C4, and C5, on the accuracy of classifying HMITs. For each performance evaluation analysis, we measure the performance of the proposed approach using standard performance evaluation metrics, including the precision (PRC), recall (RCL), F1-score, and accuracy (ACC), which are defined as follows [67,68]:
$$ PRC = \frac{TP}{TP + FP} \times 100\%, $$

$$ RCL = \frac{TP}{TP + FN} \times 100\%, $$

$$ F_1\text{-score} = \frac{2\,(PRC \times RCL)}{PRC + RCL}, $$

$$ ACC = \frac{TP + TN}{TP + TN + FP + FN} \times 100\%, $$
where TP, TN, FP, and FN represent the numbers of true positive cases, true negative cases, false positive cases, and false negative cases, respectively. These performance evaluation metrics are obtained using two types of training procedures, namely the subject-dependent training procedure (SDTP) and the subject-independent training procedure (SITP). In the SDTP, we employ a ten-fold cross-validation procedure [8,69] to construct a hierarchical classification model for each subject, as described in Section 2.6. In particular, we randomly divide the feature vectors associated with the HMITs performed by each subject into 10 folds. Nine folds are used to train the classification nodes at each layer in the constructed classification model, while the remaining fold is used for testing. This procedure is repeated ten times, and the overall performance of each classification node is computed by averaging the results obtained from each repetition. The SDTP measures the ability of the proposed approach to capture the intra-personal variations of the performed HMITs. For the SITP, we employ a leave-one-subject-out cross-validation (LOSO-CV) procedure to evaluate the performance of the proposed approach [31]. This procedure is based on constructing a single hierarchical classification model for all the subjects in each database. Then, the classification nodes at each layer of the constructed classification model are trained using the feature vectors extracted from all subjects except one subject. The feature vectors of the subject that was excluded from the training are used for testing. This procedure is repeated for each subject to guarantee that the feature vectors of each subject are used for testing, and the overall performance is computed by averaging the results obtained from all repetitions. The SITP quantifies the ability of the proposed approach to capture the inter-personal variations of the performed HMITs.
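The two training procedures map directly onto standard cross-validation utilities. The sketch below evaluates a single classification node under both procedures, where X, y, and the subjects array identifying which subject produced each feature vector are assumed NumPy inputs.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, KFold, LeaveOneGroupOut

def evaluate_node(X, y, subjects):
    """SDTP and SITP accuracy for one classification node.

    X: TFF vectors, y: node labels, subjects: subject id of each row.
    SDTP folds one subject's trials into ten folds; SITP holds out
    one whole subject per repetition.
    """
    clf = SVC(kernel="rbf")

    # SDTP: ten-fold CV within each subject, averaged over subjects.
    sdtp = np.mean([
        cross_val_score(clf, X[subjects == s], y[subjects == s],
                        cv=KFold(10, shuffle=True)).mean()
        for s in np.unique(subjects)
    ])

    # SITP: leave-one-subject-out CV over the pooled data.
    sitp = cross_val_score(clf, X, y, groups=subjects,
                           cv=LeaveOneGroupOut()).mean()
    return sdtp, sitp
```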

3. Experimental Results

In this section, we present the performance evaluation results of the proposed approach for the channel-based and TFF-based evaluation analyses obtained for the intact and amputated subjects using the SDTP and SITP.

3.1. Evaluation Results of the Intact Subjects (DB1)

In this section, we evaluate the performance of the proposed approach based on DB1, which includes the EEG signals of the intact subjects. In particular, we provide the performance evaluation results of the channel-based analysis and the TFF-based analysis obtained using the SDTP and the SITP.

3.1.1. Results of the Channel-Based Analysis

Table 3 provides the average PRC, RCL, F1-score, and ACC of the classification nodes for each group of EEG channels, computed using the SDTP and the SITP. In particular, for the SDTP, we extract the twelve TFFs (TF1–TF12) from the EEG signals in each group of EEG channels. Then, for each subject, we construct four hierarchical classification models using the TFFs extracted from the four groups of EEG channels. The classification nodes in each classification model are trained and tested using the ten-fold cross-validation procedure, which is described in Section 2.7. Finally, for each classification node, we report the average values of the PRC, RCL, F1-score, and ACC metrics computed over the eighteen subjects in DB1. For the SITP, we utilize the twelve TFFs extracted from each group of EEG channels to construct a single hierarchical classification model for all the subjects in DB1. The classification nodes in the constructed classification model are trained and tested using the LOSO-CV procedure, described in Section 2.7. Finally, for each classification node, we present the average values of the PRC, RCL, F1-score, and ACC metrics computed over the repetitions of the LOSO-CV procedure.
In Table 3, the results obtained using the SDTP show that the classification nodes achieved average PRC, RCL, F1-score, and ACC values higher than 70% using the various groups of EEG channels. In fact, the lowest PRC, RCL, F1-score, and ACC values of 73.2%, 71.4%, 72.3%, and 73.8%, respectively, are obtained using the TFFs extracted from the EEG channels of G3. Moreover, the highest PRC, RCL, F1-score, and ACC values of 84.4%, 82.9%, 83.6%, and 84.6%, respectively, are achieved using the TFFs extracted from the EEG channels of G1. On the other hand, the results obtained based on the SITP show that the classification nodes achieved average PRC, RCL, F1-score, and ACC values higher than 52% using the different groups of EEG channels, with the lowest PRC, RCL, F1-score, and ACC of 57.4%, 53.0%, 54.9%, and 58.7%, respectively, obtained using the TFFs extracted from the EEG channels of G3, and the highest PRC, RCL, F1-score, and ACC of 64.2%, 62.2%, 63.2%, and 67.8%, respectively, obtained using the TFFs extracted from the EEG channels of G1. In fact, the results obtained using both the SDTP and the SITP are well above the average random classification accuracy, which is defined as the reciprocal of the number of classes, i.e., 9.1%.
To compare the performance of the proposed approach using each group of EEG channels, we have conducted paired t-tests with a significance level of 0.05 to compare the accuracies of the classification nodes obtained using the TFFs extracted from the EEG channels of G1 with the accuracies of the classification nodes achieved based on the TFFs extracted from the other three groups of EEG channels. For the SDTP, the computed p values for G1 versus G2, G1 versus G3, and G1 versus G4 were 0.0007, 0.004, and 0.0014, respectively. Similarly, for the SITP, the computed p values for G1 versus G2, G1 versus G3, and G1 versus G4 were 0.0141, 0.0101, and 0.0072, respectively. The calculated p values, for both the SDTP and the SITP, demonstrate that the performance of the classification nodes achieved based on the TFFs extracted from the EEG channels of G1 significantly outperforms the performance of the classification nodes obtained using the TFFs of the other three groups.
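Each of these comparisons is a paired t-test over matched per-subject accuracies, which reduces to a one-liner with SciPy; accs_g1 and accs_g3 below are hypothetical arrays holding the matched node accuracies for two channel groups.

```python
from scipy.stats import ttest_rel

# Paired t-test on matched per-subject accuracies (assumed inputs):
# accs_g1[i] and accs_g3[i] come from the same subject and node.
t_stat, p_value = ttest_rel(accs_g1, accs_g3)
significant = p_value < 0.05   # significance level used in the paper
```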

3.1.2. Results of the TFF-Based Analysis

Table 4 provides the average PRC, RCL, F1-score, and ACC of the classification nodes computed for each of the five categories of TFFs using both the SDTP and the SITP. The results presented in Table 4 are based on the TFFs extracted from the EEG channels of G1. The selection of G1 is based on the results of the channel-based analysis, described in the previous subsection, in which the classification nodes achieved the best performance when the TFFs were extracted from the EEG channels of G1 in both the SDTP and the SITP. More specifically, for the SDTP, we construct five hierarchical classification models for each subject. Each classification model is constructed using the TFFs comprised within one of the five categories of TFFs. The classification nodes in each classification model are trained and tested using the ten-fold cross-validation procedure, as described in Section 2.7. The reported results of each classification node are the average values of PRC, RCL, F1-score, and ACC computed over the eighteen subjects in DB1. For the SITP, we utilize the TFFs extracted from each of the five categories of TFFs to construct five hierarchical classification models for all subjects in DB1. In particular, each classification model utilizes the TFFs of one of the five categories extracted from the EEG signals of all subjects. The classification nodes in the constructed classification models are trained and tested using the LOSO-CV procedure, as described in Section 2.7. Finally, for each classification node, we present the average values of PRC, RCL, F1-score, and ACC computed over the repetitions of the LOSO-CV procedure.
In Table 4, the obtained results based on the SDTP indicate that the classification nodes achieved average PRC, RCL, F1-score, and ACC values higher than 72% using the different categories of TFFs, with the lowest PRC, RCL, F1-score, and ACC values of 74.4%, 72.9%, 73.6%, and 75.2%, respectively, obtained using the TFFs of the statistical-based category, and the highest PRC, RCL, F1-score, and ACC values of 88.1%, 86.7%, 87.4%, and 88.8%, respectively, achieved using the TFFs of the log-amplitude-based category. On the other hand, the obtained results based on the SITP show that the classification nodes achieved average PRC, RCL, F1-score, and ACC values higher than 51% using the different categories of TFFs, with the lowest PRC, RCL, F1-score, and ACC values of 55.4%, 51.3%, 53.2%, and 55.4%, respectively, obtained using the TFFs of the statistical-based category, and the highest PRC, RCL, F1-score, and ACC values of 81.3%, 79.5%, 80.4%, and 80.8%, respectively, obtained using the TFFs of the log-amplitude-based category. The results of both the SDTP and the SITP are well above the average random classification accuracy, which is equal to 9.1%.
To compare the performance of the proposed approach obtained using each of the five TFF categories, we have conducted paired t-tests with a significance level of 0.05 to compare the obtained accuracies of the classification nodes based on the TFF of C1 with the accuracies of the classification nodes based on the TFFs of the other four categories. For the SDTP, the computed p values for C1 versus C2, C1 versus C3, C1 versus C4, and C1 versus C5 are 0.0042, 0.0015, 0.0032, and 0.0012, respectively. Similarly, for the SITP, the computed p values for C1 versus C2, C1 versus C3, C1 versus C4, and C1 versus C5 are 0.00071, 0.0005, 0.0019, and 0.0074, respectively. The calculated p values, for both the SDTP and the SITP, indicate that the performance of the classification nodes based on the TFF of C1 significantly outperforms the performance of the classification nodes obtained using the TFFs of the other four categories.
Figure 6 shows the average PRC, RCL, and F1-score values obtained by the classification nodes (CN1–CN6) in terms of their ability to classify the eleven HMITs (A1–A11) and the five intermediate classes (IC1–IC5) based on the TFF of C1 for both the SDTP and the SITP.

3.2. Evaluation Results of the Amputated Subjects (DB2)

In this section, we evaluate the performance of the proposed approach based on the acquired EEG signals of the amputated subjects recruited in DB2. In particular, we provide the performance evaluation results of the channel-based analysis and the TFF-based analysis obtained using the SDTP and the SITP.

3.2.1. Results of the Channel-Based Analysis

Table 5 provides the average PRC, RCL, F1-score, and ACC of the classification nodes for each group of EEG channels, computed using the SDTP and the SITP. In particular, for the SDTP, we extract the twelve TFFs (TF1–TF12) from the EEG signals in each group of EEG channels. Then, for each subject, we construct four hierarchical classification models using the TFFs extracted from the four groups of EEG channels. The classification nodes in each classification model are trained and tested using the ten-fold cross-validation procedure. Finally, for each classification node, we report the average values of PRC, RCL, F1-score, and ACC computed over the four subjects in DB2. For the SITP, we utilize the twelve TFFs extracted from each group of EEG channels to construct a single hierarchical classification model for all subjects in DB2. The classification nodes in the constructed classification model are trained and tested using the LOSO-CV procedure. Finally, for each classification node, we present the average values of PRC, RCL, F1-score, and ACC computed over the four repetitions of the LOSO-CV procedure.
In Table 5, the SDTP results indicate that the classification nodes achieved average PRC, RCL, F1-score, and ACC values higher than 79% using the different groups of EEG channels, with the lowest PRC, RCL, F1-score, and ACC values of 80.3%, 79.3%, 79.8%, and 80.2%, respectively, obtained using the TFFs extracted from the EEG channels of G2, and the highest PRC, RCL, F1-score, and ACC values of 84.8%, 83.5%, 84.1%, and 84.8%, respectively, achieved using the TFFs extracted from the EEG channels of G1. On the other hand, the results obtained based on the SITP show that the classification nodes achieved average PRC, RCL, F1-score, and ACC values higher than 61% using the different groups of EEG channels, with the lowest PRC, RCL, F1-score, and ACC of 63.4%, 61.3%, 62.3%, and 63.6%, respectively, obtained using the TFFs extracted from the EEG channels of G2, and the highest PRC, RCL, F1-score, and ACC of 73.0%, 72.0%, 72.5%, and 73.9%, respectively, obtained using the TFFs extracted from the EEG channels of G1. The results reported for both the SDTP and the SITP are well above the average random classification accuracy, which is equal to 9.1%.
To evaluate the performance of the proposed approach using each group of EEG channels, we have conducted paired t-tests with a significance level of 0.05 to compare the accuracies obtained using the TFFs extracted from the EEG channels of G1 with the accuracies of the classification nodes based on the TFFs extracted from the other three groups of EEG channels. For the SDTP, the computed p values for G1 versus G2, G1 versus G3, and G1 versus G4 are 0.0037, 0.0238, and 0.0084, respectively. Similarly, for the SITP, the p values for G1 versus G2, G1 versus G3, and G1 versus G4 are 0.0108, 0.0054, and 0.008, respectively. The calculated p values, for both the SDTP and the SITP, demonstrate that the performance of the classification nodes based on the TFFs extracted from the EEG channels of G1 significantly outperforms the performance of the classification nodes obtained using the TFFs of the other three groups.

3.2.2. Results of the TFF-Based Analysis

Table 6 provides the average PRC, RCL, F1-score, and ACC of the classification nodes computed for each of the five categories of TFFs using both the SDTP and the SITP. The results presented in Table 6 are based on the TFFs extracted from the EEG channels of G1. The selection of G1 is based on the results of the channel-based analysis, described in the previous subsection, in which the classification nodes achieved their best performance when the TFFs were extracted from the EEG channels of G1 in both the SDTP and the SITP. More specifically, for the SDTP, we construct five hierarchical classification models for each subject. Each classification model is constructed using the TFFs comprised within one of the five categories of TFFs. The classification nodes in each classification model are trained and tested using the ten-fold cross-validation procedure. The results reported for each classification node include the average values of the PRC, RCL, F1-score, and ACC computed over the four subjects in DB2. For the SITP, we utilize the TFFs extracted from each of the five categories of TFFs to construct five hierarchical classification models for all subjects in DB2. In particular, each classification model utilizes the TFFs of one of the five categories extracted from the EEG signals of all subjects. The classification nodes in the constructed classification models are trained and tested using the LOSO-CV procedure. Finally, for each classification node, we present the average values of the PRC, RCL, F1-score, and ACC computed over the four repetitions of the LOSO-CV procedure.
The SDTP results reported in Table 6 indicate that the classification nodes achieved average PRC, RCL, F1-score, and ACC values higher than 72% using the different categories of TFFs, with the lowest PRC, RCL, F1-score, and ACC values of 73.5%, 72.6%, 73.1%, and 74.9%, respectively, obtained using the TFFs of the statistical-based category, and the highest PRC, RCL, F1-score, and ACC values of 90.5%, 89.5%, 90.0%, and 90.2%, respectively, obtained using the TFFs of the log-amplitude-based category. On the other hand, the results obtained based on the SITP show that the classification nodes achieved average PRC, RCL, F1-score, and ACC values that are higher than 57% using the different categories of TFFs, with the lowest PRC, RCL, F1-score, and ACC of 60.0%, 57.4%, 58.7%, and 61.0%, respectively, obtained using the TFFs of the statistical-based category, and the highest PRC, RCL, F1-score, and ACC of 87.9%, 86.4%, 87.1%, and 87.8%, respectively, obtained using the TFFs of the log-amplitude-based category. The results achieved using both the SDTP and the SITP are well above the average random classification accuracy.
To evaluate the performance of the proposed approach obtained using the five categories of TFFs, we have conducted paired t-tests with a significance level of 0.05 to compare the accuracies of the classification nodes obtained using the TFF of C1 with the accuracies of the classification nodes based on the TFFs of the other four categories. For the SDTP, the p values computed for C1 versus C2, C1 versus C3, C1 versus C4, and C1 versus C5 are 0.0098, 0.005, 0.0104, and 0.0084, respectively. Similarly, for the SITP, the p values computed for C1 versus C2, C1 versus C3, C1 versus C4, and C1 versus C5 are 0.146, 0.0063, 0.0067, and 0.0089, respectively. The p values calculated for the SDTP and the SITP indicate that the performance of the classification nodes based on the TFF of C1 significantly outperforms the performance of the classification nodes obtained using the TFFs of the other four categories.
Figure 7 shows the average PRC, RCL, and F1-score values obtained by the classification nodes (CN1–CN6) in terms of their ability to classify the eleven HMITs (A1–A11) and the five intermediate classes (IC1–IC5) based on the TFF of C1 for both the SDTP and the SITP.

4. Discussion

This study aims to investigate the feasibility of using the CWD, which enables the extraction of TFFs from the EEG signals, along with a hierarchical classification model to discriminate between eleven HMITs, including rest, basic finger and wrist movements, and grasping and functional movements. The results demonstrate that our proposed approach can classify the eleven HMITs and achieve promising performance in both subject-dependent and subject-independent evaluation scenarios.

4.1. Channel-Based Analyses

In order to study the effect of utilizing different EEG channels on the capability of the proposed approach to classify the eleven HMITs, channel-based analyses were carried out using four groups of EEG channels that overlay the motor cortex regions in the brain. The results of the channel-based analyses for both intact and amputated subjects, which are provided in Table 3 and Table 5 for both the SDTP and the SITP, indicate that our proposed approach achieved higher classification accuracies using the TFFs extracted from the EEG channels of G1 compared to the performance obtained using the TFFs extracted from the other three groups of EEG channels. In fact, G1 comprises EEG channels that cover various motor cortex regions on both sides of the brain, including the midline region, the left and right fronto-central regions, the left and right centro-parietal regions, and the left and right temporal lobe regions, while the other three groups of EEG channels cover only subsets of the regions overlaid by the EEG channels of G1. Hence, the results reported in Table 3 and Table 5 suggest that the electrical activities generated during different MI tasks within the motor cortex regions propagate to various other regions in the brain. This might be attributed to the volume conduction effect on the EEG signals [28], in which the electrical activities generated within a small cortical region are spatially propagated to other regions in the brain, and consequently recorded by the sparsely distributed electrodes on the scalp [6,7,28,70,71].

4.2. TFF-Based Analyses

The TFF-based analyses aimed to investigate the effect of utilizing different categories of TFFs on the classification accuracy of the eleven HMITs. Table 4 and Table 6 show that the best performance of our proposed approach was achieved using the TFF of C1, which significantly outperforms the other four categories of TFFs, for both the SDTP and the SITP. Moreover, the results indicate that the proposed approach achieved higher performance using the TFFs of C4 compared to the TFFs of C2, C3, and C5, for both the SDTP and the SITP. On the other hand, the results obtained based on the TFFs of C2, C3, and C5 vary depending on the training procedure and the EEG database used in the analysis. In addition, the results in Table 3 and Table 4 indicate that the performance of the hierarchical classification framework increased significantly when the TFF of C1 was utilized, in comparison with the performance achieved using all the TFFs extracted from the EEG channels of G1, for both the SDTP and the SITP. Similarly, for the amputated subjects, the results in Table 6 show that the average accuracy of the hierarchical classification framework using the TFF of C1 increased significantly compared with the performance achieved using all the TFFs extracted from the EEG channels of G1, as shown in Table 5, for both the SDTP and the SITP. The results of the TFF-based analyses indicate that the use of more TFFs does not necessarily improve the classification accuracy. This finding can be attributed to the fact that utilizing large sets of TFFs, without applying any selection procedure, can degrade the performance of the classification nodes by exposing them to an extended group of unrelated features. Moreover, the TFF-based analyses suggest that the TFF of C1 provides a low-dimensional descriptor that can capture the intra- and inter-personal variations in the EEG signals associated with different HMITs for both intact and amputated subjects.
Furthermore, the results of the TFF-based analyses indicate that the performance of the proposed approach is significantly degraded under the SITP compared to the SDTP. This degradation is mainly due to the large variations in the EEG signals of the various HMITs across different subjects. Despite this reduction in accuracy, the performance of the proposed approach under the SITP remains significantly higher than the average random classification accuracy, which suggests the feasibility of applying the proposed approach to reduce the number of training sessions required to control neural prosthetic devices.
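The two training procedures can be expressed as two cross-validation schemes. The following scikit-learn sketch contrasts them; the data containers (`X`, `y`, `subjects`), the RBF-SVM configuration, and the five-fold split for the subject-dependent case are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Assumed inputs: X is an (n_segments x n_features) TFF matrix, y holds the
# MI-task labels, and subjects is a NumPy array with one subject id per segment.

def sdtp_score(X, y, subjects, sid):
    """Subject-dependent: train and test within a single subject's data."""
    mask = subjects == sid
    return cross_val_score(SVC(kernel="rbf"), X[mask], y[mask], cv=5).mean()

def sitp_score(X, y, subjects):
    """Subject-independent: leave-one-subject-out across all subjects."""
    logo = LeaveOneGroupOut()
    return cross_val_score(SVC(kernel="rbf"), X, y,
                           groups=subjects, cv=logo).mean()
```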

4.3. Comparison with Previous Approaches

The literature reveals that the majority of existing studies have focused on classifying MI tasks associated with different limbs, such as the feet, the right hand, and the left hand [8]. Despite the high classification accuracies attained for MI tasks performed by different limbs, discriminating between MI tasks within the same hand remains challenging [7,8,9].
Recently, a few studies have investigated the possibility of applying EEG signal analysis to classify different MI tasks and actual movements within the same hand. In this vein, several researchers have investigated the possibility of decoding actual movements performed by each finger of one hand using EEG signals. For example, Liao et al. [6] proposed a pairwise binary classification scheme to discriminate between the flexion/extension movements of each pair of fingers. In particular, EEG signals were acquired using 128 electrodes from eleven right-handed intact subjects while they performed flexion/extension movements of each finger. The acquired EEG signals were processed using a power spectrum decoupling procedure and principal component analysis to extract a set of features, and an SVM classifier was constructed for each pair of fingers based on the extracted features. The training and validation of the classifiers were performed using a subject-dependent scheme; in other words, for each subject, the SVM was trained and validated using only the EEG signals of that subject, without considering the EEG signals of the other subjects. The average classification accuracy computed over all pairs of fingers and all subjects was 77.1%. In comparison with the study presented in [6], our proposed hierarchical classification model incorporates a multi-class classification node, namely CN6, that is responsible for discriminating between the flexion/extension MI tasks of the individual fingers within the same hand. In fact, Liao et al. [6] suggested that the multi-class classification scheme, which is employed in the present study, is more challenging than the pairwise classification approach adopted in their study. One advantage of the multi-class classification scheme over pairwise classification is the capability to control prosthetic hands using brain signals to perform real-world tasks [6]. In terms of classification performance, CN6 of our proposed approach enabled the discrimination between the finger flexion/extension MI tasks of the intact subjects with average classification accuracies of 85.5% and 74.4% for the SDTP and the SITP, respectively, based on the TFFs of C1. Similarly, for the amputated subjects, CN6 achieved average classification accuracies of 88.3% and 85.5% for the SDTP and the SITP, respectively. Therefore, the results of our proposed approach represent an improvement over those reported in [6].
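The structural difference between the two schemes can be summarized in a short sketch; mapping the pairwise scheme to scikit-learn's `OneVsOneClassifier` and the multi-class node to a single SVM is an illustrative simplification of both studies, not their actual code.

```python
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier

# Pairwise scheme in the spirit of Liao et al. [6]: one binary SVM is fitted
# per finger pair, and performance is reported pair by pair.
pairwise_clf = OneVsOneClassifier(SVC(kernel="rbf"))

# Multi-class node in the spirit of CN6: a single classifier that must commit
# to one of the five finger flexion/extension MI tasks per EEG segment.
multiclass_clf = SVC(kernel="rbf", decision_function_shape="ovr")

# pairwise_clf.fit(X_train, y_train); multiclass_clf.fit(X_train, y_train)
# The multi-class node outputs one finger label per segment, which is what a
# prosthetic-hand controller ultimately needs.
```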
In another related study, Quandt et al. [33] proposed a one-versus-all multi-class classification scheme to discriminate between the movements of four fingers: the thumb, index, middle, and little fingers. In particular, 32 EEG electrodes were used to record EEG signals from thirteen intact subjects while they pressed and released a button with each of the four fingers. A digital filter with a passband of 0.15 Hz to 16 Hz was then applied to the EEG signals, and the samples of the filtered signals were used directly as features without employing any feature selection technique. For each subject, four linear SVM classifiers were constructed using the extracted features to discriminate between the movements of the four fingers. The accuracies of classifying the movements of the thumb, index, middle, and little fingers versus all other fingers, computed for each subject individually and averaged over all subjects, were 54%, 42%, 35%, and 43%, respectively, yielding an average classification accuracy of 43.5% over the four finger movements. Compared with the work of Quandt et al. [33], Figure 6a shows that CN6 of our proposed approach was able to discriminate between the flexion/extension MI tasks performed by the index, middle, ring, little, and thumb fingers with average F1-scores of 82.8%, 88.6%, 86.1%, 84.5%, and 85.1%, respectively, using the SDTP. These results indicate that our proposed approach improves on the work presented in [33], taking into consideration that our proposed approach classifies various types of MI tasks besides the finger flexion/extension imagery tasks.
Other groups of researchers have focused on classifying different wrist MI movements within the same hand using EEG signals. In this vein, Vuckovic and Sepulveda [5] developed an EEG-based BCI system to classify four wrist MI movements: flexion, extension, pronation, and supination. Specifically, a pairwise binary classification scheme was used, such that one binary classifier was employed to classify each pair of wrist movements. To evaluate the performance of the BCI system, EEG signals were recorded from ten intact subjects using 64 electrodes. Feature extraction and selection were then performed using the Gabor transform and the Davies-Bouldin index, respectively. For each subject, an Elman recurrent neural network was constructed using the extracted features to classify each pair of wrist imagery movements. The classification results reported in [5] indicate that the true positive rate was as high as 80%. In a related study, Edelman et al. [9] utilized an EEG source imaging (ESI) technique to classify four wrist MI movements of the right hand: flexion, extension, supination, and pronation. The EEG signals were recorded from five intact subjects using 64 EEG channels and were processed using wavelet-based time-frequency analysis and a Mahalanobis distance (MD)-based multi-class classifier to differentiate between the four wrist MI movements. The average classification accuracy was 82.2%.
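For reference, an MD-based multi-class rule of the kind used in [9] can be sketched as follows; the pooled covariance estimate and the regularization term are illustrative choices, not those of the original study.

```python
import numpy as np

class MahalanobisClassifier:
    """Minimal sketch of a Mahalanobis-distance multi-class rule: assign a
    feature vector to the class whose mean is closest in Mahalanobis
    distance under a pooled, lightly regularized covariance estimate."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
        self.inv_cov_ = np.linalg.inv(cov)
        return self

    def predict(self, X):
        # Squared Mahalanobis distance of every sample to every class mean.
        d = np.stack([
            np.einsum("ij,jk,ik->i",
                      X - self.means_[c], self.inv_cov_, X - self.means_[c])
            for c in self.classes_
        ])
        return self.classes_[np.argmin(d, axis=0)]
```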
In comparison with the studies presented in [5,9], Table 4 indicates that, for the intact subjects under the SDTP, our proposed approach outperforms the results reported in [5,9] using any of the five TFF categories. For example, using the TFFs of C3, CN5 was able to discriminate between the wrist flexion/extension and wrist ulnar/radial deviation MI tasks with average classification accuracies of 84.4% and 60.3% for the SDTP and the SITP, respectively. Similarly, using the TFFs of C1, CN5 was able to discriminate between the wrist flexion/extension and wrist ulnar/radial deviation MI tasks with average classification accuracies of 94.9% and 86.9% for the SDTP and the SITP, respectively.
In another study, Yong and Menon [8] proposed an EEG-based BCI system that discriminates between four MI tasks within the same limb: rest, grasp movements, elbow movements, and goal-oriented elbow movements. EEG signals were recorded from twelve intact subjects using 32 electrodes. Several configurations of feature extraction and classification methods were evaluated, and the best classification accuracies were achieved using the logarithmic band-power feature extraction method and an SVM classifier trained with a subject-dependent procedure. The average classification accuracies reported for differentiating rest versus grasp, rest versus elbow, and rest versus goal-oriented elbow movements were 80.5%, 75.1%, and 76.6%, respectively. Moreover, the average three-way classification accuracy in discriminating between rest, grasp, and elbow movements was 56.2%, and between rest, grasp, and goal-oriented elbow movements was 60.7%. In comparison, CN1 of our proposed approach was able to discriminate between the rest configuration of the hand and the ten different HMITs of the intact subjects with average classification accuracies of 85.1% and 76.6% for the SDTP and the SITP, respectively, using the TFFs of C1. Furthermore, using the TFFs of C1, CN3 was able to discriminate between three task-oriented grasp-related MI movements performed by the intact subjects, namely the small diameter grasp, the lateral grasp, and the extension-type grasp, with average classification accuracies of 87.6% and 80.8% for the SDTP and the SITP, respectively. Therefore, our proposed approach improves on the work presented in [8] with respect to the number of HMITs being classified and the ability to generalize to new subjects.
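Taken together, the comparisons above revolve around the hierarchical routing performed by the classification nodes. The sketch below outlines this routing for a single EEG segment; the node interfaces and the intermediate branch labels are hypothetical placeholders, and the intermediate split handled by CN4 is folded into the grasp branch for brevity (cf. Figure 5).

```python
def classify_segment(tff, nodes):
    """Hedged sketch of the four-layer hierarchical routing.

    `nodes` is an assumed dict of fitted classifiers keyed "CN1".."CN6";
    each predict() call consumes the TFF vector of one EEG segment
    reshaped to (1, n_features). Branch labels are placeholders.
    """
    x = tff.reshape(1, -1)
    if nodes["CN1"].predict(x)[0] == "rest":        # layer 1: rest vs. imagery
        return "rest"
    family = nodes["CN2"].predict(x)[0]             # layer 2: coarse MI family
    if family == "grasp_family":
        return nodes["CN3"].predict(x)[0]           # layer 3: task-oriented grasps
    if family == "wrist_family":
        return nodes["CN5"].predict(x)[0]           # layer 4: wrist MI tasks
    return nodes["CN6"].predict(x)[0]               # layer 4: finger MI tasks
```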

5. Conclusions and Future Work

In this paper, we investigated the feasibility of decoding eleven MI tasks within the same hand using EEG signals. In particular, the CWD was employed to construct time-frequency representations of the EEG signals, from which a set of TFFs was extracted. The extracted TFFs were processed using a four-layer hierarchical classification model to classify each EEG segment into one of the eleven MI tasks. Two performance evaluation analyses were conducted to quantify the effect of utilizing different combinations of EEG channels and TFFs on the capability of the proposed approach to decode MI tasks within the same hand. These analyses were applied to the EEG signals obtained from eighteen intact subjects and four amputated subjects. For the intact subjects, the results of the channel-based and TFF-based analyses show that the proposed system achieved average accuracies of 88.8% and 80.8% for the SDTP and the SITP, respectively, using the TFFs of C1 extracted from the EEG channels of G1. Similarly, for the amputated subjects, our proposed system achieved average accuracies of 90.2% and 87.8% for the SDTP and the SITP, respectively, using the TFFs of C1 extracted from the EEG channels of G1. These results demonstrate the feasibility of utilizing the CWD as a time-frequency representation of EEG signals to extract TFFs that can effectively discriminate between MI tasks within the same hand for both intact and amputated subjects. Moreover, the results reported for the SITP indicate that the CWD-based TFFs capture the inter-personal variations in the EEG signals of different HMITs. Such a capability can increase the control dimension of EEG-based BCI systems and thereby enable better control of dexterous prosthetic hands.
Although the main goal of this study was to investigate the possibility of classifying MI tasks within the same hand using EEG signals, the current study did not address the problem of decoding the kinematic information of the MI tasks from the EEG signals. The kinematic information of the hand's joints, such as the finger and wrist joint angles, is important for effectively controlling a prosthetic hand in dexterous tasks. Therefore, we believe that combining the MI task classification considered in the current study with hand kinematic information, which we plan to investigate in the near future, would enable better real-time control of prosthetic hands. Moreover, our future research directions include comparing the performance of our proposed approach in recognizing different MI tasks from EEG signals acquired while the subjects' eyes are closed with the performance achieved while the eyes are open. Such a comparison can help assess the performance of real-world applications in which subjects control prosthetic hands with their eyes open.

Acknowledgments

This work was supported by the Scientific Research Support Fund-Jordan (Grant No. ENG/1/9/2015).

Author Contributions

Rami Alazrai and Yara Baslan conceived and designed the experiments; Rami Alazrai, Yara Baslan, and Nasim Alnuman performed the experiments; Rami Alazrai, Mohammad I. Daoud, and Hisham Alwanni analyzed the data; Rami Alazrai and Mohammad I. Daoud wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain–computer interfaces for communication and control. Clin. Neurophysiol. 2002, 113, 767–791.
2. Birbaumer, N. Breaking the silence: Brain–computer interfaces (BCI) for communication and motor control. Psychophysiology 2006, 43, 517–532.
3. Shih, J.J.; Krusienski, D.J.; Wolpaw, J.R. Brain-computer interfaces in medicine. Mayo Clin. Proc. 2012, 87, 268–279.
4. Sitaram, R.; Caria, A.; Veit, R.; Gaber, T.; Rota, G.; Kuebler, A.; Birbaumer, N. FMRI brain-computer interface: A tool for neuroscientific research and treatment. Comput. Intell. Neurosci. 2007, 2007.
5. Vuckovic, A.; Sepulveda, F. Delta band contribution in cue based single trial classification of real and imaginary wrist movements. Med. Biol. Eng. Comput. 2008, 46, 529–539.
6. Liao, K.; Xiao, R.; Gonzalez, J.; Ding, L. Decoding individual finger movements from one hand using human EEG signals. PLoS ONE 2014, 9, e85192.
7. Ge, S.; Wang, R.; Yu, D. Classification of four-class motor imagery employing single-channel electroencephalography. PLoS ONE 2014, 9, e98019.
8. Yong, X.; Menon, C. EEG classification of different imaginary movements within the same limb. PLoS ONE 2015, 10, e0121896.
9. Edelman, B.J.; Baxter, B.; He, B. EEG source imaging enhances the decoding of complex right-hand motor imagery tasks. IEEE Trans. Biomed. Eng. 2016, 63, 4–14.
10. Grafton, S.T.; Arbib, M.A.; Fadiga, L.; Rizzolatti, G. Localization of grasp representations in humans by positron emission tomography. Exp. Brain Res. 1996, 112, 103–111.
11. Pfurtscheller, G.; Neuper, C.; Muller, G.R.; Obermaier, B.; Krausz, G.; Schlogl, A.; Scherer, R.; Graimann, B.; Keinrath, C.; Skliris, D.; et al. Graz-BCI: State of the art and clinical applications. IEEE Trans. Neural Syst. Rehabil. Eng. 2003, 11, 1–4.
12. Qin, L.; Ding, L.; He, B. Motor imagery classification by means of source analysis for brain–computer interface applications. J. Neural Eng. 2004, 1, 135.
13. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279.
14. Ge, S.; Yang, Q.; Wang, R.; Lin, P.; Gao, J.; Leng, Y.; Yang, Y.; Wang, H. A brain-computer interface based on a few-channel EEG-fNIRS bimodal system. IEEE Access 2017, 5, 208–218.
15. Gatti, R.; Tettamanti, A.; Gough, P.; Riboldi, E.; Marinoni, L.; Buccino, G. Action observation versus motor imagery in learning a complex motor task: A short review of literature and a kinematics study. Neurosci. Lett. 2013, 540, 37–42.
16. Muller-Putz, G.R.; Pfurtscheller, G. Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans. Biomed. Eng. 2008, 55, 361–364.
17. Sellers, E.W.; Vaughan, T.M.; Wolpaw, J.R. A brain-computer interface for long-term independent home use. Amyotroph. Lateral Scler. 2010, 11, 449–455.
18. Donchin, E.; Spencer, K.M.; Wijesinghe, R. The mental prosthesis: Assessing the speed of a P300-based brain-computer interface. IEEE Trans. Rehabil. Eng. 2000, 8, 174–179.
19. Pfurtscheller, G.; Guger, C.; Müller, G.; Krausz, G.; Neuper, C. Brain oscillations control hand orthosis in a tetraplegic. Neurosci. Lett. 2000, 292, 211–214.
20. Tang, Z.; Sun, S.; Zhang, S.; Chen, Y.; Li, C.; Chen, S. A brain-machine interface based on ERD/ERS for an upper-limb exoskeleton control. Sensors 2016, 16, 2050.
21. Scherer, R.; Muller, G.; Neuper, C.; Graimann, B.; Pfurtscheller, G. An asynchronously controlled EEG-based virtual keyboard: Improvement of the spelling rate. IEEE Trans. Biomed. Eng. 2004, 51, 979–984.
22. Jeannerod, M. Mental imagery in the motor context. Neuropsychologia 1995, 33, 1419–1432.
23. Pfurtscheller, G.; Neuper, C. Motor imagery and direct brain-computer communication. Proc. IEEE 2001, 89, 1123–1134.
24. Wolpaw, J.R.; McFarland, D.J. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc. Natl. Acad. Sci. USA 2004, 101, 17849–17854.
25. Royer, A.S.; Doud, A.J.; Rose, M.L.; He, B. EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 581–589.
26. Doud, A.J.; Lucas, J.P.; Pisansky, M.T.; He, B. Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface. PLoS ONE 2011, 6, e26322.
27. LaFleur, K.; Cassady, K.; Doud, A.; Shades, K.; Rogin, E.; He, B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. J. Neural Eng. 2013, 10, 046003.
28. Van den Broek, S.P.; Reinders, F.; Donderwinkel, M.; Peters, M. Volume conduction effects in EEG and MEG. Electroencephalogr. Clin. Neurophysiol. 1998, 106, 522–534.
29. Debbal, S.; Bereksi-Reguig, F. Time-frequency analysis of the first and the second heartbeat sounds. Appl. Math. Comput. 2007, 184, 1041–1052.
30. Boashash, B.; Azemi, G.; O'Toole, J.M. Time-frequency processing of nonstationary signals: Advanced TFD design to aid diagnosis with highlights from medical applications. IEEE Signal Process. Mag. 2013, 30, 108–119.
31. Boashash, B.; Ouelha, S. Automatic signal abnormality detection using time-frequency features and machine learning: A newborn EEG seizure case study. Knowledge-Based Syst. 2016, 106, 38–50.
32. Wang, Y.; Veluvolu, K.C. Time-frequency analysis of non-stationary biological signals with sparse linear regression based Fourier linear combiner. Sensors 2017, 17, 1386.
33. Quandt, F.; Reichert, C.; Hinrichs, H.; Heinze, H.J.; Knight, R.; Rieger, J.W. Single trial discrimination of individual finger movements on one hand: A combined MEG and EEG study. NeuroImage 2012, 59, 3316–3324.
34. Zhou, S.M.; Gan, J.Q.; Sepulveda, F. Classifying mental tasks based on features of higher-order statistics from EEG signals in brain–computer interface. Inf. Sci. 2008, 178, 1629–1640.
35. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci. Data 2014, 1, 140053.
36. Zhou, B.; Wu, X.; Lv, Z.; Zhang, L.; Guo, X. A fully automated trial selection method for optimization of motor imagery based brain-computer interface. PLoS ONE 2016, 11, e0162657.
37. Lan, T.; Erdogmus, D.; Adami, A.; Pavel, M.; Mathan, S. Salient EEG channel selection in brain computer interfaces by mutual information maximization. In Proceedings of the IEEE 27th Annual International Conference of the Engineering in Medicine and Biology Society, Shanghai, China, 1–4 September 2005.
38. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21.
39. Gómez-Herrero, G.; De Clercq, W.; Anwar, H.; Kara, O.; Egiazarian, K.; Van Huffel, S.; Van Paesschen, W. Automatic removal of ocular artifacts in the EEG without an EOG reference channel. In Proceedings of the IEEE 7th Nordic Signal Processing Symposium, Reykjavik, Iceland, 7–9 June 2006.
40. Castiglioni, P. Choi–Williams distribution. In Encyclopedia of Biostatistics; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2005.
41. Tzallas, A.T.; Tsipouras, M.G.; Fotiadis, D.I. Epileptic seizure detection in EEGs using time–frequency analysis. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 703–710.
42. Boubchir, L.; Al-Maadeed, S.; Bouridane, A. On the use of time-frequency features for detecting and classifying epileptic seizure activities in non-stationary EEG signals. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Florence, Italy, 4–9 May 2014.
43. Boashash, B. Time-Frequency Signal Analysis and Processing: A Comprehensive Reference; Academic Press: London, UK, 2015.
44. Cohen, L. Time-frequency distributions-a review. Proc. IEEE 1989, 77, 941–981.
45. Cohen, L. Time-Frequency Analysis; Prentice Hall PTR: Englewood Cliffs, NJ, USA, 1995.
46. Guerrero-Mosquera, C.; Trigueros, A.M.; Franco, J.I.; Navia-Vázquez, Á. New feature extraction approach for epileptic EEG signal detection using time-frequency distributions. Med. Biol. Eng. Comput. 2010, 48, 321–330.
47. Hahn, S.L. Hilbert Transforms in Signal Processing; Artech House: Norwood, MA, USA, 1996.
48. Boashash, B.; Azemi, G.; Khan, N.A. Principles of time–frequency feature extraction for change detection in non-stationary signals: Applications to newborn EEG abnormality detection. Pattern Recognit. 2015, 48, 616–627.
49. Claasen, T.; Mecklenbräuker, W. Time-frequency signal analysis. Philips J. Res. 1980, 35, 372–389.
50. Choi, H.I.; Williams, W.J. Improved time-frequency representation of multicomponent signals using exponential kernels. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 862–871.
51. Swami, A.; Mendel, J.; Nikias, C. Higher-order spectra analysis (HOSA) toolbox. Version 2000, 2. Available online: https://labcit.ligo.caltech.edu/ rana/mat/HOSA/ (accessed on 23 August 2017).
52. Löfhede, J.; Thordstein, M.; Löfgren, N.; Flisberg, A.; Rosa-Zurera, M.; Kjellmer, I.; Lindecrantz, K. Automatic classification of background EEG activity in healthy and sick neonates. J. Neural Eng. 2010, 7, 016007.
53. Greene, B.; Faul, S.; Marnane, W.; Lightbody, G.; Korotchikova, I.; Boylan, G. A comparison of quantitative EEG features for neonatal seizure detection. Clin. Neurophysiol. 2008, 119, 1248–1261.
54. Mitra, J.; Glover, J.R.; Ktonas, P.Y.; Kumar, A.T.; Mukherjee, A.; Karayiannis, N.B.; Frost, J.D., Jr.; Hrachovy, R.A.; Mizrahi, E.M. A multi-stage system for the automated detection of epileptic seizures in neonatal EEG. J. Clin. Neurophysiol. 2009, 26, 218.
55. Hassan, A.R.; Bashar, S.K.; Bhuiyan, M.I.H. On the classification of sleep states by means of statistical and spectral features from single channel electroencephalogram. In Proceedings of the IEEE International Conference on Advances in Computing, Communications and Informatics (ICACCI), Kochi, India, 10–13 August 2015.
56. Acharya, U.R.; Fujita, H.; Sudarshan, V.K.; Bhat, S.; Koh, J.E. Application of entropies for automated diagnosis of epilepsy using EEG signals: A review. Knowledge-Based Syst. 2015, 88, 85–96.
57. Acharya, U.R.; Fujita, H.; Sudarshan, V.K.; Oh, S.L.; Adam, M.; Koh, J.E.; Tan, J.H.; Ghista, D.N.; Martis, R.J.; Chua, C.K.; et al. Automated detection and localization of myocardial infarction using electrocardiogram: A comparative study of different leads. Knowledge-Based Syst. 2016, 99, 146–156.
58. Stanković, L. A measure of some time–frequency distributions concentration. Signal Process. 2001, 81, 621–631.
59. Doyle, S.; Feldman, M.; Tomaszewski, J.; Shih, N.; Madabhushi, A. Cascaded multi-class pairwise classifier (CASCAMPA) for normal, cancerous, and cancer confounder classes in prostate histology. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011.
60. Granitto, P.M.; Rébola, A.; Cerviño, U.; Gasperi, F.; Biasoli, F.; Ceccatto, H.A. Cascade classifiers for multiclass problems. In Proceedings of the 7th Argentine Symposium on Artificial Intelligence (ASAI), Rosario, Argentina, 29–30 August 2005.
61. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
62. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27.
63. Ng, A.Y.; Jordan, M.I. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. Adv. Neural Inf. Process. Syst. 2002, 2, 841–848.
64. Qian, H.; Mao, Y.; Xiang, W.; Wang, Z. Recognition of human activities using SVM multi-class classifier. Pattern Recognit. Lett. 2010, 31, 100–111.
65. Kreßel, U.H.G. Pairwise classification and support vector machines. In Advances in Kernel Methods; MIT Press: Cambridge, MA, USA, 1999; pp. 255–268.
66. Hsu, C.W.; Lin, C.J. A comparison of methods for multiclass support vector machines. IEEE Trans. Neural Netw. 2002, 13, 415–425.
67. Alam, S.S.; Bhuiyan, M.I.H. Detection of seizure and epilepsy using higher order statistics in the EMD domain. IEEE J. Biomed. Health Inf. 2013, 17, 312–318.
68. Alazrai, R.; Momani, M.; Daoud, M.I. Fall detection for elderly from partially observed depth-map video sequences based on view-invariant human activity representation. Appl. Sci. 2017, 7, 316.
69. Qiu, Z.; Allison, B.Z.; Jin, J.; Zhang, Y.; Wang, X.; Li, W.; Cichocki, A. Optimized motor imagery paradigm based on imagining Chinese characters writing movement. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1009–1017.
70. Bradberry, T.J.; Gentili, R.J.; Contreras-Vidal, J.L. Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals. J. Neurosci. 2010, 30, 3432–3437.
71. Makeig, S.; Kothe, C.; Mullen, T.; Bigdely-Shamlo, N.; Zhang, Z.; Kreutz-Delgado, K. Evolving signal processing for brain–computer interfaces. Proc. IEEE 2012, 100, 1567–1584.
Figure 1. Sample images of the visual cues associated with the different HMITs investigated in this study.
Figure 2. Experimental paradigm. A visual cue is displayed on the computer monitor for 3 s. After that, the visual cue disappears and a black screen is displayed on the monitor. During this period of time, the subject starts to imagine performing the movement that was specified by the visual cue until time T s. The value of T varies according to the complexity of the movement being imagined. In particular, for the movements in set 1 and set 3, T is equal to 10 s. For the extension-type grasp movement in set 2, T is equal to 12 s. Finally, for the small diameter grasp and lateral grasp movements in set 2, T is equal to 14 s.
Figure 3. The positions of the EEG electrodes employed in this study arranged according to the 10–20 EEG system.
Figure 4. Illustration of the constructed time-frequency representations of EEG segments. Plots (a–c) represent the WVD of the EEG segments associated with rest, wrist flexion/extension, and lateral grasp, respectively. Plots (d–f) represent the CWD of the EEG segments associated with rest, wrist flexion/extension, and lateral grasp, respectively.
Figure 5. Illustration of the structure of the proposed four-layer hierarchical classification model. Gray nodes represent the classification nodes in each layer. Orange nodes represent the eleven classes of HMITs in our study. Blue nodes represent intermediate classes of EEG segments.
Figure 6. Average PRC, RCL, and F1-score values of each of the eleven HMITs and the five intermediate classes for the intact subjects, obtained based on the TFFs of C1 using (a) the SDTP and (b) the SITP.
Figure 7. Average PRC, RCL, and F1-score values of each of the eleven HMITs and the five intermediate classes for the amputated subjects, obtained based on the TFFs of C1 using (a) the SDTP and (b) the SITP.
Table 1. Detailed information about the amputation associated with each subject in DB2.

| Subject | Handedness | Amputated Hand | Years Since Amputation | Cause of Amputation | Prosthesis Use |
|---|---|---|---|---|---|
| AS1 | Right hand | Left hand | 3.5 | Accident | Cosmetic |
| AS2 | Right hand | Right hand | 1.5 | Accident | None |
| AS3 | Left hand | Left hand | 4 | Accident | Myoelectric |
| AS4 | Right hand | Right hand | 5 | Accident | Cosmetic |
Table 2. The groups of electrodes analyzed in this study.

| Group Name | Comprised Electrodes |
|---|---|
| Broad bilateral (G1) | C3, C4, Cz, P3, P4, Pz, F3, F4, Fz, T7, T8 |
| Left side (G2) | C3, P3, F3, T7 |
| Right side (G3) | C4, P4, F4, T8 |
| Narrow bilateral (G4) | C3, C4, Cz, Pz, Fz |
Table 3. Results of the channel-based analysis obtained using the SDTP and SITP for the intact subjects in DB1. The set of EEG electrodes comprised in each group is provided in Table 2.

| Group | Layer | Node | PRC (SDTP) | RCL (SDTP) | F1-score (SDTP) | ACC (SDTP) | PRC (SITP) | RCL (SITP) | F1-score (SITP) | ACC (SITP) |
|---|---|---|---|---|---|---|---|---|---|---|
| G1 | Layer 1 | CN1 | 81.0 | 78.3 | 79.6 | 82.9 | 74.1 | 71.7 | 72.9 | 76.0 |
| G1 | Layer 2 | CN2 | 83.5 | 82.3 | 82.9 | 83.7 | 66.5 | 65.4 | 65.9 | 72.0 |
| G1 | Layer 3 | CN3 | 86.5 | 86.0 | 86.2 | 85.8 | 58.7 | 58.5 | 58.6 | 60.9 |
| G1 | Layer 3 | CN4 | 86.8 | 83.2 | 85.0 | 87.8 | 71.5 | 63.4 | 67.2 | 76.2 |
| G1 | Layer 4 | CN5 | 91.0 | 90.9 | 91.0 | 90.7 | 68.8 | 68.7 | 68.7 | 69.7 |
| G1 | Layer 4 | CN6 | 77.9 | 76.6 | 77.2 | 76.9 | 45.6 | 45.5 | 45.6 | 51.9 |
| G1 | – | Overall average | 84.4 | 82.9 | 83.6 | 84.6 | 64.2 | 62.2 | 63.2 | 67.8 |
| G2 | Layer 1 | CN1 | 77.4 | 72.1 | 74.7 | 77.2 | 70.3 | 67.0 | 68.6 | 73.1 |
| G2 | Layer 2 | CN2 | 73.3 | 72.3 | 72.8 | 74.4 | 65.3 | 58.4 | 61.6 | 65.8 |
| G2 | Layer 3 | CN3 | 74.2 | 73.7 | 73.9 | 73.6 | 49.3 | 49.4 | 49.4 | 49.3 |
| G2 | Layer 3 | CN4 | 76.7 | 69.0 | 72.6 | 79.1 | 70.1 | 52.5 | 60.1 | 72.8 |
| G2 | Layer 4 | CN5 | 84.1 | 83.3 | 83.7 | 83.7 | 65.5 | 65.5 | 65.5 | 65.5 |
| G2 | Layer 4 | CN6 | 63.7 | 62.6 | 63.2 | 62.9 | 37.6 | 37.0 | 37.3 | 37.8 |
| G2 | – | Overall average | 74.9 | 72.2 | 73.5 | 75.1 | 59.7 | 55.0 | 57.1 | 60.7 |
| G3 | Layer 1 | CN1 | 76.6 | 74.7 | 75.6 | 78.6 | 70.7 | 68.0 | 69.3 | 73.2 |
| G3 | Layer 2 | CN2 | 73.0 | 71.7 | 72.3 | 73.7 | 61.9 | 57.2 | 59.5 | 63.6 |
| G3 | Layer 3 | CN3 | 69.3 | 69.0 | 69.2 | 69.0 | 46.8 | 46.4 | 46.6 | 47.1 |
| G3 | Layer 3 | CN4 | 76.9 | 71.1 | 73.9 | 79.0 | 70.3 | 51.6 | 59.5 | 72.7 |
| G3 | Layer 4 | CN5 | 83.9 | 83.1 | 83.5 | 83.2 | 60.4 | 60.5 | 60.4 | 60.4 |
| G3 | Layer 4 | CN6 | 59.6 | 58.6 | 59.1 | 59.4 | 34.4 | 34.1 | 34.3 | 35.0 |
| G3 | – | Overall average | 73.2 | 71.4 | 72.3 | 73.8 | 57.4 | 53.0 | 54.9 | 58.7 |
| G4 | Layer 1 | CN1 | 75.6 | 72.1 | 73.8 | 76.6 | 67.3 | 64.4 | 65.8 | 70.1 |
| G4 | Layer 2 | CN2 | 73.5 | 73.1 | 73.3 | 74.5 | 64.6 | 57.1 | 60.6 | 65.5 |
| G4 | Layer 3 | CN3 | 71.7 | 71.3 | 71.5 | 70.8 | 49.6 | 49.3 | 49.5 | 50.1 |
| G4 | Layer 3 | CN4 | 78.5 | 70.8 | 74.5 | 80.1 | 70.2 | 52.9 | 60.3 | 71.2 |
| G4 | Layer 4 | CN5 | 83.1 | 82.9 | 83.0 | 82.9 | 69.6 | 69.5 | 69.5 | 66.5 |
| G4 | Layer 4 | CN6 | 62.0 | 61.7 | 61.9 | 61.5 | 36.6 | 36.1 | 36.3 | 37.0 |
| G4 | – | Overall average | 74.1 | 72.0 | 73.0 | 74.4 | 59.6 | 54.9 | 57.0 | 60.1 |
Table 4. Results of the TFF-based analysis obtained using the SDTP and SITP for the intact subjects in DB1.

| Category of TFFs | Layer | Node | PRC (SDTP) | RCL (SDTP) | F1-score (SDTP) | ACC (SDTP) | PRC (SITP) | RCL (SITP) | F1-score (SITP) | ACC (SITP) |
|---|---|---|---|---|---|---|---|---|---|---|
| C1 (log-amplitude-based) | Layer 1 | CN1 | 81.4 | 77.1 | 79.2 | 85.1 | 77.4 | 72.6 | 74.9 | 76.6 |
| C1 (log-amplitude-based) | Layer 2 | CN2 | 87.6 | 86.8 | 87.2 | 87.8 | 82.6 | 81.8 | 82.2 | 81.7 |
| C1 (log-amplitude-based) | Layer 3 | CN3 | 88.4 | 88.1 | 88.3 | 87.6 | 81.9 | 80.8 | 81.3 | 80.8 |
| C1 (log-amplitude-based) | Layer 3 | CN4 | 91.0 | 87.8 | 89.3 | 91.7 | 83.0 | 80.6 | 81.8 | 84.6 |
| C1 (log-amplitude-based) | Layer 4 | CN5 | 95.1 | 94.7 | 94.9 | 94.9 | 86.9 | 85.9 | 86.3 | 86.9 |
| C1 (log-amplitude-based) | Layer 4 | CN6 | 85.0 | 86.0 | 85.5 | 85.5 | 76.2 | 75.1 | 75.6 | 74.4 |
| C1 (log-amplitude-based) | – | Overall average | 88.1 | 86.7 | 87.4 | 88.8 | 81.3 | 79.5 | 80.4 | 80.8 |
| C2 (amplitude-based) | Layer 1 | CN1 | 78.8 | 76.3 | 77.5 | 80.5 | 72.7 | 69.2 | 70.9 | 70.0 |
| C2 (amplitude-based) | Layer 2 | CN2 | 79.6 | 78.3 | 78.9 | 80.2 | 72.2 | 70.0 | 71.1 | 62.1 |
| C2 (amplitude-based) | Layer 3 | CN3 | 81.3 | 80.7 | 81.0 | 80.3 | 67.0 | 65.3 | 66.1 | 61.2 |
| C2 (amplitude-based) | Layer 3 | CN4 | 84.9 | 80.4 | 82.6 | 86.9 | 71.9 | 62.6 | 66.9 | 70.9 |
| C2 (amplitude-based) | Layer 4 | CN5 | 89.6 | 89.9 | 89.7 | 89.8 | 71.7 | 70.4 | 71.1 | 68.4 |
| C2 (amplitude-based) | Layer 4 | CN6 | 71.2 | 71.0 | 71.1 | 71.5 | 59.8 | 56.2 | 57.9 | 53.4 |
| C2 (amplitude-based) | – | Overall average | 80.9 | 79.4 | 80.1 | 81.5 | 69.2 | 65.6 | 67.4 | 64.3 |
| C3 (statistical-based) | Layer 1 | CN1 | 76.4 | 73.2 | 74.8 | 77.6 | 67.2 | 62.7 | 64.9 | 65.9 |
| C3 (statistical-based) | Layer 2 | CN2 | 74.2 | 72.9 | 73.5 | 75.2 | 59.8 | 55.1 | 57.3 | 56.5 |
| C3 (statistical-based) | Layer 3 | CN3 | 72.7 | 72.6 | 72.6 | 72.5 | 46.3 | 46.0 | 46.2 | 51.5 |
| C3 (statistical-based) | Layer 3 | CN4 | 75.7 | 71.6 | 73.6 | 79.3 | 65.8 | 52.4 | 58.3 | 57.7 |
| C3 (statistical-based) | Layer 4 | CN5 | 84.4 | 85.0 | 84.7 | 84.4 | 61.4 | 61.4 | 61.4 | 60.3 |
| C3 (statistical-based) | Layer 4 | CN6 | 63.1 | 62.2 | 62.7 | 62.5 | 31.7 | 30.3 | 31.0 | 40.8 |
| C3 (statistical-based) | – | Overall average | 74.4 | 72.9 | 73.6 | 75.2 | 55.4 | 51.3 | 53.2 | 55.4 |
| C4 (spectral-based) | Layer 1 | CN1 | 79.5 | 80.3 | 79.9 | 83.1 | 72.9 | 70.1 | 71.5 | 72.3 |
| C4 (spectral-based) | Layer 2 | CN2 | 85.4 | 85.0 | 85.2 | 86.0 | 73.5 | 72.6 | 73.0 | 71.1 |
| C4 (spectral-based) | Layer 3 | CN3 | 87.3 | 87.3 | 87.3 | 86.8 | 69.6 | 69.4 | 69.5 | 66.1 |
| C4 (spectral-based) | Layer 3 | CN4 | 89.1 | 87.7 | 88.4 | 90.7 | 72.3 | 67.4 | 69.8 | 72.6 |
| C4 (spectral-based) | Layer 4 | CN5 | 91.3 | 90.6 | 91.0 | 92.8 | 80.4 | 80.3 | 80.3 | 79.1 |
| C4 (spectral-based) | Layer 4 | CN6 | 81.9 | 82.4 | 82.1 | 82.4 | 60.3 | 59.4 | 59.9 | 57.8 |
| C4 (spectral-based) | – | Overall average | 85.7 | 85.5 | 85.6 | 87.0 | 71.5 | 69.9 | 70.7 | 69.8 |
| C5 (spectral entropy-based) | Layer 1 | CN1 | 78.0 | 76.5 | 77.2 | 80.1 | 70.3 | 65.8 | 68.0 | 69.6 |
| C5 (spectral entropy-based) | Layer 2 | CN2 | 80.3 | 79.7 | 80.0 | 81.1 | 63.6 | 60.0 | 61.7 | 63.0 |
| C5 (spectral entropy-based) | Layer 3 | CN3 | 81.7 | 81.3 | 81.5 | 81.1 | 56.0 | 56.0 | 56.0 | 55.4 |
| C5 (spectral entropy-based) | Layer 3 | CN4 | 84.5 | 81.8 | 83.1 | 86.6 | 63.1 | 56.7 | 59.7 | 76.5 |
| C5 (spectral entropy-based) | Layer 4 | CN5 | 89.4 | 89.8 | 89.6 | 89.6 | 69.1 | 69.1 | 69.1 | 70.5 |
| C5 (spectral entropy-based) | Layer 4 | CN6 | 73.5 | 74.6 | 74.0 | 73.9 | 37.7 | 37.3 | 37.5 | 40.5 |
| C5 (spectral entropy-based) | – | Overall average | 81.2 | 80.6 | 80.9 | 82.1 | 60.0 | 57.5 | 58.7 | 62.6 |
Table 5. Results of the channel-based analysis obtained using the SDTP and SITP for the amputated subjects in DB2.

| Group | Layer | Node | PRC (SDTP) | RCL (SDTP) | F1-score (SDTP) | ACC (SDTP) | PRC (SITP) | RCL (SITP) | F1-score (SITP) | ACC (SITP) |
|---|---|---|---|---|---|---|---|---|---|---|
| G1 | Layer 1 | CN1 | 79.7 | 77.3 | 78.5 | 80.5 | 74.3 | 72.7 | 73.5 | 75.1 |
| G1 | Layer 2 | CN2 | 81.5 | 79.3 | 80.4 | 81.5 | 69.8 | 68.0 | 68.9 | 70.8 |
| G1 | Layer 3 | CN3 | 85.2 | 84.6 | 84.9 | 84.6 | 76.8 | 76.5 | 76.6 | 76.4 |
| G1 | Layer 3 | CN4 | 88.8 | 86.0 | 87.3 | 89.1 | 77.0 | 74.9 | 75.9 | 81.7 |
| G1 | Layer 4 | CN5 | 92.1 | 92.3 | 92.2 | 92.2 | 82.7 | 82.3 | 82.5 | 82.0 |
| G1 | Layer 4 | CN6 | 81.3 | 81.4 | 81.4 | 81.2 | 57.4 | 57.6 | 57.5 | 57.5 |
| G1 | – | Overall average | 84.8 | 83.5 | 84.1 | 84.8 | 73.0 | 72.0 | 72.5 | 73.9 |
| G2 | Layer 1 | CN1 | 76.7 | 74.9 | 75.8 | 77.7 | 66.6 | 64.9 | 65.8 | 71.4 |
| G2 | Layer 2 | CN2 | 77.5 | 76.8 | 77.2 | 78.4 | 65.0 | 58.9 | 61.8 | 64.0 |
| G2 | Layer 3 | CN3 | 83.9 | 81.7 | 82.7 | 81.2 | 58.0 | 57.8 | 57.9 | 58.1 |
| G2 | Layer 3 | CN4 | 86.5 | 83.8 | 85.1 | 85.3 | 67.6 | 63.9 | 65.7 | 65.0 |
| G2 | Layer 4 | CN5 | 85.0 | 85.8 | 85.4 | 84.7 | 77.7 | 78.1 | 77.9 | 77.9 |
| G2 | Layer 4 | CN6 | 72.1 | 72.6 | 72.3 | 73.5 | 45.7 | 44.4 | 45.0 | 45.6 |
| G2 | – | Overall average | 80.3 | 79.3 | 79.8 | 80.2 | 63.4 | 61.3 | 62.3 | 63.6 |
| G3 | Layer 1 | CN1 | 77.0 | 75.9 | 76.5 | 78.7 | 68.4 | 67.3 | 67.8 | 70.0 |
| G3 | Layer 2 | CN2 | 79.7 | 78.3 | 79.0 | 80.4 | 66.7 | 63.8 | 65.2 | 67.6 |
| G3 | Layer 3 | CN3 | 85.2 | 83.3 | 84.3 | 83.8 | 62.2 | 62.4 | 62.3 | 62.2 |
| G3 | Layer 3 | CN4 | 85.5 | 84.4 | 85.0 | 85.2 | 70.0 | 59.0 | 64.0 | 67.6 |
| G3 | Layer 4 | CN5 | 87.4 | 86.3 | 86.9 | 85.0 | 75.5 | 75.4 | 75.5 | 75.4 |
| G3 | Layer 4 | CN6 | 76.4 | 76.8 | 76.6 | 76.4 | 46.5 | 46.1 | 46.3 | 46.9 |
| G3 | – | Overall average | 81.9 | 80.9 | 81.4 | 81.6 | 64.9 | 62.3 | 63.5 | 64.9 |
| G4 | Layer 1 | CN1 | 76.7 | 74.4 | 75.6 | 75.2 | 66.7 | 62.0 | 64.3 | 67.6 |
| G4 | Layer 2 | CN2 | 80.7 | 79.1 | 79.9 | 80.7 | 66.2 | 62.5 | 64.3 | 67.7 |
| G4 | Layer 3 | CN3 | 84.3 | 81.3 | 82.8 | 82.7 | 69.8 | 69.9 | 69.8 | 67.0 |
| G4 | Layer 3 | CN4 | 87.9 | 86.8 | 87.3 | 87.3 | 75.5 | 68.3 | 71.7 | 79.8 |
| G4 | Layer 4 | CN5 | 90.5 | 90.7 | 90.6 | 89.1 | 70.6 | 70.3 | 70.5 | 70.5 |
| G4 | Layer 4 | CN6 | 78.1 | 77.1 | 77.6 | 77.7 | 53.0 | 52.1 | 52.5 | 52.2 |
| G4 | – | Overall average | 83.0 | 81.6 | 82.3 | 82.1 | 67.0 | 64.2 | 65.5 | 67.5 |
Table 6. Results of the TFF-based analysis obtained using the SDTP and SITP for the amputated subjects in DB2.

| Category of TFFs | Layer | Node | PRC (SDTP) | RCL (SDTP) | F1-score (SDTP) | ACC (SDTP) | PRC (SITP) | RCL (SITP) | F1-score (SITP) | ACC (SITP) |
|---|---|---|---|---|---|---|---|---|---|---|
| C1 (log-amplitude-based) | Layer 1 | CN1 | 79.3 | 78.0 | 78.6 | 77.8 | 76.5 | 74.8 | 75.6 | 77.6 |
| C1 (log-amplitude-based) | Layer 2 | CN2 | 90.7 | 88.6 | 89.7 | 90.0 | 88.8 | 88.4 | 88.6 | 89.1 |
| C1 (log-amplitude-based) | Layer 3 | CN3 | 93.0 | 93.0 | 93.0 | 93.1 | 90.7 | 90.5 | 90.6 | 90.6 |
| C1 (log-amplitude-based) | Layer 3 | CN4 | 93.6 | 91.7 | 92.7 | 94.6 | 92.4 | 85.5 | 88.8 | 90.5 |
| C1 (log-amplitude-based) | Layer 4 | CN5 | 97.4 | 97.3 | 97.4 | 97.4 | 93.3 | 93.5 | 93.4 | 93.4 |
| C1 (log-amplitude-based) | Layer 4 | CN6 | 89.3 | 88.4 | 88.8 | 88.3 | 86.0 | 85.6 | 85.8 | 85.5 |
| C1 (log-amplitude-based) | – | Overall average | 90.5 | 89.5 | 90.0 | 90.2 | 87.9 | 86.4 | 87.1 | 87.8 |
| C2 (amplitude-based) | Layer 1 | CN1 | 72.3 | 71.6 | 72.0 | 74.8 | 71.8 | 69.0 | 70.4 | 74.0 |
| C2 (amplitude-based) | Layer 2 | CN2 | 84.7 | 82.3 | 83.5 | 84.0 | 77.6 | 73.9 | 75.7 | 76.7 |
| C2 (amplitude-based) | Layer 3 | CN3 | 84.8 | 83.4 | 84.1 | 83.8 | 73.6 | 72.8 | 73.2 | 73.0 |
| C2 (amplitude-based) | Layer 3 | CN4 | 91.3 | 86.8 | 89.0 | 90.7 | 87.1 | 78.5 | 82.6 | 85.7 |
| C2 (amplitude-based) | Layer 4 | CN5 | 85.3 | 84.9 | 85.1 | 84.9 | 85.4 | 86.2 | 85.8 | 85.2 |
| C2 (amplitude-based) | Layer 4 | CN6 | 72.7 | 72.8 | 72.8 | 72.3 | 62.9 | 61.7 | 62.3 | 61.6 |
| C2 (amplitude-based) | – | Overall average | 81.8 | 80.3 | 81.1 | 81.7 | 76.4 | 73.7 | 75.0 | 76.0 |
| C3 (statistical-based) | Layer 1 | CN1 | 71.2 | 68.7 | 69.9 | 73.7 | 67.0 | 63.1 | 65.0 | 68.2 |
| C3 (statistical-based) | Layer 2 | CN2 | 72.1 | 70.6 | 71.4 | 73.6 | 61.7 | 57.4 | 59.4 | 63.0 |
| C3 (statistical-based) | Layer 3 | CN3 | 74.4 | 74.0 | 74.2 | 74.6 | 52.6 | 51.4 | 52.0 | 51.3 |
| C3 (statistical-based) | Layer 3 | CN4 | 78.7 | 78.6 | 78.7 | 82.9 | 67.7 | 63.3 | 65.4 | 74.0 |
| C3 (statistical-based) | Layer 4 | CN5 | 83.9 | 83.4 | 83.6 | 83.9 | 72.5 | 73.0 | 72.8 | 72.1 |
| C3 (statistical-based) | Layer 4 | CN6 | 60.8 | 60.3 | 60.5 | 60.5 | 38.5 | 36.1 | 37.3 | 37.1 |
| C3 (statistical-based) | – | Overall average | 73.5 | 72.6 | 73.1 | 74.9 | 60.0 | 57.4 | 58.7 | 61.0 |
| C4 (spectral-based) | Layer 1 | CN1 | 76.4 | 75.1 | 75.7 | 75.6 | 74.4 | 70.8 | 72.6 | 74.7 |
| C4 (spectral-based) | Layer 2 | CN2 | 85.6 | 84.5 | 85.0 | 85.5 | 77.8 | 76.8 | 77.3 | 78.3 |
| C4 (spectral-based) | Layer 3 | CN3 | 92.1 | 91.5 | 91.8 | 91.9 | 83.8 | 82.9 | 83.3 | 82.8 |
| C4 (spectral-based) | Layer 3 | CN4 | 92.7 | 91.8 | 92.3 | 93.1 | 85.1 | 76.9 | 80.8 | 85.0 |
| C4 (spectral-based) | Layer 4 | CN5 | 92.7 | 92.7 | 92.7 | 92.8 | 87.4 | 87.6 | 87.5 | 87.7 |
| C4 (spectral-based) | Layer 4 | CN6 | 87.5 | 86.5 | 87.0 | 86.9 | 70.2 | 69.9 | 70.1 | 70.4 |
| C4 (spectral-based) | – | Overall average | 87.8 | 87.0 | 87.4 | 87.6 | 79.8 | 77.5 | 78.6 | 79.8 |
| C5 (spectral entropy-based) | Layer 1 | CN1 | 75.4 | 72.3 | 73.8 | 77.5 | 69.1 | 66.7 | 67.9 | 71.8 |
| C5 (spectral entropy-based) | Layer 2 | CN2 | 76.7 | 75.4 | 76.1 | 77.6 | 65.4 | 63.0 | 64.2 | 66.4 |
| C5 (spectral entropy-based) | Layer 3 | CN3 | 86.3 | 85.8 | 86.1 | 86.4 | 67.9 | 66.9 | 67.4 | 67.8 |
| C5 (spectral entropy-based) | Layer 3 | CN4 | 87.4 | 84.5 | 85.9 | 88.6 | 76.0 | 73.3 | 74.6 | 79.3 |
| C5 (spectral entropy-based) | Layer 4 | CN5 | 88.8 | 89.3 | 89.0 | 88.6 | 82.4 | 82.3 | 82.4 | 82.8 |
| C5 (spectral entropy-based) | Layer 4 | CN6 | 77.2 | 76.7 | 77.0 | 76.4 | 51.3 | 51.0 | 51.2 | 51.3 |
| C5 (spectral entropy-based) | – | Overall average | 82.0 | 80.7 | 81.3 | 82.5 | 68.7 | 67.2 | 67.9 | 69.9 |
