Article

EEG Emotion Recognition Network Based on Attention and Spatiotemporal Convolution

National Engineering Research Center of Educational Big Data, Central China Normal University, Wuhan 430079, China
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(11), 3464; https://doi.org/10.3390/s24113464
Submission received: 22 April 2024 / Revised: 24 May 2024 / Accepted: 26 May 2024 / Published: 27 May 2024
(This article belongs to the Section Biosensors)

Abstract

Human emotions are complex psychological and physiological responses to external stimuli. Correctly identifying and providing feedback on emotions is an important goal in human–computer interaction research. Compared to facial expressions, speech, or other physiological signals, using electroencephalogram (EEG) signals for the task of emotion recognition has advantages in terms of authenticity, objectivity, and high reliability; thus, it is attracting increasing attention from researchers. However, the current methods have significant room for improvement in terms of the combination of information exchange between different brain regions and time–frequency feature extraction. Therefore, this paper proposes an EEG emotion recognition network, namely, self-organized graph pseudo-3D convolution (SOGPCN), based on attention and spatiotemporal convolution. Unlike previous methods that directly construct graph structures for brain channels, the proposed SOGPCN method considers that the spatial relationships between electrodes in each frequency band differ. First, a self-organizing map is constructed for each channel in each frequency band to obtain the 10 most relevant channels to the current channel, and graph convolution is employed to capture the spatial relationships between all channels in the self-organizing map constructed for each channel in each frequency band. Then, pseudo-three-dimensional convolution combined with partial dot product attention is implemented to extract the temporal features of the EEG sequence. Finally, LSTM is employed to learn the contextual information between adjacent time-series data. Subject-dependent and subject-independent experiments are conducted on the SEED dataset to evaluate the performance of the proposed SOGPCN method, which achieves recognition accuracies of 95.26% and 94.22%, respectively, indicating that the proposed method outperforms several baseline methods.

1. Introduction

Human emotions are complex psychological and physiological responses, which are natural reactions to external stimuli that can cause various physiological changes in the body, e.g., accelerated heartbeat and sweating. Emotions play a crucial role in human cognition, especially in rational decision-making processes, perception, and interpersonal communication.
Currently, emotion recognition can primarily be divided into two categories. The first approach is the recognition of non-physiological signals, e.g., facial expressions, voice tone, and body posture. However, these signals can be controlled artificially through disguise and other means; thus, it is difficult to identify true emotional states accurately. The second approach employs physiological signals, e.g., EEG, electrooculogram, electrocardiogram, and electromyography signals, to recognize human emotions [1,2,3]. Physiological signals are difficult to disguise and can reflect human emotional states accurately; thus, such signals can be used to obtain more objective and realistic results. Among these physiological signals, EEG signals are spatially discrete nonstationary random signals generated by the central nervous system that can record changes in scalp potential directly. Compared with other physiological signals, EEG signals can more accurately and reliably reflect a person’s emotional state [4].
The first challenge in emotion recognition is determining how to categorize the emotions. There are two main types of emotion quantification models, i.e., discrete models and continuous models. Discrete models consider complex emotions as combinations of a limited number of basic emotions. For example, renowned psychologist Ekman proposed six basic emotion categories, i.e., anger, disgust, fear, happiness, sadness, and surprise, which have been widely accepted in the emotion recognition field [5]. However, as emotional research deepens, it is recognized that emotions are a continuous process. Thus, a dimensional model has been proposed, and this model divides the emotional space into two, three, or more dimensions based on cognitive evaluations to describe human emotions in a continuous form. Different emotional states are distributed in different positions in the dimensional space according to their attributes. The most widely used dimensional model is the valence arousal (VA) two-dimensional (2D) spatial model [6]. Here, the valence dimension reflects the positive or negative degree of emotions, transitioning from negative to positive from low to high, and the arousal dimension reflects the intensity of the emotions. In addition, there is a mapping relationship between the discrete model and the dimensional model. The distribution of five emotions, i.e., excitement, anxiety, depression, fatigue, and satisfaction, in the VA model is shown in Figure 1.
The input forms of EEG emotion recognition models can generally be divided into four categories, i.e., manually designed traditional features, raw or preprocessed signals, encoded image features, and other features.
Currently, most studies employ manually designed EEG features as inputs to the model, and these features can be categorized into three groups, i.e., time domain, frequency domain, and time–frequency domain features.
Typically, EEG signals are collected in the time domain, which makes time-domain features the most intuitive and easily obtainable. The time-domain characteristics of EEG signals have been studied extensively in EEG research. Representative examples of commonly used time-domain features include event-related potentials, signal statistics, Hjorth parameter features, and fractal dimension features. For example, Kashihara obtained event-related potentials by stimulating subjects and used statistical features, e.g., the signal mean and standard deviation, as EEG features [7]. Tripathi et al. extracted features, e.g., skewness and kurtosis, from EEG signals in the DEAP dataset and conducted sentiment recognition research using deep neural networks (DNNs) and CNNs in both the valence and arousal dimensions, achieving good recognition results [8].
Frequency domain features involve converting the original EEG signal from the time domain to the frequency domain and extracting relevant characteristics. Frequency domain analysis typically divides the EEG signals into the delta frequency band (1–3 Hz), theta frequency band (4–7 Hz), alpha frequency band (8–13 Hz), beta frequency band (14–30 Hz), and gamma frequency band (31–50 Hz) for feature extraction. Common frequency domain features include power spectral density and differential entropy (DE). For example, Al-Nafjan et al. extracted power spectral density features from EEG signals and employed DNNs to classify emotions [9]. In addition, Li et al. employed short-time Fourier transform (STFT) for time–frequency conversion, calculated power spectral density features and facial expression features in the theta, alpha, beta, and gamma frequency bands, fused them, and then used a long short-term memory network for emotion recognition, achieving good recognition results [10].
Time–frequency characteristics involve segmenting signals with sliding windows and performing time–frequency domain signal transformations on each sub-signal segment. This method combines the advantages of time-domain and frequency-domain analyses to improve the processing ability of unstable signals. Commonly used time–frequency analysis methods include STFT, wavelet transform (WT), and Hilbert Huang transform (HHT). Chen et al. improved classification and recognition accuracy by employing a time–frequency domain sentiment feature analysis method based on the reconstruction of EEG signal sources [2].
Early recognition algorithms were primarily based on traditional machine learning methods, e.g., support vector machines (SVM), k-nearest neighbors, random forests, and naive Bayes. However, these methods frequently involve using manually extracted features as inputs, which is a time-consuming and labor-intensive process that can result in information loss. In addition, EEG signals are susceptible to noise interference during the signal acquisition process. EEG signals also have a low signal-to-noise ratio and exhibit time-based asymmetry and instability. These unique features present challenges for traditional machine learning methods that are heavily dependent on manual feature extraction processes and prior knowledge.
Given the remarkable success of deep learning methods in various recognition tasks, researchers have begun utilizing various deep learning frameworks for the task of EEG-based emotion recognition. Compared with traditional machine learning methods, deep learning techniques offer stronger capabilities in terms of describing and fitting problems. Current research efforts are focused on investigating the impact of different brain functional regions, frequency band combinations, and time-domain features on emotion classification performance. In addition, neurological research has revealed complex functional connections among different brain regions during emotion generation, which has prompted researchers to investigate the relationships between these regions. For example, Ding et al. introduced the local-global-graph representation learning network (LGGNet) for brain–computer interfaces, which employs both local and global graph filtering layers to gain insight into the brain activity within and between different functional regions [11]. Experimental results indicated emotional asymmetry and abundant emotional information in the frontal lobe. Subsequently, Ding et al. proposed a multiscale convolutional neural network (CNN) called TSception, which effectively captures the temporal dynamics and spatial asymmetry of EEG signals by learning discriminative representations in both the time and channel dimensions [12]. The experimental findings suggested that the right hemisphere of the brain plays a unique role in emotional processes. In addition, Li et al. proposed a self-organizing graph neural network (SOGNN) for cross-subject EEG emotion recognition [13]. This SOGNN comprises three convolutional pooling blocks, three self-organizing layers, three graph convolutional layers, a fully connected layer, and an output layer. The experimental results demonstrate the effectiveness of models with sparse neighbor matrices. Zhong et al. proposed a regularized graph neural network (RGNN) for emotion recognition based on electroencephalography [14]. This model considers the biological topology between different brain regions to capture local and global relationships among different EEG signal channels, and the experimental results demonstrate the importance of global connectivity when modeling differential asymmetry in electroencephalography. Wang et al. proposed a feature fusion method that can effectively reflect emotional states [15]. This method is characterized by multichannel weighted multiscale permutation entropy, which considers the time–frequency and spatial information of EEG signals and eliminates the inherent volume effect of EEG. Many previous studies have investigated the relationship between EEG frequency bands and emotion types. For example, Zhu et al. proposed an EEG signal emotion classification network based on multichannel frequency band feature attention fusion, which focuses on the relationship between frequency bands, channels, and time-series features [16]. The experimental results indicated that the proposed attention fusion unit generally performs better for larger frequency band combinations. Li et al. proposed a spatial frequency convolutional self-attention network (SFCSAN) that combines spatial and frequency domain feature learning from EEG signals [17]. This model employs intraband self-attention to learn the frequency information for each frequency band, and interband mapping further maps the attention representation to learn supplementary frequency information.
Because EEG is a temporal signal, some researchers have found that the spatial information of multiple electrodes within a time slice and the contextual dependence of EEG signals are crucial for effective emotion recognition. For example, Tao et al. proposed an attention-based convolutional recurrent neural network (ACRNN) that uses a channel attention mechanism to allocate weights adaptively to different channels and uses a CNN to extract the spatial information of the encoded EEG signal [18]. The results demonstrate that combining channel attention modules with extended self-attention allows the network to utilize more discriminative information for EEG-based emotion recognition. Zhang et al. proposed a spatiotemporal recurrent neural network (STRNN) that integrates the spatiotemporal feature learning of signal sources into a unified spatiotemporal dependency model [35]. The STRNN can learn the spatial correlations of multiple electrode or image contexts as well as long-term memory information in time series.
Based on the above research, it has been determined that both spatial correlation and temporal contextual information play a crucial role in analyzing emotions using EEG signals. In line with the relevant research, we propose an EEG emotion recognition network that incorporates attention and spatiotemporal convolution. The spatial relationships between electrodes vary across frequency bands; thus, the proposed method first constructs a self-organizing graph for each channel in each frequency band. This graph identifies the 10 most relevant channels to the current channel and employs graph convolution to capture the spatial relationships between channels within each frequency band. Then, the proposed method employs pseudo-3D convolution and partial dot product attention to extract temporal features. Finally, LSTM is employed to learn the contextual information between adjacent time series. To evaluate the performance of the proposed SOGPCN method, we compare it with several state-of-the-art deep and nondeep methods in the BCI field, including SVM, TCA, SA, DGCNN, DAN, BiDANN-S, BiHDM, RGNN, and other methods on the SEED dataset.
The primary contributions of this study are summarized as follows.
1. To obtain the spatial features of the brain and make the spatial relationships of each frequency band relatively independent, we construct a self-organizing map for each channel within each frequency band. This map retains the 10 most relevant channels for each channel, and then we extract the spatial relationships between all electrodes in each frequency band through graph convolution.
2. To reduce model complexity while maintaining sufficient performance, we employ pseudo-3D convolution to extract temporal features from the extracted spatial features of the EEG signals. Here, we select 12 channels suitable for emotion classification.
3. To make the model focus more on the parts of the sequence that are relevant to emotion recognition, we employ a combination of dot product attention and pseudo-3D convolution to assign weights to the relevant information of the EEG sequence.
The remainder of this paper is organized as follows. Section 2 provides an overview of SOGPCN-related technologies, and Section 3 describes the architecture and implementation of the proposed SOGPCN model. Section 4 discusses the experiments and analyzes the results and performance of the proposed SOGPCN model. Finally, the paper is concluded in Section 5, including a discussion of prospects for future research.

2. Related Works

2.1. Input Features of EEG Data

Previous research suggests that DE features are particularly discriminative for the task of emotion recognition. Therefore, for the proposed model, we employ DE features as input features. A DE feature is a frequency domain feature calculated through a 512-point STFT with a 1 s nonoverlapping Hanning window, averaged across five frequency bands. Thus, the output DE features can be represented as a 5 × T matrix, where T represents the time window, which depends on the stimulus film clip. The time window T of the SEED dataset ranges from 185 to 265 s. Following the normalization used in SOGNN, features with shorter time windows are zero-padded to a length of 265 in the SEED dataset. The input EEG signal feature size of the model is calculated as electrode × frequency band × time frame, and the input form of the data in the SEED dataset is 12 × 5 × 265.
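For illustration only, the following NumPy sketch shows how precomputed DE features of shape 5 × T could be zero-padded to the fixed length of 265 and stacked into the 12 × 5 × 265 input form described above; the variable names and the random placeholder data are assumptions, not part of the released code.

```python
import numpy as np

def pad_de_features(de: np.ndarray, target_len: int = 265) -> np.ndarray:
    """Zero-pad a (bands, T) DE feature matrix along the time axis to target_len."""
    bands, t = de.shape
    padded = np.zeros((bands, target_len), dtype=de.dtype)
    padded[:, :t] = de
    return padded

# Hypothetical per-channel DE features: 12 channels, 5 bands, T between 185 and 265.
de_per_channel = [np.random.randn(5, 230) for _ in range(12)]
x = np.stack([pad_de_features(de) for de in de_per_channel])  # shape (12, 5, 265)
```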

2.2. CNNs for EEG Data

Recently, CNNs have demonstrated good performance in EEG research. Tao et al. proposed an ACRNN that utilizes a channel attention mechanism to assign weights adaptively to different channels and uses a CNN to extract the spatial information from the encoded EEG signals [18]. However, a limitation of the spatial feature extraction module is that the use of attention mechanisms to assign weights to different channels only considers the importance of different channels in the brain’s emotional generation process. In other words, the connectivity between brain channels is not considered. Xiao et al. proposed an attention-based four-dimensional (4D) neural network EEG emotion recognition method [19]. The overall structure of this method comprises a 4D spatial spectral temporal representation, an attention-based CNN, an attention-based bidirectional LSTM network, and a classifier. To utilize the spatial relationships of electrodes, the authors organized all channels into a 2D map, which limits the dimensionality of the data and hinders full utilization of the complex topological structure of brain signals. Graph neural networks (GNN) attempt to construct neural networks using graph theory to process data in the graph domain. The graph CNN (GCNN) is an extension of traditional CNN methods that combines a CNN with spectral theory. Compared to conventional CNN methods, the GCNN has advantages in terms of extracting discriminative features of the signals in the discrete spatial domain. More importantly, the GCNN method provides an effective mechanism to describe the intrinsic relationships between different nodes in the graph, which may provide a potential method to explore the relationships between multiple EEG channels in the process of EEG emotion recognition.
Two-dimensional CNNs are commonly used for image recognition tasks because they can extract spatial features effectively. In comparison, 3D CNNs can capture both spatial and temporal features simultaneously. In the biological signal analysis field, some 3D CNN models have been applied to epileptic seizure prediction and motor imagery analysis tasks. However, there is relatively little research on applying 3D CNNs to the task of EEG-based emotion recognition. In this study, we use pseudo-3D convolution to extract the temporal and spatial features of EEG signals; compared with standard 3D convolution, pseudo-3D convolution substantially reduces the computational cost.
The GNN is a deep learning model used to process graph-structured data, e.g., molecular networks, social networks, and knowledge graphs [20]. EEG signals can be considered a typical type of graph-structured data. Recently, many studies have demonstrated the effectiveness of GNNs in processing brain signals. For example, Song et al. proposed a dynamic graph CNN (DGCNN) for emotion recognition from EEG signals [21]. This model employs a graph to capture the features of multichannel EEG signals, and the graph structure is determined by a dynamic adjacency matrix that reflects the intrinsic relationships between the different EEG electrodes. The method effectively characterizes the intrinsic relationships between EEG channels and achieves an accuracy of 79.95% on the SEED dataset. Zhong et al. proposed a regularized GNN (RGNN) for emotion recognition based on electroencephalography [14]. This model considers the biological topology between different brain regions to capture the local and global relationships between different EEG signal channels, and it achieved an accuracy of 85.30% on the SEED dataset. In this paper, we use a self-organizing graph structure to explore the connectivity structure between different channels in the brain.

2.3. Channel Selection

Most existing EEG data are collected using as many electrodes as possible to ensure comprehensive signal collection. These signals may include both emotion-related information and some interference signals or noise. According to Ding et al., the frontal lobe of the brain is rich in emotional information, and the temporal lobe is associated with emotional processes [11]. Zhong et al. discovered that the prefrontal, parietal, and occipital regions might contain the most abundant emotional recognition information [14]. Based on their experimental results, Li et al. found that the left and right temporal lobes contribute more to emotion recognition [22]. Zheng et al. designed different electrode arrangements based on the peak and mismatch characteristics of the weight distribution in emotional processing [23]. These include four channels (FT7, FT8, T7, and T8), six channels (FT7, FT8, T7, T8, TP7, and TP8), nine channels (FP1, FPZ, FP2, FT7, FT8, T7, T8, TP7, and TP8), and 12 channels (FT7, T7, TP7, P7, C5, CP5, FT8, T8, TP8, P8, C6, and CP6). Using DE features, an SVM achieved good classification results on the 12-channel arrangement, with an average accuracy of 86.65%. In the proposed method, we adopt the 12-channel selection of Zheng et al. [23] and conduct comparative validation experiments on the SEED dataset using both 12 and 62 channels.
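As a minimal sketch of this channel selection step (the ordering of channel names in the 62-channel montage is an assumption and should be read from the dataset's channel list), the 12 channels of Zheng et al. [23] can be extracted as follows:

```python
import numpy as np

# 12-channel subset reported by Zheng et al. [23]
SELECTED_12 = ["FT7", "T7", "TP7", "P7", "C5", "CP5",
               "FT8", "T8", "TP8", "P8", "C6", "CP6"]

def select_channels(features: np.ndarray, all_channel_names: list,
                    selected=SELECTED_12) -> np.ndarray:
    """Keep only the rows of a (channels, bands, time) array whose names are selected."""
    idx = [all_channel_names.index(name) for name in selected]
    return features[idx]
```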

3. Proposed Methodology

The architecture of the proposed model, namely SOGPCN, is shown in Figure 2. SOGPCN primarily comprises a spatial feature extraction module (Module 1) and a temporal feature extraction module (Modules 2 and 3). The input of the spatial feature extraction module utilizes DE features from the SEED dataset, where the input form of the data is 12 × 5 × 265. Initially, the data from the five frequency bands in the original data are separated, and a self-organizing graph structure is constructed for each frequency band. Here, graph convolution is employed to extract the spatial relationship between the electrodes in each frequency band, which ensures that the spatial features of the different frequency bands are not mixed, thereby preserving the respective graph structures. Then, the data from the five frequency bands are combined and reorganized into a batch size × channels × 5 × 64 × 1 format prior to being input to the temporal feature extraction module. In the temporal feature extraction module, the input data undergo one 3D convolution and two pseudo-3D convolutions. In the 3D convolution process, a 3 × 3 × 5 convolution kernel is employed with a convolution stride of 1 × 1 × 3 using zero padding and ReLU activation functions. Note that the operations of the two pseudo-3D convolutions are identical, utilizing a 3 × 3 × 1 convolution kernel and a 1 × 1 × 3 convolution kernel, respectively, with both employing zero padding and ReLU activation functions. After each pseudo-3D convolution, partial dot product attention is utilized, and average pooling with a size of 2 × 2 × 2 is performed. Following the extraction of the temporal features, the data are transformed into a one-dimensional (1D) tensor, and an LSTM network is used to learn contextual connections. Finally, the softmax function is employed for classification.
As shown in Figure 2, SOGPCN primarily comprises four modules: (1) a self-organizing graph convolution module is employed to extract the spatial features from each frequency band; (2) 3D-CNN layers are employed to extract temporal features from the multichannel EEG signals; (3) for each 3D-CNN layer, several dot product attention layers are implemented to help the model focus on valuable information; and (4) LSTM layers are implemented to learn the contextual information between adjacent time-series data. The input and output forms of each part of the data are shown in Table 1.

3.1. Dataset

The proposed SOGPCN method was evaluated on the publicly available SEED dataset, which is widely used for emotion analysis based on physiological signals. The SEED dataset [23] contains EEG data from 15 subjects (seven males and eight females) obtained using the ESI NeuroScan System at a sampling rate of 1 kHz. The EEG data were acquired from 62 channels. These data were collected while the participants were watching movie clips that evoked negative, neutral, and positive emotions. Each movie segment lasted approximately four minutes. The data collection process was divided into three sessions, with each session consisting of 15 segments of EEG data. Thus, a total of 45 segments of EEG data were collected for each participant. The data were downsampled to 200 Hz, and a 0–75 Hz bandpass filter was applied. The "Extracted_Features" folder contains the differential entropy (DE) features of the EEG signals, which we use for emotion classification.

3.2. Self-Organizing GNN

In fact, EEG signals can be considered a typical type of structured data and are defined on a graph. Graph representation techniques and graph neural networks have been found to be very effective in processing EEG signals. The representation of EEG signals in the graph structure is expressed as follows.
$$G = (V, E, A), \quad V = \{ v_i \mid i = 1, \ldots, N \}, \quad E = \{ e_{ij} \mid v_i, v_j \in V \}, \quad A = [a_{ij}]$$
Here, $V$ represents the $N$ nodes in graph $G$, $E$ represents the set of edges between nodes in $V$, $A \in \mathbb{R}^{N \times N}$ represents the adjacency matrix, and $a_{ij}$ represents the connection weight between nodes $v_i$ and $v_j$, which indicates the relationship between the corresponding electrodes.
Li et al. proposed the SFCSAN, which considers that the spatial relationships between electrodes in each frequency band differ [17]. The SFCSAN employs intraband self-attention to learn the frequency information of each frequency band, and interband mapping further maps the attention representation to learn supplementary frequency information. This network can model the intrinsic dependency relationships between EEG signal features in different frequency bands.
Thus, we utilize a self-organizing graph construction module to model the EEG emotional features of each frequency band. Because the spatial and functional connectivity relationships between EEG channels are diverse, channels that are spatially close are not necessarily the most strongly related in functional terms. In this paper, we follow the self-organizing graph construction used in SOGNN [13], where the adjacency weights are defined by the function $f(v_i, v_j)$ as follows.
$$a_{ij} = f(v_i, v_j) = \frac{\exp\!\left( \theta(v_i W)\,\theta(v_j W)^{T} \right)}{\sum_{i=1}^{N} \exp\!\left( \theta(v_i W)\,\theta(v_j W)^{T} \right)}$$
Here, $v \in \mathbb{R}^{1 \times F}$ is the feature vector of a node in $V \in \mathbb{R}^{N \times F}$, $W \in \mathbb{R}^{F \times L}$ is the weight matrix of the linear layer, and $\theta$ is the tanh activation function. The exponential function is part of the softmax normalization, which yields positive, bounded adjacency weights.
The self-organizing adjacency matrix is calculated as follows.
$$G = \tanh(VW), \qquad A = \mathrm{Softmax}\!\left( G G^{T} \right)$$
Here, $V \in \mathbb{R}^{N \times F}$ is the input feature matrix of the nodes and $W \in \mathbb{R}^{F \times L}$ is the weight matrix. The tanh activation function is employed to obtain the output $G \in \mathbb{R}^{N \times L}$ of the linear layer. Then, the softmax function is applied to obtain a positive, bounded adjacency matrix $A$. To reduce the computational cost, sparse graphs are constructed using the top-k technique: only the $k$ largest weights in each row of the adjacency matrix are preserved, and the smaller connection weights are set to zero. The top-k operation is performed as follows.
$$\text{for } i = 1, 2, \ldots, N: \qquad \mathrm{index} = \mathrm{argtopk}\,(A[i, :]), \qquad A[i, \overline{\mathrm{index}}] = 0$$
Here, the $\mathrm{argtopk}(\cdot)$ function returns the indices of the $k$ maximum values in each row vector $A[i, :]$ of the adjacency matrix $A$, and $\overline{\mathrm{index}}$ denotes the indices that do not belong to the top-$k$ maximum values. Thus, only the $k$ largest values are retained in each row of the adjacency matrix $A$, and the remaining values are set to 0. In this paper, $k$ is set to 10.
Based on the above analysis, the first step involves employing a self-organizing graph module to construct a graph structure for each frequency band based on the input EEG features. Then, a graph convolution layer is applied to the constructed graph to extract the local and global connectivity features for emotion recognition. The dependency relationships between electrodes in different frequency bands can thus be modeled using the self-organizing graph and graph convolution modules. After extracting the spatial relationships of each frequency band, the vectors of the different frequency bands are concatenated and reorganized to obtain the output $\chi \in \mathbb{R}^{N \times M \times T \times 1}$, where $N$ is the number of electrodes, $M$ is the number of frequency bands, and $T$ is the number of time steps.
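A minimal PyTorch sketch of the self-organizing graph construction and top-k sparsification described above is shown below. The hidden size, the initialization, and the simple one-hop propagation layer are assumptions and do not reproduce the exact graph convolution used in SOGPCN; with 12 electrodes, each row of the resulting adjacency matrix keeps its 10 largest weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfOrganizingGraph(nn.Module):
    """Builds a sparse adjacency matrix from the node features of one frequency band."""
    def __init__(self, in_features: int, hidden: int = 64, k: int = 10):
        super().__init__()
        self.linear = nn.Linear(in_features, hidden)   # W in G = tanh(VW)
        self.k = k

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (batch, N, F) node features for N electrodes
        g = torch.tanh(self.linear(v))                 # (batch, N, hidden)
        a = F.softmax(g @ g.transpose(1, 2), dim=-1)   # (batch, N, N)
        # Top-k sparsification: keep the k largest weights in each row.
        vals, idx = a.topk(self.k, dim=-1)
        return torch.zeros_like(a).scatter_(-1, idx, vals)

class SimpleGraphConv(nn.Module):
    """Hypothetical one-hop propagation over the sparse adjacency matrix."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.proj = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return F.relu(adj @ self.proj(x))              # (batch, N, out_features)
```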

3.3. Pseudo-3D Convolution

3D convolution is achieved by applying a 3D convolution kernel to a 3D structure formed by stacking multiple continuous 2D feature maps over time, as shown in Figure 3. The 2D feature maps at time points $t-2$, $t-1$, $t$, $t+1$, and $t+2$ are stacked together, and 3D convolution is used for feature extraction. Thus, compared with 1D temporal convolution and 2D spatial convolution, 3D spatiotemporal convolution has advantages in terms of characterizing brain connections and their activities. The value of the $j$-th feature map in the $i$-th layer at position $(x, y, z)$ is calculated as follows [24].
$$v_{ij}^{xyz} = \tanh\!\left( b_{ij} + \sum_{m} \sum_{p=0}^{P_i - 1} \sum_{q=0}^{Q_i - 1} \sum_{r=0}^{R_i - 1} w_{ijm}^{pqr}\, v_{(i-1)m}^{(x+p)(y+q)(z+r)} \right)$$
Here, $P_i$, $Q_i$, and $R_i$ are the sizes of the 3D convolution kernel in the three dimensions, and $w_{ijm}^{pqr}$ is the value of the kernel at position $(p, q, r)$ connected to the $m$-th feature map in the previous layer.
Although 3D convolution has considerable advantages in extracting temporal and spatial features, it incurs high computational costs. To address this issue, the proposed model adopts pseudo-3D convolution [25] to reduce computational complexity. The kernel size of standard 3D convolution is (P, Q, R), where P and Q denote the kernel sizes of the 2D spatial convolution, and R is the kernel size along the time dimension. In pseudo-3D convolution, the kernels (P, Q, R) are decoupled into P × Q × 1 and 1 × 1 × R, where P × Q × 1 represents a convolutional filter that is equivalent to 2D convolution in the spatial domain and 1 × 1 × R represents a convolutional filter that is equivalent to 1D convolution in the time domain. Thus, the output of the pseudo-3D module in the l-th layer can be defined as follows.
$$\chi^{l} = \Phi_{1 \times 1 \times R}\!\left( \Phi_{P \times Q \times 1}\!\left( \chi^{l-1} \right) \right)$$
Here, $\chi^{l-1}$ is the output of layer $l-1$, and $\Phi_{P \times Q \times 1}$ and $\Phi_{1 \times 1 \times R}$ are the 2D convolution with a $P \times Q$ kernel in the spatial domain and the 1D convolution with a kernel of size $R$ in the time domain, respectively.
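The decomposition above maps directly onto two `nn.Conv3d` layers. The following hedged sketch uses the 3 × 3 × 1 spatial and 1 × 1 × 3 temporal kernels mentioned in Section 3; the channel counts and the placement of the activation are assumptions.

```python
import torch
import torch.nn as nn

class PseudoConv3d(nn.Module):
    """Factorizes a (P, Q, R) 3D convolution into a spatial (P, Q, 1) convolution
    followed by a temporal (1, 1, R) convolution, as in P3D networks [25]."""
    def __init__(self, in_ch: int, out_ch: int, spatial=(3, 3), temporal=3):
        super().__init__()
        p, q = spatial
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(p, q, 1),
                                 padding=(p // 2, q // 2, 0))
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, temporal),
                                  padding=(0, 0, temporal // 2))
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, spatial_1, spatial_2, time)
        return self.act(self.temporal(self.act(self.spatial(x))))
```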

3.4. Partial Dot Product Attention

An attention mechanism is a commonly used technique in deep learning that can make models more accurate and effective when processing sequence data. In traditional neural networks, the output of each neuron depends only on the outputs of the neurons in the previous layer. With an attention mechanism, these contributions can additionally be weighted according to different parts of the input data, which allows the model to focus on the key information in the input sequence and thereby improves its accuracy and efficiency. The proposed model utilizes the partial dot product attention mechanism from 3DSleepNet to quantify the importance of the input features; as a result, the most relevant information is assigned higher weights, and less relevant information is assigned lower weights. For a given input $\chi \in \mathbb{R}^{N \times M \times T}$, the output of the partial dot product attention mechanism is defined as follows [26].
$$\mathrm{Att} = \chi \odot \sigma\!\left( \chi M_1 M_2 + b \right)$$
Here, $M_1 \in \mathbb{R}^{T \times M}$, $M_2 \in \mathbb{R}^{M \times T}$, and $b \in \mathbb{R}^{N \times M \times T}$ are learnable parameters, $\odot$ denotes the element-wise (dot) product, and $\sigma$ is the softmax function.
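As a hedged illustration (the exact broadcasting and the axis over which the softmax is applied in [26] are assumptions), the partial dot product attention can be sketched as follows:

```python
import torch
import torch.nn as nn

class PartialDotProductAttention(nn.Module):
    """Att = x ⊙ softmax(x · M1 · M2 + b), with learnable M1, M2, and b."""
    def __init__(self, n: int, m: int, t: int):
        super().__init__()
        self.m1 = nn.Parameter(torch.randn(t, m) * 0.01)
        self.m2 = nn.Parameter(torch.randn(m, t) * 0.01)
        self.b = nn.Parameter(torch.zeros(n, m, t))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, M, T)
        scores = torch.softmax(x @ self.m1 @ self.m2 + self.b, dim=-1)
        return x * scores   # element-wise re-weighting of the input features
```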

3.5. LSTM

An LSTM network is an RNN variant used to process sequence data. LSTM networks can learn long-term dependency relationships and overcome the gradient vanishing problem of traditional RNNs. Because EEG signals are sequential data, many studies have achieved superior performance by using LSTM networks to process them. The core concept of an LSTM network is to control the flow of information by introducing a gate structure. These gates include input gates, forget gates, and output gates, which determine the entry of new information, the forgetting of old information, and the output of information, respectively [27]. Through these gates, an LSTM network can determine which information needs to be remembered, which information needs to be forgotten, and how to update the state based on the current information. After partial dot product attention is applied, the input $\chi$ is flattened into a 1D tensor. For a given input $\chi = (x_1, x_2, x_3, \ldots, x_n)$, the output of the LSTM layer is calculated as follows.
$$\begin{aligned}
i_t &= \sigma\!\left( W_{qi} q_t + W_{hi} h_{t-1} + W_{ci} C_{t-1} + b_i \right) \\
f_t &= \sigma\!\left( W_{qf} q_t + W_{hf} h_{t-1} + W_{cf} C_{t-1} + b_f \right) \\
c_t &= f_t C_{t-1} + i_t \tanh\!\left( W_{qc} q_t + W_{hc} h_{t-1} + b_c \right) \\
o_t &= \sigma\!\left( W_{qo} q_t + W_{ho} h_{t-1} + W_{co} C_t + b_o \right) \\
h_t &= o_t \tanh(c_t) \\
y_t &= W_{ho} h_t + b_o
\end{aligned}$$
Here, $\sigma$ is the sigmoid function; $i$, $f$, $o$, and $c$ are the input gate, forget gate, output gate, and cell activation vectors, respectively; $W$ denotes the weight matrices; and $b$ denotes the bias vectors.
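The final stage (flattening the attended features into a sequence, modeling temporal context with an LSTM, and classifying with softmax) can be sketched as below; the hidden size, the use of the last hidden state, and the log-softmax output are assumptions.

```python
import torch
import torch.nn as nn

class TemporalClassifier(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, feat_dim) flattened spatiotemporal features
        _, (h_n, _) = self.lstm(x)            # h_n: (1, batch, hidden)
        return torch.log_softmax(self.fc(h_n[-1]), dim=-1)
```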

4. Results

A series of experiments was conducted to evaluate the performance of the proposed model.

4.1. Experimental Setup

The proposed SOGPCN model was trained in Python using an NVIDIA GeForce GTX 1080 Ti GPU. The model's loss was minimized using the Adam optimizer with a learning rate of 0.001 and a batch size of 15. The model was trained for up to 200 epochs with an early stopping mechanism. During training, a dropout operation with a dropout rate of 0.1 was applied, thereby randomly blocking the output units of the inner layer. As shown in Figure 4, the experiment was carried out in four steps: first, the SEED dataset was obtained and the DE features were extracted; second, the emotion-related channels were selected and the frequency bands were divided; third, the spatial and temporal features between the EEG channels were learned by the model; finally, the classification results were obtained using the trained model.
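A hedged sketch of this training configuration is given below (Adam with a learning rate of 0.001, batch size 15, and up to 200 epochs with early stopping); the early-stopping patience, the loss function, and the data loaders are assumptions, and the model is assumed to output raw class logits.

```python
import torch

# model, train_loader, and val_loader are assumed to be defined elsewhere.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()
best_val, patience, wait = float("inf"), 20, 0   # patience value is an assumption

for epoch in range(200):
    model.train()
    for x, y in train_loader:                    # batches of 15 samples
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:                     # early stopping
            break
```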
In this study, two experimental strategies, i.e., subject-independent and subject-dependent, were employed to validate the performance of the proposed model. For the subject-independent strategy, a leave-one-subject-out cross-validation procedure was employed to evaluate the emotion recognition performance for each subject: in each experiment, the data from 14 participants in the SEED dataset were used as the training set, and the data from the remaining participant were used as the validation set. We analyzed the average recognition accuracy across different subjects and frequency bands, and we compared the impact of different frequency band and channel combinations on the time complexity and recognition accuracy. For the subject-dependent strategy, all the data were shuffled and divided evenly into 15 parts, with 14 parts used as the training set and the remaining part used as the validation set. We also compared the performance of the proposed model with that of other models to evaluate its effectiveness, and a confusion matrix was used to analyze the results for the different emotions. Finally, ablation experiments were conducted to demonstrate the effectiveness of the different modules of the model.
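For the subject-independent strategy, the leave-one-subject-out split can be sketched as follows, assuming that per-subject feature and label arrays have already been prepared:

```python
import numpy as np

def loso_splits(features_by_subject, labels_by_subject):
    """Yield (train_X, train_y, val_X, val_y), leaving one subject out per fold."""
    n_subjects = len(features_by_subject)        # 15 subjects in SEED
    for held_out in range(n_subjects):
        train_idx = [s for s in range(n_subjects) if s != held_out]
        train_X = np.concatenate([features_by_subject[s] for s in train_idx])
        train_y = np.concatenate([labels_by_subject[s] for s in train_idx])
        yield train_X, train_y, features_by_subject[held_out], labels_by_subject[held_out]
```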

4.2. Analysis of Experimental Results

4.2.1. Independent Classification of Subjects

Table 2 shows the subject-independent classification accuracy of the proposed model on the SEED dataset, as well as its performance on both single and combined frequency bands. We set the input features as either a single frequency band or a combination of frequency bands. Because many studies use the (θ, α, β, γ) combination [28], we also conducted experiments on these four frequency bands and report the corresponding results. As shown in Table 2, the proposed model performs well on the δ, θ, α, β, and γ frequency bands, achieving accuracies of 91.56%, 91.11%, 82.81%, 89.78%, and 90.52%, respectively, thereby outperforming the baseline models [13,14,21,29,30,31,32] on all frequency bands. The proposed model also demonstrated the best performance on the (θ, α, β, γ) combination. For all emotion recognition methods, the average recognition accuracy was the highest when all five frequency bands were used simultaneously. In addition, on the SEED dataset, the proposed model achieved the lowest standard deviation in accuracy compared to all baseline models, demonstrating its robustness to cross-subject variations.
After obtaining the experimental results, we investigated the classification accuracy for each individual. The experimental results are shown in Figure 5. Except for the fourth participant, the accuracy of all other participants was greater than 90%, which proves that the proposed model has a certain degree of stability across different experimental subjects and has promising potential for practical applications.

4.2.2. Subject Dependency Classification

Table 3 shows the subject-dependent classification accuracy of the proposed model on the SEED dataset, as well as its performance on both single and combined frequency bands. Note that our model was also evaluated separately on the δ, θ, α, β, and γ frequency bands. As can be seen, the performance obtained on all five frequency bands is better than that of the baseline models [14,21,23,32,34,35]. In most cases, the recognition accuracy of the β and γ frequency bands was better than that of the other three frequency bands, which is consistent with the results reported in previous studies [14,16,21]. In addition, the proposed model outperformed all baseline models on the SEED dataset when combining the features of the θ, α, β, and γ frequency bands. On the α, β, and γ frequency bands, the standard deviation of the proposed model was much lower than that of the baseline models. By comparing the results shown in Table 2 and Table 3, it is easy to observe that the proposed model achieved nearly the same accuracy for both the subject-independent and subject-dependent experimental strategies. One possible reason for this narrow performance gap is that the structure of the model is relatively simple, with a small number of parameters, so it does not overfit even when the amount of data is small; as a result, there is no significant difference between the subject-dependent and subject-independent experimental results.
The confusion matrix of the subject-dependent experiment is shown in Figure 6, where the horizontal axis represents the predicted labels and the vertical axis represents the true labels. The labels corresponding to −1, 0, and 1 are negative, neutral, and positive, respectively. The recognition rates for the negative, neutral, and positive emotions were 93.78%, 95.56%, and 96.44%, respectively. Consistent with the results of previous studies, the positive and neutral emotions were easier to recognize than the negative emotions [28]. Note that negative and neutral emotions are more easily confused.

4.2.3. Channel Frequency Band Selection

To compare the effects of the number of channels and frequency bands on the time complexity and accuracy of the proposed method, we conducted an experiment to compare performance when using four and five frequency bands, as well as the combination of 12 and 62 channels. The experimental results are shown in Figure 7, where the blue bar chart represents the time (in ms) consumed for each epoch under this combination condition and the orange bar chart represents the accuracy. From the frequency band perspective, under the same number of channels, the time consumption of five frequency band combinations was 1.22–1.28 times that of the four frequency band combinations, and the accuracy was improved by 1%–2%. These results indicate that the combination of five frequency bands retains more information about emotions. From the channel perspective, under the same frequency band, the time consumption of 62 channels was 2.81–2.95 times that of 12 channels; however, the accuracy decreased by approximately 10%. Thus, using all channels does not improve accuracy. The reduced accuracy may be due to the fact that not all channels of information in the EEG signal are related to emotions, and utilizing all channels introduces noisy data.

4.2.4. Top-k Selection

To obtain the sparse adjacency matrix of the graph, we adopted the top-k technique, which involves retaining only the maximum k connection weights of each electrode in the adjacency matrix and setting the remaining weights to 0. In this experiment, we used 12 channels and five frequency bands from the SEED dataset under a single experimental strategy to explore the impact of different top-k values on experimental accuracy. Figure 8 shows the impact of different top-k sparse graphs on the performance of the proposed SOGPCN method. Here, k = 10 indicates that only the maximum 10 connection weights are retained, with all remaining weights set to 0. Similarly, k = 12 indicates that all 12 connections between the electrodes are retained. As can be seen, the model with k = 10 connections performed the best, which further validates the effectiveness of the sparse adjacency matrix model.

4.2.5. Ablation Experiment

Ablation experiments were also conducted on the SEED dataset to validate the effectiveness of the feature extraction modules in the proposed SOGPCN method, i.e., the spatial domain feature extraction and temporal domain feature extraction modules. These experiments involved 12 channels and 5 frequency bands and followed the subject-independent (leave-one-subject-out) strategy. As shown in Table 4, removing the self-organizing graph convolution module reduced the average accuracy by 1.92%, while removing the pseudo-3D convolution module and the partial dot product attention module reduced the average accuracy by 24.15% and 1.19%, respectively. Note that removing these modules also increased the standard deviation. These findings suggest that both the spatial and temporal features of EEG data (being multichannel sequential data) are vital for enhancing the classification results and model stability. In addition, the introduction of time information in the SOGPCN model improved its performance, which shows that the contribution of temporal features to EEG emotion recognition is crucial and should not be ignored.

5. Conclusions

This paper has proposed an attention-based spatiotemporal convolutional network for the task of EEG-based emotion recognition. The proposed model employs a self-organizing graph convolution module to dynamically learn the relationships between the channels of EEG emotional signals in each frequency band. It also extracts time-domain features using pseudo-3D convolution and partial dot product attention. In a set of experimental evaluations, the proposed model achieved advanced performance on the SEED dataset. Based on the experimental results for different subjects, we found that the proposed model demonstrates a certain degree of robustness and excellent recognition accuracy across subjects. The experimental results also align with previous findings that positive and neutral emotions are easier to identify than negative emotions, while negative and neutral emotions tend to be easily confused. Both experimental strategies utilized in this study suggest that the high-frequency bands are more closely related to emotional activity, whereas the low-frequency bands may be less relevant to the target task. The results of the ablation experiment also demonstrate the importance of the spatial and temporal features of EEG data.
In the future, we plan to explore methods to fuse data from different frequency bands and further enhance the performance of the proposed model. In addition, during the experimental process, we observed significant differences between the data of different subjects in the SEED dataset, which resulted in fluctuations in accuracy across subjects. Thus, reducing the feature distribution differences between subjects remains a worthwhile topic for future exploration.

Author Contributions

Conceptualization, X.Z. and S.W.; methodology, X.Z., C.L. and L.Z.; software, C.L.; data collection, C.L.; validation, C.L.; investigation, S.W.; writing—original draft preparation, C.L.; writing—review and editing, X.Z. and L.Z.; supervision, X.Z. and L.Z.; project administration, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Social Science Foundation of China Education Project "Research on non-invasive measurement of key emotional states of online learners" (Grant Number BCA220220).

Institutional Review Board Statement

Our Institutional Review Board approved the study.

Informed Consent Statement

Not applicable.

Data Availability Statement

The open-access SEED dataset was used in our study. It is available at https://bcmi.sjtu.edu.cn/~seed/seed.html (granted on 7 May 2020; accessed on 25 April 2022).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gao, Z.; Li, Y.; Yang, Y.; Wang, X.; Dong, N.; Chiang, H.-D. A GPSO-Optimized Convolutional Neural Networks for EEG-Based Emotion Recognition. Neurocomputing 2020, 380, 225–235. [Google Scholar] [CrossRef]
  2. Chen, G.; Zhang, X.; Sun, Y.; Zhang, J. Emotion Feature Analysis and Recognition Based on Reconstructed EEG Sources. IEEE Access 2020, 8, 11907–11916. [Google Scholar] [CrossRef]
  3. Wang, Z.; Zhou, X.; Wang, W.; Liang, C. Emotion Recognition Using Multimodal Deep Learning in Multiple Psychophysiological Signals and Video. Int. J. Mach. Learn. Cybern. 2020, 11, 923–934. [Google Scholar] [CrossRef]
  4. Black, M.H.; Chen, N.T.; Iyer, K.K.; Lipp, O.V.; Bölte, S.; Falkmer, M.; Tan, T.; Girdler, S. Mechanisms of Facial Emotion Recognition in Autism Spectrum Disorders: Insights from Eye Tracking and Electroencephalography. Neurosci. Behav. Rev. 2017, 80, 488–515. [Google Scholar] [CrossRef]
  5. Broek, E.L. Ubiquitous Emotion-Aware Computing proceedings of the ubiquitous computing. Pers. Ubiquitous Comput. 2013, 17, 53–67. [Google Scholar] [CrossRef]
  6. Russell, J.A. A Circumplex Model of Affect. J. Personal. Soc. Psychol. 1980, 39, 1161–1178. [Google Scholar] [CrossRef]
  7. Kashihara, K. A Brain-Computer Interface for Potential Nonverbal Facial Communication Based on EEG Signals Related to Specific Emotions. Front. Neurosci. 2014, 8, 244. [Google Scholar] [CrossRef]
  8. Tripathi, S.; Acharya, S.; Sharma, R.; Mittal, S.; Bhattacharya, S. Using Deep and Convolutional Neural Networks for Accurate Emotion Classification on DEAP Dataset. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI), San Francisco, CA, USA, 4–9 February 2017; pp. 4746–4752. [Google Scholar]
  9. Al-Nafjan, A.; Hosny, M.; Al-Ohali, Y.; Al-Wabil, A. Recognition of Affective States via Electroencephalogram Analysis and Classification. In Proceedings of the 1st International Conference on Intelligent Human Systems Integration (IHSI), Advances in Intelligent Systems and Computing, Dubai, United Arab Emirates, 7–9 January 2018; Volume 722, pp. 242–248. [Google Scholar]
  10. Li, D.; Wang, Z.; Wang, C.; Liu, S.; Chi, W.; Dong, E.; Song, X.; Gao, Q.; Song, Y. The Fusion of Electroencephalography and Facial Expression for Continuous Emotion Recognition. IEEE Access 2019, 7, 155724–155736. [Google Scholar] [CrossRef]
  11. Ding, Y.; Robinson, N.; Tong, C.; Zeng, Q.; Guan, C. LGGNet: Learning from Local-Global-Graph Representations for Brain-Computer Interface. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–14, early access. [Google Scholar] [CrossRef]
  12. Ding, Y.; Robinson, N.; Zhang, S.; Zeng, Q.; Guan, C. TSception: Capturing Temporal Dynamics and Spatial Asymmetry from EEG for Emotion Recognition. IEEE Trans. Affect. Comput. 2023, 14, 2238–2250. [Google Scholar] [CrossRef]
  13. Li, J.; Li, S.; Pan, J.; Wang, F. Cross-Subject EEG Emotion Recognition with Self-Organized Graph Neural Network. Front. Neurosci. 2021, 15, 611653. [Google Scholar] [CrossRef]
  14. Zhong, P.; Wang, D.; Miao, C. EEG-Based Emotion Recognition Using Regularized Graph Neural Networks. IEEE Trans. Affect. Comput. 2020, 13, 1290–1301. [Google Scholar] [CrossRef]
  15. Wang, Z.-M.; Zhang, J.-W.; He, Y.; Zhang, J. EEG Emotion Recognition Using Multichannel Weighted Multiscale Permutation Entropy. Appl. Intell. 2022, 52, 12064–12076. [Google Scholar] [CrossRef]
  16. Zhu, X.; Rong, W.; Zhao, L.; He, Z.; Yang, Q.; Sun, J.; Liu, G. EEG Emotion Classification Network Based on Attention Fusion of Multi-channel Band Features. Sensors 2022, 22, 5252. [Google Scholar] [CrossRef]
  17. Li, D.; Xie, L.; Chai, B.; Wang, Z.; Yang, H. Spatial-Frequency Convolutional Self-Attention Network for EEG Emotion Recognition. Appl. Soft Comput. 2022, 122, 108740. [Google Scholar] [CrossRef]
  18. Tao, W.; Li, C.; Song, R.; Cheng, J.; Liu, Y.; Wan, F.; Chen, X. EEG-Based Emotion Recognition via Channel-Wise Attention and Self Attention. IEEE Trans. Affect. Comput. 2020, 14, 382–393. [Google Scholar] [CrossRef]
  19. Xiao, G.; Shi, M.; Ye, M.; Xu, B.; Chen, Z.; Ren, Q. 4D Attention-Based Neural Network for EEG Emotion Recognition. Cogn. Neurodyn. 2022, 16, 805–818. [Google Scholar] [CrossRef]
  20. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Philip, S.Y. A Comprehensive Survey on Graph Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4–24. [Google Scholar] [CrossRef]
  21. Song, T.; Zheng, W.; Song, P.; Cui, Z. EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks. IEEE Trans. Affect. Comput. 2018, 11, 532–541. [Google Scholar] [CrossRef]
  22. Li, Y.; Fu, B.; Li, F.; Shi, G.; Zheng, W. A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition. Neurocomputing 2021, 447, 92–101. [Google Scholar] [CrossRef]
  23. Zheng, W.L.; Lu, B.L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  24. Ji, S.; Yang, M.; Yu, K. 3D Convolutional Neural Networks for Human Action Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 221–231. [Google Scholar] [CrossRef]
  25. Qiu, Z.; Yao, T.; Mei, T. Learning Spatiotemporal Representation with Pseudo-3D Residual Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5534–5542. [Google Scholar]
  26. Ji, X.; Li, Y.; Wen, P. 3DSleepNet: A Multi-Channel Bio-Signal Based Sleep Stages Classification Method Using Deep Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 3513–3523. [Google Scholar] [CrossRef]
  27. Greff, K.; Srivastava, R.K.; Koutnik, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A Search Space Odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2222–2232. [Google Scholar] [CrossRef]
  28. Zhu, X.; Liu, G.; Zhao, L.; Rong, W.; Sun, J.; Liu, R. Emotion Classification from Multi-Band Electroencephalogram Data Using Dynamic Simplifying Graph Convolutional Network and Channel Style Recalibration Module. Sensors 2023, 23, 1917. [Google Scholar] [CrossRef]
  29. Pan, S.J.; Tsang, I.W.; Kwok, J.T.; Yang, Q. Domain Adaptation via Transfer Component Analysis. IEEE Trans. Neural Netw. 2011, 22, 199–210. [Google Scholar] [CrossRef]
  30. Fernando, B.; Habrard, A.; Sebban, M.; Tuytelaars, T. Unsupervised Visual Domain Adaptation Using Subspace Alignment. In Proceedings of the 2013 IEEE International Conference on Computer Vision (ICCV), Sydney, NSW, Australia, 1–8 December 2013; pp. 2960–2967. [Google Scholar]
  31. Li, Y.; Wang, L.; Zheng, W.; Zong, Y.; Qi, L.; Cui, Z.; Zhang, T.; Song, T. A Novel Bi-Hemispheric Discrepancy Model for EEG Emotion Recognition. IEEE Trans. Cogn. Dev. Syst. 2020, 13, 354–367. [Google Scholar] [CrossRef]
  32. Li, Y.; Zheng, W.; Zong, Y.; Cui, Z.; Zhang, T.; Zhou, X. A Bi-Hemisphere Domain Adversarial Neural Network Model for EEG Emotion Recognition. IEEE Trans. Affect. Comput. 2018, 12, 494–504. [Google Scholar] [CrossRef]
  33. Du, X.; Ma, C.; Zhang, G.; Li, J.; Lai, Y.K.; Zhao, G.; Deng, X.; Liu, Y.J.; Wang, H. An Efficient LSTM Network for Emotion Recognition from Multichannel EEG Signals. IEEE Trans. Affect. Comput. 2022, 13, 1528–1540. [Google Scholar] [CrossRef]
  34. Zheng, W. Multichannel EEG-Based Emotion Recognition via Group Sparse Canonical Correlation Analysis. IEEE Trans. Cogn. Dev. Syst. 2017, 9, 281–290. [Google Scholar] [CrossRef]
  35. Zhang, T.; Zheng, W.; Cui, Z.; Zong, Y.; Li, Y. Spatial–Temporal Recurrent Neural Network for Emotion Recognition. IEEE Trans. Cybern. 2019, 49, 839–847. [Google Scholar] [CrossRef]
  36. Shen, F.; Dai, G.; Lin, G.; Zhang, J.; Kong, W.; Zeng, H. EEG-based emotion recognition using 4D convolutional recurrent neural network. Cogn. Neurodyn. 2020, 14, 815–828. [Google Scholar] [CrossRef]
Figure 1. Two-dimensional spatial representation of emotions.
Figure 2. Overall architecture of the proposed SOGPCN method.
Figure 3. Example of 3D convolution on extracted 3D features.
Figure 4. Experiment flowchart.
Figure 5. Analysis of each subject.
Figure 6. Subject-dependency confusion matrix.
Figure 7. Comparison of time consumption and accuracy.
Figure 8. Effect of top-k on the performance of the proposed SOGPCN.
Table 1. Data input and output forms.
Model | Input | Output
Model 1 | X_in ∈ ℝ^(batch_size × 12 × 5 × 265) | X_out ∈ ℝ^(batch_size × 12 × 5 × 64 × 1)
Model 2 + Model 3 | X_in ∈ ℝ^(batch_size × 12 × 5 × 32 × 1) | X_out ∈ ℝ^(batch_size × 12 × 5 × 11 × 10)
Model 2 + Model 3 | X_in ∈ ℝ^(batch_size × 12 × 5 × 11 × 10) | X_out ∈ ℝ^(batch_size × 6 × 2 × 5 × 10)
Model 4 | X_in ∈ ℝ^(batch_size × 6 × 2 × 5 × 10) | X_out ∈ ℝ^(batch_size × 5)
Table 2. Subject-independent classification accuracy on SEED dataset.
Model | δ | θ | α | β | γ | (θ, α, β, γ) | All Bands
SVM | 43.06/8.27 | 40.07/6.50 | 43.97/10.89 | 48.63/10.29 | 51.59/11.83 | / | 56.73/16.29
TCA [29] | 44.10/8.22 | 41.26/9.21 | 42.93/14.33 | 43.93/10.06 | 48.43/9.73 | / | 63.64/14.88
SA [30] | 53.23/7.74 | 50.60/8.31 | 55.06/10.60 | 56.72/10.78 | 64.47/14.96 | / | 69.00/10.89
BiHDM [31] | / | / | / | / | / | / | 85.40/7.53
BiDANN-S [32] | 63.01/7.49 | 63.22/7.52 | 63.50/9.50 | 73.59/9.12 | 73.72/8.67 | / | 84.14/6.87
DGCNN [21] | 49.79/10.94 | 46.36/12.06 | 48.29/12.28 | 56.15/14.01 | 54.87/17.53 | / | 79.95/9.02
RGNN [14] | 64.88/6.87 | 60.69/5.79 | 60.84/7.57 | 74.96/8.94 | 77.50/8.10 | / | 85.30/6.72
SOGNN [13] | 70.37/7.68 | 76.00/6.92 | 66.22/11.52 | 72.54/8.97 | 71.70/8.03 | / | 86.81/5.79
ATDD-LSTM [33] | / | / | / | / | / | / | 90.92/1.05
SOGPCN (Our model) | 91.56/3.56 | 91.11/4.94 | 82.81/7.93 | 89.78/5.84 | 90.52/7.41 | 93.33/4.13 | 94.22/3.42
Values are mean accuracy/standard deviation (%) on the SEED dataset; "/" alone indicates that no result was reported.
Table 3. Subject-dependent classification accuracy on SEED dataset.
Model | δ | θ | α | β | γ | (θ, α, β, γ) | All Bands
SVM | 60.50/14.14 | 60.95/10.20 | 66.64/14.41 | 80.76/11.56 | 79.56/11.38 | / | 83.99/9.92
GSVCCA [34] | 63.92/11.16 | 64.64/10.33 | 70.10/14.76 | 76.93/11.00 | 77.98/10.72 | / | 82.96/9.95
DBN [23] | 64.32/12.45 | 60.77/10.42 | 64.01/15.97 | 78.92/12.48 | 79.19/14.58 | / | 86.08/8.34
STRNN [35] | 80.90/12.27 | 83.35/9.15 | 82.69/12.99 | 83.41/10.16 | 69.61/15.65 | / | 89.50/7.63
DGCNN [21] | 74.25/11.42 | 71.52/5.99 | 74.43/12.16 | 83.65/10.17 | 85.73/10.64 | / | 90.40/8.49
BiDANN [32] | 76.97/10.95 | 75.56/7.88 | 81.03/11.74 | 89.65/9.59 | 88.64/9.46 | / | 92.38/7.04
RGNN [14] | 76.17/7.91 | 72.26/7.25 | 75.33/8.85 | 84.25/12.54 | 89.23/8.90 | / | 94.24/5.95
4D-CRNN [36] | / | / | / | / | / | 94.74/2.32 | /
SOGPCN (Our model) | 88.15/12.90 | 85.33/13.45 | 83.26/4.72 | 90.07/5.37 | 91.11/2.55 | 95.26/2.80 | 95.26/3.52
Values are mean accuracy/standard deviation (%) on the SEED dataset; "/" alone indicates that no result was reported.
Table 4. Results of ablation experiment.
Model | SEED (Leave-One-Subject)
SOGPCN | 94.22/3.42
-SOG | 92.30/4.79
-P-3DCNN | 70.07/26.24
-PDPAtt | 93.03/4.65
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
