Emotion Recognition Model of EEG Signals Based on Double Attention Mechanism
Abstract
1. Introduction
1.1. Motivation
1.2. Related Work
1.3. Contribution
2. Materials and Methods
2.1. The Structure of the DACB Model
2.2. Convolutional Neural Network
2.3. Bidirectional Long Short-Term Memory Network
2.4. 1D SE-Block
2.5. Dot-Product Attention Mechanism
2.6. Dataset
2.7. Experimental Setup
2.8. Evaluation Index
3. Results
3.1. Single-Test Results of the DACB Model
3.2. Ten-Fold Cross-Validation Results of the DACB Model
3.3. Ablation Experiment
3.4. Feature Visualization
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
The layer structure of the DACB model:

Module | Type | Number |
---|---|---|
Block1 | Conv1d | 64 |
Block1 | Flatten | 1 |
Block2 | Bi-LSTM | 32 |
Block2 | Flatten | 1 |
FC1 | Dense | 64 |
FC2 | Dense | 32 |
FC3 | Softmax | 1 |
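For concreteness, a minimal Keras sketch of a DACB-style network consistent with the layer table above is given below. The kernel size, SE reduction ratio, input shape, and the concatenation of the two branches are illustrative assumptions; the table does not specify them.

```python
# A minimal sketch of a DACB-style network, assuming TensorFlow/Keras.
# Kernel size, SE reduction ratio, and the way the two attention branches
# are combined are NOT specified by the table; they are illustrative choices.
import tensorflow as tf
from tensorflow.keras import layers, models

def se_block_1d(x, reduction=4):
    """1D squeeze-and-excitation block: channel attention over Conv1D filters."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)                # squeeze to (batch, C)
    e = layers.Dense(channels // reduction, activation="relu")(s)
    e = layers.Dense(channels, activation="sigmoid")(e)   # per-channel weights
    e = layers.Reshape((1, channels))(e)
    return layers.Multiply()([x, e])                      # re-weight feature maps

def build_dacb(input_shape, n_classes):
    inp = layers.Input(shape=input_shape)                 # (time_steps, channels)
    # Block1: Conv1d (64 filters) followed by the 1D SE-block (first attention).
    c = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inp)
    c = se_block_1d(c)
    b1 = layers.Flatten()(c)
    # Block2: Bi-LSTM (32 units) with dot-product attention (second attention).
    h = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(c)
    a = layers.Attention()([h, h])                        # dot-product attention
    b2 = layers.Flatten()(a)
    # Fully connected head: FC1 (64) -> FC2 (32) -> softmax output.
    z = layers.Concatenate()([b1, b2])                    # branch merge: assumed
    z = layers.Dense(64, activation="relu")(z)
    z = layers.Dense(32, activation="relu")(z)
    out = layers.Dense(n_classes, activation="softmax")(z)
    return models.Model(inp, out)
```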
Film clips used as emotional stimuli in the SEED-IV dataset:

Serial No. | Film Clips’ Sources | Emotion Label | Start Time | End Time |
---|---|---|---|---|
01 | Black Keys | sad | 42:32 | 45:41 |
02 | The Eye 3 | fear | 49:25 | 51:00 |
03 | Rob-B-Hood | happy | 41:07 | 45:06 |
04 | A Bite of China | neutral | 30:29 | 32:48 |
05 | The Child’s Eye | fear | 41:00 | 42:37 |
06 | A Bite of China | neutral | 5:19 | 8:05 |
07 | A Bite of China | neutral | 24:42 | 26:41 |
08 | Very Happy | sad | 17:09 | 21:13 |
09 | A Bite of China | neutral | 31:18 | 33:44 |
10 | A Wedding Invitation | sad | 1:34:04 | 1:38:50 |
11 | Bunshinsaba II | fear | 42:24 | 43:33 |
12 | Dearest | sad | 1:31:08 | 1:33:29 |
13 | Aftershock | sad | 20:13 | 24:14 |
14 | Foster Father | sad | 24:29 | 27:10 |
15 | Bunshinsaba III | fear | 1:04:52 | 1:09:49 |
16 | Promo for applying the Olympic Winter Games | happy | 0:00 | 2:54 |
17 | Hungry Ghost Ritual | fear | 45:07 | 46:48 |
18 | Hungry Ghost Ritual | fear | 1:10:21 | 1:13:33 |
19 | Very Happy | happy | 34:30 | 37:15 |
20 | You are my life more complete | happy | 39:32 | 40:44 |
21 | A Bite of China | neutral | 18:59 | 20:56 |
22 | Hear Me | happy | 1:33:27 | 1:36:10 |
23 | A Bite of China | neutral | 16:28 | 19:24 |
24 | Very Happy | happy | 12:48 | 15:31 |
Film clips used in the DREAMER dataset, with valence, arousal, and dominance ratings (mean ± SD):

Serial No. | Film Clips’ Sources | Valence | Arousal | Dominance |
---|---|---|---|---|
01 | Searching for Bobby Fischer | 3.17 ± 0.72 | 2.26 ± 0.75 | 2.09 ± 0.73 |
02 | D.O.A. | 3.04 ± 0.88 | 3.00 ± 1.00 | 2.70 ± 0.88 |
03 | The Hangover | 4.57 ± 0.73 | 3.83 ± 0.83 | 3.83 ± 0.72 |
04 | The Ring | 2.04 ± 1.02 | 4.26 ± 0.69 | 4.13 ± 0.87 |
05 | 300 | 3.22 ± 1.17 | 3.70 ± 0.70 | 3.52 ± 0.95 |
06 | National Lampoon’s Van Wilder | 2.70 ± 1.55 | 3.83 ± 0.83 | 4.04 ± 0.98 |
07 | Wall-E | 4.52 ± 0.59 | 3.17 ± 0.98 | 3.57 ± 0.99 |
08 | Crash | 1.35 ± 0.65 | 3.96 ± 0.77 | 4.35 ± 0.65 |
09 | My Girl | 1.39 ± 0.66 | 3.00 ± 1.09 | 3.48 ± 0.95 |
10 | The Fly | 2.17 ± 1.15 | 3.30 ± 1.02 | 3.61 ± 0.89 |
11 | Pride and Prejudice | 3.96 ± 0.64 | 1.96 ± 0.82 | 2.61 ± 0.89 |
12 | Modern Times | 3.96 ± 0.56 | 2.61 ± 0.89 | 2.70 ± 0.82 |
13 | Remember the Titans | 4.39 ± 0.66 | 3.70 ± 0.97 | 3.74 ± 0.96 |
14 | Gentleman’s Agreement | 2.35 ± 0.65 | 2.22 ± 0.85 | 2.39 ± 0.72 |
15 | Psycho | 2.48 ± 0.85 | 3.09 ± 1.00 | 3.22 ± 0.90 |
16 | The Bourne Identity | 3.65 ± 0.65 | 3.35 ± 1.07 | 3.26 ± 1.14 |
17 | The Shawshank Redemption | 1.52 ± 0.59 | 3.00 ± 0.74 | 3.96 ± 0.77 |
18 | The Departed | 2.65 ± 0.78 | 3.91 ± 0.85 | 3.57 ± 1.04 |
Training hyperparameters:

Parameter | Value |
---|---|
Epoch number | 150 |
Learning rate | 0.001 |
Batch size | 1024 |
Optimizer | Adam |
Loss function | categorical_crossentropy |
Random seed | 42 |
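The table translates directly into the hedged training setup below. The placeholder data shapes (62-channel EEG segments, four emotion classes) are assumptions, and `build_dacb` comes from the architecture sketch above.

```python
# Hedged training setup matching the hyperparameter table above; the data here
# is random placeholder EEG standing in for the preprocessed features.
import random
import numpy as np
import tensorflow as tf

random.seed(42); np.random.seed(42); tf.random.set_seed(42)   # random seed 42

X_train = np.random.randn(2048, 200, 62).astype("float32")    # placeholder EEG
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 4, 2048), 4)

model = build_dacb(input_shape=X_train.shape[1:], n_classes=4)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",                # as in the table
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=150, batch_size=1024)
```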
Single-test results on the SEED-IV dataset:

Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|
CNN-RNN | 61.53 | 62.94 | 61.53 | 60.72 | 49.50 |
CNN-Bi-LSTM | 96.15 | 96.16 | 96.15 | 96.15 | 94.88 |
1D CAE | 90.28 | 90.28 | 90.28 | 90.28 | 87.05 |
1D InceptionV1 | 79.28 | 79.73 | 79.28 | 79.33 | 72.49 |
Adaboost | 37.49 | 37.52 | 37.49 | 37.41 | 16.69 |
Bayes | 26.10 | 30.44 | 26.10 | 17.39 | 2.46 |
XGBoost | 87.25 | 87.35 | 87.24 | 87.26 | 83.02 |
DACB | 99.96 | 99.96 | 99.96 | 99.96 | 99.95 |
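The five evaluation indices reported throughout Section 3 can be computed as in the sketch below; macro averaging over classes is an assumption, since the tables do not state the averaging scheme.

```python
# The five indices reported in the result tables, via scikit-learn.
# Macro averaging over classes is an assumption, not stated in the tables.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)

def evaluation_indices(y_true, y_pred):
    return {
        "Accuracy (%)":  100 * accuracy_score(y_true, y_pred),
        "Precision (%)": 100 * precision_score(y_true, y_pred, average="macro"),
        "Recall (%)":    100 * recall_score(y_true, y_pred, average="macro"),
        "F1-Score (%)":  100 * f1_score(y_true, y_pred, average="macro"),
        "MCC (%)":       100 * matthews_corrcoef(y_true, y_pred),
    }

# Example: evaluation_indices([0, 1, 2, 3], [0, 1, 2, 2])
```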
Single-test results on the DREAMER dataset (valence):

Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|
CNN-RNN | 58.25 | 59.13 | 58.25 | 58.46 | 47.52 |
CNN-Bi-LSTM | 75.68 | 76.16 | 75.29 | 75.65 | 69.31 |
1D CAE | 67.38 | 67.45 | 67.24 | 67.24 | 58.89 |
1D InceptionV1 | 34.82 | 39.30 | 32.29 | 31.93 | 16.82 |
Adaboost | 34.66 | 34.63 | 32.70 | 32.64 | 16.76 |
Bayes | 22.53 | 23.94 | 22.28 | 15.96 | 3.96 |
XGBoost | 81.16 | 82.08 | 80.86 | 81.35 | 76.24 |
DACB | 87.52 | 87.74 | 87.41 | 87.52 | 84.28 |
Single-test results on the DREAMER dataset (arousal):

Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|
CNN-RNN | 63.38 | 66.64 | 58.71 | 61.44 | 50.56 |
CNN-Bi-LSTM | 76.47 | 77.89 | 74.50 | 76.00 | 68.49 |
1D CAE | 71.38 | 74.15 | 68.34 | 70.68 | 61.56 |
1D InceptionV1 | 40.59 | 49.41 | 31.42 | 32.20 | 17.85 |
Adaboost | 38.69 | 37.20 | 30.62 | 31.22 | 15.72 |
Bayes | 15.26 | 25.45 | 25.29 | 15.54 | 4.78 |
XGBoost | 81.95 | 85.29 | 80.56 | 82.62 | 75.81 |
DACB | 90.06 | 90.15 | 89.40 | 89.76 | 86.75 |
Single-test results on the DREAMER dataset (dominance):

Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|
CNN-RNN | 65.20 | 67.11 | 62.17 | 64.06 | 52.73 |
CNN-Bi-LSTM | 73.21 | 76.76 | 70.68 | 73.01 | 63.67 |
1D CAE | 70.46 | 71.91 | 71.04 | 70.94 | 60.56 |
1D InceptionV1 | 40.21 | 47.56 | 29.78 | 29.66 | 15.74 |
Adaboost | 37.30 | 33.68 | 29.27 | 29.69 | 12.86 |
Bayes | 21.36 | 25.75 | 26.19 | 18.09 | 6.37 |
XGBoost | 82.33 | 85.88 | 82.27 | 83.85 | 76.10 |
DACB | 89.05 | 89.93 | 88.69 | 89.28 | 85.24 |
Ten-fold cross-validation results on the SEED-IV dataset:

Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|
CNN-RNN | 62.31 | 62.53 | 62.30 | 62.19 | 49.87 |
CNN-Bi-LSTM | 92.87 | 92.89 | 92.87 | 92.87 | 90.50 |
1D CAE | 85.21 | 85.28 | 85.21 | 85.22 | 80.30 |
1D InceptionV1 | 77.60 | 77.66 | 77.60 | 77.59 | 70.15 |
Adaboost | 35.93 | 36.00 | 35.93 | 35.82 | 14.61 |
Bayes | 25.77 | 28.84 | 25.77 | 17.34 | 1.66 |
XGBoost | 80.57 | 80.81 | 80.57 | 80.61 | 74.15 |
DACB | 99.73 | 99.72 | 99.72 | 99.72 | 99.64 |
Ten-fold cross-validation results on the DREAMER dataset (valence):

Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|
CNN-RNN | 26.18 | 24.98 | 20.09 | 8.50 | 1.76 |
CNN-Bi-LSTM | 33.09 | 37.75 | 30.25 | 28.77 | 14.60 |
1D CAE | 50.68 | 52.91 | 49.31 | 49.91 | 37.53 |
1D InceptionV1 | 28.13 | 37.67 | 24.19 | 19.11 | 7.76 |
Adaboost | 32.95 | 32.72 | 29.73 | 28.87 | 13.72 |
Bayes | 21.88 | 21.18 | 21.11 | 13.77 | 2.24 |
XGBoost | 75.90 | 77.08 | 75.15 | 75.89 | 69.48 |
DACB | 84.26 | 84.39 | 84.03 | 84.18 | 80.12 |
Ten-fold cross-validation results on the DREAMER dataset (arousal):

Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|
CNN-RNN | 32.11 | 19.21 | 18.56 | 10.67 | 2.17 |
CNN-Bi-LSTM | 40.53 | 44.69 | 30.57 | 29.97 | 17.06 |
1D CAE | 51.95 | 54.01 | 45.38 | 46.47 | 33.74 |
1D InceptionV1 | 34.45 | 47.99 | 24.24 | 20.67 | 9.38 |
Adaboost | 38.00 | 35.31 | 29.00 | 29.09 | 13.91 |
Bayes | 12.42 | 24.11 | 24.79 | 12.92 | 4.36 |
XGBoost | 76.96 | 81.75 | 73.75 | 76.89 | 68.97 |
DACB | 85.40 | 85.54 | 84.63 | 85.04 | 80.49 |
Ten-fold cross-validation results on the DREAMER dataset (dominance):

Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|
CNN-RNN | 32.41 | 19.17 | 18.26 | 9.94 | 1.23 |
CNN-Bi-LSTM | 37.45 | 40.21 | 26.96 | 24.46 | 12.73 |
1D CAE | 53.16 | 58.68 | 46.76 | 48.76 | 35.76 |
1D InceptionV1 | 34.03 | 45.28 | 22.51 | 16.78 | 7.00 |
Adaboost | 36.55 | 32.08 | 26.66 | 25.97 | 11.13 |
Bayes | 10.44 | 29.42 | 22.62 | 11.36 | 4.30 |
XGBoost | 77.56 | 81.58 | 76.03 | 78.35 | 69.79 |
DACB | 85.02 | 85.74 | 84.47 | 85.10 | 79.85 |
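A sketch of the ten-fold protocol behind the tables of Section 3.2 follows; it reuses `build_dacb` from the architecture sketch. Seeded shuffling and retraining from scratch in each fold are assumptions about the exact procedure.

```python
# Sketch of a ten-fold cross-validation run over one dataset. One-hot labels
# (y_onehot) and per-fold retraining are assumptions about the protocol.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score

def ten_fold_accuracy(X, y_onehot):
    kf = KFold(n_splits=10, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in kf.split(X):
        model = build_dacb(input_shape=X.shape[1:], n_classes=y_onehot.shape[1])
        model.compile(optimizer="adam", loss="categorical_crossentropy")
        model.fit(X[train_idx], y_onehot[train_idx],
                  epochs=150, batch_size=1024, verbose=0)
        y_pred = model.predict(X[test_idx]).argmax(axis=1)
        scores.append(accuracy_score(y_onehot[test_idx].argmax(axis=1), y_pred))
    return 100 * float(np.mean(scores))       # mean accuracy over the 10 folds
```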
Ablation results on the SEED-IV dataset:

DataSet | Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|---|
SEED-IV | Block1 | 80.88 | 85.34 | 80.91 | 79.79 | 75.80 |
SEED-IV | Block2 | 99.68 | 99.68 | 99.68 | 99.68 | 99.58
SEED-IV | DACB | 99.73 | 99.72 | 99.72 | 99.72 | 99.64 |
Ablation results on the DREAMER dataset:

DataSet | Methods | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
---|---|---|---|---|---|---|
DREAMER (valence) | Block1 | 26.08 | 7.21 | 20.00 | 8.27 | 0.11 |
DREAMER (valence) | Block2 | 32.16 | 5.21 | 20.00 | 8.27 | 0 |
DREAMER (valence) | DACB | 84.26 | 84.39 | 84.03 | 84.18 | 80.12 |
DREAMER (arousal) | Block1 | 31.88 | 10.37 | 20.00 | 9.67 | 0.31 |
DREAMER (arousal) | Block2 | 31.87 | 12.11 | 20.00 | 9.66 | 0 |
DREAMER (arousal) | DACB | 85.40 | 85.54 | 84.63 | 85.04 | 80.49 |
DREAMER (dominance) | Block1 | 32.38 | 21.80 | 20.01 | 9.81 | 0.84 |
DREAMER (dominance) | Block2 | 32.36 | 6.46 | 20.00 | 9.77 | 0 |
DREAMER (dominance) | DACB | 85.02 | 85.74 | 84.47 | 85.10 | 79.85 |
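The ablation rows compare each block in isolation against the full model. Below is a hedged sketch of how such single-branch variants can be constructed; it reuses `se_block_1d` from the architecture sketch, layer sizes follow the structure table, and everything else is assumed.

```python
# Hedged sketches of the ablation variants: Block1 (Conv1D + 1D SE-block) alone
# and Block2 (Bi-LSTM + dot-product attention) alone, with the same dense head.
from tensorflow.keras import layers, models

def _head(x, n_classes):
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(32, activation="relu")(x)
    return layers.Dense(n_classes, activation="softmax")(x)

def build_block1_only(input_shape, n_classes):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inp)
    x = se_block_1d(x)                        # channel (SE) attention only
    return models.Model(inp, _head(layers.Flatten()(x), n_classes))

def build_block2_only(input_shape, n_classes):
    inp = layers.Input(shape=input_shape)
    h = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(inp)
    a = layers.Attention()([h, h])            # dot-product attention only
    return models.Model(inp, _head(layers.Flatten()(a), n_classes))
```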