Predicting Activity in Brain Areas Associated with Emotion Processing Using Multimodal Behavioral Signals
Abstract
1. Introduction
2. Related Work
3. Methods
3.1. The Importance of Predicting Brain Activity for Emotion Recognition
Abbrev | ROIs | Brainnetome Code |
---|---|---|
l,r vA | left and right Ventral Agranular insula | 165/166 |
l,r dA | left and right Dorsal Agranular insula | 167/168 |
l,r vD | left and right Ventral Dysgranular insula | 169/170 |
l,r dD | left and right Dorsal Dysgranular insula | 173/174 |
l,r dG | left and right Dorsal Granular insula | 171/172 |
l,r H | left and right Hypergranular insula | 163/164 |
l,r MA | left and right Medial Amygdala | 211/212 |
l,r LA | left and right Lateral Amygdala | 213/214 |
Hy | Hypothalamus | [42] |
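The targets of the prediction task are BOLD time courses in these regions. As a rough, hedged illustration of how region-wise signals can be obtained from a labelled atlas (not necessarily the authors' pipeline, and with placeholder file names), nilearn's NiftiLabelsMasker can average the preprocessed fMRI signal within each label:

```python
# Hedged sketch: averaging the preprocessed BOLD signal within atlas labels.
# File names and preprocessing options are placeholders, not the authors' pipeline.
from nilearn.maskers import NiftiLabelsMasker

ATLAS_IMG = "BN_Atlas_246_2mm.nii.gz"       # Brainnetome parcellation (assumed file)
FUNC_IMG = "sub-01_task-conv_bold.nii.gz"   # preprocessed fMRI run (assumed file)

# Standardize each ROI time course so targets are comparable across regions.
masker = NiftiLabelsMasker(labels_img=ATLAS_IMG, standardize=True)
roi_signals = masker.fit_transform(FUNC_IMG)   # shape: (n_volumes, n_labels)

# Columns of roi_signals follow masker.labels_, so the table's Brainnetome codes
# (163-174 for the insula, 211-214 for the amygdala) can be used to select the
# target ROIs; the hypothalamus mask of [42] would be handled with a separate masker.
```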
3.2. Proposed Approach
3.2.1. Embedding
- Audio embedding: Spectral features are extracted from the raw audio signal as Mel-Frequency Cepstral Coefficients (MFCCs). These coefficients capture characteristics of the vocal tract that are relevant for distinguishing different sounds, speech, or speakers.
- Video embedding: We extract visual features using FaceNet, a deep learning-based facial recognition system developed by Schroff et al. (2015) [29]. The architecture consists of a CNN followed by a few fully connected layers. The CNN learns a representation of facial images by extracting key features, such as facial landmarks, textures, and distinctive patterns, which are essential for differentiating between faces. The fully connected layers project these features into an embedding that is invariant to variations in lighting, pose, and facial expression. A minimal feature-extraction sketch is given after this list.
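The paper does not tie these steps to specific libraries, so the following is only a hedged sketch of how such embeddings are commonly computed: MFCCs via librosa and FaceNet embeddings via the facenet-pytorch implementation of the model of Schroff et al. [29]. File names, the sampling rate, and the temporal pooling are placeholder choices.

```python
# Hedged sketch of the two embedding steps; librosa and facenet-pytorch are
# assumed tooling, not necessarily what the authors used.
import librosa
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

# --- Audio: MFCC features ---------------------------------------------------
y, sr = librosa.load("utterance.wav", sr=16000)       # placeholder audio file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # (13, n_frames)
audio_embedding = mfcc.mean(axis=1)                   # simple pooling over time

# --- Video: FaceNet embedding -----------------------------------------------
mtcnn = MTCNN(image_size=160)                              # face detection + crop
facenet = InceptionResnetV1(pretrained="vggface2").eval()  # pretrained FaceNet

frame = Image.open("frame_0001.png")                  # placeholder video frame
face = mtcnn(frame)                                   # aligned face tensor, or None
if face is not None:
    with torch.no_grad():
        video_embedding = facenet(face.unsqueeze(0))  # (1, 512) face embedding
```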
3.2.2. Network Formulation
4. Experiments and Results
4.1. Dataset Description
4.2. Experimental Setup
4.3. Evaluated Models
- MULTILAYER PERCEPTRON (MLP): Baseline model consisting of two hidden layers, each followed by a ReLU activation and dropout. A third, linear output layer with 17 neurons produces the model's targets, one per ROI. The dimensionality of the hidden layers is treated as a hyperparameter to be tuned. A minimal sketch of this baseline is given after this list.
- MODEL 1 [27]: This model applies MFCCs and DenseFace to extract acoustic and visual features, respectively. It employs LSTM networks with early fusion to represent the hidden multimodal features and feed-forward layers to predict emotions from seven labels (anger, disgust, fear, happiness, neutral, sad, and surprise). Originally, it was trained on a dataset of short emotional video clips taken from movies and TV shows.
- MODEL 2 [31]: This approach integrates audio, video, and text, obtained respectively from speech, video frames, and subtitles. It uses a CNN to extract visual features, evaluated with three classical classification backbones (VGGNet, ResNet, and GoogLeNet). It also incorporates Bidirectional Encoder Representations from Transformers (BERT) and spectral analysis to extract features from text and audio, respectively. Finally, a fusion step uses bidirectional LSTM networks to capture temporal dependencies and combine the extracted hidden features.
- MODEL 3 [30]: This model combines a CNN and an LSTM, along with self-attention, to extract acoustic features from speech signals. In parallel, a BiLSTM network extracts textual features from transcripts. The extracted features are then fused and fed into a deep fully connected network to predict the probabilities of the four target emotions (sad, excited, neutral, and angry).
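For concreteness, here is a minimal sketch of the MLP baseline, assuming a 17-dimensional regression target (one value per ROI). The hidden size, dropout rate, and input dimensionality are illustrative hyperparameters, not values reported in the paper.

```python
# Hedged sketch of the MLP baseline: two hidden layers (ReLU + dropout) and a
# linear output layer with 17 units, one per target ROI. All sizes are placeholders.
import torch
import torch.nn as nn

class MLPBaseline(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int = 256, dropout: float = 0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden_dim, 17),   # linear output: one value per ROI
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Example: regress ROI activity from a fused multimodal feature vector.
model = MLPBaseline(input_dim=525)     # input size is a placeholder
criterion = nn.MSELoss()               # MSE matches the metric reported in the results
features = torch.randn(8, 525)         # batch of fused behavioral embeddings
targets = torch.randn(8, 17)           # BOLD targets for the 17 ROIs
loss = criterion(model(features), targets)
```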
4.4. Statistical Test
4.5. Results
5. Discussion
5.1. Models Comparison
5.2. Ablation Study
5.3. Implications for Affective Neuroscience
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Fu, T.; Gao, S.; Zhao, X.; Wen, J.R.; Yan, R. Learning towards conversational AI: A survey. AI Open 2022, 3, 14–28. [Google Scholar] [CrossRef]
- Spezialetti, M.; Placidi, G.; Rossi, S. Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives. Front. Robot. AI 2020, 7, 532279. [Google Scholar] [CrossRef]
- Bouhlal, M.; Aarika, K.; Abdelouahid, R.A.; Elfilali, S.; Benlahmar, E. Emotions recognition as innovative tool for improving students’ performance and learning approaches. Procedia Comput. Sci. 2020, 175, 597–602. [Google Scholar] [CrossRef]
- Johansson, R.; Skantze, G.; Jönsson, A. A Psychotherapy Training Environment with Virtual Patients Implemented Using the Furhat Robot Platform. In Intelligent Virtual Agents; Springer International Publishing: Cham, Switzerland, 2017; Volume 10498, pp. 184–187. [Google Scholar] [CrossRef]
- Barros, P.; Weber, C.; Wermter, S. Emotional expression recognition with a cross-channel convolutional neural network for human-robot interaction. In Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Republic of Korea, 3–5 November 2015; pp. 582–587. [Google Scholar] [CrossRef]
- Liu, W.; Qiu, J.L.; Zheng, W.L.; Lu, B.L. Comparing Recognition Performance and Robustness of Multimodal Deep Learning Models for Multimodal Emotion Recognition. IEEE Trans. Cogn. Dev. Syst. 2022, 14, 715–729. [Google Scholar] [CrossRef]
- Zhang, S.; Yang, Y.; Chen, C.; Zhang, X.; Leng, Q.; Zhao, X. Deep learning-based multimodal emotion recognition from audio, visual, and text modalities: A systematic review of recent advancements and future prospects. Expert Syst. Appl. 2024, 237, 121692. [Google Scholar] [CrossRef]
- Dixit, C.; Satapathy, S.M. Deep CNN with late fusion for real time multimodal emotion recognition. Expert Syst. Appl. 2024, 240, 122579. [Google Scholar] [CrossRef]
- Xu, C.; Du, Y.; Wang, J.; Zheng, W.; Li, T.; Yuan, Z. A joint hierarchical cross attention graph convolutional network for multimodal facial expression recognition. Comput. Intell. 2024, 40, e12607. [Google Scholar] [CrossRef]
- Bilotti, U.; Bisogni, C.; De Marsico, M.; Tramonte, S. Multimodal Emotion Recognition via Convolutional Neural Networks: Comparison of different strategies on two multimodal datasets. Eng. Appl. Artif. Intell. 2024, 130, 107708. [Google Scholar] [CrossRef]
- Pessoa, L. A Network Model of the Emotional Brain. Trends Cogn. Sci. 2017, 21, 357–371. [Google Scholar] [CrossRef]
- Rauchbauer, B.; Nazarian, B.; Bourhis, M.; Ochs, M.; Prévot, L.; Chaminade, T. Brain activity during reciprocal social interaction investigated using conversational robots as control condition. Phil. Trans. R. Soc. B 2019, 374, 20180033. [Google Scholar] [CrossRef]
- Damasio, A.R. Emotion in the perspective of an integrated nervous system. Brain Res. Rev. 1998, 26, 83–86. [Google Scholar] [CrossRef]
- Liang, P.P.; Zadeh, A.; Morency, L.P. Multimodal Local-Global Ranking Fusion for Emotion Recognition. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16–20 October 2018; ACM: New York, NY, USA, 2018; pp. 472–476. [Google Scholar] [CrossRef]
- Ngiam, J.; Khosla, A.; Kim, M.; Nam, J.; Lee, H.; Ng, A.Y. Multimodal deep learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, Madison, WI, USA, 28 June 2011; pp. 689–696. [Google Scholar]
- Srivastava, N.; Salakhutdinov, R.R. Multimodal Learning with Deep Boltzmann Machines. In Advances in Neural Information Processing Systems; Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25. [Google Scholar]
- Gao, J.; Li, P.; Chen, Z.; Zhang, J. A Survey on Deep Learning for Multimodal Data Fusion. Neural Comput. 2020, 32, 829–864. [Google Scholar] [CrossRef] [PubMed]
- Akkus, C.; Chu, L.; Djakovic, V.; Jauch-Walser, S.; Koch, P.; Loss, G.; Marquardt, C.; Moldovan, M.; Sauter, N.; Schneider, M.; et al. Multimodal Deep Learning. arXiv 2023, arXiv:2301.04856. [Google Scholar] [CrossRef]
- Pan, B.; Hirota, K.; Jia, Z.; Dai, Y. A review of multimodal emotion recognition from datasets, preprocessing, features, and fusion methods. Neurocomputing 2023, 561, 126866. [Google Scholar] [CrossRef]
- Sosea, T.; Caragea, C. CancerEmo: A Dataset for Fine-Grained Emotion Detection. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Online, 16–20 November 2020; pp. 8892–8904. [Google Scholar] [CrossRef]
- Li, J.; Dong, Z.; Lu, S.; Wang, S.J.; Yan, W.J.; Ma, Y.; Liu, Y.; Huang, C.; Fu, X. CAS(ME)³: A Third Generation Facial Spontaneous Micro-Expression Database with Depth Information and High Ecological Validity. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 2782–2800. [Google Scholar] [CrossRef]
- Middya, A.I.; Nag, B.; Roy, S. Deep learning based multimodal emotion recognition using model-level fusion of audio–visual modalities. Knowl.-Based Syst. 2022, 244, 108580. [Google Scholar] [CrossRef]
- Deschamps-Berger, T.; Lamel, L.; Devillers, L. Investigating Transformer Encoders and Fusion Strategies for Speech Emotion Recognition in Emergency Call Center Conversations. In Proceedings of the International Conference on Multimodal Interaction, New York, NY, USA, 7–11 November 2022; ACM: New York, NY, USA, 2022; pp. 144–153. [Google Scholar] [CrossRef]
- Ranchordas, A.; Araujo, H.J. VISAPP 2008: Proceedings of the Third International Conference on Vision Theory and Applications; Funchal, Madeira, Portugal, January 22–25, 2008. In VISIGRAPP 2008; Ranchordas, A.N., Ed.; INSTICC Press: Lisboa, Portugal, 2008. [Google Scholar]
- Etienne, C.; Fidanza, G.; Petrovskii, A.; Devillers, L.; Schmauch, B. CNN+LSTM Architecture for Speech Emotion Recognition with Data Augmentation. In Proceedings of the Workshop on Speech, Music and Mind (SMM 2018), Hyderabad, India, 1 September 2018; pp. 21–25. [Google Scholar] [CrossRef]
- Chen, Z.; Lin, M.; Wang, Z.; Zheng, Q.; Liu, C. Spatio-temporal representation learning enhanced speech emotion recognition with multi-head attention mechanisms. Knowl.-Based Syst. 2023, 281, 111077. [Google Scholar] [CrossRef]
- Wang, S.; Wang, W.; Zhao, J.; Chen, S.; Jin, Q.; Zhang, S.; Qin, Y. Emotion recognition with multimodal features and temporal models. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, Glasgow, UK, 13–17 November 2017; ACM: New York, NY, USA, 2017; pp. 598–602. [Google Scholar] [CrossRef]
- Krishna, D.N.; Patil, A. Multimodal Emotion Recognition Using Cross-Modal Attention and 1D Convolutional Neural Networks. In Proceedings of the Interspeech 2020, ISCA, Shanghai, China, 25–29 October 2020; pp. 4243–4247. [Google Scholar] [CrossRef]
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A Unified Embedding for Face Recognition and Clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
- Cai, L.; Hu, Y.; Dong, J.; Zhou, S. Audio-Textual Emotion Recognition Based on Improved Neural Networks. Math. Probl. Eng. 2019, 2019, 2593036. [Google Scholar] [CrossRef]
- Ghauri, J.A.; Hakimov, S.; Ewerth, R. Classification of Important Segments in Educational Videos using Multimodal Features. arXiv 2020, arXiv:2010.13626. [Google Scholar] [CrossRef]
- Poria, S.; Cambria, E.; Hazarika, D.; Mazumder, N.; Zadeh, A.; Morency, L.P. Multi-level Multiple Attentions for Contextual Multimodal Sentiment Analysis. In Proceedings of the 2017 IEEE International Conference on Data Mining (ICDM), New Orleans, LA, USA, 18–21 November 2017; pp. 1033–1038. [Google Scholar] [CrossRef]
- Tsai, Y.H.H.; Bai, S.; Liang, P.P.; Kolter, J.Z.; Morency, L.P.; Salakhutdinov, R. Multimodal Transformer for Unaligned Multimodal Language Sequences. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; pp. 6558–6569. [Google Scholar] [CrossRef]
- Akbari, H.; Yuan, L.; Qian, R.; Chuang, W.H.; Chang, S.F.; Cui, Y.; Gong, B. VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. In Advances in Neural Information Processing Systems; Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2021; Volume 34, pp. 24206–24221. [Google Scholar]
- Lindquist, K.A.; Wager, T.D.; Kober, H.; Bliss-Moreau, E.; Barrett, L.F. The brain basis of emotion: A meta-analytic review. Behav. Brain. Sci. 2012, 35, 121–143. [Google Scholar] [CrossRef]
- Aybek, S.; Nicholson, T.R.; O’Daly, O.; Zelaya, F.; Kanaan, R.A.; David, A.S. Emotion-Motion Interactions in Conversion Disorder: An fMRI Study. PLoS ONE 2015, 10, e0123273. [Google Scholar] [CrossRef] [PubMed]
- Tang, J.; LeBel, A.; Jain, S.; Huth, A.G. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat. Neurosci. 2023, 26, 858–866. [Google Scholar] [CrossRef] [PubMed]
- Rogers, J. Non-invasive continuous language decoding. Nat. Rev. Neurosci. 2023, 24, 393. [Google Scholar] [CrossRef] [PubMed]
- Takagi, Y.; Nishimoto, S. High-Resolution Image Reconstruction with Latent Diffusion Models From Human Brain Activity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 14453–14463. [Google Scholar]
- Zhang, J.; Li, C.; Liu, G.; Min, M.; Wang, C.; Li, J.; Wang, Y.; Yan, H.; Zuo, Z.; Huang, W.; et al. A CNN-transformer hybrid approach for decoding visual neural activity into text. Comput. Methods Programs Biomed. 2022, 214, 106586. [Google Scholar] [CrossRef] [PubMed]
- Luo, S.; Rabbani, Q.; Crone, N.E. Brain-Computer Interface: Applications to Speech Decoding and Synthesis to Augment Communication. Neurotherapeutics 2022, 19, 263–273. [Google Scholar] [CrossRef] [PubMed]
- Wolfe, F.H.; Auzias, G.; Deruelle, C.; Chaminade, T. Focal atrophy of the hypothalamus associated with third ventricle enlargement in autism spectrum disorder. NeuroReport 2015, 26, 1017–1022. [Google Scholar] [CrossRef]
- Gössl, C.; Fahrmeir, L.; Auer, D.P. Bayesian modeling of the hemodynamic response function in BOLD fMRI. NeuroImage 2001, 14, 140–148. [Google Scholar] [CrossRef]
- Fan, L.; Li, H.; Zhuo, J.; Zhang, Y.; Wang, J.; Chen, L.; Yang, Z.; Chu, C.; Xie, S.; Laird, A.R.; et al. The Human Brainnetome Atlas: A New Brain Atlas Based on Connectional Architecture. Cereb. Cortex 2016, 26, 3508–3526. [Google Scholar] [CrossRef]
- Sun, M.; Li, J.; Feng, H.; Gou, W.; Shen, H.; Tang, J.; Yang, Y.; Ye, J. Multi-modal Fusion Using Spatio-temporal and Static Features for Group Emotion Recognition. In Proceedings of the 2020 International Conference on Multimodal Interaction, Online, 25–29 October 2020; ACM: New York, NY, USA, 2020; pp. 835–840. [Google Scholar] [CrossRef]
- Zhu, B.; Lan, X.; Guo, X.; Barner, K.E.; Boncelet, C. Multi-rate Attention Based GRU Model for Engagement Prediction. In Proceedings of the 2020 International Conference on Multimodal Interaction, Online, 25–29 October 2020; ACM: New York, NY, USA, 2020; pp. 841–848. [Google Scholar] [CrossRef]
- Oliveira, L.M.R.; Shuen, L.C.; Da Cruz, A.K.B.S.; Soares, C.D.S. Summarization of Educational Videos with Transformers Networks. In Proceedings of the 29th Brazilian Symposium on Multimedia and the Web, Online, 25–29 October 2023; ACM: New York, NY, USA, 2023; pp. 137–143. [Google Scholar] [CrossRef]
- Dror, R.; Shlomov, S.; Reichart, R. Deep Dominance - How to Properly Compare Deep Neural Models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 2773–2785. [Google Scholar] [CrossRef]
- Del Barrio, E.; Cuesta-Albertos, J.A.; Matrán, C. An Optimal Transportation Approach for Assessing Almost Stochastic Order. In The Mathematics of the Uncertain; Gil, E., Gil, E., Gil, J., Gil, M.Á., Eds.; Series Title: Studies in Systems, Decision and Control; Springer International Publishing: Cham, Switzerland, 2018; Volume 142, pp. 33–44. [Google Scholar] [CrossRef]
- Ulmer, D.; Hardmeier, C.; Frellsen, J. deep-significance - Easy and Meaningful Statistical Significance Testing in the Age of Neural Networks. arXiv 2022, arXiv:2204.06815. [Google Scholar] [CrossRef]
- Hosseini, S.S.; Yamaghani, M.R.; Poorzaker Arabani, S. Multimodal modelling of human emotion using sound, image and text fusion. SIViP Signal Image Video Process. 2024, 18, 71–79. [Google Scholar] [CrossRef]
- Chaminade, T.; Spatola, N. Perceived facial happiness during conversation correlates with insular and hypothalamus activity for humans, not robots. Front. Psychol. 2022, 13, 871676. [Google Scholar] [CrossRef] [PubMed]
- Decety, J.; Chaminade, T. When the self represents the other: A new cognitive neuroscience view on psychological identification. Conscious. Cogn. 2003, 12, 577–596. [Google Scholar] [CrossRef] [PubMed]
- Alsabhan, W. Human–Computer Interaction with a Real-Time Speech Emotion Recognition with Ensembling Techniques 1D Convolution Neural Network and Attention. Sensors 2023, 23, 1386. [Google Scholar] [CrossRef] [PubMed]
MSE per ROI:

| Models | lvA | rvA | ldA | rdA | lvD | rvD | ldD | rdD | ldG | rdG | lH | rH | lMA | rMA | lLA | rLA | Hy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CURRENT | 0.0321 | 0.0318 | 0.0323 | 0.0322 | 0.0334 | 0.0324 | 0.0280 | 0.0252 | 0.0334 | 0.0328 | 0.0322 | 0.0286 | 0.0256 | 0.0304 | 0.0304 | 0.0326 | 0.0324 |
| MLP | 0.0365 | 0.0356 | 0.0337 | 0.0342 | 0.0400 | 0.0360 | 0.0294 | 0.0321 | 0.0321 | 0.0338 | 0.0332 | 0.0311 | 0.0306 | 0.0289 | 0.0287 | 0.0337 | 0.0338 |
| MODEL 1 | 0.0328 | 0.0329 | 0.0329 | 0.0332 | 0.0353 | 0.0337 | 0.0289 | 0.0313 | 0.0338 | 0.0340 | 0.0343 | 0.0297 | 0.0270 | 0.0298 | 0.0296 | 0.0335 | 0.0339 |
| MODEL 2 | 0.0322 | 0.0320 | 0.0326 | 0.0324 | 0.0337 | 0.0331 | 0.0284 | 0.0309 | 0.0328 | 0.0320 | 0.0313 | 0.0287 | 0.0254 | 0.0293 | 0.0285 | 0.0340 | 0.0333 |
| MODEL 3 | 0.0331 | 0.0325 | 0.0327 | 0.0326 | 0.0346 | 0.0337 | 0.0286 | 0.0309 | 0.0326 | 0.0325 | 0.0322 | 0.0289 | 0.0254 | 0.0287 | 0.0285 | 0.0347 | 0.0332 |
MSE per ROI:

| Models | lvA | rvA | ldA | rdA | lvD | rvD | ldD | rdD | ldG | rdG | lH | rH | lMA | rMA | lLA | rLA | Hy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CURRENT | 0.0342 | 0.0313 | 0.0318 | 0.0298 | 0.0335 | 0.0301 | 0.0286 | 0.0302 | 0.0315 | 0.0348 | 0.0317 | 0.0271 | 0.0290 | 0.0306 | 0.0354 | 0.0318 | 0.0314 |
| MLP | 0.0577 | 0.0499 | 0.0458 | 0.0424 | 0.0414 | 0.0387 | 0.0364 | 0.0320 | 0.0341 | 0.0371 | 0.0309 | 0.0519 | 0.0695 | 0.0258 | 0.0304 | 0.0387 | 0.0396 |
| MODEL 1 | 0.0404 | 0.0361 | 0.0414 | 0.0349 | 0.0350 | 0.0317 | 0.0315 | 0.0310 | 0.0306 | 0.0324 | 0.0304 | 0.0362 | 0.0424 | 0.0242 | 0.0283 | 0.0340 | 0.0355 |
| MODEL 2 | 0.0365 | 0.0329 | 0.0332 | 0.0302 | 0.0330 | 0.0304 | 0.0293 | 0.0303 | 0.0311 | 0.0322 | 0.0313 | 0.0293 | 0.0373 | 0.0257 | 0.0289 | 0.0325 | 0.0316 |
| MODEL 3 | 0.0352 | 0.0324 | 0.0322 | 0.0300 | 0.0336 | 0.0305 | 0.0283 | 0.0306 | 0.0318 | 0.0324 | 0.0309 | 0.0291 | 0.0334 | 0.0257 | 0.0282 | 0.0324 | 0.0317 |
| Models | A | V | B | MSE (HHI) | MSE (HRI) |
|---|---|---|---|---|---|
| AVB | ✓ | ✓ | ✓ | 0.0317 | 0.0314 |
| Bimodal | ✓ | ✓ | ✗ | 0.0337 | 0.0352 |
| | ✓ | ✗ | ✓ | 0.0370 | 0.0328 |
| | ✗ | ✓ | ✓ | 0.0339 | 0.0367 |
| Unimodal | ✓ | ✗ | ✗ | 0.0347 | 0.0338 |
| | ✗ | ✓ | ✗ | 0.0426 | 0.0614 |
| | ✗ | ✗ | ✓ | 0.0321 | 0.0319 |