Article

Emotion Recognition Based on an EEG–fNIRS Hybrid Brain Network in the Source Space

by Mingxing Hou 1,2, Xueying Zhang 3,*, Guijun Chen 3, Lixia Huang 3 and Ying Sun 3

1 College of Integrated Circuits, Taiyuan University of Technology, Taiyuan 030600, China
2 College of Computer Science and Technology, Taiyuan Normal University, Taiyuan 030619, China
3 College of Electronic Information Engineering, Taiyuan University of Technology, Taiyuan 030600, China
* Author to whom correspondence should be addressed.
Brain Sci. 2024, 14(12), 1166; https://doi.org/10.3390/brainsci14121166
Submission received: 11 October 2024 / Revised: 7 November 2024 / Accepted: 19 November 2024 / Published: 22 November 2024
(This article belongs to the Section Neurotechnology and Neuroimaging)

Abstract

Background/Objectives: Studies have shown that emotion recognition based on electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) multimodal physiological signals exhibits superior performance compared to that of unimodal approaches. Nonetheless, there remains a paucity of in-depth investigations analyzing the inherent relationship between EEG and fNIRS and constructing brain networks to improve the performance of emotion recognition. Methods: In this study, we introduce an innovative method to construct hybrid brain networks in the source space based on simultaneous EEG-fNIRS signals for emotion recognition. Specifically, we perform source localization on EEG signals to derive the EEG source signals. Subsequently, causal brain networks are established in the source space by analyzing the Granger causality between the EEG source signals, while coupled brain networks in the source space are formed by assessing the coupling strength between the EEG source signals and the fNIRS signals. The resultant causal brain networks and coupled brain networks are integrated to create hybrid brain networks in the source space, which serve as features for emotion recognition. Results: The effectiveness of our proposed method is validated on multiple emotion datasets. The experimental results indicate that the recognition performance of our approach significantly surpasses that of the baseline method. Conclusions: This work offers a novel perspective on the fusion of EEG and fNIRS signals in an emotion-evoked experimental paradigm and provides a feasible solution for enhancing emotion recognition performance.

1. Introduction

Emotion is an expression of human intelligence, exerting a crucial influence on social interactions. Emotion recognition is a key issue in affective computing and is attracting increasing attention in the field of artificial intelligence [1,2,3]; it is widely used in many domains, such as human–computer interaction, intelligent education, transportation safety, and healthcare. Physiological signals, regulated by the nervous system, are inherently difficult to conceal and disguise, rendering recognition based on these signals more dependable than that derived from non-physiological signals such as facial expressions, body postures, and voice tones [4]. Electroencephalogram (EEG), which captures the electrical field caused by neural activity through electrodes placed on the scalp [5], has been extensively explored in emotion recognition by virtue of its high temporal resolution, as well as its non-invasive and low-cost characteristics. Meanwhile, functional near-infrared spectroscopy (fNIRS) measures the concentration change of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR) associated with brain activities through optodes (transmitter and receiver) positioned on the scalp, which reflects the hemodynamic activity of the cerebral cortex and boasts high spatial resolution and robust interference resistance. Consequently, fNIRS effectively mitigates the spatial resolution limitations of EEG signals and furnishes supplementary insights into neuronal activity [6]. As a result, research on emotion recognition based on the joint analysis of simultaneous EEG–fNIRS signals is garnering interest among researchers.
In current studies on EEG-based emotion recognition, researchers have proposed many features centered on the intrinsic attributes of EEG signals, including temporal, spectral, time–frequency, and spatial features [7,8,9,10]. However, these features cannot characterize the information transmission and interaction between brain regions during emotional cognition. To address this issue, researchers proposed EEG brain network features, which facilitated the examination of coordination mechanisms among brain regions from a macroscopic perspective and enhanced the performance of emotion recognition [11,12,13,14,15]. An EEG brain network is constructed by treating EEG signals as nodes and the connectivity between nodes as edges. Depending on the connectivity metrics, brain networks are categorized into functional brain networks, established by undirected metrics, such as correlation and mutual information, and effective brain networks, constructed through directed measures like transfer entropy and Granger causality [16]. Effective brain networks not only reflect the interactions between nodes, but also indicate the information flow direction, thereby elucidating the information exchange pattern among brain regions. Granger causality (GC) analysis, with no prior information required, could calculate the causal relationship between time series, which was then used to construct the EEG causal brain networks, garnering some achievements in emotion recognition [17,18,19,20]. Nevertheless, existing EEG-based causal brain networks have overlooked the volume conduction effect inherent in EEG signals [21], which reduced the precision in reflecting the information transmission pattern across brain regions under various emotions, thus constraining further improvements in emotion recognition performance. To mitigate the impact of the volume conduction effect, researchers have applied the source localization technique to EEG signals, yielding EEG source signals that represent cortical electrical activity more accurately, and have conducted subsequent analyses in the source space. Chen et al. [22] extracted six temporal and spectral features from EEG source signals for emotion recognition, realizing notable improvements compared to the features derived from EEG signals. Similarly, Becker et al. [23] extracted high-order cross features, statistical features, and spectral features from both EEG source signals and EEG signals for emotion recognition, further substantiating the superiority of features from EEG source signals. In this paper, we propose to construct more precise causal brain networks from EEG source signals using Granger causality analysis, termed as causal brain networks in the source space, which hold significant promise for advancing the performance of emotion recognition.
Additionally, owing to the good compatibility and complementarity, EEG and fNIRS multimodal fusion has become a research hotspot in many fields [24,25,26,27]. In recent years, the joint analysis of EEG–fNIRS for emotion recognition has also drawn much attention from researchers. Currently, the absence of publicly available emotional datasets containing concurrent EEG–fNIRS signals has limited research to a select few who have conducted preliminary investigations using self-built datasets. These studies have demonstrated that emotion recognition utilizing EEG–fNIRS multimodal signals outperforms recognition performed using unimodality [28,29,30,31]. Nevertheless, the existing EEG–fNIRS fusion methods are mainly confined to data-level or feature-level fusion by machine learning models, wherein features are independently extracted from each modality. There remains a lack of deep exploration into the intrinsic relationship between EEG and fNIRS signals. Therefore, the pursuit of a novel EEG–fNIRS fusion method for emotion recognition is of profound significance.
Research on the intrinsic relationship between EEG and fNIRS is anchored in the concept of neurovascular coupling, a well-regulated physiological process wherein neural activity in the brain is inherently accompanied by fluctuations in blood flow [32]. In particular, upon neuronal activation, blood flow is directed towards the active region to satisfy the heightened demand for glucose and oxygen, thereby inducing detectable fluctuations in hemoglobin concentration through fNIRS. This phenomenon reflects the close relationship between neuronal activity and hemodynamic changes in the brain, providing a new perspective for EEG–fNIRS fusion. Current studies on the coupling relationship between EEG and fNIRS primarily concentrate on motor imagery tasks, where the experimental stimuli of short duration and constant intensity can be modeled as square wave functions carried by EEG information. By convolving the modeling function with the canonical hemodynamic response function (HRF), the predicted fNIRS signal is obtained and used as a design matrix for fitting the measured fNIRS signals within a general linear model (GLM), yielding regression coefficients that reflect the EEG–fNIRS coupling relationship. Li et al. [33] employed event-related potentials (ERPs) from specific frequency EEG signals to model the experimental stimuli, while Gao et al. [34] modeled the experimental stimuli using the peak and latency of the time-varying power of the channel-averaged EEG signal. However, in the emotion-evoked experimental paradigm, the stimuli are generally in the form of audio or video clips, with relatively prolonged duration (1–2 min) and continuously changing intensity, precluding their exact description by square wave functions. Therefore, this paper introduces a new method for modeling the stimuli in an emotion-evoked experimental paradigm. By leveraging the neurovascular coupling characteristic, the coupling strength between the EEG source signal and the fNIRS signal is calculated, serving as a new connectivity metric to construct a coupled brain network in the source space for emotion recognition.
In summary, in this paper, we propose an innovative method for constructing hybrid brain networks in the source space from concurrent EEG–fNIRS signals for emotion recognition. First, we impose source localization on EEG signals to gain more precise cortical electrical activity, termed EEG source signals, aiming at alleviating the impact of the volume conduction effect. Then, Granger causality analysis is performed on the EEG source signals to construct causal brain networks in the source space. Furthermore, according to the neurovascular coupling characteristic, we introduce a novel approach for calculating coupling strengths between EEG source signals and fNIRS signals, thereby constructing coupled brain networks in the source space under an emotion-evoked experimental paradigm. Finally, by merging the two brain networks, hybrid brain networks are generated in the source space, making the most of the causal relationship among the EEG source signals and the coupling relationship between the EEG source signals and fNIRS signals to promote emotion recognition performance.
The main contributions are summarized as follows:
(1)
A novel EEG–fNIRS fusion method for constructing coupled brain networks in an emotion-evoked experimental paradigm is proposed.
(2)
Emotion recognition based on hybrid brain networks, achieved by integrating causal brain networks and coupled brain networks in the source space, is explored for the first time in this paper.
(3)
Evaluations on our self-built dataset (ENTER) and public datasets (SEED-IV, DEAP) show the superior performance of the proposed method.
The rest of the paper is organized as follows. Section 2 introduces EEG–fNIRS data acquisition and preprocessing; Section 3 elaborates on the proposed method; the experimental results and analysis are presented in Section 4. Finally, Section 5 concludes the paper and suggests promising directions for future research.

2. Data Acquisition and Preprocessing

In this section, we provide descriptions regarding the process of data acquisition for the ENTER dataset and data preprocessing.

2.1. Data Acquisition

To leverage the complementary advantages of EEG and fNIRS for emotion recognition research, our team simultaneously collected EEG–fNIRS signals in an emotion-evoked experiment and built an emotional dataset, named ENTER [31]. A more detailed description of, and access to, the dataset is available online at https://gitee.com/tycgj/enter (accessed on 10 August 2024). The data for each subject are stored in a separate folder named with the subject ID. Each folder contains two subfolders, named “EEG” and “FNIRS”, holding the EEG and fNIRS data from the 60 trials saved as MAT files, respectively.
  • Emotion-inducing materials: 60 videos (1–2 min long) were carefully selected to induce four types of emotions, including sadness, happiness, calm, and fear (there are 15 videos pertaining to each emotion).
  • Subjects: 50 college students, 25 male and 25 female, were recruited for emotion data collection. Prior to the experiment, all subjects were informed of the experimental purpose, procedures, and important notes, and all subjects provided written informed consent.
  • Signal acquisition equipment: EEG signals were acquired at 1000 Hz using the ESI NeuroScan system (Compumedics Ltd., Victoria, Australia), which comprises 62 channels placed across the entire brain region. Concurrently, a portable near-infrared brain functional imaging system, NirSmart, was used to collect fNIRS signals at 11 Hz, with 18 channels created by adjacent transmitter–receiver pairs, which are distributed only in the frontal and temporal lobes. The experimental scenario is shown in Figure 1a, and a schematic illustration of the positions of the EEG electrodes and fNIRS optodes is shown in Figure 1b.
Figure 1. Experimental setup for EEG–fNIRS data acquisition. (a) Experimental scenario; (b) positions of EEG electrodes and fNIRS optodes.
All data acquisition experiments were executed in a screened chamber. During the experiment, a subject was seated in a comfortable chair and engaged in emotion-evoked tasks, remaining quiet and relaxed while endeavoring to minimize body movements and blinking. The experimental paradigm is shown in Figure 2. Each subject completed 60 trials. In each trial, a 5 s start prompt was followed by a continuous video clip designed to induce a specific emotion. Once the video ended, the subject was given 30 s for self-assessment, rating the emotional experience using a nine-point scale regarding arousal and valence to confirm whether the corresponding emotion was successfully induced. After a trial, the subject took a break for 2–3 min. Upon completion of all 60 trials, each subject yielded 60 sets of concurrent EEG data from 62 channels and fNIRS data from 18 channels.

2.2. Data Preprocessing

The collected EEG data typically includes irrelevant signals such as ocular movement artifacts, muscle artifacts, power line noise, and electromagnetic disturbances. Therefore, data preprocessing is indispensable. The EEG signals undergo re-referencing and baseline correction, succeeded by artifact removal through independent component analysis (ICA). Thereafter, the signals are processed with a bandpass filter from 0.5 to 45 Hz and downsampled to 200 Hz. The recorded fNIRS optical density signals are subjected to baseline correction and filtered with a bandpass of 0.01–0.2 Hz. Subsequently, these signals are converted to concentration changes of HbO, according to the Modified Beer–Lambert law [35], and finally upsampled to 200 Hz to match the EEG signals.
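For illustration only, the EEG part of this preprocessing chain could be sketched with MNE-Python as follows. This is not the authors' MATLAB implementation: loading the ENTER MAT files into an mne.io.Raw object is omitted, and the average reference, the ICA component count, the 1 Hz high-pass used only for ICA fitting, and the components marked for exclusion are placeholders and assumptions.

```python
import mne

def preprocess_eeg(raw):
    """Sketch of the EEG preprocessing described above: re-referencing, ICA artifact
    removal, 0.5-45 Hz bandpass, and downsampling to 200 Hz. `raw` is a preloaded
    mne.io.Raw object built from the ENTER MAT files (loading step not shown)."""
    raw.set_eeg_reference('average')                              # re-referencing (reference choice assumed)
    ica = mne.preprocessing.ICA(n_components=20, random_state=0)  # component count is an assumption
    ica.fit(raw.copy().filter(l_freq=1.0, h_freq=None))           # 1 Hz high-pass copy for ICA fitting (common practice)
    ica.exclude = [0, 1]                                          # placeholder: artifact components chosen by inspection
    ica.apply(raw)
    raw.filter(l_freq=0.5, h_freq=45.0)                           # 0.5-45 Hz bandpass
    raw.resample(200)                                             # downsample to 200 Hz
    return raw
```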

3. Proposed Method

In this paper, we propose an innovative method for constructing hybrid brain networks in the source space from concurrent EEG–fNIRS signals for emotion recognition. The overall flowchart of our approach is shown in Figure 3. First, we apply Granger causality analysis to EEG source signals, obtained through the source localization technique, to establish causal brain networks. Then, the coupling strengths between the EEG source signals and the fNIRS signals are calculated to generate coupled brain networks. Finally, we integrate causal and coupled brain networks to create hybrid brain networks in the source space, which are fed into a recognition model as features for emotion classification.

3.1. Causal Brain Networks Construction in the Source Space

This section depicts the source localization technique for estimating EEG source signals from EEG signals [36], as well as the process of constructing causal brain networks from EEG source signals, namely causal brain networks in the source space.

3.1.1. Source Localization

When cortical electrical activities, modeled as dipoles, occur, EEG signals can be detected on the scalp. The relationship between dipoles J and EEG signals X can be described as follows:
$$X = LJ + \delta \qquad (1)$$
where $X \in \mathbb{R}^{u \times n}$ represents the EEG signals with $u$ channels and $n$ time samples, $J \in \mathbb{R}^{v \times n}$ represents the $v$ dipoles with $n$ time samples in the cerebral cortex, $\delta$ represents noise, and $L \in \mathbb{R}^{u \times v}$ represents the lead field matrix, which describes the electric field generated by a unit dipole and can be calculated from the parameters of the head model. In application, the EEG signal $X$ is measurable; hence, $J$ can be estimated from $X$, which is referred to as the inverse problem. Generally, $u \ll v$, which renders Equation (1) highly underdetermined and implies that numerous different combinations of dipoles can produce an identical electric field distribution on the scalp. Therefore, additional constraints are required for the solution. A prevalent approach is to minimize the residual function, as follows:
$$R(J) = \alpha \lVert X - LJ \rVert^{2} + J^{T} \Sigma_0 J \qquad (2)$$
where the first term on the right-hand side represents the reconstruction error, quantifying the difference between the measured EEG signals and those reconstructed from the dipoles; the second term is a regularization term used to address the underdetermined problem; $\Sigma_0$ is a regularization matrix; and $\alpha$ controls the weight between the reconstruction error and the regularization term.
Different regularization matrices will produce different solutions for source localization. When $\Sigma_0$ is the identity matrix, the MNE (minimum norm estimate) [37] solution of Equation (1) is given by the following:
$$J = \Sigma_0^{-1} L^{T} \left( L \Sigma_0^{-1} L^{T} + \alpha^{-1} I \right)^{-1} X \qquad (3)$$
where $I$ denotes the identity matrix.
This study employs the MNE algorithm to determine the strengths of the dipoles and generate time series. Subsequently, the standard Desikan–Killiany–Tourville (DKT) atlas [38] is adopted to partition the cortical surface into 62 regions. The schematic diagram of the DKT atlas is shown in Figure 4.
After averaging the time series of all the dipoles within a region, an EEG source signal is generated. These EEG source signals from all regions can be represented as $S = [s_1(t), s_2(t), \ldots, s_c(t)] \in \mathbb{R}^{n \times c}$, where $c$ and $n$ denote the number of EEG source signals and the number of time samples, respectively. The temporal resolution of the EEG source signals is the same as that of the EEG signals because they share the same number of time samples $n$; however, the EEG source signals have a higher spatial resolution than the EEG signals, primarily due to the reduced impact of the volume conduction effect.
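A hedged MNE-Python sketch of this source-localization step is given below; it is not the authors' implementation. The forward model, the noise covariance, the fsaverage anatomy, and the "aparc.DKTatlas" parcellation name are assumptions about the setup, and the lambda2 value stands in for the regularization weight $\alpha$ of Equation (2).

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse_raw

def source_time_courses(raw, fwd, noise_cov, subjects_dir):
    """Estimate EEG source signals: MNE inverse solution (Eq. (3)) followed by
    averaging dipole time series within each DKT region (62 cortical regions)."""
    inv = make_inverse_operator(raw.info, fwd, noise_cov)
    stc = apply_inverse_raw(raw, inv, lambda2=1.0 / 9.0, method="MNE")   # MNE solution
    labels = mne.read_labels_from_annot(
        "fsaverage", parc="aparc.DKTatlas", subjects_dir=subjects_dir)    # DKT atlas regions (annot name assumed)
    # Average the dipole time series within each region -> one source signal per region
    S = mne.extract_label_time_course(stc, labels, inv["src"], mode="mean")
    return S  # shape: (n_regions, n_times)
```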

3.1.2. Causal Brain Networks Construction

Before constructing causal brain networks, the EEG source signals are first segmented into samples using a 3 s rectangular window sliding with a 1.5 s overlap. It has been shown that this segmentation scheme achieves superior performance in emotion recognition [20]. In this subsection, a sample of EEG source signals is expressed as $S = [s_1(t), s_2(t), \ldots, s_c(t)] \in \mathbb{R}^{r \times c}$, where $r$ denotes the signal length. Next, Granger causality analysis is performed on all samples to construct causal brain networks in the source space. It is worth noting that Granger causality analysis assumes stationarity of the time series, meaning that the statistical attributes of a time series (mean and variance) remain constant over time. Hence, a detrending operation is necessary before conducting the analysis.
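A minimal Python sketch of this segmentation and detrending step is shown below, assuming the 200 Hz signals described in Section 2.2; it is illustrative rather than the authors' code.

```python
import numpy as np
from scipy.signal import detrend

def segment(S, fs=200, win_s=3.0, overlap_s=1.5):
    """Cut source signals S of shape (n_times, c) into detrended 3 s windows
    with a 1.5 s overlap, as described above."""
    win = int(win_s * fs)
    step = int((win_s - overlap_s) * fs)
    samples = []
    for start in range(0, S.shape[0] - win + 1, step):
        seg = S[start:start + win, :]
        samples.append(detrend(seg, axis=0))   # remove linear trend for stationarity
    return np.stack(samples)                    # shape: (n_samples, r, c)
```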
Granger causality analysis employs regression models to determine the predictive capability of one time series on another time series [39]. Specifically, the presence of a causal relationship between two time series is indicated when the inclusion of one series is beneficial for predicting the other series. To further illustrate, we suppose that two time series $s_1(t)$ and $s_2(t)$ can be described as follows:
$$s_1(t) = \sum_{k=1}^{p} a_{1,k}\, s_1(t-k) + \xi_1(t) \qquad (4)$$
$$s_1(t) = \sum_{k=1}^{p} b_{1,k}\, s_1(t-k) + \sum_{k=1}^{p} b_{2,k}\, s_2(t-k) + \eta_1(t) \qquad (5)$$
where $t$ denotes time, $k$ denotes the time lag, $a$ and $b$ denote regression coefficients, $\xi_1$ and $\eta_1$ denote the residuals (prediction errors), and $p$ is called the model order, which denotes the maximum time lag and can be determined by the Bayesian information criterion (BIC) [40].
In the autoregressive model described by Equation (4), the current observation of $s_1(t)$ is predicted from its own past observations. Conversely, in the vector regression model described by Equation (5), the current observation of $s_1(t)$ is predicted from the past observations of both itself and $s_2(t)$. If the variance of $\eta_1$ is less than that of $\xi_1$, meaning that including $s_2(t)$ enhances the prediction accuracy of $s_1(t)$, then $s_2(t)$ is said to Granger-cause $s_1(t)$, expressed as $s_2 \rightarrow s_1$. The magnitude of causality is quantified by the ratio of the residual variances of the two models, as follows:
$$F_{s_2 \rightarrow s_1} = \ln \frac{\operatorname{var}(\xi_1)}{\operatorname{var}(\eta_1)} \qquad (6)$$
where $\operatorname{var}(\cdot)$ denotes the variance.
After conducting a Granger causality analysis on any two EEG source signals within a sample, a causal matrix $G_S$ is obtained:
$$G_S = \begin{bmatrix} g_{11} & g_{12} & \cdots & g_{1c} \\ g_{21} & g_{22} & \cdots & g_{2c} \\ \vdots & \vdots & \ddots & \vdots \\ g_{c1} & g_{c2} & \cdots & g_{cc} \end{bmatrix} \qquad (7)$$
where $g_{ij}$ represents the causality measurement of $s_i \rightarrow s_j$, indicating the direction of information transmission. Generally, $g_{ij}$ is not equal to $g_{ji}$, thereby resulting in an asymmetric $G_S$. The pseudocode for calculating a causal matrix in the source space is provided in Algorithm 1.
Algorithm 1: Calculation of a causal matrix in the source space.
  • Input: A sample of EEG source signals $S = [s_1(t), s_2(t), \ldots, s_c(t)] \in \mathbb{R}^{r \times c}$
Output: $G_S$
1: for $i = 1, \ldots, c$ do
2:       for $j = 1, \ldots, c$ do
3:             Calculate the residual $\xi_1$ of the autoregressive model by Equation (4).
4:             Calculate the residual $\eta_1$ of the vector regression model by Equation (5).
5:             Calculate the Granger causality $g_{ij}$ between the $i$th and $j$th EEG source signals by Equation (6).
6:       end for
7: end for
When $G_S$ is used as an adjacency matrix, a causal brain network in the source space can be constructed, with the $i$th and $j$th EEG source signals serving as nodes and $g_{ij}$ serving as a directed edge connecting the two nodes.
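As a concrete illustration of Algorithm 1, the following Python sketch estimates the pairwise Granger causality of Equations (4)–(6) with ordinary least squares. It is not the authors' MATLAB implementation; the fixed model order p = 5 is an illustrative assumption (the paper selects p by BIC), and leaving the diagonal at zero is a convention rather than part of the original method.

```python
import numpy as np

def granger_causality(x, y, p=5):
    """F_{y->x} of Eq. (6): does the past of y improve the prediction of x?"""
    n = len(x)
    target = x[p:]
    # Restricted model, Eq. (4): past of x only.
    X_r = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    # Full model, Eq. (5): past of x and past of y.
    X_f = np.column_stack([X_r] + [y[p - k:n - k] for k in range(1, p + 1)])
    res_r = target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]
    res_f = target - X_f @ np.linalg.lstsq(X_f, target, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

def causal_matrix(sample, p=5):
    """Adjacency matrix G_S of Eq. (7) for one detrended sample of shape (r, c)."""
    c = sample.shape[1]
    G = np.zeros((c, c))
    for i in range(c):
        for j in range(c):
            if i != j:  # self-causality left at zero (a convention, not from the paper)
                G[i, j] = granger_causality(sample[:, j], sample[:, i], p)
    return G
```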

3.2. Coupled Brain Networks Construction in the Source Space

Here, we introduce an innovative EEG–fNIRS fusion method founded on the intrinsic neurovascular coupling relationship between the two signals. Given that emotional intensity fluctuates over time and that the brain response to the same stimulus exhibits regional disparities in the emotion-evoked paradigm, we use the time-varying powers of all EEG source signals to model the experimental stimuli. Thereafter, the coupling strengths between the EEG source signals and the fNIRS signals are calculated to derive a coupling matrix, facilitating the construction of a coupled brain network in the source space.
For the EEG source signals $S = [s_1(t), s_2(t), \ldots, s_c(t)] \in \mathbb{R}^{n \times c}$ and the measured fNIRS signals $Y = [y_1(t), y_2(t), \ldots, y_d(t)] \in \mathbb{R}^{n \times d}$, where $d$ denotes the number of fNIRS channels, the overall process of calculating a coupling matrix in the source space is shown in Figure 5.
Initially, we compute the time-frequency power spectrum of each EEG source signal $s_i(t) \in \mathbb{R}^{n \times 1}$ $(i = 1, 2, \ldots, c)$ using the short-time Fourier transform (STFT):
$$P_i(t, f) = \left| \int s_i(\tau)\, h(\tau - t)\, e^{-j 2 \pi f \tau}\, d\tau \right|^{2} \qquad (8)$$
where $t$ denotes time; $f$ denotes frequency; $h(\tau - t)$ denotes the window function.
After summing the power of $P_i(t, f)$ over the frequency dimension and normalizing, the normalized time-varying power $P_i(t)$ is obtained and regarded as the $i$th modeling signal for the corresponding experimental stimulus:
$$P_i(t) = \operatorname{norm}\!\left( \sum_{f = f_{\min}}^{f_{\max}} P_i(t, f) \right) \qquad (9)$$
where $f_{\max}$ and $f_{\min}$ denote the upper and lower limits of the frequency range of the EEG source signal; $\operatorname{norm}(\cdot)$ denotes the normalization operator.
Subsequently, $P_i(t)$ is convolved with the canonical hemodynamic response function (HRF) to derive the predicted fNIRS signal $\tilde{y}_i(t)$ for the $i$th modeling signal, which can be described as follows:
$$\tilde{y}_i(t) = P_i(t) * \mathrm{HRF}(t) \qquad (10)$$
The first $n$ values of the convolution result are taken to derive $\tilde{y}_i(t) \in \mathbb{R}^{n \times 1}$, ensuring an aligned signal length. The expression of $\mathrm{HRF}(t)$ is as follows:
$$\mathrm{HRF}(t) = g(t, 6) - \frac{1}{6}\, g(t, 16) \qquad (11)$$
where
$$g(t, d) = \frac{t^{\,d-1} e^{-t}}{\Gamma(d)} \qquad (12)$$
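The per-signal prediction of Equations (8)–(12) can be sketched in Python as follows. This is an illustrative reconstruction rather than the authors' implementation: the STFT window length, the min–max form of $\operatorname{norm}(\cdot)$, the 30 s HRF support, and the interpolation back onto the 200 Hz time axis are all assumptions.

```python
import numpy as np
from scipy.signal import stft
from scipy.special import gamma

FS = 200  # sampling rate of the EEG source and upsampled fNIRS signals

def canonical_hrf(fs=FS, duration=30.0):
    """Canonical double-gamma HRF of Eqs. (11)-(12), sampled at fs over `duration` seconds."""
    t = np.arange(0.0, duration, 1.0 / fs)
    g = lambda t, d: t ** (d - 1) * np.exp(-t) / gamma(d)
    return g(t, 6) - g(t, 16) / 6.0

def predicted_fnirs(s, fs=FS, fmin=0.5, fmax=45.0):
    """Predicted fNIRS signal for one EEG source signal s, following Eqs. (8)-(10)."""
    n = len(s)
    f, t_stft, Z = stft(s, fs=fs, nperseg=fs)         # 1 s windows, 0.5 s hop (window length assumed)
    band = (f >= fmin) & (f <= fmax)
    p = (np.abs(Z[band, :]) ** 2).sum(axis=0)         # power summed over frequency, Eq. (9)
    p = (p - p.min()) / (p.max() - p.min() + 1e-12)   # min-max normalization (assumed form of norm(.))
    p = np.interp(np.arange(n) / fs, t_stft, p)       # back onto the original time axis
    return np.convolve(p, canonical_hrf(fs))[:n]      # convolve with HRF, keep the first n values, Eq. (10)
```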
When all $\tilde{y}_i(t)$ have been obtained, we assemble them into a matrix $\tilde{Y} = [\tilde{y}_1(t), \tilde{y}_2(t), \ldots, \tilde{y}_c(t)] \in \mathbb{R}^{n \times c}$ representing all the predicted fNIRS signals for the current experimental stimulus. Thereafter, we segment both the predicted and measured fNIRS signals into $l$ samples using a sliding rectangular window of 3 s with an overlap of 1.5 s. Next, the coupling strengths are computed for all samples to construct coupled brain networks in the source space. Specifically, for a sample containing the predicted fNIRS signals $\tilde{Y} = [\tilde{y}_1(t), \tilde{y}_2(t), \ldots, \tilde{y}_c(t)] \in \mathbb{R}^{r \times c}$ and the measured fNIRS signals $Y = [y_1(t), y_2(t), \ldots, y_d(t)] \in \mathbb{R}^{r \times d}$, $\tilde{Y}$ is used as the design matrix $D$ to fit $Y$ within a general linear model, which can be formulated as follows:
$$Y = D\beta + \varepsilon = \begin{bmatrix} \tilde{y}_{11} & \cdots & \tilde{y}_{1c} \\ \vdots & \ddots & \vdots \\ \tilde{y}_{r1} & \cdots & \tilde{y}_{rc} \end{bmatrix} \begin{bmatrix} \beta_{11} & \cdots & \beta_{1d} \\ \vdots & \ddots & \vdots \\ \beta_{c1} & \cdots & \beta_{cd} \end{bmatrix} + \begin{bmatrix} \varepsilon_{11} & \cdots & \varepsilon_{1d} \\ \vdots & \ddots & \vdots \\ \varepsilon_{r1} & \cdots & \varepsilon_{rd} \end{bmatrix} \qquad (13)$$
where $\beta \in \mathbb{R}^{c \times d}$ represents the fitting coefficient matrix and $\varepsilon$ represents the fitting error. Since the $i$th column of the design matrix $D$ contains information from the $i$th EEG source signal, the element $\beta_{ij}$ of $\beta$ represents the coupling coefficient between the $i$th EEG source signal and the $j$th fNIRS signal.
Ultimately, we take the absolute value of each coupling coefficient as the coupling strength. Thus, the EEG–fNIRS coupling matrix $C_S$ in the source space can be described as follows:
$$C_S = \lvert \beta \rvert \qquad (14)$$
where $\lvert \beta_{ij} \rvert$ represents the coupling strength between the $i$th EEG source signal and the $j$th fNIRS signal.
The pseudocode for calculating a coupling matrix in the source space is given in Algorithm 2.
Algorithm 2: Calculation of a coupling matrix in the source space.
  • Input: EEG source signals $S = [s_1(t), s_2(t), \ldots, s_c(t)] \in \mathbb{R}^{n \times c}$ and measured fNIRS signals $Y = [y_1(t), y_2(t), \ldots, y_d(t)] \in \mathbb{R}^{n \times d}$
Output: $C_S$
1: for $i = 1, \ldots, c$ do
2:       Calculate the time-frequency power spectrum $P_i(t, f)$ of $s_i(t)$ by Equation (8).
3:       Calculate the normalized time-varying power $P_i(t)$ of $s_i(t)$ by Equation (9).
4:       Calculate the predicted fNIRS signal $\tilde{y}_i(t)$ by Equation (10).
5: end for
6: Create the matrix $\tilde{Y}$ from all $\tilde{y}_i(t)$.
7: Segment $\tilde{Y}$ and $Y$ into $l$ samples; each sample contains a segment of $\tilde{Y}$ and of $Y$.
8: for $i = 1, \ldots, l$ do
9:       Fit $Y$ within the general linear model by Equation (13).
10:     Calculate the coupling matrix in the source space $C_S$ by Equation (14).
11: end for
When $C_S$ is used as an adjacency matrix, an EEG–fNIRS coupled brain network in the source space can be constructed, with the $i$th EEG source signal and the $j$th fNIRS signal serving as nodes and $\lvert \beta_{ij} \rvert$ serving as the edge connecting the two nodes.
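A minimal sketch of the per-sample GLM fit of Equations (13)–(14) is given below, assuming an ordinary least-squares estimate of $\beta$ (the specific estimator used by the authors is not stated).

```python
import numpy as np

def coupling_matrix(Y_pred, Y_meas):
    """Coupling matrix C_S = |beta| of Eqs. (13)-(14) for one sample.

    Y_pred: (r, c) predicted fNIRS signals (the design matrix D).
    Y_meas: (r, d) measured HbO signals for the same 3 s window.
    """
    beta, *_ = np.linalg.lstsq(Y_pred, Y_meas, rcond=None)  # least-squares GLM fit (estimator assumed)
    return np.abs(beta)                                      # (c, d) coupling strengths
```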

3.3. Hybrid Brain Networks Construction in the Source Space

By concatenating the causal brain network $G_S$ and the EEG–fNIRS coupled brain network $C_S$, a hybrid brain network in the source space can be derived as follows:
$$H_S = \operatorname{concat}(G_S, C_S) \qquad (15)$$
where $\operatorname{concat}(\cdot)$ represents matrix concatenation. The hybrid brain network in the source space integrates the causal brain network and the coupled brain network by treating the EEG source signals as shared nodes. It not only captures the information interaction among brain regions during emotional cognition but also encompasses the intrinsic relationship between the two signals associated with neuronal activity in the brain, providing rich emotional information to enhance recognition performance.
During emotion recognition, each matrix representing a hybrid brain network is vectorized to constitute the feature vector. Combining the feature vectors of all samples, we can obtain a feature set, which will be employed to train and evaluate the emotion recognition model.
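In code, this feature construction reduces to a concatenation and a flattening step. In the sketch below, the column-wise concatenation of the c × c matrix $G_S$ with the c × d matrix $C_S$ and the row-major flattening are illustrative layout choices, not details specified in the paper.

```python
import numpy as np

def hybrid_feature(G_s, C_s):
    """Vectorized hybrid brain network H_S = concat(G_S, C_S), Eq. (15).

    G_s: (c, c) causal matrix; C_s: (c, d) coupling matrix.
    """
    H = np.concatenate([G_s, C_s], axis=1)   # (c, c + d): causal columns followed by coupling columns
    return H.ravel()                          # flattened feature vector for the recognition model
```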

4. Experimental Results and Analysis

This section presents abundant experimental results to validate the effectiveness of the proposed method from two aspects: performance evaluation and performance comparison. All experiments were conducted in MATLAB R2023b. To mitigate the influence of dataset partitioning, a five-fold cross-validation strategy was adopted. The mean classification accuracy per subject over the five repetitions was designated as the subject-specific recognition result, and the average recognition result across the 50 subjects was taken as the final emotion recognition accuracy. Furthermore, the experimental outcomes were compared with other methods in the literature to demonstrate the superiority of the proposed method.

4.1. Performance Evaluation

We first conduct a comparative analysis of the emotion recognition accuracy obtained with different brain network features on our self-built emotion dataset ENTER, including the following: (1) EG: EEG Granger causal brain network; (2) SG: Granger causal brain network in the source space, described by $G_S$; (3) EC: EEG–fNIRS coupled brain network; (4) SC: coupled brain network in the source space, described by $C_S$; (5) EG_EC: hybrid brain network created by integrating EG and EC; (6) SG_SC: hybrid brain network $H_S$ created by integrating SG and SC. To eliminate the impact of a particular classifier on recognition performance, two classifiers are used for emotion classification: a support vector machine (SVM) with a linear kernel and a k-nearest neighbor (KNN) classifier with 10 neighbors. The emotion recognition accuracies (%) based on the different brain network features are listed in Table 1.
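For reference, the evaluation protocol described above can be sketched in Python with scikit-learn as follows. This is illustrative rather than the MATLAB code used in the study, and the feature standardization step is an added assumption rather than a documented part of the original pipeline.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_subject(X, y):
    """Five-fold CV accuracy for one subject's feature set X (n_samples, n_features)
    and emotion labels y, using the two classifiers described above."""
    classifiers = {
        'SVM (linear)': make_pipeline(StandardScaler(), SVC(kernel='linear')),
        'KNN (k=10)': make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=10)),
    }
    return {name: cross_val_score(clf, X, y, cv=5).mean() for name, clf in classifiers.items()}
```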
From Table 1, it can be observed that, for SVM, the accuracy of SG achieves an 8.1% improvement over that of EG, while the accuracy of SC outperforms that of EC by 4.8%. Similar improvements are observed when KNN is used. Moreover, when the hybrid brain networks are used for emotion recognition, both SVM and KNN exhibit a consistent trend: the accuracies of EG_EC surpass those of EG or EC, and the accuracies of SG_SC show a marked improvement over those of SG or SC. In addition, the brain networks constructed in the source space (SG, SC, SG_SC) all achieve lower standard deviations than their corresponding counterparts (EG, EC, EG_EC). These findings suggest that (1) constructing brain networks in the source space substantially improves recognition performance, and (2) the EEG–fNIRS hybrid brain networks achieve superior recognition performance over causal or coupled brain networks alone. Notably, the proposed method (SG_SC) achieves the highest recognition accuracy and the lowest standard deviation (96.6 ± 2.08% for SVM and 91.7 ± 3.02% for KNN), verifying its superiority.
To provide insight into the recognition results across different emotions, we present the SVM recognition confusion matrices for three brain networks (SG, SC, and SG_SC) in Figure 6. As shown in Figure 6a, the SG-based results reveal that the results for happiness exhibit the lowest accuracy (91.7%), with the highest misclassification rate of 3.7% for the misidentification of happiness as sadness, manifesting the relatively poor ability of SG to distinguish between sadness and happiness. The SC-based results presented in Figure 6b show a similar trend, with the results for happiness again presenting the lowest recognition accuracy of 78.4%. The highest misclassification rate of 9.0% towards calm suggests the limited discriminative power of SC between happiness and calm. The above findings indicate a heightened susceptibility of happiness to misclassification and the discrepancy between the ability of SG and SC to distinguish between different emotions. By integrating SG and SC, the proposed SG_SC greatly diminishes the inter-emotional confusion, where the highest misclassification rate of happiness is reduced to 2.0%, as shown in Figure 6c, demonstrating an excellent discriminative capability by capitalizing on the respective advantages of each.
To intuitively explore the emotion discrimination ability of different brain network features, we further visualize the sample distribution of SG, SC, and SG_SC in 2-D feature space using t-distributed stochastic neighbor embedding (t-SNE), as shown in Figure 7. From Figure 7a, we observe that for SG, the discrimination among emotions is evident, albeit with relatively sparse intra-class distances. Conversely, as depicted in Figure 7b, SC shows a compact intra-class distribution, although it is accompanied by severe overlap among emotions, resulting in lower inter-class distinction. These findings manifest the obvious complementarity of SG and SC. In Figure 7c, SG_SC, despite retaining some inter-class overlap, obviously achieves a more compact intra-class distribution compared to that of SG and a superior inter-class distinction relative to SC, which further confirms the effectiveness of our method in promoting emotion recognition performance.

4.2. Performance Comparison

In this subsection, we validate the advantages of the proposed method from two perspectives: recognition performance in different datasets and comparison with existing methods.

4.2.1. Recognition Performance in Different Datasets

Since no publicly available emotion dataset other than our self-built ENTER includes concurrent EEG and fNIRS signals, we validate only the proposed SG, constructed from EEG source signals, on three emotion datasets: ENTER, SEED-IV [41], and DEAP [42]. The SEED-IV dataset is an EEG emotion dataset comprising 15 subjects, with emotions including happiness, sadness, fear, and neutral. For each subject, three sessions were conducted on different days, and each session contained 24 trials. In each trial, EEG signals were recorded through 62 channels while the subject watched a film clip. The DEAP dataset consists of EEG signals and peripheral physiological signals from 32 subjects for four emotion classes: high arousal-high valence (HAHV), high arousal-low valence (HALV), low arousal-high valence (LAHV), and low arousal-low valence (LALV). Every subject completed 40 trials, and the EEG signals were recorded through 32 channels. Because the SEED-IV and DEAP datasets contain no fNIRS data, we used only the EEG signals to validate the recognition performance of SG. Table 2 lists the detailed information for the three datasets.
The emotion recognition accuracies of SG and its counterpart EG on the three datasets are illustrated in Figure 8. From the results, we clearly observe a consistent superiority of SG-based emotion recognition over EG-based recognition on all three datasets. These comparative results affirm the effectiveness of the proposed SG feature extracted from EEG source signals rather than the EG feature extracted from EEG signals. The poorer performance of both EG and SG on the DEAP dataset may result from its smaller number of EEG channels, which provide only limited emotional information and are not adequate to support accurate source localization.

4.2.2. Comparison with Existing Methods

To further highlight the performance superiority, we conducted comparative experiments evaluating our proposed features against the following: differential asymmetry (DASM) [9], rational asymmetry (RASM) [9], the diagonal non-zero GC matrix (DGC) [43], differential entropy (DE) [30], power spectral density (PSD) [30], and a suite of statistical features (mean, maximum, minimum, linear regression slope, and variance) [29]. To ensure a fair comparison, all features were extracted from our self-built dataset ENTER and fed into the same SVM classifier for emotion recognition. The comparison with existing methods is listed in Table 3.
From Table 3, we notice that the proposed SG_SC exhibits a marked advantage over the methods reported in the existing literature, thereby highlighting its outstanding discriminative power for emotion recognition. Its superiority can be attributed to: (1) the utilization of EEG source signals, which effectively alleviate the influence of the volume conduction effect; (2) the development of a novel EEG–fNIRS coupled brain network, which leverages the complementarity and coupling relationship of EEG and fNIRS signals.

5. Conclusions

In this paper, we proposed an innovative hybrid brain network in the source space as a method for emotion recognition based on concurrent EEG–fNIRS signals. We first imposed source localization on the EEG signals to obtain the EEG source signals; then, causal brain networks, constructed by Granger causality analysis, and coupled brain networks, constructed from coupling strengths, were integrated to form hybrid brain networks in the source space for emotion recognition. Benefiting from the utilization of EEG source signals and the complementary fusion of the EEG–fNIRS dual-mode signals, the proposed method achieves recognition accuracies of 96.6% with SVM and 91.7% with KNN in the four-emotion classification task. Extensive qualitative and quantitative results reveal the superiority of the proposed method. Moreover, our method provides a novel perspective on EEG–fNIRS fusion by taking advantage of their complementarity and intrinsic relationship. However, the source localization used in this paper is relatively time-consuming, and only the neurovascular coupling relationship between EEG and fNIRS was investigated. Future work will focus on optimizing the source localization process and exploring the causality between EEG and fNIRS, which may contribute to better efficiency and performance in emotion recognition.

Author Contributions

Conceptualization, M.H. and X.Z.; methodology, M.H.; software, M.H.; validation, M.H., X.Z., and G.C.; formal analysis, M.H. and L.H.; investigation, M.H.; resources, X.Z.; data curation, X.Z.; writing—original draft preparation, M.H.; writing—review and editing, M.H., X.Z., and G.C.; visualization, M.H.; supervision, X.Z. and Y.S.; project administration, X.Z.; funding acquisition, X.Z. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 62271342 and 62201377; the Fundamental Research Program of Shanxi Province, China, grant number 202203021211174; the Research Project of Shanxi Scholarship Council, China, grant number 2022-072; and was supported by the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi, China, grant number 2022L403.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Taiyuan University of Technology (TYUT-20220309, approved on 9 March 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy and ethical reasons. If you are interested in the dataset and want to use it, please download and fill out the license agreement (https://gitee.com/tycgj/enter) and send it to chenguijun@tyut.edu.cn. We will send you a download link through email after a review of your application.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hortensius, R.; Hekele, F.; Cross, E.S. The perception of emotion in artificial agents. IEEE Trans. Cogn. Dev. Syst. 2018, 10, 852–864. [Google Scholar] [CrossRef]
  2. Pepa, L.; Spalazzi, L.; Capecci, M.; Ceravolo, M.G. Automatic emotion recognition in clinical scenario: A systematic review of methods. IEEE Trans. Affect Comput. 2021, 14, 1675–1695. [Google Scholar] [CrossRef]
  3. Al-Saadawi, H.F.T.; Das, B.; Das, R. A systematic review of trimodal affective computing approaches: Text, audio, and visual integration in emotion recognition and sentiment analysis. Expert Syst. Appl. 2024, 255, 124852. [Google Scholar] [CrossRef]
  4. Alarcao, S.M.; Fonseca, M.J. Emotions Recognition Using EEG Signals: A Survey. IEEE Trans. Affect Comput. 2017, 10, 374–393. [Google Scholar] [CrossRef]
  5. García-Martínez, B.; Martinez-Rodrigo, A.; Alcaraz, R.; Fernández-Caballero, A. A review on nonlinear methods using electroencephalographic recordings for emotion recognition. IEEE Trans. Affect Comput. 2019, 12, 801–820. [Google Scholar] [CrossRef]
  6. Nguyen, T.; Babawale, O.; Kim, T.; Jo, H.J.; Liu, H.; Kim, J.G. Exploring brain functional connectivity in rest and sleep states: A fNIRS study. Sci. Rep. 2018, 8, 16144. [Google Scholar] [CrossRef]
  7. Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion recognition from EEG using higher order crossings. IEEE Trans. Inf. Technol. Biomed. 2009, 14, 186–197. [Google Scholar] [CrossRef] [PubMed]
  8. Lan, Z.; Sourina, O.; Wang, L.; Scherer, R.; Müller-Putz, G.R. Domain adaptation techniques for EEG-based emotion recognition: A comparative study on two public datasets. IEEE Trans. Cogn. Dev. Syst. 2018, 11, 85–94. [Google Scholar] [CrossRef]
  9. Zheng, W.L.; Zhu, J.Y.; Lu, B.L. Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans. Affect Comput. 2017, 10, 417–429. [Google Scholar] [CrossRef]
  10. Zheng, W.L.; Lu, B.L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  11. Li, C.; Li, P.; Zhang, Y.; Li, N.; Si, Y.; Li, F.; Xu, P. Effective emotion recognition by learning discriminative graph topologies in EEG brain networks. IEEE Trans. Neural Netw. Learn Syst. 2023, 35, 10258–10272. [Google Scholar] [CrossRef] [PubMed]
  12. Li, P.; Liu, H.; Si, Y.; Li, C.; Li, F.; Zhu, X.; Xu, P. EEG based emotion recognition by combining functional connectivity network and local activations. IEEE Trans. Biomed. Eng. 2019, 66, 2869–2881. [Google Scholar] [CrossRef] [PubMed]
  13. Liu, B.; Guo, J.; Chen, C.P.; Wu, X.; Zhang, T. Fine-Grained Interpretability for EEG Emotion Recognition: Concat-Aided Grad-CAM and Systematic Brain Functional Network. IEEE Trans. Affect Comput. 2023, 15, 671–684. [Google Scholar] [CrossRef]
  14. Wang, Z.M.; Zhou, R.; He, Y.; Guo, X.M. Functional integration and separation of brain network based on phase locking value during emotion processing. IEEE Trans. Cogn. Dev. Syst. 2020, 15, 444–453. [Google Scholar] [CrossRef]
  15. Chen, C.; Li, Z.; Wan, F.; Xu, L.; Bezerianos, A.; Wang, H. Fusing frequency-domain features and brain connectivity features for cross-subject emotion recognition. IEEE Trans. Instrum. Meas. 2022, 71, 1–15. [Google Scholar] [CrossRef]
  16. Cao, J.; Zhao, Y.; Shan, X.; Wei, H.L.; Guo, Y.; Chen, L.; Sarrigiannis, P.G. Brain functional and effective connectivity based on electroencephalography recordings: A review. Hum. Brain. Mapp. 2022, 43, 860–879. [Google Scholar] [CrossRef]
  17. Pugh, Z.H.; Choo, S.; Leshin, J.C.; Lindquist, K.A.; Nam, C.S. Emotion depends on context, culture and their interaction: Evidence from effective connectivity. Soc. Cogn. Affect Neurosci. 2022, 17, 206–217. [Google Scholar] [CrossRef]
  18. Kong, W.; Qiu, M.; Li, M.; Jin, X.; Zhu, L. Causal graph convolutional neural network for emotion recognition. IEEE Trans. Cogn. Dev. Syst. 2022, 15, 1686–1693. [Google Scholar] [CrossRef]
  19. Gao, X.; Huang, W.; Liu, Y.; Zhang, Y.; Zhang, J.; Li, C.; Li, P. A novel robust Student’s t-based Granger causality for EEG based brain network analysis. Biomed. Signal Process. Control 2023, 80, 104321. [Google Scholar] [CrossRef]
  20. Zhang, J.; Zhang, X.; Chen, G.; Huang, L.; Sun, Y. EEG emotion recognition based on cross-frequency granger causality feature extraction and fusion in the left and right hemispheres. Front Neurosci. 2022, 16, 974673. [Google Scholar] [CrossRef]
  21. van den Broek, S.P.; Reinders, F.; Donderwinkel, M.; Peters, M.J. Volume conduction effects in EEG and MEG. Electroencephalogr. Clin. Neurophysiol. 1998, 106, 522–534. [Google Scholar] [CrossRef] [PubMed]
  22. Chen, G.; Zhang, X.; Sun, Y.; Zhang, J. Emotion feature analysis and recognition based on reconstructed EEG sources. IEEE Access 2020, 8, 11907–11916. [Google Scholar] [CrossRef]
  23. Becker, H.; Fleureau, J.; Guillotel, P.; Wendling, F.; Merlet, I.; Albera, L. Emotion recognition based on high-resolution EEG recordings and reconstructed brain sources. IEEE Trans. Affect Comput. 2017, 11, 244–257. [Google Scholar] [CrossRef]
  24. Kwak, Y.; Song, W.J.; Kim, S.E. FGANet: fNIRS-guided attention network for hybrid EEG-fNIRS brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 329–339. [Google Scholar] [CrossRef]
  25. Al-Shargie, F.; Tang, T.B.; Kiguchi, M. Assessment of mental stress effects on prefrontal cortical activities using canonical correlation analysis: An fNIRS-EEG study. Biomed. Opt. Express 2017, 8, 2583–2598. [Google Scholar] [CrossRef] [PubMed]
  26. Sun, X.; Zheng, X.; Li, T.; Li, Y.; Cui, L. Multimodal emotion classification method and analysis of brain functional connectivity networks. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 2022–2031. [Google Scholar] [CrossRef]
  27. Liu, Z.; Shore, J.; Wang, M.; Yuan, F.; Buss, A.; Zhao, X. A systematic review on hybrid EEG/fNIRS in brain-computer interface. Biomed. Signal Process Control 2021, 68, 102595. [Google Scholar] [CrossRef]
  28. Chen, J.; Yu, K.; Wang, F.; Zhou, Z.; Bi, Y.; Zhuang, S.; Zhang, D. Temporal convolutional network-enhanced real-time implicit emotion recognition with an innovative wearable fNIRS-EEG dual-modal system. Electronics 2024, 13, 1310. [Google Scholar] [CrossRef]
  29. Zhao, Q.; Zhang, X.; Chen, G.; Zhang, J. EEG and fNIRS emotion recognition based on modal attention graph convolutional feature fusion. J. Zhejiang Univ. Sci. 2023, 57, 1987–1997. [Google Scholar]
  30. Nia, A.F.; Tang, V.; Malyshau, V.; Barde, A.; Talou, G.M.; Billinghurst, M. FEAD: Introduction to the fNIRS-EEG affective database-video stimuli. IEEE Trans. Affect Comput. 2024, 1–13. [Google Scholar] [CrossRef]
  31. Chen, G.; Liu, Y.; Zhang, X. EEG–fNIRS-Based emotion recognition using graph convolution and capsule attention network. Brain Sci. 2024, 14, 820. [Google Scholar] [CrossRef] [PubMed]
  32. He, B.; Liu, Z. Multimodal functional neuroimaging: Integrating functional MRI and EEG/MEG. IEEE Rev. Biomed. Eng. 2008, 1, 23–40. [Google Scholar] [CrossRef]
  33. Li, R.; Zhao, C.; Wang, C.; Wang, J.; Zhang, Y. Enhancing fNIRS analysis using EEG rhythmic signatures: An EEG-informed fNIRS analysis study. IEEE Trans. Biomed. Eng. 2020, 6, 2789–2797. [Google Scholar] [CrossRef]
  34. Gao, Y.; Jia, B.; Houston, M.; Zhang, Y. Hybrid EEG-fNIRS Brain computer interface based on common spatial pattern by using EEG-informed general linear model. IEEE Trans. Instrum. Meas. 2023, 72, 1–10. [Google Scholar] [CrossRef]
  35. Hou, X.; Zhang, Z.; Zhao, C.; Duan, L.; Gong, Y.; Li, Z.; Zhu, C. NIRS-KIT: A MATLAB toolbox for both resting-state and task fNIRS data analysis. Neurophotonics 2021, 8, 010802. [Google Scholar] [CrossRef] [PubMed]
  36. Michel, C.M.; Murray, M.M.; Lantz, G.; Gonzalez, S.; Spinelli, L.; De Peralta, R.G. EEG source imaging. Clin. Neurophysiol. 2004, 115, 2195–2222. [Google Scholar] [CrossRef]
  37. Sato, M.A.; Yoshioka, T.; Kajihara, S.; Toyama, K.; Goda, N.; Doya, K.; Kawato, M. Hierarchical Bayesian estimation for MEG inverse problem. NeuroImage 2004, 23, 806–826. [Google Scholar] [CrossRef] [PubMed]
  38. Potvin, O.; Dieumegarde, L.; Duchesne, S.; Alzheimer’s Disease Neuroimaging Initiative. Freesurfer cortical normative data for adults using Desikan-Killiany-Tourville and ex vivo protocols. Neuroimage 2017, 156, 43–64. [Google Scholar] [CrossRef]
  39. Granger, C.W. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 1969, 37, 424–438. [Google Scholar] [CrossRef]
  40. Neath, A.A.; Cavanaugh, J.E. The Bayesian information criterion: Background, derivation, and applications. Wiley Interdiscip. Rev. Comput. Stat. 2012, 4, 199–203. [Google Scholar]
  41. Zheng, W.L.; Liu, W.; Lu, Y.; Lu, B.L.; Cichocki, A. EmotionMeter: A multimodal framework for recognizing human emotions. IEEE Trans. Cybern. 2019, 49, 1110–1122. [Google Scholar] [CrossRef] [PubMed]
  42. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Patras, I. Deap: A database for emotion analysis; using physiological signals. IEEE Trans. Affect Comput. 2011, 3, 18–31. [Google Scholar] [CrossRef]
  43. Zhang, R.; Zhang, X.; Chen, G.; Huang, L. EEG emotion recognition based on GC features and brain region frequency band Transformer model. Comput. Eng. 2024, 1–10. [Google Scholar] [CrossRef]
Figure 2. Emotion-evoked experimental paradigm.
Figure 3. The overall flowchart of our approach.
Figure 4. The schematic diagram of the DKT atlas. (a) Superior view; (b) basal view; (c) lateral view.
Figure 5. The overall process of calculating a coupling matrix.
Figure 6. SVM recognition confusion matrices of three brain networks. (a) SG; (b) SC; (c) SG_SC.
Figure 7. Sample distribution in the 2-D feature space. (a) SG; (b) SC; (c) SG_SC.
Figure 8. Emotion recognition accuracies (%) of SG and EG on the three datasets.
Table 1. Emotion recognition accuracies (%) based on different brain network features (mean ± standard deviation in the last column).

Method | Feature | Classifier | Calm | Fear | Happiness | Sadness | Accuracy
Causal brain network | EG | SVM | 86.9 | 84.6 | 85.1 | 87.1 | 86.0 ± 6.54
Causal brain network | EG | KNN | 75.9 | 71.4 | 73.1 | 67.2 | 71.9 ± 8.78
Causal brain network | SG | SVM | 94.8 | 94.1 | 91.6 | 95.8 | 94.1 ± 3.32
Causal brain network | SG | KNN | 84.2 | 79.8 | 77.0 | 88.7 | 82.5 ± 6.30
Coupled brain network | EC | SVM | 73.1 | 74.3 | 76.4 | 77.8 | 75.4 ± 7.04
Coupled brain network | EC | KNN | 70.5 | 74.0 | 74.1 | 75.5 | 73.5 ± 8.51
Coupled brain network | SC | SVM | 81.0 | 79.7 | 78.4 | 81.6 | 80.2 ± 5.12
Coupled brain network | SC | KNN | 82.8 | 83.6 | 82.9 | 86.2 | 83.9 ± 4.73
Hybrid brain network | EG_EC | SVM | 91.5 | 90.1 | 90.8 | 92.6 | 91.3 ± 4.79
Hybrid brain network | EG_EC | KNN | 81.1 | 80.5 | 82.0 | 81.4 | 81.3 ± 7.10
Hybrid brain network | SG_SC (ours) | SVM | 97.1 | 96.6 | 95.4 | 97.5 | 96.6 ± 2.08
Hybrid brain network | SG_SC (ours) | KNN | 91.7 | 90.6 | 89.3 | 95.0 | 91.7 ± 3.02
Table 2. The detailed information for the three datasets.

Dataset | Signal Type | No. Subjects | No. Trials | No. Channels | Emotion Type
ENTER | EEG/fNIRS | 50 | 60 | 62/18 | happiness, sadness, fear, calm
SEED-IV | EEG | 15 | 24 | 62 | happiness, sadness, fear, neutral
DEAP | EEG | 32 | 40 | 32 | HAHV, HALV, LAHV, LALV
Table 3. Comparison of the results with those for existing methods.

Features | Accuracy (%)
EEG: DASM, RASM [9] | 90.7
EEG: DGC [43] | 88.5
EEG: DE; fNIRS: statistical features [29] | 90.4
EEG: DE, PSD; fNIRS: DE, PSD, mean [30] | 91.8
EEG–fNIRS: SG_SC | 96.6
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
