Article

Situational Awareness Classification Based on EEG Signals and Spiking Neural Network

1
School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva 8410501, Israel
2
Computer Science Department, Sami Shamoon College of Engineering, Beer-Sheva 8410802, Israel
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2024, 14(19), 8911; https://doi.org/10.3390/app14198911
Submission received: 30 August 2024 / Revised: 27 September 2024 / Accepted: 29 September 2024 / Published: 3 October 2024

Abstract

Situational awareness detection and characterization of mental states have a vital role in medicine and many other fields. An electroencephalogram (EEG) is one of the most effective tools for identifying and analyzing cognitive stress. Yet, the measurement, interpretation, and classification of EEG sensors is a challenging task. This study introduces a novel machine learning-based approach to assist in evaluating situational awareness detection using EEG signals and spiking neural networks (SNNs) based on a unique spike continuous-time neuron (SCTN). The implemented biologically inspired SNN architecture is used for effective EEG feature extraction by applying time–frequency analysis techniques and allows adept detection and analysis of the various frequency components embedded in the different EEG sub-bands. The EEG signal undergoes encoding into spikes and is then fed into an SNN model which is well suited to the serial sequence order of the EEG data. We utilize the SCTN-based resonator for EEG feature extraction in the frequency domain which demonstrates high correlation with the classical FFT features. A new SCTN-based 2D neural network is introduced for efficient EEG feature mapping, aiming to achieve a spatial representation of each EEG sub-band. To validate and evaluate the performance of the proposed approach, a common, publicly available EEG dataset is used. The experimental results show that by using the extracted EEG frequencies features and the SCTN-based SNN classifier, the mental state can be accurately classified with an average accuracy of 96.8% for the common EEG dataset. Our proposed method outperforms existing machine learning-based methods and demonstrates the advantages of using SNNs for situational awareness detection and mental state classifications.

1. Introduction

The advancement of neuromorphic computing has amplified the interest in utilizing biologically plausible networks for classifying physiological signals [1,2,3,4]. SNNs are characterized by low energy consumption and a low computational cost, and therefore they are well suited to applying machine learning in edge applications [2]. Physiological indicators of emotional arousal states include respiratory activity, cardiac activity, EEG signals, and the electrodermal activity response [5,6]. Such neural signals can be obtained using a variety of techniques, including electroencephalography (EEG) and functional magnetic resonance imaging, providing insights into cognitive and mental states [7,8].
Situational awareness (SA) pertains to an individual’s understanding of their complex operating environment. It involves comprehending what has happened, what is currently happening, and what might happen based on the surrounding context. Evaluation of SA is essential in tasks that require processing multiple streams of information, as it helps human operators effectively navigate and respond well to their environment [9,10].
EEG brain activity recordings are highly promising for assessing SA. EEG captures signals originating from all sensory inputs, providing deep insights into an individual’s physiological state. It is widely used to monitor and predict mental states such as workload, mental fatigue, sleepiness, and drowsiness. Given the close relationship between these mental states and SA, many studies utilize EEG to evaluate situational awareness [11,12].
EEG analysis involves a variety of methods for processing and interpreting the electrical activity of the brain. These methods can be categorized into time domain [13], frequency domain [14,15], and time–frequency domain analysis [16,17], along with advanced techniques involving deep neural networks (DNNs), machine learning, and statistical modeling [18,19]. Time domain analysis includes event-related potentials (ERPs) and EEG amplitude and latency measurements [20]. Frequency domain techniques include power spectral density estimation methods based on time-varying parameter modeling [21,22], while time–frequency analysis mainly includes short-time Fourier transforms (STFTs) and wavelet transforms [23]. Filtering techniques and preprocessing are used to remove noise and artifacts from the EEG signal.
Kexin et al. proposed a classification network for EEG signals based on cross-domain features related to brain activities using a multi-domain attention mechanism [7]. Cigdem et al. utilized the support vector machine method for monitoring mental attention states (focused, unfocused, and drowsy) by using EEG brain activity imaging [24].
Evelina et al. evaluated various spike encoding techniques for time-dependent signal classification through a spiking convolutional neural network, including signal preprocessing consisting of a bank of filters [25,26]. S. A. Nasrollahi [27] proposed using the well-known ΔΣ encoding scheme in the design of an SNN’s input layer to convert analog sensor voltages into spike trains with firing rates that are linearly proportional to the input voltage. Pulse-density modulation (PDM), also known as delta-sigma modulation (DSM), is a form of modulation used to represent an analog signal with a binary signal: the analog signal’s amplitude is converted into a corresponding binary sequence [28]. A PDM bitstream is encoded from an analog signal through one-bit delta-sigma modulation, which uses a one-bit quantizer that produces either a one or a zero depending on the analog signal’s amplitude. A high density of ones occurs at the peaks of a sine wave, while a low density of ones occurs at its troughs. Leu et al. presented a novel approach utilizing an SNN and EEG processing techniques for emotion state recognition. The method employs a discrete wavelet transform (DWT) and a fast Fourier transform (FFT) for extracting features from EEG signals [29]. The results showed an average accuracy of about 80% for classification of the four emotion states (arousal, valence, dominance, and liking). L. Devnath et al. [30] suggested using the DWT to remove noise from the signals while effectively retaining the signal morphology. D. Zhang et al. [31] presented a deep learning-based EEG decoding method to read mental states. They proposed a 1D CNN with different filter lengths to capture different frequency band information, demonstrating 96% accuracy.
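The one-bit delta-sigma (PDM) encoding described above can be sketched as follows. This is a minimal first-order illustration, not the authors' implementation; the function and variable names are hypothetical:

```python
import numpy as np

def delta_sigma_encode(signal):
    """First-order (one-bit) delta-sigma modulation: the quantization
    error is integrated and fed back, so the density of ones in the
    output bitstream follows the input amplitude."""
    integrator = 0.0
    feedback = -1.0
    bits = np.empty(len(signal), dtype=np.int8)
    for i, x in enumerate(signal):
        integrator += x - feedback
        bits[i] = 1 if integrator >= 0 else 0
        feedback = 1.0 if bits[i] else -1.0
    return bits

# A 5 Hz sine sampled at 1 kHz: ones cluster at the peaks of the
# sine wave, zeros at the troughs, as described in the text.
t = np.linspace(0, 1, 1000, endpoint=False)
bits = delta_sigma_encode(np.sin(2 * np.pi * 5 * t))
```

For a zero-mean input, the overall density of ones settles near 0.5, while windows around the peaks and troughs show densities near one and zero, respectively.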
Numerous studies have been conducted to classify cognitive load using various methods, including EEG [32,33], ECG [34], and gaze detection [35,36]. Moreover, there is a growing demand for executing cognitive load estimation with edge devices across diverse fields, including healthcare [37,38] and automotive industries [39]. The ability to classify physiological signals using machine learning tools with low-hardware resources and low power allows its integration, especially in wearable sensors [39,40].
A configurable parametric model of a digital spike continuous-time neuron (SCTN) was introduced in [41], and the efficiency of using the SCTN for sound signal processing was demonstrated in [42]. In this paper, we adapt the basic architecture of the published SCTN and propose an SCTN-based bio-inspired network for classifying situational awareness based on EEG signals. First, we introduce the usage of SCTN-based resonators for feature extraction in the frequency domain. A supervised spike timing-dependent plasticity (STDP) learning algorithm is used for training the network and adjusting the weights to detect the frequencies of the EEG sub-bands. Then, we propose a method for integrating the extracted features via spatial mapping, leveraging the interrelationships between the various EEG channels using a two-dimensional structure of spiking neurons. Finally, an SCTN-based three-layer network is used for mental state classification.
Our main contributions are listed below:
  • We propose a novel approach for mental states and situation awareness classification based on a biologically inspired spiking neural network using EEG signals.
  • A new SCTN-based 2D neural network is introduced for efficient feature mapping, aiming to achieve a spatial representation of each EEG sub-band.
  • We utilize the SCTN-based resonator for feature extraction in the frequency domain and demonstrate high correlation with the known FFT features.

2. SCTN-Based Resonator

This section describes the SCTN mathematical model and the SCTN-based resonator used for frequency detection.

2.1. Spike Continuous Time Neuron (SCTN) Model

The SCTN model presented in [41] is given by Equation (1):
V_m(t) = \alpha \cdot \Big( V_m(t-1) + \sum_{j=1}^{N} W_j \cdot I_j(t) \Big) + \Theta \quad (1)
where the membrane potential V m ( t ) of a neuron with N input synapses can be further expressed as follows:
V_m(t) =
\begin{cases}
V_{m_0}, & t = 0 \\
\alpha \cdot \Big( V_m(t-1) + \sum_{j=1}^{N} w_j \cdot I_j(t) \Big) + \Theta, & t \bmod LP = 0 \\
V_m(t-1) + \sum_{j=1}^{N} w_j \cdot I_j(t) + \Theta, & t \bmod LP \neq 0
\end{cases} \quad (2)
where I_j(t) is the synapse input, w_j represents the corresponding weight, and \Theta represents the neuron bias. The SCTN cell includes a leaky integrator characterized by the time constant \alpha = 1 - 2^{-LF} [41], allowing it to delay the input signals. The leakage period (LP) corresponds to the neuron’s operation rate, while the leakage factor (LF) corresponds to the integrator’s leakage rate. The SCTN emits a spike whenever the membrane potential (V_m) surpasses the threshold, similar to the behavior observed in biological neurons.
The SCTN can delay incoming signals according to the chosen leakage time constant. For an appropriate choice of the LP and LF parameters, the SCTN is able to generate a phase shift of \pi/4 for a given frequency f_0 [42]:
f_0 = \frac{f_{Pulses} \cdot (1-\alpha)}{2\pi \cdot (1+LP)} = \frac{f_{Pulses}}{2^{LF} \cdot 2\pi \cdot (1+LP)} \quad (3)
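The membrane update and the resonance-frequency relation of Equation (3) can be sketched as follows, assuming \alpha = 1 - 2^{-LF}. This is a minimal illustration with hypothetical function names; f_pulses denotes the assumed pulse clock rate:

```python
import numpy as np

def resonance_frequency(f_pulses, LF, LP):
    """Targeted resonance frequency: with alpha = 1 - 2**-LF,
    f0 = f_pulses * (1 - alpha) / (2*pi*(1 + LP))."""
    return f_pulses * 2.0 ** -LF / (2 * np.pi * (1 + LP))

def sctn_step(v_m, inputs, weights, theta, t, LP, LF):
    """One SCTN membrane-potential update: the leakage (factor alpha)
    is applied only on time steps where t mod LP == 0."""
    drive = float(np.dot(weights, inputs))  # sum of weighted synapse inputs
    if t % LP == 0:
        alpha = 1.0 - 2.0 ** -LF
        return alpha * (v_m + drive) + theta
    return v_m + drive + theta
```

Note that incrementing LF halves the targeted frequency f_0, and increasing LP lowers it further, which is how a resonator is tuned to a specific EEG sub-band frequency.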

2.2. Frequency Detection Using SCTN-Based Resonator

The following section introduces the resonator circuit, designed as a frequency detector which utilizes the fundamental phase-shifting capability of the spiking neuron. We propose employing an SNN architecture to identify the sub-band frequencies embedded in an EEG signal by exploiting the inherent phase-shifting feature.
Figure 1a shows the resonator circuit composed of the SCTN-based network. This circuit consists of four SCTNs, each of which introduces a phase shift of \pi/4. The first neuron’s phase shift is denoted as \chi. Figure 1b illustrates the output of each of the four neurons in the proposed architecture (for \chi = 30°). The amplitude expresses the spike rate within a predefined time window. The spike rate at the output of each SCTN in a predefined time window represents the phase shift and therefore the signal amplitude, as depicted in Figure 1. For a one-second normalized time window, the spike rate is given in Hz. The fourth neuron’s output is fed back to the first neuron, resulting in a feedback path with a phase shift of \chi + 3\pi/4. Thus, the first neuron receives two input signals: (a) the original input signal and (b) a shifted signal (\chi + 3\pi/4).
The resonator is designed to identify a wide range of frequencies within a time series signal by leveraging the neuron’s intrinsic phase-shifting capability. The leakage factor and the leakage period are adjusted to target a specific frequency. The output neuron emits a train of pulses whenever a frequency within the range of f 0 ± Δ f 0 is detected. We employed a supervised STDP algorithm to adjust the network parameters for the desired frequency [43].
A modified supervised STDP learning rule is used for training the SCTN-based resonator circuits. To achieve the desired frequency response, the SCTN’s weights ( W i j ) and biases ( Θ i ) are adjusted during the training phase.

3. Proposed Method

Figure 2 illustrates the proposed architecture for classifying situational awareness based on EEG signals. In the first step, the signals sampled from 14 electrodes located on a subject’s head (as depicted in Figure 2a) are converted into spike trains over time using delta-sigma modulation (DSM) coding. Classifying an EEG signal using an SNN requires encoding the raw EEG data into spikes. We utilize the well-known delta-sigma modulation and an SCTN-based integrate-and-fire (IF) neuron to convert the analog input sensor voltages into a binary stream of spike trains with firing rates that are linearly proportional to the input voltage [27]. After that, the frequency characteristics are extracted for each channel using a dedicated array of resonators. The feature extraction phase is carried out using the SNN-based resonator architecture and by applying a supervised STDP learning rule [43]. The weights and biases of the resonator networks are adjusted using the proposed supervised algorithm, aiming to detect the targeted EEG sub-band frequencies. In the next step, we propose a unique mapping of the extracted features using five SCTN-based networks. Finally, another network is used for classification. The proposed process for feature extraction, feature mapping, and classification is detailed below.

3.1. EEG Feature Extraction

Figure 2b depicts the preprocessing stage, which includes feature extraction in the frequency domain. The EEG signal is composed of five sub-bands: delta (0.1–4 Hz), theta (4–8 Hz), alpha (8–14 Hz), beta (14–32 Hz), and gamma (32–60 Hz). The frequency characteristics are extracted for each channel using a dedicated array of resonators. To cover the whole frequency range, each of the five EEG sub-bands is further divided into several specific frequencies. Therefore, for each sub-band, a different number of resonators is assigned as follows: 10, 8, 5, 7, and 7 resonators for delta, theta, alpha, beta, and gamma, respectively. We propose an array of a total of 37 SNN-based resonators for each of the 14 EEG channels to detect all EEG frequency components. The rationale behind selecting different numbers of resonators for each EEG sub-band is as follows. Since each resonator is tuned to detect frequencies in the range of f 0 ± Δ f 0 , the number of required resonators not only depends on the specific EEG sub-band range but also differs for the low and high frequency ranges, as can be seen in Figure 3.
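The resonator allocation described above (10, 8, 5, 7, and 7 resonators over the five sub-bands, 37 per channel) can be sketched as follows; the even spacing of centre frequencies within each band is an assumption, since the exact values are not listed:

```python
import numpy as np

# Sub-band limits (Hz) and resonator counts from the text
# (10 + 8 + 5 + 7 + 7 = 37 resonators per EEG channel).
SUB_BANDS = {
    "delta": (0.1, 4.0, 10),
    "theta": (4.0, 8.0, 8),
    "alpha": (8.0, 14.0, 5),
    "beta":  (14.0, 32.0, 7),
    "gamma": (32.0, 60.0, 7),
}

def resonator_centers():
    """Hypothetical placement: each band is split into n equal bins and
    one resonator is centred in each bin, so each resonator covers a
    range f0 +/- delta_f0 around its centre."""
    centers = {}
    for band, (lo, hi, n) in SUB_BANDS.items():
        edges = np.linspace(lo, hi, n + 1)
        centers[band] = (edges[:-1] + edges[1:]) / 2.0  # bin midpoints
    return centers
```

With 14 channels, this allocation yields the 14 × 37 = 518 resonators used in the feature extraction stage.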
The EEG signal undergoes encoding and conversion into spikes using delta-sigma modulation coding and is then fed into the resonator circuit as a binary spike stream over time. To detect a specific EEG frequency within different sub-bands, the neurons’ parameters (LP and LF) are determined by Equation (3). The resonator’s frequency response to a chirp signal is depicted in Figure 3. The response signal is normalized in terms of amplitude. To capture the energy within a specified frequency band and detect the different frequencies in each EEG sub-band, we propose using an array of SCTN-based resonators.
Figure 4a,c depicts the frequency outputs of the resonators using a spikegram representation method, which demonstrates the occurrence of spikes over time for each EEG sub-band. Each time bin represents the aggregated spikes from each group of resonators, reflecting the power within that bin. The strength of the signal is shown in color, where red represents the maximum spike rate. To evaluate the quality of the proposed frequency features, we compared them to the features extracted in the Fourier domain. Figure 4b,d depicts the spectrogram of the FFT transform of the same EEG signal under testing for the five sub-bands. A comparison between the SNN-based spikegram and the FFT spectrogram demonstrates high correlation between the SNN-based extracted features and the FFT features.

3.2. EEG Feature Mapping

This section describes a unique mapping of the extracted features using five SCTN-based networks (one for each EEG sub-band). We introduce a novel approach for mapping each of the five EEG sub-bands into a unique spatial map using an SNN. Each of the 14 EEG channels was characterized by five sub-bands. For each sub-band, we constructed a unique spiking neural network which maintained the EEG topographic map, as depicted in Figure 5a. Each network was composed of an 11 × 11 matrix of SCTN-type neurons. Every neuron had 14 inputs representing the relevant frequency band derived from each of the 14 channels.
The contribution of each channel to a specific EEG sub-band (i.e., delta, theta, alpha, beta, and gamma) is represented by a unique spatial topographic map. Within each map, 14 specific neurons were assigned in accordance with the placement of the 14 electrodes on the test subject’s head, according to the EEG 10–20 system standard [44]. This representation captured the mutual influences between the channels and their spatial correlation.
The network weights were determined without any learning process according to a Gaussian kernel centered around each of the specific neurons (which represented a specific channel). The channels connected to these neurons received a maximal weight for the said channel (i.e., for the specific channel they represented). Therefore, the proposed topographic mapping reflects the spatial correlation and mutual influence among the 14 EEG channels.
The SCTN network weights are given by
W_{cb}(x,y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{(x-x_c)^2 + (y-y_c)^2}{2\sigma^2}} \quad (4)
where (x_c, y_c) represents the coordinates of the specific neuron assigned to a specific electrode, with \sigma = 1. Figure 5b depicts the weight distribution for each channel for a given sub-band according to the EEG channel map (Figure 2a).
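A minimal sketch of the Gaussian weight map above; the electrode coordinates used here are illustrative, while the 11 × 11 grid and \sigma = 1 follow the text:

```python
import numpy as np

def gaussian_weight_map(xc, yc, size=11, sigma=1.0):
    """Gaussian kernel centred on the grid cell (xc, yc) assigned to
    one electrode; one such 11x11 map is built per electrode, so the
    channel's weight decays with distance from its assigned neuron."""
    y, x = np.mgrid[0:size, 0:size]
    d2 = (x - xc) ** 2 + (y - yc) ** 2
    return np.exp(-d2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

# Hypothetical electrode mapped to the grid centre.
w = gaussian_weight_map(5, 5)
```

The maximal weight sits at the electrode's own neuron, and the map is radially symmetric around it, reflecting the spatial correlation among neighbouring channels.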
Figure 6 shows the five SCTN-based topographic maps (one for each sub-band) compared to the spatial maps created using an FFT. Again, high correlation can be observed between the two feature maps.

3.3. SNN Classification

Figure 2d depicts the final phase of classification. The classifier was constructed from an SCTN network comprising three layers. The input layer consisted of 121 neurons, representing the spatial mapping of each sub-channel. Each neuron in the input layer received 121 inputs from each of the five EEG sub-band maps, aggregating to 605 input synapses ( 5 × 121 ). Two fully connected hidden layers contained 64 and 32 SCTNs. The ReLU activation function was implemented for all neurons across all layers. The output layer contained three neurons which represented the three mental states: focused, unfocused, and drowsy [24]. The training process was based on an unsupervised STDP learning algorithm [45], where the weights of each SCTN were updated according to the following equation:
\Delta W_{i,j} = \sum_{k \in S_{pre}} \sum_{l \in S_{post}}
\begin{cases}
+A \cdot e^{-|t_k - t_l|/\tau_s}, & \text{if } t_k \ge t_l \\
-A \cdot e^{-|t_k - t_l|/\tau_s}, & \text{if } t_k < t_l
\end{cases} \quad (5)
where A represents the learning rate and τ s is the time constant used to control the synaptic potentiation and depression. The magnitude of the learning rate decreased exponentially with the absolute value of the timing difference. When multiple spikes were simultaneously fired, the required weight change was calculated by summing the individual changes derived from all possible spike pairs. Here, S p r e and S p o s t are sets of spikes representing the pre- and post-synaptic neurons, respectively, where t k stands for the timing of spike k (from the S p r e set) and t l represents the timing of spike l (from the S p o s t set).
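The pairwise STDP update above can be sketched as follows (hypothetical function name; the sign convention follows the rule as stated, with potentiation for t_k \ge t_l and depression otherwise):

```python
import numpy as np

def stdp_delta_w(pre_spikes, post_spikes, A=0.01, tau_s=10.0):
    """Pairwise STDP update: each pre/post spike pair (t_k, t_l)
    contributes +/- A * exp(-|t_k - t_l| / tau_s), and the changes
    from all possible spike pairs are summed, as described in the
    text for simultaneously fired spikes."""
    dw = 0.0
    for t_k in pre_spikes:
        for t_l in post_spikes:
            sign = 1.0 if t_k >= t_l else -1.0
            dw += sign * A * np.exp(-abs(t_k - t_l) / tau_s)
    return dw
```

The magnitude of each contribution decays exponentially with the absolute timing difference, so near-coincident spike pairs dominate the weight change.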
Each neuron in the final layer was assigned to a specific category and, by using unsupervised STDP training, automatically adjusted to detect the patterns tailored to their assigned categories. All of the weights in the classification SNN were initially randomly configured.
The implementation of the proposed SNN-based architecture consisted of only 2776 spiking SCTNs. A total of 2072 ( 14 × 37 × 4 ) neurons were required to implement the feature extraction network (14 channels, each connected to 37 resonators composed of 4 neurons each). The feature mapping network consisted of 605 ( 5 × 11 × 11 ) neurons, while the classification network was composed of 99 ( 64 + 32 + 3 ) neurons.

4. Experimental Results

4.1. Dataset

The dataset used in this study comprised a total of 25 h of EEG recordings collected from five healthy volunteer participants, all of whom were students, engaged in a low-intensity control task [24]. This task involved controlling a computer-simulated train using the “Microsoft Train Simulator” program. Each experiment required the participants to control the train for 35–55 min along a primarily featureless route. The study focused on three mental states: focused but passive attention, unfocused or detached but awake, and drowsy.
The first mental state, “focused”, involved participants passively supervising the train while maintaining continuous concentration and focus without active engagement. The second state, “unfocused”, was characterized by participants being awake but not paying attention to the screen, representing a potentially dangerous state which should trigger an alert. This state, which is difficult to detect through external cues such as video monitoring, requires sophisticated discrimination methods. The third state involved explicit drowsiness [24].
Each participant controlled the simulated train for three distinct 10 min phases during each experiment. Initially, the participants focused closely on the simulator’s controls. In the second phase, they stopped providing control inputs and ceased paying attention to the screen while remaining awake. In the final phase, the participants were allowed to relax, close their eyes, and doze off. Each participant completed seven experiments, with a maximum of one per day. The first two experiments were for habituation, and data collection was conducted during the last five trials. To facilitate the transition to the drowsy state, all experiments were conducted between 7:00 p.m. and 9:00 p.m. The raw data from these experiments are available to researchers on the Kaggle website (https://www.kaggle.com/datasets/inancigdem/eeg-data-for-mental-attention-state-detection).

4.2. Simulation Results

Table 1 shows the classification accuracy results when using an open-source classification tool for six different classification models [46]. The performance of the classifiers was evaluated for both the SCTN-based and FFT features. The FFT features were extracted for various frame durations (5, 10, 15, and 20 ms), and the feature set which resulted in the best classification was selected for comparison. The results show that EEG classification based on our proposed approach (SCTN) outperformed the FFT-based classification for all of the models and demonstrated an accuracy improvement of up to 6.9%.
We employed two evaluation methods for situational awareness detection: subject-specific and group-level classification. For the subject-specific classification, the network was trained individually for each participant using only their own data: a random 80% of the specific subject’s EEG samples were used for training, and the remaining 20% were used for testing.
Similarly, for the group-level classification, an 80–20 data splitting was also used. For this scenario, the training data set included combined data from all participants randomly selected from 80% of the joint EEG recordings of all participants. The testing was carried out on the remaining 20% mixed dataset, which was not used during the training phase.
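The 80–20 splitting used in both evaluation paradigms can be sketched as follows (a generic illustration; the function name and seed are hypothetical):

```python
import numpy as np

def split_80_20(features, labels, seed=0):
    """Random 80/20 train/test split: samples are shuffled, the first
    80% are used for training, and the held-out 20% for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    cut = int(0.8 * len(features))
    train, test = idx[:cut], idx[cut:]
    return (features[train], labels[train]), (features[test], labels[test])
```

For the subject-specific paradigm, the split is applied per participant to that participant's recordings alone; for the group-level paradigm, it is applied to the concatenated recordings of all participants.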
To further evaluate the SCTN-based classifier, we compared our proposed approach with previous related works applied to EEG classification. Table 2 summarizes the results of four previous studies on detecting mental attention states as described in [24] compared to the results of our proposed SCTN-based classifier. As seen in the table, our method demonstrated better results than all previous studies which used the SVM classifier. Average classification accuracies of 96.8% and 91.7% were achieved for classifying the three mental states while using our method (SCTN) and the SVM method [24], respectively.
The proposed architecture was evaluated for individual training for each participant and a common-subject paradigm where a single generic classifier was jointly trained for all participants. The results show that the generic mental state detector performed slightly worse than the subject-specific detectors, demonstrating accuracy degradation of about 4–6% compared with training with individual subjects. The good generalization ability was achieved due to the proposed feature extraction method and the EEG feature mapping, which successfully mitigated the differences between users and highlighted relevant characteristics.
The performance evaluation is demonstrated for each subject for both the subject-specific model and the transfer learning and generic model.
Table 3 depicts the results of applying a transfer learning (TL) approach (training on one subject and testing on another) for the five participants (P0–P4). Additionally, the table also provides the results of the generic model (training with samples from all participants and testing on a specific participant).
The results show a success rate in the range from 94.7% to 98.3% for the subject-specific case (indicated diagonally on the table) and an average accuracy of 92.53% for the transfer learning model. The generic model demonstrated comparable results, with a 92.64% success rate on average. These findings underscore the model’s ability to effectively classify situational awareness based on EEG data for both training paradigms.
Figure 7 depicts the topography maps of the EEG signal magnitude divided into five frequency bands (delta, theta, alpha, beta, and gamma) for each of the three mental states (focused, unfocused, and drowsy). It can be seen that both the focused and unfocused mental states were mainly characterized by the delta and theta channels (i.e., by low-frequency EEG activity, specifically within the 1–8 Hz range). However, in the focused state there was increased activity in the frontal lobe, whereas in the unfocused state activity in this area decreased. The drowsy state was characterized mainly by the alpha sub-band (8–14 Hz). This outcome is consistent with existing research findings on the correlation between the alpha EEG band and the drowsy mental state [24]. Figure 8 depicts the temporal activity observed in the F3 EEG channel, located in the frontal lobe, for the delta sub-band. The spike rate accurately corresponded to the three mental states and demonstrated decreased activity in the unfocused state compared with the focused state.
We evaluated the relative contribution of the various EEG electrodes in discerning the three mental states (focused, unfocused, and drowsy). Remarkably, we discovered that the best classification performance could be achieved when using only five EEG electrodes: F7, F3, AF3, F4, and AF4. Three of those five electrodes were significantly situated over the frontal lobe.
Table 4 shows the mental state classification results achieved with a reduced set of only 5 EEG electrodes compared with the 14 available EEG electrodes while employing all five EEG sub-bands or only the three low-frequency sub-bands (delta, theta, and alpha). Although using all 14 EEG channels and the five sub-bands resulted in the best performance with 96.8% accuracy, the accuracy loss while using only 5 EEG channels and three sub-bands was minor, with a degradation of only 3.2%. However, the benefit was reducing the area and power, since only 1202 neurons were required compared with 2776 neurons for the best case scenario. Applying only the three dominant sub-bands (delta, theta, and alpha) for mental classification resulted in 94.1% accuracy compared with 96.8% with all five available sub-bands while reducing the number of required neurons by about 9%. Using all of the frequency sub-bands with only five EEG electrodes resulted in a slight accuracy loss of 1.9% but saved about 50% of the required neurons.

4.3. Performance Evaluation

The performance evaluation of the proposed method was carried out using the four following metrics:
  • Accuracy, calculated as the ratio of the sum of the true positive ( t p ) and true negative ( t n ) predictions to the total number of predictions ( t p , t n , false positive ( f p ), and false negative ( f n )) made by the model:
    \text{Accuracy} = \frac{t_p + t_n}{t_p + t_n + f_p + f_n}
  • Sensitivity, determined as the ratio of true positive ( t p ) predictions to the sum of true positive ( t p ) and false negative ( f n ) predictions:
    \text{Sensitivity} = \frac{t_p}{t_p + f_n}
  • Specificity, calculated as the ratio of true negative ( t n ) predictions to the sum of true negative ( t n ) and false positive ( f p ) predictions:
    \text{Specificity} = \frac{t_n}{t_n + f_p}
  • False positive rate (FPR), determined as the ratio of false positives ( f p ) to the sum of false positives ( f p ) and true negatives ( t n ):
    \text{FPR} = \frac{f_p}{f_p + t_n}
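The four metrics can be computed from the confusion counts of a single class as follows (a minimal one-vs-rest sketch with illustrative counts):

```python
def evaluate(tp, tn, fp, fn):
    """The four evaluation metrics defined above, computed from the
    confusion counts of one class (one-vs-rest)."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "fpr":         fp / (fp + tn),
    }

# Illustrative counts for one mental-state class.
m = evaluate(tp=90, tn=95, fp=5, fn=10)
```

Note that specificity and FPR are complementary: a class with 95% specificity necessarily has a 5% false positive rate.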
Table 5 shows the proposed method’s performance in terms of the four metrics, including the accuracy, sensitivity, specificity, and false positive rate (FPR). An average accuracy of 96.8% was demonstrated for the subject-specific model, and precise identification of all three EEG-based mental states was achieved.
The high average sensitivity (96.9%) highlights the model’s robust ability to detect true positive cases, while an average specificity of 98.33% underscores its proficiency in correctly identifying true negative cases, thereby minimizing the risk of misclassifying normal brain activity as an indication of wrong situational awareness classification.
Furthermore, the low FPR average of 1.63% indicates a low incidence of false positive predictions, ensuring reliable differentiation between different mental states based on the EEG recordings. Table 6 depicts a confusion matrix for evaluating the classification of the three mental states, demonstrating highly accurate classification results.

5. Conclusions

This article suggested utilizing a biologically inspired SNN-based architecture for feature extraction in the frequency domain. We proposed an original approach for classifying mental states and situation awareness based on EEG signals and a biologically inspired SNN using an array of unique SCTN-based resonators and a classification SNN.
Narrowing the frequency bands to three sub-bands achieved area savings with comparable classification results to the best case scenario. Moreover, using only 5 out of the 14 EEG electrodes resulted in sufficient accuracies of 94.9% and 93.6% with five and three EEG sub-bands, respectively, while significantly reducing the number of required neurons by about 50%.
The results demonstrate a strong correlation between the SCTN-based bio-inspired features and the FFT-extracted features. This suggests that the proposed method could be effectively applied to diverse applications. Simulations showed that our approach outperformed previous related works, demonstrating 96.8% accuracy in classifying the three mental states (focused, unfocused, and drowsy). Moreover, we achieved 93.6% accuracy with only five EEG electrodes compared with 91.7% with 14 electrodes in [24].
The proposed SNN-based classifier exhibited particularly good generalization: a model trained on an individual participant could be applied successfully to other participants. However, the approach was evaluated on a limited number of subjects; validating it on a larger population and on additional awareness states would better demonstrate the strength of the suggested method.
This study demonstrated the potential of spiking neural networks for EEG signal analysis. Future work may extend the proposed approach by applying a neuromorphic SCTN-based classification network to further EEG-based applications, such as early detection of epileptic seizures.

Author Contributions

Conceptualization, M.B. and S.G.; methodology, M.B. and S.G.; software, Y.H.; validation, Y.H. and M.B.; formal analysis, Y.H. and M.B.; investigation, Y.H. and M.B.; writing—original draft preparation, M.B., S.G. and Y.B.-S.; writing—review and editing, M.B., S.G. and Y.B.-S.; visualization, Y.H.; supervision, S.G. and Y.B.-S.; project administration, S.G. and Y.B.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The source code and dataset can be found in the following GitHub Repository: https://github.com/NeuromorphicLabBGU/Situational-Awareness-Classification.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SCTN   Spike continuous-time neuron
SNN    Spiking neural network
SN     Spiking neuron
IF     Integrate-and-fire
SA     Situational awareness
LF     Leakage factor
LP     Leakage period
TL     Transfer learning
STDP   Spike timing-dependent plasticity
FFT    Fast Fourier transform
DWT    Discrete wavelet transform
DSM    Delta-sigma modulation
PDM    Pulse-density modulation

References

  1. Clark, K.; Wu, Y. Survey of Neuromorphic Computing: A Data Science Perspective. In Proceedings of the 2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI), Taiyuan, China, 26–28 May 2023; IEEE: New York, NY, USA, 2023; pp. 81–84. [Google Scholar]
  2. Shrestha, A.; Fang, H.; Mei, Z.; Rider, D.P.; Wu, Q.; Qiu, Q. A survey on neuromorphic computing: Models and hardware. IEEE Circuits Syst. Mag. 2022, 22, 6–35. [Google Scholar] [CrossRef]
  3. Cai, L.; Yu, L.; Yue, W.; Zhu, Y.; Yang, Z.; Li, Y.; Tao, Y.; Yang, Y. Integrated Memristor Network for Physiological Signal Processing. Adv. Electr. Mater. 2023, 9, 2300021. [Google Scholar] [CrossRef]
  4. Yang, G.; Liu, K.; Yu, H.; Li, C.; Zeng, M. Examination and Repair of Technology of Equipment Status Based on SNN in Intelligent Substation. J. Phys. Conf. Ser. 2023, 2666, 012037. [Google Scholar] [CrossRef]
  5. Desai, U.; Shetty, A.D. Electrodermal activity (EDA) for treatment of neurological and psychiatric disorder patients: A review. In Proceedings of the 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 19–20 March 2021; IEEE: New York, NY, USA, 2021; Volume 1, pp. 1424–1430. [Google Scholar]
  6. Yang, Y.; Eshraghian, J.K.; Truong, N.D.; Nikpour, A.; Kavehei, O. Neuromorphic deep spiking neural networks for seizure detection. Neuromorph. Comput. Eng. 2023, 3, 014010. [Google Scholar] [CrossRef]
  7. Zhu, K.; Zhang, X.; Wang, J.; Cheng, N.; Xiao, J. Improving EEG-based Emotion Recognition by Fusing Time-Frequency and Spatial Representations. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rodos, Greece, 4–10 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
  8. Xu, Q.; Shen, J.; Ran, X.; Tang, H.; Pan, G.; Liu, J.K. Robust transcoding sensory information with neural spikes. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 1935–1946. [Google Scholar] [CrossRef]
  9. Samuel, S.; Borowsky, A.; Zilberstein, S.; Fisher, D.L. Minimum time to situation awareness in scenarios involving transfer of control from an automated driving suite. Transp. Res. Rec. 2016, 2602, 115–120. [Google Scholar] [CrossRef]
  10. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  11. Kästle, J.L.; Anvari, B.; Krol, J.; Wurdemann, H.A. Correlation between Situational Awareness and EEG signals. Neurocomputing 2021, 432, 70–79. [Google Scholar] [CrossRef]
  12. Catherwood, D.; Edgar, G.K.; Nikolla, D.; Alford, C.; Brookes, D.; Baker, S.; White, S. Mapping brain activity during loss of situation awareness: An EEG investigation of a basis for top-down influence on perception. Hum. Factors 2014, 56, 1428–1452. [Google Scholar] [CrossRef]
  13. Iqbal, S.; Muhammed Shanir, P.; Khan, Y.U.; Farooq, O. Time domain analysis of EEG to classify imagined speech. In Proceedings of the Second International Conference on Computer and Communication Technologies: IC3T 2015, Hyderabad, India, 24–26 July 2015; Springer: New Delhi, India, 2016; Volume 2, pp. 793–800. [Google Scholar]
  14. Qin, X.; Zheng, Y.; Chen, B. Extract EEG features by combining power spectral density and correntropy spectral density. In Proceedings of the 2019 Chinese Automation Congress (CAC), Hangzhou, China, 22–24 November 2019; IEEE: New York, NY, USA, 2019; pp. 2455–2459. [Google Scholar]
  15. Elgandelwar, S.M.; Bairagi, V.K. Power analysis of EEG bands for diagnosis of Alzheimer disease. Int. J. Med. Eng. Inform. 2021, 13, 376–385. [Google Scholar] [CrossRef]
  16. Al-Fahoum, A.S.; Al-Fraihat, A.A. Methods of EEG Signal Features Extraction Using Linear Analysis in Frequency and Time-Frequency Domains. Int. Sch. Res. Not. 2014, 2014, 730218. [Google Scholar] [CrossRef]
  17. Zheng, J.; Liang, M.; Sinha, S.; Ge, L.; Yu, W.; Ekstrom, A.; Hsieh, F. Time-frequency analysis of scalp EEG with Hilbert-Huang transform and deep learning. IEEE J. Biomed. Health Inform. 2021, 26, 1549–1559. [Google Scholar] [CrossRef]
  18. Altaheri, H.; Muhammad, G.; Alsulaiman, M.; Amin, S.U.; Altuwaijri, G.A.; Abdul, W.; Bencherif, M.A.; Faisal, M. Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: A review. Neural Comput. Appl. 2023, 35, 14681–14722. [Google Scholar] [CrossRef]
  19. Pahuja, S.; Veer, K. Recent approaches on classification and feature extraction of EEG signal: A review. Robotica 2022, 40, 77–101. [Google Scholar]
  20. Light, G.A.; Williams, L.E.; Minow, F.; Sprock, J.; Rissling, A.; Sharp, R.; Swerdlow, N.R.; Braff, D.L. Electroencephalography (EEG) and event-related potentials (ERPs) with human participants. Curr. Protoc. Neurosci. 2010, 52, 6–25. [Google Scholar] [CrossRef]
  21. Sahu, R.; Dash, S.R.; Cacha, L.A.; Poznanski, R.R.; Parida, S. Epileptic seizure detection: A comparative study between deep and traditional machine learning techniques. J. Integr. Neurosci. 2020, 19, 1–9. [Google Scholar]
  22. Qing-Hua, W.; Li-Na, W.; Song, X. Classification of EEG signals based on time-frequency analysis and spiking neural network. In Proceedings of the 2020 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Macau, China, 21–24 August 2020; IEEE: New York, NY, USA, 2020; pp. 1–5. [Google Scholar]
  23. Zhang, Z. Spectral and time-frequency analysis. In EEG Signal Processing and Feature Extraction; Springer: Singapore, 2019; pp. 89–116. [Google Scholar]
  24. Aci, Ç.İ.; Kaya, M.; Mishchenko, Y. Distinguishing mental attention states of humans via an EEG-based passive BCI using machine learning methods. Expert Syst. Appl. 2019, 134, 153–166. [Google Scholar] [CrossRef]
  25. Forno, E.; Fra, V.; Pignari, R.; Macii, E.; Urgese, G. Spike encoding techniques for IoT time-varying signals benchmarked on a neuromorphic classification task. Front. Neurosci. 2022, 16, 999029. [Google Scholar] [CrossRef]
  26. Shamma, S.A.; Elhilali, M.; Micheyl, C. Temporal coherence and attention in auditory scene analysis. Trends Neurosci. 2011, 34, 114–123. [Google Scholar] [CrossRef]
  27. Nasrollahi, S.A.; Syutkin, A.; Cowan, G. Input-Layer Neuron Implementation Using Delta-Sigma Modulators. In Proceedings of the 2022 20th IEEE Interregional NEWCAS Conference (NEWCAS), Quebec, QC, Canada, 19–22 June 2022; IEEE: New York, NY, USA, 2022; pp. 533–537. [Google Scholar]
  28. Park, S. Principles of Sigma-Delta Modulation for Analog-to-Digital Converters; Motorola: Austin, TX, USA, 1999. [Google Scholar]
  29. Luo, Y.; Fu, Q.; Xie, J.; Qin, Y.; Wu, G.; Liu, J.; Jiang, F.; Cao, Y.; Ding, X. EEG-based emotion classification using spiking neural networks. IEEE Access 2020, 8, 46007–46016. [Google Scholar] [CrossRef]
  30. Devnath, L.; Kumer, S.; Nath, D.; Das, A.; Islam, R. Selection of wavelet and thresholding rule for denoising the ECG signals. Ann. Pure Appl. Math. 2015, 10, 65–73. [Google Scholar]
  31. Zhang, D.; Cao, D.; Chen, H. Deep learning decoding of mental state in non-invasive brain computer interface. In Proceedings of the International Conference on Artificial Intelligence, Information Processing and Cloud Computing, Sanya, China, 19–21 December 2019; pp. 1–5. [Google Scholar]
  32. Friedman, N.; Fekete, T.; Gal, K.; Shriki, O. EEG-based prediction of cognitive load in intelligence tests. Front. Hum. Neurosci. 2019, 13, 191. [Google Scholar] [CrossRef]
  33. Xia, L.; Feng, Y.; Guo, Z.; Ding, J.; Li, Y.; Li, Y.; Ma, M.; Gan, G.; Xu, Y.; Luo, J.; et al. MuLHiTA: A novel multiclass classification framework with multibranch LSTM and hierarchical temporal attention for early detection of mental stress. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 9657–9670. [Google Scholar] [CrossRef]
  34. Xiong, R.; Kong, F.; Yang, X.; Liu, G.; Wen, W. Pattern recognition of cognitive load using eeg and ecg signals. Sensors 2020, 20, 5122. [Google Scholar] [CrossRef]
  35. Souchet, A.D.; Philippe, S.; Lourdeaux, D.; Leroy, L. Measuring visual fatigue and cognitive load via eye tracking while learning with virtual reality head-mounted displays: A review. Int. J. Hum.–Comput. Interact. 2022, 38, 801–824. [Google Scholar] [CrossRef]
  36. Belwafi, K.; Gannouni, S.; Aboalsamh, H. Embedded brain computer interface: State-of-the-art in research. Sensors 2021, 21, 4293. [Google Scholar] [CrossRef]
  37. Venkatesan, C.; Karthigaikumar, P.; Paul, A.; Satheeskumaran, S.; Kumar, R. ECG signal preprocessing and SVM classifier-based abnormality detection in remote healthcare applications. IEEE Access 2018, 6, 9767–9773. [Google Scholar] [CrossRef]
  38. Zayim, N.; Yıldız, H.; Yüce, Y.K. Estimating Cognitive Load in a Mobile Personal Health Record Application: A Cognitive Task Analysis Approach. Healthc. Inform. Res. 2023, 29, 367. [Google Scholar] [CrossRef]
  39. Prabhakar, G.; Mukhopadhyay, A.; Murthy, L.; Modiksha, M.; Sachin, D.; Biswas, P. Cognitive load estimation using ocular parameters in automotive. Transp. Eng. 2020, 2, 100008. [Google Scholar] [CrossRef]
  40. Gjoreski, M.; Mahesh, B.; Kolenik, T.; Uwe-Garbas, J.; Seuss, D.; Gjoreski, H.; Luštrek, M.; Gams, M.; Pejović, V. Cognitive load monitoring with wearables–lessons learned from a machine learning challenge. IEEE Access 2021, 9, 103325–103336. [Google Scholar] [CrossRef]
  41. Bensimon, M.; Greenberg, S.; Ben-Shimol, Y.; Haiut, M. A New SCTN Digital Low Power Spiking Neuron. IEEE Trans. Circ. Syst. II Express Briefs 2021, 68, 2937–2941. [Google Scholar] [CrossRef]
  42. Bensimon, M.; Greenberg, S.; Haiut, M. Using a low-power spiking continuous time neuron (SCTN) for sound signal processing. Sensors 2021, 21, 1065. [Google Scholar] [CrossRef] [PubMed]
  43. Bensimon, M.; Hadad, Y.; Ben-Shimol, Y.; Greenberg, S. Time-Frequency Analysis for Feature Extraction Using Spiking Neural Network. Authorea Prepr. 2023. [Google Scholar] [CrossRef]
  44. Malmivuo, J.; Plonsey, R. Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields; Oxford University Press: New York, NY, USA, 1995. [Google Scholar]
  45. Song, S.; Miller, K.D.; Abbott, L.F. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 2000, 3, 919–926. [Google Scholar] [CrossRef]
  46. Erickson, N.; Mueller, J.; Shirkov, A.; Zhang, H.; Larroy, P.; Li, M.; Smola, A. Autogluon-tabular: Robust and accurate automl for structured data. arXiv 2020, arXiv:2003.06505. [Google Scholar]
  47. Myrden, A.; Chau, T. A passive EEG-BCI for single-trial detection of changes in mental state. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 345–356. [Google Scholar] [CrossRef]
  48. Alirezaei, M.; Sardouie, S.H. Detection of human attention using EEG signals. In Proceedings of the 2017 24th National and 2nd International Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran, 30 November–1 December 2017; IEEE: New York, NY, USA, 2017; pp. 1–5. [Google Scholar]
  49. Nuamah, J.; Seong, Y. Support vector machine (SVM) classification of cognitive tasks based on electroencephalography (EEG) engagement index. Brain-Comput. Interfaces 2018, 5, 1–12. [Google Scholar] [CrossRef]
Figure 1. (a) SNN-based resonator architecture and (b) the neurons’ output.
Figure 2. Overall SNN-based architecture. (a) EEG electrodes positioned according to 10–20 standard. (b) SNN-based resonators used for feature extraction through supervised STDP learning. (c) Feature mapping, with EEG topologic map consisting of 11 × 11 SCTNs. (d) SCTN-based classification network trained with unsupervised STDP.
Figure 3. SCTN-based resonator frequency response to a chirp signal in the EEG sub-band ranges.
Figure 4. EEG frequency features. (a,c) SNN-based spikegram and (b,d) FFT spectrogram.
Figure 5. EEG feature mapping. (a) EEG topologic map consisting of 11 × 11 SCTNs for each sub band. (b) Weight distribution for the 14 synapses of each SCTN, according to an EEG electrode position map.
Figure 6. EEG sub-band topography maps for FFT vs. SCTN. SCTN-based topographic maps (one for each sub-band) are compared to the spatial maps created using an FFT.
Figure 7. EEG topography maps for the five EEG sub-bands (delta, theta, alpha, beta, and gamma).
Figure 8. The activity measured in the F3 EEG electrode for the delta sub-band.
Table 1. Average classification results.
Model               Frame Duration (ms)   FFT      SCTN
Weighted Ensemble   10                    90.10%   93.86%
SVM                 20                    91.51%   93.18%
XGBoost             10                    88.82%   92.64%
Random Forest       5                     86.81%   92.46%
LightGBM            15                    89.42%   91.96%
KNN                 20                    63.77%   68.98%
Table 2. A comparison of mental state classification results.
Reference                      Mental States                          Accuracy
Myrden and Chau [47]           3 (fatigue, frustration, attention)    84.8%
Alirezaei and Sardouie [48]    2 (attentive or inattentive)           92.8%
Nuamah and Seong [49]          2 (attentive or inattentive)           93.33%
Acı et al. [24]                3 (focused, unfocused, drowsy)         91.72%
Our Method                     3 (focused, unfocused, drowsy)         96.8%
Table 3. Transfer learning results. This table summarizes the simulation results when the network was trained on data from one subject and tested across all participants. The right column shows the performance of the generic model, which was trained using data collected from all participants and then tested using each of the subjects separately.
            Trained
Tested     P0       P1       P2       P3       P4       Avg TL    Generic Model
P0         98.3%    91.1%    88.5%    93.8%    97.3%    92.68%    93.4%
P1         91.3%    94.7%    87.3%    92.4%    90.2%    90.3%     89.8%
P2         93.9%    85.1%    95.9%    88.1%    89.3%    89.1%     94.3%
P3         91.2%    92.8%    95.2%    97.2%    96.3%    93.88%    93.5%
P4         89.9%    87.5%    92.5%    96.8%    97.9%    91.68%    92.2%
Avg        92.92%   90.24%   91.88%   93.66%   94.2%    92.53%    92.64%
Table 4. A comparison of EEG classification accuracy with five electrodes and three sub-bands.
EEG Electrodes              Frequency Bands        Neurons   Accuracy
All (14)                    All (5)                2776      96.8%
All (14)                    Delta, Theta, Alpha    2534      94.1%
F7, F3, AF3, F4, and AF4    All (5)                1444      94.9%
F7, F3, AF3, F4, and AF4    Delta, Theta, Alpha    1202      93.6%
Table 5. Performance evaluation in terms of the four metrics: accuracy, sensitivity, specificity, and false positive rate (FPR).
            Accuracy   Sensitivity   Specificity   FPR
Focused     96.8%      98.3%         99.4%         0.9%
Unfocused   96.8%      96.4%         97.4%         1.8%
Drowsy      96.8%      95.7%         98.2%         2.2%
Table 6. Confusion matrix.
            Focused   Unfocused   Drowsy
Focused     98.8%     0.8%        0.4%
Unfocused   0.9%      95.2%       3.9%
Drowsy      0.8%      2.8%        96.4%

Share and Cite

Hadad, Y.; Bensimon, M.; Ben-Shimol, Y.; Greenberg, S. Situational Awareness Classification Based on EEG Signals and Spiking Neural Network. Appl. Sci. 2024, 14, 8911. https://doi.org/10.3390/app14198911
