Article

Comparison of LSTM- and GRU-Type RNN Networks for Attention and Meditation Prediction on Raw EEG Data from Low-Cost Headsets

by Fernando Rivas 1,*, Jesús Enrique Sierra-Garcia 2,* and Jose María Camara 2
1 Department of Electromechanical Engineering, University of Burgos, 09006 Burgos, Spain
2 Department of Digitalization, University of Burgos, 09006 Burgos, Spain
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(4), 707; https://doi.org/10.3390/electronics14040707
Submission received: 27 December 2024 / Revised: 5 February 2025 / Accepted: 7 February 2025 / Published: 12 February 2025

Abstract:
This study bridges neuroscience and artificial intelligence by developing advanced models to predict cognitive states—specifically attention and meditation—using raw EEG data collected from low-cost commercial devices such as NeuroSky and Brainlink. Leveraging the temporal capabilities of recurrent neural networks (RNNs), particularly long short-term memory (LSTM) and gated recurrent units (GRUs), the study evaluates their effectiveness in predicting future cognitive states. These predictions have applications in real-time brain–computer interface (BCI) systems, enhancing responsiveness and adaptability in dynamic environments like robotic control. The proposed LSTM model demonstrated superior predictive accuracy for meditation states, achieving a Root Mean Squared Error (RMSE) of 10.90, while the GRU model excelled in predicting attention states, with an RMSE of 11.79. Both models outperformed the results provided by the proprietary eSense algorithm, reinforcing the potential of raw EEG data in cognitive-state analysis. Notably, inference times were optimized to under 50 milliseconds, making the models suitable for real-time applications. These findings underline the feasibility of using raw EEG signals from affordable devices for robust real-time prediction, offering a significant step forward in applied neuroscience. This research lays the groundwork for further exploration of RNN architectures in BCI applications, enabling safer, more intuitive, and personalized interactions in assistive technologies and beyond.

1. Introduction

Brain–computer interface technology has emerged as a transformative bridge between human cognition and external devices, offering promising solutions for applications ranging from assistive technologies to rehabilitation systems. A critical challenge in BCI development is the reliable detection and interpretation of cognitive states that can serve as robust control signals [1]. Among these states, attention and meditation have garnered particular interest due to their distinctive neurophysiological signatures and practical implications for BCI applications.
The importance of attention and meditation in BCI systems stems from several key factors. First, attention represents a fundamental cognitive mechanism that directly influences task performance, learning efficiency, and error prevention in human–machine interaction [2]. In BCI applications, attentional states can serve as natural control signals, as they can be voluntarily modulated by users and maintain stability over extended periods. Second, meditation states offer complementary advantages through their association with enhanced signal-to-noise ratios in EEG readings and reduced cognitive interference, potentially improving BCI reliability [3].
Current commercial BCI systems, such as those utilizing NeuroSky technology, employ proprietary algorithms to detect these cognitive states. However, these closed systems present several limitations: lack of transparency in signal processing, inability to customize detection parameters for specific applications, and restricted adaptation to individual user characteristics [4]. These constraints have spurred research interest in developing open, adaptable alternatives that can advance both scientific understanding and practical applications.
EEG has proven particularly valuable for studying attention and meditation due to its high temporal resolution and ability to capture rapid cognitive state transitions [5]. Recent advances in EEG signal processing have demonstrated distinct neural signatures associated with diverse levels of attention and meditative states, particularly in the prefrontal cortex regions [6,7]. These findings suggest the potential for developing more sophisticated detection algorithms that can leverage these neural patterns for enhanced BCI control.
The integration of attention and meditation detection in BCIs has significant practical implications. In rehabilitation settings, accurate detection of attention levels can help optimize therapy sessions and provide objective measures of patient engagement [8]. For assistive technologies, meditation states can serve as stable control signals, particularly beneficial for users with limited motor control. These applications demonstrate the practical value of improving cognitive state detection in BCI systems.
The emergence of advanced machine learning techniques, particularly RNNs, offers new opportunities to address current limitations in cognitive state detection. LSTM and GRU networks have demonstrated particular promise in capturing temporal dependencies in EEG signals, yet their application to attention and meditation detection remains relatively unexplored [8].

1.1. Hypothesis and Contributions

The central hypothesis of this study is that it is feasible to predict attention and meditation values derived from EEG signals using neural networks. This hypothesis is founded on the premise that the temporal and non-linear characteristics of EEG signals can be effectively captured and modeled by advanced neural architectures. Specifically, this study explores the applicability of these predictive models in accurately estimating the cognitive states of attention and meditation, which are essential for various human–computer interaction applications.
By addressing this hypothesis, the research aims to contribute to the growing body of knowledge on EEG signal processing and its integration with machine learning techniques. The outcomes of this investigation have significant implications for developing real-time applications in neurofeedback, cognitive training, and brain–computer interface systems, offering a pathway for improved user experiences and technological advancements in the field.
This research makes specific contributions to the field:
  • Development of LSTM and GRU architectures specifically optimized for real-time detection of attention and meditation states from raw EEG signals.
  • Empirical validation of these models’ performance compared to existing proprietary solutions, with detailed analysis of accuracy, latency, and robustness.
  • Introduction of a new methodology for processing raw EEG data that enables greater customization and adaptation of BCI systems.
  • Demonstration of practical applications through case studies in assistive technology and rehabilitation contexts.
Our approach addresses several critical limitations in current BCI systems. By working directly with raw EEG signals rather than preprocessed data, we enable greater transparency and customization possibilities. The use of advanced RNN architectures allows for better capture of temporal dynamics in cognitive-state transitions, potentially improving detection accuracy. Furthermore, our models’ ability to operate in real-time makes them suitable for practical BCI applications.
This research not only advances our understanding of cognitive-state detection in BCI systems but also provides practical tools for improving human–machine interaction in critical applications. The combination of advanced machine learning techniques with raw EEG signal processing represents a significant step toward more adaptable and effective BCI systems.

1.2. Paper Structure

The paper is structured as follows: Section 2 reviews related works in EEG signal processing and cognitive-state prediction. Section 3 details the materials and methods, including experimental setup, data acquisition protocols, and the architecture of our LSTM and GRU models. Section 4 presents results and validation metrics, while Section 5 discusses findings in relation to the existing literature. Finally, Section 6 concludes with key contributions and future research directions.

2. Related Works

Neuroscience has experienced a boom in recent decades, especially in exploring the relationship between brain activity and cognitive states such as attention and meditation. EEG has established itself as an essential tool for capturing and analyzing the brain’s electrical activity in real time. As technology advances, researchers have begun to decipher the brainwave patterns associated with sustained attention and meditative states, opening new possibilities for understanding the human mind. These advances not only offer insights into the fundamental nature of consciousness but also have the potential to influence practical applications, from improving cognitive performance to treating neurological disorders.
This article provides a more detailed and elaborate review of EEG-based attention and meditation prediction, incorporating the most recent publications.
In the past decade, the field of EEG has experienced significant advancements, revolutionizing our understanding of brain activity and its applications in various areas of neuroscience. Chaddad et al. (2023) presented a comprehensive review of EEG signal processing methods and techniques, encompassing everything from acquisition to classification and application [2,9]. This review highlights the inherent complexity of EEG signals and underscores the critical need to develop advanced preprocessing and feature extraction methods for their effective analysis. The complexity of these non-invasive signals has spurred researchers to propose innovative approaches to unravel the wealth of information contained in patterns of electrical brain activity. Concurrently, Posner (2023) examined the evolution of attention networks, proposing an integrative approach that combines human and animal studies to address unresolved problems in this field [3]. His work emphasizes the fundamental importance of attention networks in integrating cognitive and neural studies, laying the groundwork for significant advances in cognitive neuroscience. This integrative perspective promises to unveil the mechanisms underlying complex attentional processes and their relationship to other higher cognitive functions.
The integration of emerging technologies with traditional EEG techniques has opened new avenues of research, expanding our understanding of brain processes in more natural and ecologically valid contexts. An innovative 2019 study explored the connections between creative behavior, flow state, and brain activity through the integration of EEG and virtual reality [4]. This research revealed significant correlations between individual creativity levels, flow state, and the quality of creative output, providing valuable insights into the neural substrates of creativity and focused attention. In the realm of meditation, several studies have utilized EEG to investigate the effects of different techniques on brain activity and cognitive performance. A 2022 retrospective analysis compared “internal” versus “external” meditation techniques, shedding light on the relative efficacy of different meditative approaches [5]. Complementarily, a 2020 longitudinal study provided direct evidence of the effectiveness of Focused Attention Meditation (FAM) training in modulating brain activity and improving cognitive performance [6], underscoring the potential of meditative practices in optimizing brain functions.
The convergence of EEG with other emerging technologies has significantly broadened the horizon of neuroscientific research. An innovative 2021 project combined EEG with a brainwave lamp to study real-time attention, meditation, and fatigue values [10], opening new possibilities for monitoring and modulating mental states in various contexts. This multidisciplinary approach not only allows for a more holistic assessment of cognitive and emotional states but also offers promising perspectives for applications in areas such as mental health and cognitive performance. Furthermore, a pioneering 2021 study revealed a significant reorganization of brain network connectivity following intensive meditation training [11]. This research identified changes in key areas such as the right insula, superior temporal gyrus, inferior parietal lobe, and bilateral superior frontal gyrus, providing neurobiological evidence of the long-term effects of meditative practice on the brain’s functional architecture. These collective advances not only demonstrate the immense potential of EEG in understanding brain processes but also lay the foundation for revolutionary applications in various fields of neuroscience, biomedical engineering, and personalized medicine, promising to transform our understanding of the human brain and its functioning in states of health and disease.
These publications provide an in-depth and up-to-date overview of research and advances in the field of EEG-based attention and meditation prediction. The combination of advanced signal-processing techniques, together with innovative approaches to measuring and analyzing attention and meditation, is leading to significant discoveries that may have practical applications in areas such as mental health, education, and general well-being.
One of the primary limitations identified in the current literature is the widespread dependence on NeuroSky’s proprietary algorithm for interpreting EEG signals. This algorithm, designed to determine values such as attention and meditation, has been widely used in numerous studies. For instance, the research conducted by Rușanu et al. (2023) [8], which developed a LabVIEW instrument for brain–computer interface research using the NeuroSky MindWave Mobile headset, does not specify whether it relied on NeuroSky’s algorithm for determining certain values. This dependence on a proprietary algorithm raises questions about the reproducibility and comparability of results across different studies, as well as the flexibility in interpreting EEG data for specific applications.
Another significant limitation of the NeuroSky/Brainlink headband lies in its precision and resolution compared to medical-grade or laboratory EEG systems. As a low-cost device designed for the consumer market, the NeuroSky headband may not offer the same level of fidelity in signal acquisition as more expensive professional equipment. This discrepancy in data quality can have important implications for research, especially in studies that require high precision in measuring brain activity. The limitation in spatial resolution, due to the reduced number of electrodes, also restricts the ability to accurately localize sources of neural activity, which can be crucial in certain cognitive and clinical neuroscience applications.
A significant gap in the current literature is the scarcity of research specifically focusing on the use of raw signals from the NeuroSky headband to determine mental states such as attention and meditation [12]. Many studies rely on NeuroSky’s algorithm-processed data, limiting the exploration of raw EEG signals’ full potential. This research addresses this gap by using RNNs, specifically LSTM and GRU models, to analyze raw EEG data. These architectures are ideal for time series like EEG signals, capturing complex patterns and long-term dependencies [13].
By bypassing the proprietary algorithm, this approach enhances flexibility in data interpretation, uncovering patterns and mental states that NeuroSky’s algorithm might overlook. Analyzing raw data also enables the development of personalized models for attention and meditation, tailored to specific applications.
LSTM and GRU networks are particularly effective in handling EEG’s sequential nature. LSTMs retain relevant information over time, while GRUs efficiently update internal states, making them well-suited to detect subtle brain activity patterns linked to cognitive states.
Additionally, deep learning techniques like RNNs can identify new features and relationships in EEG data, offering insights into brain signals and cognitive states [14]. This could reveal biomarkers for neurological or psychological conditions while improving result interpretability compared to NeuroSky’s opaque “black box” algorithm.
Despite the hardware limitations of devices like the NeuroSky headband, advanced signal processing and RNN-based models improve the functional resolution of data, enabling more precise brain-activity inferences. This enhances the headband’s utility and broadens its application to areas like cognitive neuroscience, clinical psychology, and advanced brain–computer interfaces.
This innovative approach not only addresses the current limitations of the NeuroSky headband but also paves the way for more sophisticated and nuanced analyses of EEG data in general. By leveraging the power of deep learning and working directly with raw signals, researchers can potentially uncover subtle patterns and relationships in brain activity that were previously inaccessible. This could lead to breakthroughs in our understanding of cognitive processes, emotions, and various neurological conditions.
Furthermore, the development of custom RNN-based models for EEG analysis could have far-reaching implications beyond the specific context of the NeuroSky headband. The methodologies and insights gained from this research could be applied to other EEG devices and even to more complex multi-channel EEG systems, potentially revolutionizing the field of brain signal analysis.
In conclusion, while the NeuroSky headband has already made significant contributions to democratizing EEG research, the proposed approach of using RNNs to analyze raw signals represents a crucial next step in unlocking its full potential. Although our system is trained with the results of the headset’s own algorithm, the key contribution is the ability to predict future states of attention and meditation. This extends the functionality of BCI systems, allowing them to anticipate user needs and improve interaction with external devices [14]. This predictive modeling based on recurrent neural networks opens new pathways for real-time applications, such as BCI-controlled robotic arms or wheelchair systems, where immediate response to cognitive states is crucial, above all, to ensure user safety.
The summary of related works is shown in Table 1.

3. Technologies

3.1. NeuroSky

The NeuroSky headband has emerged as a revolutionary tool in the field of EEG signal acquisition, offering an accessible and versatile alternative to traditional medical-grade EEG systems. Despite its relative simplicity, this device has proven invaluable in a wide range of research and development applications.
Figure 1 shows the location of the EEG potential signal capture points in a healthy brain.
The brain signals and frequency ranges captured by NeuroSky are shown in Table 2.
In the following figure, Figure 2, we can see the typical morphology of these signals according to their frequency and waveform.
Its ability to provide raw EEG data has opened new avenues of research and democratized access to applied neuroscience. A pioneering study conducted in 2022 by Vasilescu et al. illustrates the potential for integrating the NeuroSky headband with advanced data acquisition and processing systems [8]. The researchers developed a series of LabVIEW applications that enable real-time acquisition, processing, feature extraction, and classification of EEG signals detected by the integrated sensor of the NeuroSky MindWave Mobile headset. This innovative approach not only enhances the accessibility of EEG data but also facilitates its real-time analysis, opening new possibilities for research in BCI.
The versatility of the NeuroSky/Brainlink headbands is further evidenced by their application in diverse research fields. A study conducted by Mohd Amin et al. in 2020 explored the use of the NeuroSky Smarter Kit in a brain training program for the elderly [21]. That research focused on analyzing changes in attention and meditation levels, providing valuable insights into how EEG technology can contribute to improving cognitive health in ageing populations. Concurrently, an innovative 2023 study by Shrestha et al. leveraged the capabilities of the NeuroSky headband to classify EEG signals based on color stimuli [16]. Using a deep neural network based on attention, the researchers successfully classified raw EEG signals from the NeuroSky MindWave headset based on two and four different colors.
NeuroSky’s proprietary algorithm, known as the eSense algorithm, is designed to compute values for attention and meditation by analyzing specific EEG signal components. For attention, the algorithm primarily focuses on the power ratio of high and low Beta waves, which are associated with active cognitive engagement. For meditation, it combines Alpha waves, linked to relaxation, with Theta waves, often indicative of a meditative or drowsy state. Figure 3 shows typical attention and meditation signals. Figure 4 shows the brain points for the calculation of attention and meditation based on signals from the prefrontal area of the brain, identified as points FP1 and FP2.
While the eSense algorithm is effective in providing an estimation of these cognitive states, its proprietary nature poses certain limitations. These include restricted transparency in signal processing, limited adaptability to specific applications, and an inability to customize parameters for individual users. Our study addresses these limitations by using raw EEG data and advanced neural network models, offering a more transparent and flexible approach to cognitive-state prediction.
The practical application of NeuroSky technology in assisting people with physical disabilities is evident in the project developed by Sathyanarayanan et al. in 2021 [24]. This brain-controlled EEG system for home automation, specifically designed to aid individuals with physical disabilities and paralysis, achieved an impressive 90% accuracy in detecting attention levels [14]. These results underscore the transformative potential of accessible EEG technology in improving the quality of life for vulnerable populations. Complementing these advancements, an earlier study by Mathur et al. in 2018 explored the classification of EEG-based directional signals using RNN variants [17]. The researchers implemented a sophisticated model using long short-term memory with an attention layer to classify both raw EEG signals and power signals generated by the NeuroSky MindWave device. This approach not only demonstrates the versatility of the data provided by the NeuroSky headband but also illustrates how advanced deep learning techniques can extract meaningful information from these signals, thus expanding the horizon of potential applications in fields such as cognitive neuroscience, human–machine interaction, and neurological rehabilitation.
These studies demonstrate the versatility and applicability of the NeuroSky headset in a variety of research areas, from brain–computer interface to color-based signal classification and home automation. Despite the inherent limitations of a low-cost, single-channel device, researchers have found innovative ways to use it to achieve significant results in their respective fields.

3.2. Predictive Deep Learning Models

In this work, we explore the ability of LSTM and GRU networks to predict attention and meditation signals. LSTM and GRU networks have shown promising results in capturing long-term dependencies and modeling temporal dynamics in sequential data, including EEG signals. These models have been successfully applied in various cognitive state-prediction tasks [25].
LSTM networks are advanced recurrent neural networks designed to handle sequential data by addressing the vanishing-gradient problem. As illustrated in Figure 5, each LSTM cell includes three key gates: the input gate, which controls new information entering the cell; the forget gate, which discards unnecessary information; and the output gate, which forwards relevant information to the next step. This gating system allows LSTMs to manage long-term dependencies effectively, making them ideal for processing EEG signals in our BCI project.
GRU networks represent a simplified and highly efficient variant of recurrent neural networks, designed to capture long-term dependencies in sequential data. As illustrated in Figure 6, a GRU unit comprises two primary gates: the update gate, which determines what portion of the previous information is retained, and the reset gate, which controls how much past information is forgotten. This simpler structure, compared to LSTMs, allows GRUs to process data sequences efficiently whilst maintaining the ability to capture complex temporal relationships, making them particularly suitable for EEG signal analysis in our BCI project.
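As an orientation only, the following Python sketch shows how single-layer LSTM and GRU regressors of this kind can be assembled with Keras; the layer size, look-back length, and number of input features are illustrative placeholders rather than the tuned values reported in Section 6.

```python
# Minimal sketch (assumed Keras/TensorFlow backend): one gated recurrent layer
# followed by a dense regression head. Shapes are illustrative placeholders.
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import LSTM, GRU, Dense

LOOK_BACK = 5      # number of past 1-second blocks fed to the network
N_FEATURES = 8     # e.g., the eight EEG band-power values per block

def build_rnn(cell="lstm", units=64):
    """Return a small recurrent regressor predicting one cognitive-state value."""
    layer = LSTM if cell == "lstm" else GRU
    model = Sequential([
        Input(shape=(LOOK_BACK, N_FEATURES)),   # look-back window of band-power blocks
        layer(units),
        Dense(1),                               # predicted attention or meditation value
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

lstm_model = build_rnn("lstm")
gru_model = build_rnn("gru")
```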
In the context of our research, GRUs are employed to analyze and classify patterns in EEG signals, a crucial step in translating neural activity into commands for assistive technologies. The GRU’s ability to learn and recognize temporal patterns in EEG data, corresponding to specific neural activities or intentions, is fundamental to the system’s accuracy and responsiveness. This makes GRU networks an essential component of our BCI framework, facilitating a seamless transition between raw EEG signals and actionable outputs in assistive devices. Their integration into our system not only exemplifies cutting-edge neural-processing technologies but also establishes a solid foundation for future advancements in intuitive human–machine interaction within the field of BCIs.

4. Use-Case Architecture

The use-case architecture is shown in Figure 7. The EEG signals are recorded using commercial headsets (NeuroSky/Brainlink) equipped with dry electrodes, capturing brain activity from the prefrontal cortex. These raw signals, containing information from frequency bands such as delta, theta, alpha, and beta, form the input for the neural networks. The target data used to train the neural networks are the attention and meditation signals computed by NeuroSky’s patented eSense algorithm, which is protected by NeuroSky and not freely available.
The aim of the neural network is to replicate the outputs of the patented algorithm. In this way, the attention and meditation signals can be used in low-cost BCIs that lack these capabilities. The right side of Figure 7 shows the deployment of the neural network: any low-cost BCI headset can be used, and the NeuroSky device is no longer needed.
This process ensures that the predictive framework remains both flexible and adaptive. By relying on raw EEG signals rather than the fixed outputs of proprietary algorithms, the architecture achieves greater transparency and adaptability. Furthermore, the use of RNNs allows the system to effectively capture the temporal dependencies within the data, resulting in more accurate and dynamic predictions of cognitive states. This design not only addresses existing limitations in EEG-based systems but also ensures its applicability across diverse real-world scenarios.
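To make the deployment side of Figure 7 concrete, the following sketch outlines a possible real-time loop; the model file name and the read_band_powers() helper are hypothetical placeholders standing in for the trained network and the headset driver, respectively, and do not correspond to any actual API of the devices.

```python
# Illustrative real-time loop (a sketch: the model path and read_band_powers()
# are placeholders, not the actual acquisition code used in this work).
import collections
import numpy as np
from tensorflow.keras.models import load_model

LOOK_BACK, N_FEATURES = 5, 8
model = load_model("attention_gru.keras")       # placeholder path to a trained model
window = collections.deque(maxlen=LOOK_BACK)    # sliding look-back buffer

def read_band_powers():
    """Hypothetical stand-in for the headset driver (one block per second)."""
    return np.random.rand(N_FEATURES)           # replace with real band-power values

while True:                                     # one iteration per 1-second data block
    window.append(read_band_powers())
    if len(window) == LOOK_BACK:
        x = np.asarray(window, dtype="float32").reshape(1, LOOK_BACK, N_FEATURES)
        attention = float(model.predict(x, verbose=0)[0, 0])
        # forward 'attention' to the BCI application (robotic arm, wheelchair, ...)
```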

5. Methodology

The process to record the dataset, analyze and organize the information, and train the neural models is divided into the following steps:
  • Experimental setup: datasets are recorded following a standardized procedure.
  • Feature sets: different feature sets are identified from the information captured in the dataset.
  • Data preprocessing: data are converted into a time-series structure suitable for training the networks.
  • Training: data are separated into training and validation sets, and RandomSearch is used to find the best hyperparameters.
  • Cross-validation: cross-validation is used to verify that results are consistent regardless of which subsets of the dataset are considered.

5.1. Experimental Setup

The study employed a structured data collection approach spanning 6 months (June 2023–December 2023). Data were collected from 5 participants (3 male, 2 female, age range of 21–60 years) using both NeuroSky and Brainlink headsets.
Participants were selected based on the following:
- No history of neurological disorders,
- Normal or corrected-to-normal vision,
- No prior experience with BCI devices.
Recording sessions:
- Two 30-min recording sessions separated by one week,
- Controlled environment settings (22 °C (±1), 45 dB ambient noise),
- Tasks included free-cognitive-state periods (≥10 min).
Each subject performed the experiments in two sessions separated by several days to ensure reproducibility of the results [26]. Subsequently, the signals from the different subjects were also incorporated into a continuous dataset in order to achieve a sufficient volume of information to guarantee the training process of the LSTM and GRU network.
The age range spans 21 to 60 years, with an effort to maintain gender parity. The participants’ data are anonymized and collected under a consecutive trial number that does not allow the data to be identified or associated with the participant. It should be noted that, in some male adults close to 60 years of age, data acquisition became unfeasible, as no data could be obtained from the prefrontal region, indicating that the use of non-invasive dry electrodes may in these cases be a barrier to consistent data capture.
Experiments were conducted in a controlled environment to minimize external distractions. A NeuroSky device and a Brainlink device were used to record EEG signals, as both devices have the same TGAM-based technology.
During the experiments, participants were asked to act naturally, trying to voluntarily maintain high levels of attention and concentration, according to the real-time values that could be seen in the data-capture application.
The data acquisition process was carefully designed to ensure both the authenticity of the collected signals and the comfort of the participants. To closely replicate real-world conditions, participants were given the freedom to engage in any activity of their choice during the sessions, such as watching films, chatting, reading, or simply relaxing. This approach aimed to capture a diverse range of natural cognitive states while minimizing action bias that could otherwise influence the data and limit their generalizability.
The duration of each session was set between 15 and 30 min, providing an optimal balance between data quantity and participant comfort. Given that the EEG headset records one block of data per second, this setup resulted in a minimum of 900 and up to 1800 data points per session, with each block containing 11 signal values. This design ensured that the dataset was both extensive and reflective of natural behavioral conditions, supporting the development of models robust enough to handle dynamic real-life environments.

5.2. Features Sets

The dataset captured during the experiments contains the following columns:
  • Timestamp: Timestamp of the capture.
  • Attention: Attention value.
  • Meditation: Meditation value.
  • Delta, Theta, low Alpha, high Alpha, low Beta, high Beta, low Gamma, and high Gamma: Values of the brain signals.
  • Signal: This column indicates the quality of the signal. In general, a value of 0 indicates a good signal quality, while higher values indicate a poor signal quality or no signal.
NeuroSky’s patented algorithm uses Beta signals to compute attention, and Alpha and Theta signals to compute meditation. Thus, we created two different feature sets. In the complete set, all signals are simultaneously used as inputs to predict the attention and meditation levels. In the partial feature set, only the signals used by the protected algorithm are used. Table 3 shows the relation between input and output signals in each feature set.
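For illustration, the two feature sets can be expressed as simple column selections over the recorded dataset; the column names below are assumed to mirror the fields listed above, and their exact spellings may differ from the recorded files.

```python
# Sketch of the two feature sets as column selections (column names assumed).
COMPLETE_SET = ["Delta", "Theta", "lowAlpha", "highAlpha",
                "lowBeta", "highBeta", "lowGamma", "highGamma"]

# Partial sets: only the bands reportedly used by the proprietary eSense algorithm.
PARTIAL_ATTENTION = ["lowBeta", "highBeta"]              # inputs for Attention
PARTIAL_MEDITATION = ["Theta", "lowAlpha", "highAlpha"]  # inputs for Meditation

TARGETS = ["Attention", "Meditation"]
```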
Figure 8 shows the attention and meditation data, together with the brain signals, during one of the experiments. Some signals, such as Delta, show much higher values compared to other signals. Attention and meditation signals show fluctuations over time, indicating changes in the levels of attention and meditation.

5.3. Data Preprocessing

The EEG data underwent a series of carefully designed preprocessing steps to prepare them for training the LSTM and GRU networks. Initially, the raw signals were assessed for quality using the headset’s internal metrics, and any segments with poor signal quality were excluded to minimize the impact of noise or artefacts. The remaining data were then normalized to a standard range from 0 to 1, ensuring consistent scaling and facilitating stable model training. To capture the temporal dependencies inherent in EEG signals, the data were organized into sliding look-back windows, where a fixed number of prior time steps (tested with sizes of 3, 5, 7, and 15) were used as input for predicting subsequent values.
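A minimal sketch of this preprocessing pipeline is shown below, assuming the column names listed in Section 5.2; the look-back sizes of 3, 5, 7, and 15 were the values tested.

```python
# Sketch of the preprocessing pipeline: quality filtering, 0-1 scaling,
# and sliding look-back windows (window sizes of 3, 5, 7, and 15 were tested).
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def make_windows(df: pd.DataFrame, feature_cols, target_col, look_back=5):
    """Return (X, y) with X of shape (samples, look_back, n_features)."""
    df = df[df["Signal"] == 0].reset_index(drop=True)     # keep good-quality blocks only
    scaler = MinMaxScaler()                               # scale features to [0, 1]
    feats = scaler.fit_transform(df[feature_cols])
    target = df[target_col].to_numpy(dtype="float32")
    X, y = [], []
    for i in range(len(df) - look_back):
        X.append(feats[i:i + look_back])                  # past look_back blocks
        y.append(target[i + look_back])                   # value to be predicted next
    return np.asarray(X, dtype="float32"), np.asarray(y, dtype="float32")
```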

5.4. Data Training

The dataset was split into training (65%) and testing (35%) subsets, allowing for robust model evaluation and generalizability testing:
  • Training set with 1118 records (65%),
  • Test set with 602 records (35%).
We used RandomSearch to determine the best hyperparameters and architecture. As part of the RandomSearch process, several hyperparameters were systematically varied to identify the optimal configuration for the LSTM and GRU models. Table 4 provides a comprehensive summary of the hyperparameters explored, including their respective ranges and the best values determined through the experiments. This optimization process was crucial for enhancing model performance and ensuring robust predictions.
These hyperparameters were optimized separately for attention and meditation datasets, with consistent performance improvements observed for both.
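As an illustration of how such a search can be set up, the sketch below uses the KerasTuner implementation of RandomSearch; this library choice is an assumption on our part, and the search space shown is illustrative rather than the exact ranges of Table 4.

```python
# Sketch of hyperparameter search with KerasTuner's RandomSearch
# (assumed library; ranges are illustrative, not those of Table 4).
import keras_tuner as kt
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import GRU, Dense

LOOK_BACK, N_FEATURES = 5, 8

def build_model(hp):
    model = Sequential([
        Input(shape=(LOOK_BACK, N_FEATURES)),
        GRU(hp.Int("units", min_value=16, max_value=128, step=16)),
        Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss",
                        max_trials=20, overwrite=True,
                        directory="tuning", project_name="attention_gru")
# tuner.search(X_train, y_train, validation_split=0.2, epochs=30)
# best_model = tuner.get_best_models(num_models=1)[0]
```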

5.5. Cross-Validation

Cross-validation is a vital step in machine learning to ensure that a model performs reliably and is not overly tailored to a specific dataset. This approach ensures that the model is evaluated on different subsets of data, improving its reliability and reducing the bias that might occur if a single train–test split was used. By dividing the data into training and testing subsets, it helps validate the model’s ability to generalize, providing confidence that it will work effectively in real-world scenarios.
To evaluate the performance and generalizability of the LSTM and GRU models, we employed a k-fold cross-validation approach with k = 5, using the values of the hyperparameters obtained in the previous RandomSearch process. This methodology ensures robust performance evaluation while minimizing the risk of overfitting. The process is described as follows:
  • Dataset partitioning:
    The dataset was randomly shuffled and divided into 5 equally sized folds.
    At each iteration, 1 fold was used as the test set, while the remaining 4 folds were combined to form the training set.
  • Training and validation:
    The models were trained on the training set and evaluated on the test fold. This process was repeated five times, with each fold serving as the test set once.
    For each fold, we recorded metrics such as RMSE, MSE, and MAE to measure prediction accuracy.
  • Performance aggregation:
    After completing the five iterations, the evaluation metrics were averaged across all folds to obtain a reliable estimate of the model’s performance.
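In code, this procedure corresponds to a standard k-fold loop; the following sketch assumes the scikit-learn KFold splitter, that the windowed arrays X and y are already available, and that a model factory such as the build_rnn() sketched in Section 3.2 is defined.

```python
# Sketch of the 5-fold cross-validation loop (assumes X, y and a build_rnn()
# model factory such as the one sketched in Section 3.2 are already defined).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

def cross_validate(X, y, build_fn, n_splits=5, epochs=30):
    rmse_scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=42)
    for train_idx, test_idx in kf.split(X):
        model = build_fn()                                  # fresh model for each fold
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        pred = model.predict(X[test_idx], verbose=0).ravel()
        rmse_scores.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
    return float(np.mean(rmse_scores)), float(np.std(rmse_scores))
```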

5.6. Performance Evaluation Metrics

To evaluate the performance of the LSTM and GRU models in calculating attentional and meditative states, metrics such as RMSE, MSE, MAE, and SMAPE were used. These metrics are essential to determine the accuracy and reliability of the models in predicting cognitive states from EEG signals. The choice of these metrics is based on previous studies that have demonstrated their effectiveness in evaluating deep learning models in EEG-based prediction tasks [27].
In the context of interpreting the performance of a neural network, the choice of the appropriate metric depends on the specific problem and the characteristics of the data. The mentioned metrics (MAE, MSE, RMSE, and SMAPE) have different properties and are applied in different situations. A detailed and well-argued justification for each is provided below:
1. Mean Absolute Error (MAE)
Definition: The MAE is the mean of the absolute values of the errors between predictions and actual values [28].
MAE Formula (1):
$MAE = \frac{1}{n}\sum_{i=1}^{n}\left| y_i - \hat{y}_i \right|$
where
n is the number of observations,
yi is the actual value,
ŷi is the predicted value.
Advantages:
- It is easy to interpret, as it represents the average error in the same units as the data.
- It is robust to outliers, as it does not penalize large errors as much as the MSE.
Disadvantages:
- It is not differentiable at all points, thus potentially complicating its use in some optimization algorithms.
2. Mean Squared Error (MSE)
Definition: The MSE is the mean of the squared errors between predictions and actual values [28].
MSE Formula (2):
$MSE = \frac{1}{n}\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2$
where
n is the number of observations,
yi is the actual value,
ŷi is the predicted value.
Advantages:
- It penalizes large errors more heavily, which can be useful if you want to avoid large deviations.
- It is always differentiable, which facilitates its use in neural network optimization.
Disadvantages:
- It is more sensitive to outliers, as large errors have a quadratic impact on the metric.
3. Root Mean Squared Error (RMSE)
Definition: The RMSE is the square root of the MSE [28].
RMSE Formula (3):
$RMSE = \sqrt{MSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2}$
where
n is the number of observations,
yi is the actual value,
ŷi is the predicted value.
Advantages:
- Similar to MSE in terms of penalizing large errors but returns errors in the same units as the original data, which can be more intuitive.
- Useful when a metric is needed that reflects the magnitude of errors more directly than MSE.
Disadvantages:
- Shares the same sensitivity to outliers as the MSE.
4. Symmetric Mean Absolute Percentage Error (SMAPE)
Definition: SMAPE is a percentage error metric that is symmetric: it treats overestimation and underestimation errors equally [29,30].
SMAPE Formula (4):
$SMAPE = \frac{1}{n}\sum_{t=1}^{n}\frac{\left| A_t - F_t \right|}{\left( \left| A_t \right| + \left| F_t \right| \right)/2}$
where
At is the actual value,
Ft is the forecast value,
n is the total number of observations.
Advantages:
- It provides a relative measure of error, which can be useful when comparing errors on different scales.
- It is symmetrical, which makes it suitable for cases where relative errors are to be treated equally.
Disadvantages:
- It can be unstable when actual values or predictions are close to zero, due to the division by near-zero denominators.
Final recommendation:
The choice of the most appropriate metric depends on the specific context:
MAE is recommended when an easy-to-interpret metric is needed and the impact of outliers is to be minimized.
MSE and RMSE are useful when you want to penalize larger errors more. RMSE is especially recommended if you need a metric in the same units as the data.
SMAPE is preferable when a relative and symmetric metric is needed, especially in problems where the data may vary in magnitude.
In general, for most neural network regression problems, RMSE is usually the most recommended metric because of its balance between penalizing large errors and easy interpretability in the units of the original data. However, the final selection should consider the specific characteristics of the problem and the objectives of the analysis.
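For reference, the four metrics can be computed directly from the prediction arrays, as in the following NumPy sketch of Equations (1)–(4); the example values at the end are dummy data for illustration only.

```python
# Direct NumPy implementations of the four evaluation metrics (Equations (1)-(4)).
import numpy as np

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def rmse(y, y_hat):
    return np.sqrt(mse(y, y_hat))

def smape(y, y_hat):
    # Symmetric relative error; often multiplied by 100 to express it as a percentage.
    # Unstable when both the actual and predicted values approach zero.
    return np.mean(np.abs(y - y_hat) / ((np.abs(y) + np.abs(y_hat)) / 2))

y_true = np.array([40.0, 55.0, 63.0])   # dummy values for illustration
y_pred = np.array([42.0, 50.0, 70.0])
print(mae(y_true, y_pred), rmse(y_true, y_pred), smape(y_true, y_pred))
```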

6. Results

To assess the accuracy and efficacy of these models, performance metrics were selected, as well as cross-validation techniques to ensure the robustness of the models. This comparison methodology is essential to discern the relative strengths and weaknesses of LSTM and GRU networks in the task of prediction from EEG data, thus enabling a comprehensive assessment of their applicability in neurofeedback and BCI contexts [31]. The evaluation metrics RMSE, MSE, MAE, and SMAPE validate the results, being in line with the results provided by the previous literature on deep learning model evaluation methodologies [32].
For the computational process and the calculation of values and metrics with the RandomSearch method, Google Sandbox and Google Colab were used to significantly reduce running times, after selecting the GPU configuration needed to optimize execution in that environment. Python 3.10.11 (64-bit) was used as the programming language.

6.1. LSTM Performance

In the following figures, Figure 9 and Figure 10, the complete LSTM model validation process can be observed, together with the metric values and the corresponding optimal hyperparameters.
The first analysis performed was the calculation of attention and meditation using the same calculation scheme followed by NeuroSky and Brainlink, segmenting the neural signals and discarding the Delta signal value. The first comparison process was performed for the attention and meditation values using an LSTM network and RandomSearch for the determination of the hyperparameters, as shown in Table 5 for the attention values and Table 6 for the meditation values.
The same process was then repeated using 100% of the neural-signal values obtained from the headband, without replicating the procedure followed by NeuroSky; the results are shown in Table 7 and Table 8.

6.2. GRU Performance

The same process was performed, but using a GRU network and the RandomSearch calculation structure, as shown in Table 9 for the prediction of attention and Table 10 for the value of meditation.
In the last two tables, repeated look-back values can be seen because, during testing, some of the obtained values, particularly in the definition of the hyperparameters, were far from what was expected or showed a greater dispersion than allowed.
As with the previous model, we performed the prediction with the GRU architecture while maintaining the test conditions, meaning that we maintained the analysis on 100% of the neural signals, with the following results shown in Table 11 for the attention value and Table 12 for the meditation value.

6.3. Model Comparison

To compare the prediction performance between the LSTM and GRU networks, we focused on the RMSE metric as the main evaluation metric. The reason for this choice is that RMSE provides a direct measure of error in the same units as the original data, making it easier to interpret. In addition, RMSE penalizes larger errors more heavily, which is crucial in the context of time-series forecasting, where significant errors can affect the practical utility of the model. The comparison will provide information on the strengths and weaknesses of each model in predicting attention and meditation from raw EEG signals [33].
In the process of comparing the above data, the following results can be extracted for the two model architectures and prediction strategies, based on the data obtained in Table 13, Table 14, Table 15 and Table 16.
These values resulting from the RandomSearch calculation are reflected in the following graphs, as shown in Figure 11 for the attention values and in Figure 12 for the meditation values.
As a summary, the result of the best prediction based on the RMSE metric is shown in Table 17, which compares not only the performance of the LSTM and GRU networks but also the size of the time window (look-back), and indicates which prediction strategy is more promising for our research.
From the above data, we can extract the optimal value obtained, as well as the architecture and the prediction calculation model, as can be seen in Table 18 for the attention value and Table 19 for the meditation value.
Continuing with the study and analysis of the values obtained, and with the aim of guaranteeing the prediction process, a new calculation and test were carried out on the look-back values, taking the previous and subsequent values in order to determine, without any doubt, the optimum size of the time window that yields the best prediction. This new test followed the same procedure and compared the two RNN architectures since, as can be seen in Table 15 and Table 16, in both cases the prediction is more favorable with the model that does not use the calculation structure defined by NeuroSky in the eSense algorithm.
To compare the GRU and LSTM architectures for predicting attention and meditation states using EEG signals, a temporal five-fold cross-validation was implemented to evaluate their performance and stability. A value of k = 5 was selected because it provided positive results in previous works [34,35]. Key evaluation metrics included MAE, MSE, RMSE, and SMAPE, each accompanied by standard deviations to assess consistency across validation folds. The results revealed distinct patterns in the behavior of these architectures, offering valuable insights into their suitability for predicting mental states. Table 20 and Table 21 below show and compare the results obtained from the cross-validation application. For each case, the configuration of the best hyperparameters has been used.
Based on the values obtained and shown in the tables above, we can state that in the case of attention-state prediction, GRU outperformed LSTM across all metrics, with a notably lower MAE compared to LSTM’s. GRU also demonstrated superior stability, as reflected in lower standard deviations, particularly for MSE. However, the difference in SMAPE values between GRU and LSTM was marginal, indicating similar performance in terms of normalized percentage error. This suggests that while GRU is more robust and reliable for attention prediction, both models are comparable when interpretability of normalized errors is prioritized in practical applications.
For meditation-state prediction, the performance of GRU and LSTM was strikingly similar, with almost identical MAE values and parity across all metrics. Both architectures showed greater stability in meditation predictions compared to attention, as evidenced by significantly lower standard deviations. Notably, the SMAPE for meditation was considerably lower, suggesting that meditation states exhibit more consistent and predictable patterns in EEG signals. These findings highlight the distinct characteristics of mental states and their computational modeling potential, offering practical guidance for architecture selection and avenues for further research in deep learning applications for EEG-based mental-state prediction.
To ensure the consistency of the above results, the look-back (LB) window values immediately before and after the previously selected value were also analyzed, since the earlier analysis was performed in steps of two window sizes, leaving some values unexamined.
The results of the comparison with the previous and subsequent values are shown in Table 22 and Table 23 below.
With these data, we can confirm that the time-window values calculated previously are consistent. These data are reflected in Figure 13, corresponding to the attention and meditation values.
This verification allows us to specify the metrics and time-window values that yield the best prediction of attention and meditation; the final results correspond to Table 24 and Table 25, which confirm the initial values of the first RandomSearch test.
Thus, we can conclude that the meditation value is best predicted with an LSTM-type RNN, while the attention value is best predicted with a GRU-type network.

6.4. Real-Time Deployment and Analysis of Inference Time

With these results, the next step is to calculate the inference times of the networks when computing the attention and meditation values in a real-time setting. This calculation is motivated by a limitation of the reading process of the NeuroSky and Brainlink headsets, which supply one block of raw data (Delta, Alpha, Theta, …) per second, so there is an implicit limit on the time available for inference.
As can be seen in the graphs in Figure 14, the inference times in the calculation of the attention and meditation values are substantially less than one second, with average values around 50 milliseconds.
The inference times of attention were calculated with a GRU, with LB = 5. In the case of meditation, it was performed with LSTM, with LB = 7. These architectures were used because they provided the best performances, according to Section 6.3.
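A simple way to verify that single-window inference fits well within the one-second block period is to time repeated predictions on a dummy window, as in the sketch below; the model is assumed to be already loaded, and the dummy input only serves to exercise the forward pass.

```python
# Sketch: timing single-window inference to check it fits well within the
# one-second data-block period of the headsets (assumes a loaded Keras model).
import time
import numpy as np

def mean_inference_ms(model, look_back, n_features, n_runs=100):
    x = np.random.rand(1, look_back, n_features).astype("float32")  # dummy window
    model.predict(x, verbose=0)                  # warm-up call (graph building)
    t0 = time.perf_counter()
    for _ in range(n_runs):
        model.predict(x, verbose=0)
    return (time.perf_counter() - t0) / n_runs * 1000.0

# Example: mean_inference_ms(gru_model, look_back=5, n_features=8)
```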
To complete the real-time analysis, additional EEG data were collected from a new subject, allowing us to independently validate the previous experimental setup. This approach ensures that the model is evaluated against entirely unseen data that were not included in the training, validation, or cross-validation processes.
The real-time testing was conducted using the GRU network, following insights from the cross-validation results, which demonstrated a slight advantage of this architecture over LSTM in predictive performance.
Using this newly acquired dataset, we proceeded with real-time testing of the GRU-based neural network, incorporating the optimized hyperparameters. The results obtained from this evaluation are presented in Table 26, and Figure 15 and Figure 16.
It is possible to see how these results are similar to those presented in Section 6.3. This additional testing further strengthens the validation of our model in a real-world setting.

7. Discussion

In our study, LSTM and GRU models were used to predict attention and meditation levels from raw EEG data. The results show that both models are able to make predictions with relatively low errors, as indicated by the MAE, MSE, and RMSE metrics.
Comparison with the literature:
  • “EEG-Based Age and Gender Prediction Using Deep BLSTM-LSTM Network Model” (2019) [36]: This study demonstrates the effectiveness of LSTM architectures in classifying EEG data, albeit in a different context (age and gender). The high accuracy obtained in that study suggests that LSTMs are suitable for capturing complex temporal features of EEG signals, which is consistent with our finding that LSTMs can successfully predict attentional and meditative states.
  • “Application of Artificial Intelligence Techniques for Brain–Computer Interface in Mental Fatigue Detection: A Systematic Review (2011–2022)” (2023) [37]: Although this study focuses on mental-fatigue detection, the systematic review of AI techniques applied to BCI supports the idea that deep learning models are powerful tools for interpreting EEG signals. This reinforces the validity of the study’s approach using LSTM and GRU to predict cognitive states.
  • “EEG-based Biometric Authentication Using Machine Learning: A Comprehensive Survey” (2022) [38]: This study provides an overview of machine learning techniques applied to EEG-based biometric authentication. Although the goal is different, the effectiveness of machine learning techniques in classifying EEG signals bodes well for their application in attention and meditation prediction.
In summary, the results obtained are in line with the existing literature regarding the applicability and effectiveness of RNNs, specifically LSTMs and GRUs, for analyzing and predicting cognitive states from EEG signals. The comparison of different architectures and the optimization of hyperparameters in their study provide a valuable contribution to the field of BCI study, demonstrating that, with the right setup, these models can be tuned to improve accuracy in predicting complex mental states. The integration of bioelectric signal acquisition systems with artificial intelligence techniques, as demonstrated in recent work by Laganà et al. (2024), offers promising opportunities for enhancing signal interpretation and clinical diagnosis through the combination of robust hardware design and advanced computational analysis methods [39]. This synergistic approach can lead to more accurate and reliable diagnostic tools in neurological assessment.

7.1. Analysis of the Strengths and Weaknesses of LSTM and GRU Networks for the Prediction of Attention and Meditation

LSTM and GRU networks are variants of recurrent neural networks that have been widely used to process sequences of data such as EEG signals. Both architectures are designed to capture long-term temporal dependencies, making them suitable for time-series prediction tasks such as predicting attention and meditation from EEG signals. However, each has its own strengths and weaknesses in this context.
Strengths of LSTM:
  • Memory capacity: LSTMs are designed to avoid the vanishing-gradient problem, which allows them to learn long-term dependencies. This is crucial when working with EEG signals, which may contain patterns relevant to attention and meditation over long periods of time.
  • Accuracy: Studies have shown that LSTMs can be very accurate in classification and prediction tasks, as reflected in the study “EEG-Based Age and Gender Prediction Using Deep BLSTM-LSTM Network Model” (2019), suggesting that they can be equally effective in predicting attention and meditation [36].
Weaknesses of LSTMs:
  • Complexity and computational cost: LSTMs have a more complex structure than GRUs, possibly leading to higher computational cost and longer training times, especially on large datasets.
  • Risk of overfitting: Given their complexity, LSTMs can be prone to overfitting, especially when insufficient training data are available.
Strengths of GRU:
  • Efficiency: GRUs have a simpler structure than LSTMs, as they combine forgetting and updating gates. This can result in faster training and higher computational efficiency, as suggested in the systematic review “Application of Artificial Intelligence Techniques for Brain–Computer Interface in Mental Fatigue Detection” (2023) [37].
  • Flexibility: The simplicity of GRUs can make them more flexible to adapt to different data sizes, which can be advantageous in BCI applications where datasets may be limited or highly varied [40].
Weaknesses of GRU:
  • Memory capacity: Although GRUs are efficient, they may have a slightly lower memory capacity compared to LSTMs, potentially posing a drawback when modeling EEG signals that require the capture of long-term information.
  • Generalization: GRUs may have difficulty generalizing in some cases, especially when dealing with complex or subtle patterns in the data, which could affect the accuracy of attention prediction and meditation.
In our study, the final results show that both LSTM and GRU models perform comparably in terms of MAE, MSE, and RMSE metrics. This indicates that, despite their differences, both architectures can capture the dynamics of EEG signals to predict attention and meditation with reasonable accuracy. The choice between LSTM and GRU may depend on factors specific to the dataset and application context, such as the size of the dataset, the availability of computational resources, and the need for fast training.
We can observe how both LSTMs and GRUs have their merits in predicting cognitive states from EEG signals. The choice between them must be based on a balance between desired accuracy and available resources, as well as on the specific nature of the EEG data being worked with. On the other hand, if we also consider the results obtained from the cross-validation process, we can conclude that the results indicate that GRU offers superior performance and stability for attention-state prediction, making it the preferred choice for tasks requiring robust and consistent predictions. However, for meditation-state prediction, both GRU and LSTM demonstrate an equivalent performance, allowing the choice between them to be guided by practical considerations, such as computational efficiency. These findings provide valuable insights into the suitability of these architectures for mental-state modeling and underscore the potential for future research to further optimize their application in EEG-based cognitive-state prediction.

7.2. Implications and Possible Applications of the Research Results

The research results have several significant implications and open the door to multiple practical applications in the fields of BCI, cognitive neuroscience, and mental health. The ability to accurately predict attentional and meditative states from EEG signals using LSTM and GRU networks has the potential to positively impact several areas:

7.2.1. Implications for BCI Research and Technology

  • Improved brain–computer interfaces: LSTM and GRU models could be integrated into BCI devices to provide real-time feedback on users’ attention and meditation states (a minimal integration sketch follows this list). This could improve human–machine interaction, especially in applications that require sustained concentration, such as learning or driving.
  • Personalization of the user experience: By understanding and predicting cognitive states, applications could dynamically adapt to user needs, improving the experience in virtual reality, video games, and educational software.
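A minimal sketch of such an integration is given below, assuming a trained Keras model and the sub-50 ms inference times reported earlier; read_band_powers() and adapt_application() are hypothetical placeholders for the headset driver and the target application, not functions of any particular SDK.

from collections import deque
import numpy as np

def feedback_loop(model, read_band_powers, adapt_application, look_back=5, n_features=4):
    # Keep a sliding window of the most recent band-power vectors and feed it to the
    # trained recurrent model; the predicted attention value then drives the application
    # (e.g., pausing a task or lowering difficulty when predicted attention drops).
    window = deque(maxlen=look_back)
    while True:
        window.append(read_band_powers())  # one band-power vector per device update
        if len(window) == look_back:
            batch = np.asarray(window, dtype=np.float32).reshape(1, look_back, n_features)
            predicted_attention = float(model.predict(batch, verbose=0)[0, 0])
            adapt_application(predicted_attention)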

7.2.2. Applications in Mental Health and Well-Being

  • Monitoring and improving mental well-being: Wearable devices equipped with EEG sensors and the predictive models developed here could be used to monitor stress levels and mental well-being, providing timely interventions such as breathing exercises or guided meditation.
  • Personalized therapies: In the clinical context, the models could help personalize therapies for conditions such as ADHD or anxiety by adjusting interventions based on the patient’s brain response in real time [41].

7.2.3. Implications for Education and Training

  • Improved educational tools: Education systems could use these models to assess and improve students’ concentration during learning activities, adapting content to maintain optimal attention.
  • Attention training: In high-performance pursuits, such as sport or music, the models could be used to train individuals in concentration and meditation techniques, improving overall performance.

7.2.4. Future Research in Cognitive Neuroscience

  • Understanding cognitive processes: The results may provide a basis for further studies on the underlying neural mechanisms of attention and meditation, contributing to scientific knowledge in cognitive neuroscience.
  • Biomarker development: The ability to predict cognitive states from EEG could lead to the development of biomarkers for various neurological and psychiatric conditions.

7.2.5. Challenges and Ethical Considerations

  • Data privacy and security: Implementation of these technologies must address the privacy and security of EEG data, which are sensitive biometric information.
  • Accessibility and equity: It is crucial to consider accessibility and equity in the development and implementation of BCI applications to ensure that the benefits are available to a wide range of users.
In summary, the results of this research have the potential to enrich human–computer interaction, improve mental health and well-being, and advance scientific understanding of cognitive processes. However, it is critical to address ethical and practical challenges in order to maximize the benefits and minimize the potential risks.
For a more detailed discussion of BCI and EEG applications in mental-fatigue detection, the systematic review in [37] is relevant. In addition, the survey in [38] offers an overview of EEG applications in biometric authentication and could provide insights into future applications of LSTM and GRU models in this field.

7.3. Limitations Encountered During the Study

  • Sample size and diversity: The sample may have been limited in size or diversity, which affects the generalizability of the results. A larger and more diverse sample could improve the robustness of the predictive models.
  • EEG data quality: EEG data can be subject to noise and artifacts, which can affect the accuracy of predictions. Data quality is critical to the performance of machine learning models.
  • Complexity of cognitive states: Attention and meditation are complex cognitive states that may not be fully captured by the EEG data or the metrics used.
  • Models and hyperparameters: Model selection and hyperparameter optimization may have been limited by time or the available computational resources.
  • Model interpretation: Neural networks, especially deep ones such as LSTM and GRU, are often criticized for their lack of interpretability, which can make it difficult to understand how the models arrive at their predictions.

7.4. Comparison with the Results Previously Obtained in Similar Studies

Our results demonstrate that both LSTM and GRU models can effectively predict attention and meditation values, showcasing their suitability for EEG-based cognitive-state analysis. These findings align with previous research that has employed recurrent neural networks for EEG signal processing, reinforcing their capacity to capture the temporal dynamics inherent in this type of data. For example, studies have reported similar predictive performance when using RNN-based architectures for cognitive-state classification [36,37]. However, many of these studies relied on preprocessed or proprietary EEG features, whereas our approach uses raw EEG signals, which enhances transparency and adaptability.
One notable insight from our work is that GRU models, owing to their simpler architecture, provide a computational advantage over LSTMs without sacrificing accuracy. This observation is consistent with prior analyses of the computational and efficiency advantages of GRUs over LSTMs [40]. However, unlike much of the existing research, which relies on multi-channel EEG systems, our study demonstrates the feasibility of using low-cost, single-channel devices, making EEG-based technologies more accessible for practical applications. These distinctions underline the relevance of our study in bridging the gap between advanced predictive models and real-world usability.
Future research could build on these findings by testing the models on larger and more diverse datasets, as well as exploring hybrid architectures or additional neural network approaches to further enhance performance and generalizability. Nonetheless, this study provides a meaningful step toward simplifying and improving EEG-based cognitive-state predictions for practical and scalable applications.

8. Conclusions

This study has explored the application of deep learning models, specifically LSTM and GRU networks, to the prediction of the cognitive states of attention and meditation using raw EEG signals. Our preliminary results indicate that these advanced models can accurately capture the temporal dynamics and long-term dependencies present in EEG signals, which is essential for the accurate prediction of cognitive states [25]. The performance comparison between LSTM and GRU networks has provided valuable insight into the strengths and weaknesses of each model in this specific domain. The evaluation metrics RMSE, MSE, MAE, and SMAPE have been essential for quantifying and comparing the performance of these models [42].
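For completeness, these metrics follow their usual definitions over the $n$ predicted samples, with $y_i$ the observed eSense value and $\hat{y}_i$ the model prediction (the SMAPE variant with the half-sum denominator is assumed here, since the exact formulation is not restated in this section):
$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|, \quad \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2, \quad \mathrm{RMSE} = \sqrt{\mathrm{MSE}}, \quad \mathrm{SMAPE} = \frac{100\%}{n}\sum_{i=1}^{n} \frac{|y_i - \hat{y}_i|}{(|y_i| + |\hat{y}_i|)/2}.$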
For attention, the LSTM model with the partial feature set performed best in terms of MAE and MSE, showing lower average and squared errors. Although its SMAPE is slightly higher, this model remains preferable if minimizing absolute error is the priority. Similarly, for meditation, the LSTM model with the partial feature set consistently outperformed the GRU model with the algorithm across all metrics, indicating higher accuracy and reliability. These findings highlight the promise of deep learning models for predicting cognitive states from raw EEG signals, paving the way for further exploration of RNNs in applied neuroscience and real-time BCI systems.
In conclusion, this study highlights the potential of LSTM and GRU neural networks to predict attention and meditation states using raw EEG signals collected from single-channel, low-cost devices. Both models demonstrated strong performance, with GRU standing out as a computationally efficient option that does not sacrifice accuracy. These results underscore the practicality of these neural network architectures for real-time cognitive-state monitoring, particularly in accessible applications like neurofeedback and brain–computer interface systems.
Moving forward, future research could build on these findings by involving larger and more diverse participant groups to improve the generalizability of the models. Additionally, integrating EEG data with other physiological signals or exploring hybrid neural network architectures may further enhance prediction accuracy and expand the range of applications. Overall, this work marks an important step toward making EEG-based cognitive-state prediction both simpler and more adaptable for real-world use.

Author Contributions

Conceptualization, F.R.; methodology, J.E.S.-G. and J.M.C.; formal analysis, F.R.; investigation, F.R., J.E.S.-G., and J.M.C.; data curation, F.R.; writing—original draft, F.R.; writing—review and editing, J.E.S.-G. and J.M.C.; visualization, J.E.S.-G.; supervision, J.E.S.-G. and J.M.C.; project administration, F.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the University of Burgos.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BCI    brain–computer interface
EEG    electroencephalography
EMG    electromyography
EOG    electrooculography
AI     artificial intelligence
CNN    convolutional neural networks
RNN    recurrent neural networks
LSTM   long short-term memory
GRU    gated recurrent unit
MAE    Mean Absolute Error
MSE    Mean Squared Error
RMSE   Root Mean Squared Error
SMAPE  Symmetric Mean Absolute Percentage Error
DBS    Deep Brain Stimulation

References

  1. Sergio, R.L. Reducción de Artefactos En Señales Electroencefalográficas Mediante Nuevas Técnicas de Filtrado Automático Basadas En Separación Ciega de Fuentes. Ph.D. Thesis, Universitat Politècnica de Catalunya, Barcelona, Spain, 2010. [Google Scholar]
  2. Miranda, C.; Lescher, A.; Rojas, A.; Molino, J.; Ibarra, E.; de Tristan, S. Detección Temprana de Epilepsia Pediátrica: Progresión de los Electrodos en EEG. Eur. Sci. J. 2023, 19, 1. [Google Scholar] [CrossRef]
  3. Posner, M.I. The Evolution and Future Development of Attention Networks. J. Intell. 2023, 11, 98. [Google Scholar] [CrossRef] [PubMed]
  4. Yang, X.; Cheng, P.Y.; Lin, L.; Huang, Y.M.; Ren, Y. Can an Integrated System of Electroencephalography and Virtual Reality Further the Understanding of Relationships Between Attention, Meditation, Flow State, and Creativity? J. Educ. Comput. Res. 2019, 57, 846–876. [Google Scholar] [CrossRef]
  5. Sharma, K.; Wernicke, A.G.; Rahman, H.; Potters, L.; Sharma, G.; Parashar, B. A Retrospective Analysis of Three Focused Attention Meditation Techniques: Mantra, Breath, and External-Point Meditation. Cureus 2022, 14, e23589. [Google Scholar] [CrossRef] [PubMed]
  6. Yoshida, K.; Takeda, K.; Kasai, T.; Makinae, S.; Murakami, Y.; Hasegawa, A.; Sakai, S. Focused Attention Meditation Training Modifies Neural Activity and Attention: Longitudinal EEG Data in Non-Meditators. Soc. Cogn. Affect. Neurosci. 2020, 15, 215–224. [Google Scholar] [CrossRef]
  7. García, P.; Ángel, M. Caracterización de la sincronía de fase de EEG para su aplicación en Interfaces Cerebro-Computadora. Ph.D. Thesis, Universidad Autónoma Metropolitana, Mexico City, Mexico, 2020. [Google Scholar] [CrossRef]
  8. Rușanu, O.A. A LabVIEW Instrument Aimed for the Research on Brain-Computer Interface by Enabling the Acquisition, Processing, and the Neural Networks Based Classification of the Raw EEG Signal Detected by the Embedded NeuroSky Biosensor. Int. J. Online Biomed. Eng. 2023, 19, 57–81. [Google Scholar] [CrossRef]
  9. Vélez, L.; Kemper, G. Algorithm for Detection of Raising Eyebrows and Jaw Clenching Artifacts in EEG Signals Using Neurosky Mindwave Headset. In Proceedings of the 5th Brazilian Technology Symposium; Smart Innovation, Systems and Technologies; Springer: Cham, Switzerland, 2021; Volume 202, pp. 99–110. [Google Scholar]
  10. Chen, R.C.; Liou, M.J.; Dewi, C. Combination of EEG and Brainwave Mind Lamp to Detect the Value of Attention, Meditation and Fatigue of a Person. In Proceedings of the 2021 International Conference on Technologies and Applications of Artificial Intelligence (TAAI), Taichung, Taiwan, 18–20 November 2021; pp. 174–179. [Google Scholar]
  11. Bréchet, L.; Ziegler, D.A.; Simon, A.J.; Brunet, D.; Gazzaley, A.; Michel, C.M. Reconfiguration of Electroencephalography Microstate Networks after Breath-Focused, Digital Meditation Training. Brain Connect. 2021, 11, 146–155. [Google Scholar] [CrossRef] [PubMed]
  12. You, S.D. Classification of Relaxation and Concentration Mental States with EEG. Information 2021, 12, 187. [Google Scholar] [CrossRef]
  13. Ali, A.; Afridi, R.; Soomro, T.A.; Khan, S.A.; Khan, M.Y.A.; Chowdhry, B.S. A Single-Channel Wireless EEG Headset Enabled Neural Activities Analysis for Mental Healthcare Applications. Wirel. Pers. Commun. 2022, 125, 3699–3713. [Google Scholar] [CrossRef] [PubMed]
  14. Adhikari, B.; Shrestha, A.; Mishra, S.; Singh, S.; Timalsina, A.K. EEG Based Directional Signal Classification Using RNN Variants. In Proceedings of the 2018 IEEE 3rd International Conference on Computing, Communication and Security (ICCCS), Kathmandu, Nepal, 25–27 October 2018; pp. 218–223. [Google Scholar]
  15. Chaddad, A.; Wu, Y.; Kateb, R.; Bouridane, A. Electroencephalography Signal Processing: A Comprehensive Review and Analysis of Methods and Techniques. Sensors 2023, 23, 6434. [Google Scholar] [CrossRef]
  16. Shrestha, A.; Adhikari, B. Color-Based Classification of EEG Signals for People with the Severe Locomotive Disorder. arXiv 2023, arXiv:2304.11068. [Google Scholar]
  17. Saha, S.; Mathur, A.; Bora, K.; Basak, S.; Agrawal, S. A New Activation Function for Artificial Neural Net Based Habitability Classification. In Proceedings of the 2018 International Conference on Advances in Computing, Communications and Informatics, ICACCI 2018, Bangalore, India, 19–22 September 2018; pp. 1781–1786. [Google Scholar] [CrossRef]
  18. Permana, K.; Wijaya, S.K.; Prajitno, P. Controlled Wheelchair Based on Brain Computer Interface Using Neurosky Mindwave Mobile 2. AIP Conf. Proc. 2019, 2168, 020022. [Google Scholar]
  19. Attentional Modulation Effects on Brain Networks: An FMRI Study on the Visual Attention Network and the Default-Mode Network. Available online: https://www.researchgate.net/publication/281239686_Attentional_Modulation_Effects_on_Brain_Networks_an_fMRI_Study_on_the_Visual_Attention_Network_and_the_Default-Mode_Network?channel=doi&linkId=55dc6dcc08aec156b9b1771d&showFulltext=true (accessed on 23 January 2025).
  20. Brainwaves, The Key To Healthy Brain Function|The Neurofeedback Center of Pittsburgh. Available online: https://www.neurofeedbackpittsburgh.com/brainwaves-the-key-to-healthy-brain-function/ (accessed on 21 September 2024).
  21. Chaipakornwong, T.; Sittiprapaporn, P. Brain Exercise in Elderly: NeuroSky Smarter Kit Investigation. Asian J. Med. Sci. 2020, 11, 69–74. [Google Scholar] [CrossRef]
  22. Alpha-Theta Training—The Neuro Brain—Neurofeedback Melbourne—FAQs. Available online: https://theneurobrain.com/blog/tag/alpha-theta+training (accessed on 21 September 2024).
  23. Cerebral Areas for EEG Band Power Spectrum Calculations|Download Scientific Diagram. Available online: https://www.researchgate.net/figure/Cerebral-areas-for-EEG-band-power-spectrum-calculations_fig2_267811728 (accessed on 21 September 2024).
  24. Bala, P.; Amob, R.; Islam, M.; Adib, S.; Hasan, F.; Uddin, M.N. EEG-Based Load Control System for Physically Challenged People. In Proceedings of the 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), Dhaka, Bangladesh, 5–7 January 2021; pp. 603–606. [Google Scholar]
  25. Tigga, N.P.; Garg, S. Efficacy of Novel Attention-Based Gated Recurrent Units Transformer for Depression Detection Using Electroencephalogram Signals. Health Inf. Sci. Syst. 2023, 11, 1. [Google Scholar] [CrossRef] [PubMed]
  26. Zheng, W.; Lu, B. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  27. Alfredo, A.S.; Adytia, D.A. Time Series Forecasting of Significant Wave Height Using GRU, CNN-GRU, and LSTM. J. RESTI (Rekayasa Sist. Dan Teknol. Inf.) 2022, 6, 776–781. [Google Scholar] [CrossRef]
  28. Understanding MAE, MSE, and RMSE: Key Metrics in Machine Learning—DEV Community. Available online: https://dev.to/mondal_sabbha/understanding-mae-mse-and-rmse-key-metrics-in-machine-learning-4la2 (accessed on 24 January 2025).
  29. Goodwin, P.; Lawton, R. On the Asymmetry of the Symmetric MAPE. Int. J. Forecast. 1999, 15, 405–408. [Google Scholar] [CrossRef]
  30. How to Calculate SMAPE in Excel?—GeeksforGeeks. Available online: https://www.geeksforgeeks.org/how-to-calculate-smape-in-excel/ (accessed on 24 January 2025).
  31. Yang, D.; Hollenstein, N. PLM-AS: Pre-Trained Language Models Augmented with Scanpaths for Sentiment Classification. In Proceedings of the Northern Lights Deep Learning Workshop, Tromsø, Norway, 10–12 January 2023; Volume 4. [Google Scholar] [CrossRef]
  32. Sahu, A.K.; Sharma, S.; Raja, R. Deep Learning-Based Continuous Authentication for an IoT-Enabled Healthcare Service. Comput. Electr. Eng. 2022, 99, 107817. [Google Scholar] [CrossRef]
  33. Sravanth, K.R.; Peddi, A.; Sagar, G.S.; Gupta, B.; Chakraborty, C. Comparison of Attention and Meditation Based Mobile Applications by Using EEG Signals. In Proceedings of the 6th Global Wireless Summit, GWS 2018, Chiang Rai, Thailand, 25–28 November 2018; pp. 260–265. [Google Scholar] [CrossRef]
  34. Noh, J.H.; Yang, H.-D. Alzheimer Progression Classification Using FMRI Data. Smart Media J. 2024, 13, 86–93. [Google Scholar] [CrossRef]
  35. Ma, B.; Dong, S. A Hybrid Prediction Model for Pumping Well System Efficiency Based on Stacking Integration Strategy. Int. J. Energy Res. 2024, 2024, 8868949. [Google Scholar] [CrossRef]
  36. Kaushik, P.; Gupta, A.; Roy, P.P.; Dogra, D.P. EEG-Based Age and Gender Prediction Using Deep BLSTM-LSTM Network Model. IEEE Sens. J. 2019, 19, 2634–2641. [Google Scholar] [CrossRef]
  37. Yaacob, H.; Hossain, F.; Shari, S.; Khare, S.K.; Ooi, C.P.; Acharya, U.R. Application of Artificial Intelligence Techniques for Brain-Computer Interface in Mental Fatigue Detection: A Systematic Review (2011–2022). IEEE Access 2023, 11, 74736–74758. [Google Scholar] [CrossRef]
  38. Shams, T.B.; Hossain, M.S.; Mahmud, M.F.; Tehjib, M.S.; Hossain, Z.; Pramanik, M.I. EEG-Based Biometric Authentication Using Machine Learning: A Comprehensive Survey. ECTI Trans. Electr. Eng. Electron. Commun. 2022, 20, 225–241. [Google Scholar] [CrossRef]
  39. Laganà, F.; Pratticò, D.; Angiulli, G.; Oliva, G.; Pullano, S.A.; Versaci, M.; La Foresta, F. Development of an Integrated System of SEMG Signal Acquisition, Processing, and Analysis with AI Techniques. Signals 2024, 5, 476–493. [Google Scholar] [CrossRef]
  40. Wang, X.; Han, Q.; Li, J.; Jin, Y. Research on Prediction Model of Epileptic EEG Signal Based on GRU. In Proceedings of the 2021 International Conference on Electronic Information Engineering and Computer Science (EIECS), Changchun, China, 23–26 September 2021; pp. 9–12. [Google Scholar]
  41. Hidalgo-Munoz, A.R.; Acle-Vicente, D.; Garcia-Perez, A.; Tabernero-Urbieta, C. Application of Neurotechnology in Students with ADHD: An Umbrella Review. Comun. Media Educ. Res. J. 2023, 31, 59–69. [Google Scholar] [CrossRef]
  42. Ahmadzadeh, E.; Kim, H.; Jeong, O.; Kim, N.; Moon, I. A Deep Bidirectional LSTM-GRU Network Model for Automated Ciphertext Classification. IEEE Access 2022, 10, 3228–3237. [Google Scholar] [CrossRef]
Figure 1. Functional areas of the cerebral cortex where we can capture the raw signal [19].
Figure 2. Brainwave typology I (source: NeuroFeedBack) [20].
Figure 3. Brainwave typology II (source: NeuroFeedBack) [22].
Figure 4. FP1 and FP2 electrode positions selected for capturing raw data (source: ResearchGate) [23].
Figure 5. LSTM diagram.
Figure 6. GRU diagram.
Figure 7. Use-case architecture.
Figure 8. Brain signals obtained with the Brainlink headset.
Figure 9. Hyperparameter calculation with the LSTM model for attention values.
Figure 10. Hyperparameter calculation with the LSTM model for meditation values.
Figure 11. Comparison for attention prediction.
Figure 12. Comparison for meditation prediction.
Figure 13. Final prediction results for attention and meditation.
Figure 14. Histogram of inference times.
Figure 15. Real-time prediction of the ATTENTION signal with GRU, using data other than those used during training.
Figure 16. Real-time prediction of the MEDITATION signal with GRU, using data other than those used during training.
Table 1. Summary of related work in EEG-based cognitive-state prediction.
Reference | Focus Area | Methods/Techniques | Key Results | Contributions
Chaddad et al. (2023) [15] | EEG signal-processing methods | Comprehensive review of preprocessing and feature extraction | Identified critical preprocessing techniques for robust EEG analysis | Provided a foundation for understanding preprocessing challenges in EEG studies
Posner (2023) [3] | Evolution of attention networks | Human and animal studies’ integration | Highlighted the role of attention networks in cognitive integration | Introduced frameworks to understand attention mechanisms
Yang et al. (2019) [4] | EEG and virtual reality for creativity | EEG combined with virtual reality for flow state analysis | Correlated creativity and brainwave patterns during flow states | Pioneered studies linking EEG signals to creativity and attention dynamics
Yoshida et al. (2020) [6] | Meditation and cognitive performance | FAM with EEG longitudinal study | Demonstrated FAM’s effect on improving neural activity and attention | Validated meditation’s role in enhancing cognitive performance
Shrestha et al. (2023) [16] | EEG signal classification for disabilities | Deep learning applied to classify EEG signals from low-cost devices | Achieved high classification accuracy for EEG signals based on stimuli | Opened avenues for developing assistive communication systems
Rușanu et al. (2023) [8] | Real-time EEG data processing | LabVIEW for EEG signal acquisition and neural network classification | Enabled real-time EEG signal classification using low-cost devices | Improved accessibility to real-time EEG-based BCI applications
Mathur et al. (2018) [17] | Directional EEG signal classification | LSTM and attention mechanisms for EEG signal classification | Demonstrated efficacy of deep learning models for classifying brain signals | Advanced methods for interpreting raw EEG data using neural networks
Bréchet et al. (2021) [11] | Meditation and EEG microstates | Digital meditation training with EEG | Showed reconfiguration of EEG networks post meditation training | Explored EEG network adaptations to meditation
Permana et al. (2019) [18] | EEG-based BCI for wheelchair control | NeuroSky MindWave Mobile applied to BCI | Demonstrated feasibility of EEG-based control for assistive devices | Validated low-cost EEG devices for practical BCI implementations
Table 2. Signals and frequency ranges captured by NeuroSky.
Wave Type | Characteristics | Values (Frequency and Voltage)
Delta wave | Typical of infancy (children under 3 months of age) and of phase III of physiological sleep; its contribution in adults must be considered abnormal. | Up to 4 Hz; greater than 50 μV
Theta wave | Located in the fronto-central area. If the signal is less than 15 μV, it can be considered abnormal, unless it is accompanied by a good Alpha-wave rhythm. Present in phases I and II of physiological sleep and during hyperventilation and/or fatigue. | 4 to 7 Hz; greater than 40 μV
Low/high Alpha waves | Located in the occipital area. If it maintains an asymmetry of more than 50%, it can be considered abnormal. | 8 to 12 Hz; 15 μV
Low/high Beta waves | Predominate in periods of wakefulness. Appear in states where attention is directed to external cognitive tasks. Fast frequency; present when we are attentive and focused on solving everyday tasks or making decisions. | Beta 1 (12 to 15 Hz); Beta 2 (15 to 22 Hz); Beta 3 (22 to 30 Hz)
Low/high Gamma waves | The fastest-frequency waves, occurring in short bursts. Related to central nervous-system tasks. Observed in high-resolution, high-intensity brain processes (high brain activity). In the normal state, a value of 40 Hz is considered typical. | 25 to 100 Hz
Table 3. Feature sets according to input signals.
Feature Set | Input Signals | Output Signals
Complete | Delta, Theta, low Alpha, high Alpha, low Beta, high Beta, low Gamma, high Gamma | Attention, meditation
Partial | Low Beta, high Beta, low Gamma, high Gamma | Attention
Partial | Theta, low Alpha, high Alpha | Meditation
Table 4. Hyperparameter ranges and optimal values for the LSTM and GRU models.
Hyperparameter | Range Explored | Description
LSTM units | [4, 8, 16, 32, 64] | Number of units in the LSTM layer. Optimized to balance capacity and computational efficiency.
GRU units | [4, 8, 16, 32, 64] | Number of units in the GRU layer. Selected to balance accuracy and efficiency.
Batch size | [1, 5, 10, 20] | Number of samples processed before updating the model. Smaller values reduce overfitting risk.
Optimizer | “adam”, “rmsprop”, “sgd” | Algorithm used to optimize the model’s parameters. Adam performed best for both attention and meditation.
Dropout rate | [0.01, 0.1, 0.2, 0.3, 0.4] | Regularization applied to prevent overfitting during training.
Look-back window | [3, 5, 7, 9, 15, 30, 60, 100] | Number of previous time steps used as input to predict the next value.
Table 5. Attention prediction with LSTM model, using the partial feature set.
LSTM – Attention
Look-Back | LSTM Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
181Adam0.0118.440895520.6142622.81697346.424299
381Adam0.0116.443464390.0695819.7501844.18967
56410Adam0.117.237684444.384421.08042745.00878
781Adam0.0117.011745418.785320.46424544.6055114
981Adam0.0117.269892450.3594421.22167445.212623
12325Adam0.218.437788522.306722.8540346.92971
15161rmsprop0.218.791443526.8211722.95258548.3881175
6081Adam0.0119.565895558.2704523.62774751.152211
100641Adam0.321.289196666.6501525.81956953.5422564
Table 6. Meditation prediction with LSTM model, using the partial feature set.
LSTM – Meditation
Look-Back | LSTM Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
181Adam0.0112.93792248.019516.3407326.07285
381Adam0.019.74729152.8109412.36167220.292328
581Adam0.019.730363153.52112.39035920.3964844
7165Adam0.019.932504160.7063812.67700220.925404
9165Adam0.0111.080525194.3491213.94091522.482206
12165Adam0.019.750857153.2048512.37759520.5961972
15165Adam0.0110.855734167.2912813.6854422.5330263
6081Adam0.0113.085041263.261616.22533826.387578
10081Adam0.019.309048137.0504911.70685720.1372519
Table 7. Attention prediction with LSTM, using the complete feature set.
LSTM – Attention
Look-Back | LSTM Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
181Adam0.0112.770439252.9847815.90549536.03656
381Adam0.0110.93788180.8592213.44839138.268446
581Adam0.0111.117896185.601513.62356440.6224002
781Adam0.019.586014146.6087612.108241134.657588
93220Adam0.110.250811166.413712.90014432.837361
12161rmsprop0.210.234032162.9176812.7692139.718466
15410Adam0.0112.086856219.698214.822277.626531
60410Adam0.0110.679445170.3128413.05039637.424832
1003220Adam0.113.549226270.0080616.43192159.413784
Table 8. Meditation prediction with LSTM, using the complete feature set.
LSTM – Meditation
Look-Back | LSTM Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
1641Adam0.0113.012856266.6571416.3296426.2508482
381Adam0.019.405639139.4234811.80777219.6614996
581Adam0.019.011736122.98048411.08965719.215181
781Adam0.018.75148118.90111510.90417918.5926675
981Adam0.019.703162144.4120812.01715820.2926966
12165Adam0.019.390194136.7433611.69373119.839185
15165Adam0.0111.352932205.3916614.33149223.227106
60165Adam0.0113.8042286.817116.93567528.2309085
60B410Adam0.0110.759749189.4606613.79447123.1912166
10081Adam0.0111.555381208.567214.4418524.088779
Table 9. Attention prediction with GRU model, using partial feature set.
GRU – Attention
Look-Back | GRU Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
181Adam0.0118.205666498.3054822.34275846.277722
381Adam0.0116.537975391.9775719.79842444.8215187
3B81Adam0.0116.757694409.662220.24011445.133781
581Adam0.0116.450132394.2369719.85540243.7501669
781Adam0.0116.665699400.6136820.01533745.749473
9410sgd0.214.613368.046819.184529.7287
9B81Adam0.0116.847183423.5908220.58132246.538028
1285Adam0.217.409185442.020521.02428446.35398
15165Adam0.0117.239313477.591921.85387645.865431
6081Adam0.0117.830471503.5434622.43977450.804883
60B6410Adam0.120.124237652.845425.5508457.5832188
60C410Adam0.113.5327297.385117.244926.5076
100410Adam0.0119.392126590.453224.29924254.391807
100B81Adam0.0119.640924585.088924.18861252.710318
Table 10. Meditation prediction with GRU model, using partial feature set.
GRU – Meditation
Look-Back | GRU Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
181Adam0.0112.695457257.6172216.05045925.7325947
36420Adam0.019.669626153.7237112.398520.1065033
3B81Adam0.019.377201142.3908511.93276419.491779
581Adam0.0110.554417178.8014713.37166721.483133
76420Adam0.019.94037161.1216712.69337120.752151
9641Adam0.311.3582215.132514.667421.554
9B6420Adam0.0110.049313163.7561312.796720.726409
12165Adam0.019.7034855152.6655412.3557920.34242
12B165Adam0.0111.157973197.590414.05668522.694964
15165Adam0.0110.180891168.5574212.98296621.997992
60410Adam0.0110.4398775183.2172413.53577622.5904629
60B81Adam0.0111.246518206.3545514.365046523.528207
10081Adam0.0112.257871231.8460415.22649125.1133353
Table 11. Attention prediction with GRU, using the complete feature set.
GRU – Attention
Look-Back | GRU Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
181Adam0.0112.905936259.1397716.09781836.4739567
36410Adam0.114.400285173.7746613.18236237.5045925
56410Adam0.110.174233169.1274713.00490235.3435516
5B81Adam0.019.3669615139.0298611.7919331.9900125
73210rmsprop0.418.602463561.655523.69927244.233652
7B6410Adam0.110.734492191.5628413.84062334.664142
7C81Adam0.019.570991147.8270712.15841634.180242
96410Adam0.19.922806159.2766912.62048733.0549777
9B161rmsprop0.211.752387211.5149814.54355414.8547634
1185Adam0.29.488643148.8262612.19943730.0053089
1281Adam0.0111.753124203.1294314.25234845.8035856
1585Adam0.213.397072266.4889816.3244951.8113613
603220Adam0.110.852699181.964713.48942938.016602
1003220Adam0.120.168981605.448224.60584688.43652
Table 12. Meditation prediction with GRU, using the complete feature set.
GRU – Meditation
Look-Back | GRU Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
181Adam0.0112.594968248.4514615.76234325.644166
381Adam0.018.601949112.52299510.60768618.3427959
56420Adam0.019.537732151.2138712.29690520.873832
5B6420Adam0.0110.158519166.1158312.88859320.95598
781Adam0.0113.383541311.4934417.64917836.1100912
7B81Adam0.019.552063143.5795311.98246819.974741
7C81Adam0.019.075036131.0945711.44965119.262354
9165Adam0.019.315576143.137811.96402119.808089
116420Adam0.019.868773167.21212.9314821.18044
13165Adam0.0110.399938173.7316113.18071921.70301
13B165Adam0.0110.867846187.1013813.678522.142434
1581Adam0.019.803316150.1701512.25439420.382806
6081Adam0.0110.531624178.2550813.3512222.1152886
10081Adam0.0112.487414236.9107215.39190525.4003704
Table 13. Best prediction results for attention/meditation with LSTM, using partial feature set.
Output Signal | Look-Back | LSTM Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
Attention | 3 | 8 | 1 | Adam | 0.01 | 16.443464 | 390.06958 | 19.75018 | 44.18967
Meditation | 5 | 8 | 1 | Adam | 0.01 | 9.730363 | 153.521 | 12.390359 | 20.3964844
Table 14. Best prediction results for attention/meditation with GRU, using partial feature set.
Output Signal | Look-Back | GRU Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
Attention | 9 | 4 | 10 | sgd | 0.2 | 14.6133 | 368.0468 | 19.1845 | 29.7287
Meditation | 3B | 8 | 1 | Adam | 0.01 | 9.377201 | 142.39085 | 11.932764 | 19.491779
Table 15. Best prediction results for attention/meditation with LSTM, using the complete feature set.
Output Signal | Look-Back | LSTM Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
Attention | 7 | 8 | 1 | Adam | 0.01 | 9.586014 | 146.60876 | 12.1082411 | 34.657588
Meditation | 7 | 8 | 1 | Adam | 0.01 | 8.75148 | 118.901115 | 10.904179 | 18.5926675
Table 16. Best prediction results for attention/meditation with GRU, using complete feature set.
Output Signal | Look-Back | GRU Units | Batch_Size | Optimizer | Dropout | MAE | MSE | RMSE | SMAPE
Attention | 5B | 8 | 1 | Adam | 0.01 | 9.3669615 | 139.02986 | 11.79193 | 31.9900125
Meditation | 7C | 8 | 1 | Adam | 0.01 | 9.075036 | 131.09457 | 11.449651 | 19.262354
Table 17. Best prediction results for attention/meditation regarding LSTM vs. GRU.
Attention | RMSE | LB | Meditation | RMSE | LB
GRU without11.791935GRU without11.4496517
LSTM without12.108247LSTM without10.9041797
GRU with12.3903595GRU with11.9327645
LSTM with19.750187LSTM with12.3903597
Table 18. Best prediction result for attention regarding LSTM vs. GRU.
Attention | RMSE | LB
GRU without | 11.79193 | 5
Table 19. Best prediction result for meditation regarding LSTM vs. GRU.
Meditation | RMSE | LB
LSTM without | 10.904179 | 7
Table 20. Comparison of cross-validation results for attention.
Attention | MAE | MSE | RMSE | SMAPE
GRU | 9.1176 (±0.9822) | 134.0736 (±29.3221) | 11.5149 (±1.2169) | 27.9219 (±10.4861)
LSTM | 9.4034 (±1.6250) | 144.8023 (±54.1297) | 11.8589 (±2.0415) | 27.7985 (±10.8337)
Table 21. Comparison of cross-validation results for meditation.
Meditation | MAE | MSE | RMSE | SMAPE
GRU | 9.8498 (±0.4453) | 163.8544 (±15.3970) | 12.7868 (±0.5929) | 19.6260 (±1.5804)
LSTM | 9.8380 (±0.3493) | 164.3154 (±18.7108) | 12.7973 (±0.7371) | 19.7476 (±1.6546)
Table 22. Best prediction results of prior and posterior LB for attention/meditation with LSTM architecture and partial feature set.
Output Signal | Look-Back | LSTM Units | Batch_Size | Optimizer | Dropout | RMSE
Attention | 6 | 8 | 1 | Adam | 0.01 | 13.416016
Attention | 7 | 8 | 1 | Adam | 0.01 | 12.1082411
Attention | 7B | 32 | 20 | rmsprop | 0.3 | 13.100437
Attention | 8 | 4 | 10 | Adam | 0.01 | 14.6655445
Meditation | 6 | 16 | 5 | Adam | 0.01 | 11.016348
Meditation | 7 | 8 | 1 | Adam | 0.01 | 10.904179
Meditation | 7B | 64 | 20 | Adam | 0.01 | 12.357736
Meditation | 8 | 8 | 1 | Adam | 0.01 | 12.174273
Table 23. Best prediction results of prior and posterior LB for attention/meditation with GRU architecture and partial feature set.
Output Signal | Look-Back | GRU Units | Batch_Size | Optimizer | Dropout | RMSE
Attention | 4 | 8 | 1 | Adam | 0.01 | 12.974952
Attention | 5 | 8 | 1 | Adam | 0.01 | 11.79193
Attention | 5B | 64 | 10 | Adam | 0.1 | 12.302976
Attention | 6 | 32 | 20 | Adam | 0.1 | 13.864193
Meditation | 6 | 8 | 1 | Adam | 0.01 | 12.525784
Meditation | 7 | 8 | 1 | Adam | 0.01 | 11.449651
Meditation | 7B | 16 | 5 | Adam | 0.01 | 12.585928
Meditation | 8 | 16 | 5 | Adam | 0.01 | 12.0246105
Table 24. Best prediction result for attention.
Architecture/Feature Set | RMSE | LB
GRU/partial feature set | 11.79193 | 5
Look-Back | GRU Units | Batch_Size | Optimizer | Dropout
5 | 8 | 1 | Adam | 0.01
Table 25. Best prediction result for meditation.
Architecture/Feature Set | RMSE | LB
LSTM/partial feature set | 10.904179 | 7
Look-Back | LSTM Units | Batch_Size | Optimizer | Dropout
7 | 8 | 1 | Adam | 0.01
Table 26. Results of real-time prediction with GRU.
Signal | MAE | MSE | RMSE | SMAPE | Correlation | F1-Score
Attention | 13.99607 | 337.22452 | 18.363674 | 24.714878 | 0.732409 | 0.873117
Meditation | 12.7323065 | 261.5731 | 16.173222 | 28.55646 | 0.680617 | 0.645211
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
