Search Results (2,201)

Search Parameters:
Keywords = EEG signal

19 pages, 2502 KB  
Article
Automatic Sleep Staging with Long-Term Temporal Modeling Using Single-Channel EEG
by Qiyu Yang, Dejun Zhang and Yi Huang
Appl. Sci. 2026, 16(9), 4092; https://doi.org/10.3390/app16094092 - 22 Apr 2026
Abstract
With the increasing demand for sleep health monitoring, automatic sleep staging using single-channel electroencephalogram (EEG) signals has become increasingly prominent due to its clinical practicality. Existing methods have achieved notable progress, but they often fail to adequately capture long-term temporal dependencies and struggle to characterize transition phases. We propose SleepLT, an automated sleep staging framework that integrates multi-scale wavelet decomposition (MWD) and multi-head latent Fourier attention (MLFA). The MLFA module incorporates Fourier analysis into self-attention mechanisms and employs a partially weight-sharing bottleneck to optimize Key/Value generation, effectively capturing sleep rhythms. Extensive experiments on SleepEDF-78 and SHHS datasets demonstrate strong and consistent performance, with Macro F1 improvements of 2.1–3.2% over the compared baselines. Visualizations confirm that SleepLT enhances inter-class discriminability between sleep stages, robustly detects salient waveforms, and effectively captures transitions through long-sequence modeling. These results indicate that SleepLT is effective for automatic sleep staging from single-channel EEG, particularly in improving the recognition of ambiguous transitional stages such as N1 and REM. Full article
(This article belongs to the Special Issue Applied Multimodal AI: Methods and Applications Across Domains)
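The multi-scale wavelet decomposition (MWD) at the core of SleepLT can be illustrated with a minimal single-channel example. The sketch below uses an orthonormal Haar filter pair rather than the paper's (unspecified) wavelet family, and the function name and epoch settings are our own; it is a sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def haar_mwd(x, levels=3):
    """Multi-scale wavelet decomposition of a 1-D signal using the
    orthonormal Haar filter pair.
    Returns [detail_1, ..., detail_L, approx_L], coarse scales last."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        if len(a) % 2:                    # pad to even length if needed
            a = np.append(a, a[-1])
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail (high-pass)
        a = (even + odd) / np.sqrt(2)             # approximation (low-pass)
    coeffs.append(a)
    return coeffs

# A 30 s epoch at 100 Hz (3000 samples), typical for single-channel staging
rng = np.random.default_rng(0)
epoch = rng.standard_normal(3000)
bands = haar_mwd(epoch, levels=3)
```

Because the Haar pair is orthonormal, the decomposition preserves signal energy exactly across the sub-bands, which is what makes per-scale features comparable.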
27 pages, 2391 KB  
Article
Oracle Upper Bounds on Clean-EEG Recoverability from Single-Channel Decompositions Under EOG/EMG Contamination
by Usman Qamar Shaikh, Anubha Manju Kalra, Andrew Lowe and Imran Khan Niazi
Sensors 2026, 26(9), 2581; https://doi.org/10.3390/s26092581 - 22 Apr 2026
Abstract
Objective: Single-channel EEG artifact suppression often relies on signal decomposition; however, it is not always clear how much clean EEG is recoverable from a given decomposition when component weighting is ideal. We present an oracle-based benchmark that characterises this best-case recoverability across common 1-D decomposition families under controlled EOG, EMG, and mixed contamination. This work does not propose a new denoising algorithm; rather, it isolates representation capacity from component-selection heuristics by computing an upper bound on reconstruction quality. Approach: Using EEGdenoiseNet, we constructed a synthetic benchmark of 4500 single-channel 2 s segments (125 Hz; T = 250) by mixing clean EEG with ocular (EOG) and/or cranial EMG exemplars at noise-to-signal ratios (NSRs) spanning −10 to +10 dB (floor −10 dB denotes an absent modality). We evaluated variational mode decomposition (VMD), singular spectrum analysis (SSA), discrete wavelet transform (DWT), and CEEMDAN by decomposing each mixture and reconstructing the clean EEG using a bounded nonnegative linear combination of components obtained via constrained least squares (the oracle). Main results: Under this oracle benchmark, SSA achieved the lowest reconstruction error in most tested conditions, while DWT tended to rank best in milder ocular regimes; VMD performance improved, with an increased mode count at higher computational cost. CEEMDAN exhibited higher latency dominated by ensemble settings. Significance: These results should be interpreted as decomposition-level upper bounds under controlled mixtures, not field-ready denoising performance. The benchmark provides a tool with which to compare representational recoverability across decompositions and to inform the subsequent design of practical component-selection strategies. Full article
(This article belongs to the Section Biomedical Sensors)
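The oracle step described above, reconstructing clean EEG as a nonnegative linear combination of decomposition components via constrained least squares, can be sketched as follows. This is our own minimal version using projected gradient descent (only the nonnegativity constraint is enforced; the solver, names, and synthetic data are assumptions, not the authors' code).

```python
import numpy as np

def oracle_weights(components, clean, iters=2000):
    """Nonnegative least-squares weights w >= 0 minimising
    ||components.T @ w - clean||^2 by projected gradient descent.
    components: (k, T) decomposition components; clean: (T,) target EEG."""
    A = components.T                       # (T, k) design matrix
    H = A.T @ A
    lr = 1.0 / np.linalg.norm(H, 2)        # step = 1 / Lipschitz constant
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - clean)
        w = np.maximum(0.0, w - lr * grad) # project onto the w >= 0 orthant
    return w

# Synthetic check: a "clean" signal that is a nonnegative mix of 4 components
rng = np.random.default_rng(1)
C = rng.standard_normal((4, 250))          # e.g. 2 s at 125 Hz, as in the paper
w_true = np.array([0.5, 0.0, 1.2, 0.3])
clean = C.T @ w_true
w_hat = oracle_weights(C, clean)
```

The oracle has access to the clean target, so the resulting error is a best-case bound on what any component-selection heuristic over the same decomposition could achieve.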
13 pages, 437 KB  
Article
Effect of Sedation on EEG During Deep Brain Stimulation Surgery in Parkinson’s Patients
by Mahta Mousavi, Dorothee Kübler-Weller, Lisa Paulsen, Friedrich Borchers, Claudia Spies, Andrea A. Kühn and Benjamin Blankertz
Anesth. Res. 2026, 3(2), 10; https://doi.org/10.3390/anesthres3020010 - 22 Apr 2026
Abstract
Background: While providing enough sedatives to avoid pain and trauma during surgery is important, studies show a link between the amount of sedatives received and the development of postoperative delirium (POD). Therefore, predicting POD from clinical or physiological data before or during surgery is highly advantageous. This capability enables healthcare providers to proactively implement necessary measures, thereby mitigating or preventing potential complications. Methods: In this study, we focus on patients with Parkinson’s disease undergoing deep brain stimulation surgery, who are particularly susceptible to POD. We investigate which aspects of EEG power, functional connectivity and complexity during the course of the surgery are influenced by the amount of sedative. Furthermore, we aim to determine whether and to what extent the recorded brain activity during surgery can serve as a reliable means for the prediction of POD in this group of patients. Results and Conclusions: Our results show significant correlations between various power, connectivity and complexity features of EEG and the amount of sedatives. Even though single EEG features are not significantly different between the two groups who either developed or did not develop POD, we show that a classifier based on support vector machines using the selected EEG features could predict POD. Furthermore, our results provide evidence that a classifier trained only on the amount of sedatives is unable to predict POD. Accompanying this paper, our code is published as an open-source toolbox for the analysis of the EEG signal recorded with the four-channel SEDLine Root system, one of the most widely used EEG systems in operating rooms; its recorded data come with challenges that our toolbox addresses. Full article
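The POD classifier in this study is an SVM over selected EEG features. As a stand-in sketch (not the authors' pipeline; the features, data, and training scheme here are synthetic assumptions), a minimal linear soft-margin SVM trained with a Pegasos-style subgradient method:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient training of a linear soft-margin SVM.
    X: (n, d) features; y: (n,) labels in {-1, +1}.
    Returns the weight vector w with the bias folded in as a last column."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)               # decaying step size
            if y[i] * (Xb[i] @ w) < 1:          # margin violated: hinge gradient
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:                               # only the L2 shrinkage term
                w = (1 - eta * lam) * w
    return w

# Two well-separated synthetic "feature" clouds standing in for EEG features
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, (50, 5)), rng.normal(+2, 1, (50, 5))])
y = np.array([-1] * 50 + [+1] * 50)
w = train_linear_svm(X, y)
acc = np.mean(np.sign(np.hstack([X, np.ones((100, 1))]) @ w) == y)
```

In practice one would use a maintained implementation (e.g. a kernelised SVM with cross-validated hyperparameters), as the authors presumably did; the point here is only the shape of the classifier.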
24 pages, 646 KB  
Review
Processing of Amplitude-Temporal Acoustic Parameters in the Auditory System During Signal Coding for Image Recognition: Analytical Review
by Sergey Lytaev
Appl. Sci. 2026, 16(8), 4047; https://doi.org/10.3390/app16084047 - 21 Apr 2026
Abstract
In the study of sensory processes, the visual system has received the most research attention of any sensory system. The primary difference between visual and auditory perception lies in the nature of the stimuli and the reception processes: vision perceives electromagnetic radiation, while audition perceives acoustic signals of mechanical origin. This review aims to analyze modern approaches to, and controversies surrounding, the mechanisms of auditory perception, drawing on psychophysics, psychophysiology, psychopathology, modern research on hearing in human–computer interaction (HCI) systems, and machine learning methods. Modern studies of acoustic patterns include a comprehensive assessment of the physical characteristics of perception, complex nonverbal auditory cues, verbalization, perception and memory, as well as individual differences in auditory perception. An analysis of the scientific literature allowed us to conclude that acoustic signals transformed in the brain into auditory images retain (encode) a number of amplitude-temporal parameters of acoustic signals that facilitate auditory discrimination (filtering), but interfere with auditory detection (recognition). Signal processing often, but not necessarily, involves brain regions engaged in other forms of perception. This processing depends on subvocalization, includes semantically interpreted information and expectations, pictorial (visual) and descriptive components, functions as a mnemonic, and is linked to individual musical ability and experience (although the mechanisms of this connection are unclear). Full article
(This article belongs to the Special Issue Cognitive, Affective and Behavior Neuroscience)
18 pages, 9261 KB  
Article
MSResBiMamba: A Deep Cascaded Architecture for EEG Signal Decoding
by Ruiwen Jiang, Yi Zhou and Jingxiang Zhang
Mathematics 2026, 14(8), 1348; https://doi.org/10.3390/math14081348 - 17 Apr 2026
Abstract
Electroencephalogram (EEG) signals serve as the core information carrier for brain–computer interfaces (BCIs); however, their highly non-stationary nature, extremely low signal-to-noise ratio, and significant inter-individual variability pose considerable challenges for signal decoding. Existing deep learning methods struggle to strike a balance between multi-scale, fine-grained feature extraction and efficient long-range temporal modeling. To overcome this limitation, this study proposes a novel deep cascaded architecture, MSResBiMamba, which deeply integrates multi-scale spatiotemporal feature learning with cutting-edge long-sequence modeling techniques. The model first utilizes an enhanced multi-scale spatiotemporal convolutional network (MS-CNN) combined with a SE-channel attention mechanism to adaptively extract local multi-band features and dynamically suppress redundant artefacts. Subsequently, it innovatively introduces an enhanced bidirectional Mamba (Bi-Mamba) module to efficiently capture non-causal long-range temporal dependencies with linear computational complexity, whilst cascading multi-head self-attention mechanisms to establish global higher-order feature interactions. Extensive experiments on the BCI Competition IV-2a dataset demonstrate that MSResBiMamba achieves outstanding classification performance in multi-class motor imagery tasks, significantly outperforming traditional methods and existing state-of-the-art neural networks. Ablation studies and t-SNE visualisations further confirm the model’s robustness in feature decoupling and cross-subject applications, providing a high-precision, high-efficiency decoding solution for BCI systems. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
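One concrete ingredient of MSResBiMamba, the SE channel attention that recalibrates per-channel features, is small enough to sketch in full. The weights below are random placeholders and the shapes are illustrative (22 channels as in BCI Competition IV-2a), not the trained model:

```python
import numpy as np

def se_attention(x, W1, W2):
    """Squeeze-and-excitation over a (channels, time) feature map.
    Squeeze: global average pool per channel. Excitation: bottleneck MLP
    with ReLU, then a sigmoid gate per channel.
    Returns the recalibrated feature map and the gate values."""
    z = x.mean(axis=1)                         # squeeze: (C,)
    h = np.maximum(0.0, W1 @ z)                # bottleneck reduction + ReLU
    s = 1.0 / (1.0 + np.exp(-(W2 @ h)))        # channel gates in (0, 1)
    return x * s[:, None], s

C, T, r = 22, 1000, 4                          # channels, samples, reduction ratio
rng = np.random.default_rng(3)
x = rng.standard_normal((C, T))
W1 = rng.standard_normal((C // r, C)) * 0.1    # random placeholder weights
W2 = rng.standard_normal((C, C // r)) * 0.1
y, gates = se_attention(x, W1, W2)
```

Channels with gates near zero are effectively suppressed, which is how the mechanism can "dynamically suppress redundant artefacts" once trained.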
22 pages, 4372 KB  
Article
Suppressing Non-Stationary Motion Artefacts in Mobile EEG Using Generalized Eigenvalue Decomposition
by Mohammad Khazaei, Khadijeh Raeisi, Patrique Fiedler, Pierpaolo Croce, Filippo Zappasodi and Silvia Comani
Sensors 2026, 26(8), 2440; https://doi.org/10.3390/s26082440 - 16 Apr 2026
Abstract
Mobile EEG enables the investigation of brain activity during real-world behaviour, but remains susceptible to motion artefacts, limiting signal interpretability and the use of advanced analytical techniques. Methods developed for removing motion-related artefacts induced by periodic activity such as cycling, walking or juggling showed degraded performance with increasing movement variability and speed. To fill this gap, we developed a method based on generalized eigenvalue decomposition (GED) to identify and suppress highly variable, non-periodic (especially transient) artefacts due to very rapid, free full-body movements of different types, as they occur during sports practice. By leveraging the contrast between covariance matrices of artefactual and resting-state EEG segments, this approach isolates motion-related components for removal during multichannel EEG signal reconstruction. The method was validated on two ecological datasets featuring stereotyped head and body movements and dynamic table tennis. Comparison with a state-of-the-art technique showed the superior performance of our method in terms of signal-to-error ratio (SER), artefact-to-residue ratio (ARR), brain spectral power preservation and computation time. A sensitivity analysis demonstrated the method’s robustness to parameter changes. These findings highlight the potential of the proposed method as a robust, generalizable approach for motion artefact suppression in mobile EEG, particularly in extreme recording conditions such as during active sports practice. Full article
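The GED contrast described in the abstract solves the generalized eigenvalue problem S_art w = λ S_rest w for the covariance matrices of artefactual and resting segments; filters with large eigenvalues capture artefact-dominated directions. A minimal NumPy sketch via Cholesky whitening (the regularisation value and synthetic covariances are our assumptions, not the authors' settings):

```python
import numpy as np

def ged(S_art, S_rest, shrink=1e-6):
    """Generalised eigendecomposition S_art w = lam * S_rest w via Cholesky
    whitening of S_rest. Returns eigenvalues (descending) and spatial
    filters W (columns); large eigenvalues mark artefact-dominated directions."""
    d = S_rest.shape[0]
    L = np.linalg.cholesky(S_rest + shrink * np.eye(d))
    Linv = np.linalg.inv(L)
    M = Linv @ S_art @ Linv.T                  # symmetric whitened matrix
    lam, V = np.linalg.eigh(M)
    order = np.argsort(lam)[::-1]
    W = Linv.T @ V[:, order]                   # back-project the filters
    return lam[order], W

# Synthetic covariances: the "artefactual" one adds strong variance
# along a single spatial direction a, mimicking a motion component.
rng = np.random.default_rng(4)
B = rng.standard_normal((8, 8))
S_rest = B @ B.T + 8 * np.eye(8)
a = rng.standard_normal(8)
S_art = S_rest + 200.0 * np.outer(a, a)
lam, W = ged(S_art, S_rest)
```

The leading filter isolates the injected artefact direction; in the paper's pipeline, such components are projected out before reconstructing the multichannel EEG.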
27 pages, 1143 KB  
Systematic Review
Missing Data Gap Imputation Methods in Electroencephalogram (EEG) Signals: A Systematic Scoping Review
by Tobias Bergmann, Michael Movshovich, Yushu Shao, Julia Ryznar, Xue Nemoga-Stout, Izabella Marquez, Isuru Herath, Amanjyot Singh Sainbhi, Nuray Vakitbilir, Noah Silvaggio, Rakibul Hasan, Kevin Y. Stein, Hina Shaheen, Jaewoong Moon and Frederick A. Zeiler
Sensors 2026, 26(8), 2431; https://doi.org/10.3390/s26082431 - 15 Apr 2026
Abstract
Objective: Electroencephalogram (EEG) measures electrophysiological activity in the cerebral cortex and is broadly used across diagnostic, research, and clinical contexts. Missing data gaps are a pervasive issue in EEG signal recording, resulting from sensor failures and sensor disconnections, amongst other sources. To preserve a continuous signal describing underlying electrophysiological processes, imputation must be used to reconstruct these gaps. The aim of this review is to examine the methods that have been developed for missing data gap imputation in EEG signals. Methods: A search of five databases was conducted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines. The search question examined existing algorithms for imputation in EEG signals. Results: The initial search yielded 17,490 results (an update included 1913 additional results). This review includes 16 articles presenting EEG gap imputation methods. These imputation methods were characterized as (i) tensor-based, (ii) machine learning and deep learning, and (iii) model-based and classical. Conclusions: Several of these methods proved highly effective at accurately reconstructing gaps in ‘ground truth’ EEG signals; however, the limited generalizability of many of the studies, due to small datasets lacking adequate participant diversity as well as methodological differences, made it impossible to identify a single leading method. Further, the reliance on full recordings for segment imputation in some methods could prove prohibitive to real-time imputation. Future study is required to rectify these limitations and to properly investigate computational latency and requirements. Significance: This work provides novel insights into existing methods for EEG gap imputation, as it identifies current shortcomings in the literature and paves the way for a more generalizable solution to be achieved through future work. Full article
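The simplest baseline in the "model-based and classical" family the review identifies is interpolation across the gap; a minimal NumPy sketch (the reviewed methods are considerably more sophisticated, and this function name is ours):

```python
import numpy as np

def impute_gaps_linear(signal):
    """Fill NaN gaps in a 1-D EEG trace by linear interpolation between
    the nearest valid samples on either side of each gap."""
    x = np.asarray(signal, dtype=float)
    bad = np.isnan(x)                  # mask of missing samples
    idx = np.arange(len(x))
    x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return x

# A two-sample gap between values 1.0 and 4.0 is filled with 2.0 and 3.0
trace = np.array([0.0, 1.0, np.nan, np.nan, 4.0, 5.0])
filled = impute_gaps_linear(trace)
```

Linear interpolation needs only the gap's immediate neighbours, so unlike the full-recording methods the review flags, it is trivially compatible with real-time use; the trade-off is that it cannot reconstruct oscillatory content inside the gap.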
32 pages, 1704 KB  
Systematic Review
A Systematic Review of How Cardiopulmonary Bypass Parameters Influence Electroencephalogram Signals
by Han Bao, Jiaying Wang, Ziru Cui, Min Zhu, Wenyi Chen, Liwei Zhou, Georg Northoff, Tao Tao and Pengmin Qin
Brain Sci. 2026, 16(4), 412; https://doi.org/10.3390/brainsci16040412 - 13 Apr 2026
Abstract
Background: Cardiopulmonary bypass (CPB) is an essential technique for cardiac surgery but significantly increases the risk of perioperative neurological complications. Electroencephalography (EEG) enables real-time monitoring of brain function and provides sensitive biomarkers for early detection of cerebral injury. However, a systematic synthesis of how CPB-related physiological, pharmacological, and technical factors influence EEG signals, and how these insights can be integrated into clinical decision-making, is still lacking. Objective: To systematically review the effects of temperature management, mean arterial pressure (MAP), hemodilution, anesthetic agents, embolization, and systemic inflammatory response during CPB on EEG parameters (including frequency bands, Bispectral Index (BIS), quantitative EEG metrics such as burst suppression ratio (BSR), spectral edge frequency (SEF), etc.), and to evaluate the associations between EEG changes and postoperative delirium (POD) and stroke. Methods: Following the PRISMA 2020 guidelines, we searched PubMed, Web of Science, and related databases for original English-language articles published between February 1974 and September 2025. Inclusion criteria: adult patients (≥18 years) undergoing cardiac surgery with CPB and intraoperative EEG monitoring (raw or processed). Exclusion criteria: reviews, case reports, animal studies, pediatric populations, and articles with inaccessible full texts. Two reviewers independently screened the literature and extracted data; a narrative synthesis was performed. Results: Fifty-one studies were included. Main findings: (1) Hypothermia: BIS decreases linearly with temperature (≈1.12 units/°C); electrocerebral silence occurs during deep hypothermic circulatory arrest; EEG recovery dynamics during rewarming predict POD. 
(2) MAP and cerebral perfusion: The rate of MAP decline (≥0.66 mmHg/s) is a stronger predictor of EEG abnormalities than the absolute MAP value; under fixed pump flow, some patients exhibit coexisting cerebral overperfusion and metabolic suppression. (3) Hemodilution: Maintaining hemoglobin ≥9.4 g/dL prevents EEG slowing; a drop below 9.2 g/dL significantly increases the risk of slowing. A ≥10% decrease in regional cerebral oxygen saturation (rSO2) is associated with a 1.5-fold increased risk of burst suppression. (4) Anesthetic agents: Propofol maintains flow-metabolism coupling, and BSR reflects deep anesthesia better than BIS; sevoflurane and isoflurane impair autoregulation and suppress EEG. (5) Embolization and inflammation: EEG epileptiform discharges increase the risk of POD five-fold; a decrease in LIR predicts stroke (AUC 0.771) and POD (AUC 0.779); persistent EEG changes increase the risk of POD 2.65-fold. Conclusions: CPB-related factors affect EEG signals through distinct mechanisms, and specific EEG patterns (slowing, burst suppression, asymmetry, epileptiform discharges) are significantly associated with postoperative neurological complications. Multimodal monitoring (EEG + cerebral oximetry + hemodynamics) with clear intervention thresholds facilitates individualized brain protection. Future interventional studies using real-time EEG feedback are needed to confirm improvements in long-term neurological outcomes. Full article
45 pages, 7613 KB  
Article
BrainTwin-AI: A Multimodal MRI-EEG-Based Cognitive Digital Twin for Real-Time Brain Health Intelligence
by Himadri Nath Saha, Utsho Banerjee, Rajarshi Karmakar, Saptarshi Banerjee and Jon Turdiev
Brain Sci. 2026, 16(4), 411; https://doi.org/10.3390/brainsci16040411 - 13 Apr 2026
Abstract
Background/Objectives: Brain health monitoring is increasingly essential as modern cognitive load, stress, and lifestyle pressures contribute to widespread neural instability. The paper presents BrainTwin, a next-generation cognitive digital twin, as a patient-specific, constantly updating computer model that combines state-of-the-art MRI analytics for neuro-oncological assessment related to clinical study and management of tumors affecting the central nervous system (including their detection, progression, and monitoring) with real-time EEG-based brain health intelligence. Methods: Structural analysis is driven by an Enhanced Vision Transformer (ViT++), which improves spatial representation and boundary localization, achieving more accurate tumor prediction than conventional models. The extracted tumor volume forms the baseline for short-horizon tumor progression modeling. Parallel to MRI analysis, continuous EEG signals are captured through an in-house wearable skullcap, preprocessed using Edge AI on a Hailo Toolkit-enabled Raspberry Pi 5 for low-latency denoising and secure cloud transmission. Pre-processed EEG packets are authenticated at the fog layer, ensuring secure and reliable cloud transfer, enabling significant load reduction in the edge and cloud nodes. In the digital twin, EEG characteristics offer real-time functional monitoring through dynamic brainwave analysis, while a BiLSTM classifier distinguishes relaxed, stress, and fatigue states, which are probabilistically inferred cognitive conditions derived from EEG spectral patterns. Unlike static MRI imaging, EEG provides real-time brain health monitoring. The BrainTwin performs EEG–MRI fusion, correlating functional EEG metrics with ViT++ structural embeddings to produce a single risk score that can be interpreted by clinicians to determine brain vulnerability to future diseases. 
Explainable artificial intelligence (XAI) provides clinical interpretability through gradient-weighted class activation mapping (Grad-CAM) heatmaps, which are used to interpret ViT++ decisions and are visualized on a 3D interactive brain model to allow more in-depth inspection of spatial details. Results: The evaluation metrics demonstrate a BiLSTM macro-F1 of 0.94 (Precision/Recall/F1: Relaxed 0.96, Stress 0.93, Fatigue 0.92) and a ViT++ MRI accuracy of 96%, outperforming baseline architectures. Conclusions: These results demonstrate BrainTwin’s reliability, interpretability, and clinical utility as an integrated digital companion for tumor assessment and real-time functional brain monitoring. Full article
15 pages, 1621 KB  
Article
Role of Electroencephalography in the Assessment of Cortical Responses Elicited by Music Therapy in Burn Patients Undergoing Intensive Care
by Erica Iammarino, Alessia Baldoncini, Arianna Gagliardi, Laura Burattini and Ilaria Marcantoni
Sensors 2026, 26(8), 2358; https://doi.org/10.3390/s26082358 - 11 Apr 2026
Abstract
Music therapy (MT) is increasingly being integrated into intensive care unit (ICU) settings to modulate pain, stress, and emotional dysregulation. Although clinically promising, objective biomarkers for quantifying its neurophysiological effects are still missing. In this context, the electroencephalogram (EEG) represents a valid tool to assess cortical dynamics associated with cognitive–affective engagement elicited by MT. Our study aims to evaluate the role of electroencephalography as an objective tool for monitoring cortical responses to MT in the ICU. EEGs acquired from nine burn patients undergoing MT in the ICU were considered. Signals were preprocessed to improve the signal-to-noise ratio. Then, six frequency bands (delta, theta, alpha, beta, gamma, and sensorimotor rhythm) were extracted to compute band powers and derive 37 involvement indexes, which were statistically compared across three experimental phases: before, during, and after MT. Results demonstrate that involvement indexes effectively capture neurophysiological shifts induced by MT. Significant differences were observed in 22 indexes when comparing During-MT and Post-MT phases, with 2 indexes being statistically different also when comparing During-MT and Pre-MT phases; 5 indexes differed statistically when comparing Pre-MT and Post-MT phases. These results suggest a transient cortical engagement elicited during MT in ICU settings. Our findings align with previous research reporting the sensitivity of EEG (and certain EEG-derived involvement indexes) to music-induced cognitive and emotional modulation. This confirms the potential of electroencephalography to objectively reflect MT effects and supports its integration in multidisciplinary burn care; however, analysis of larger cohorts is necessary to validate EEG as a clinical tool in MT. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)
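The band powers underlying such involvement indexes can be computed directly from the FFT power spectrum. A minimal sketch with conventional band edges (the paper's exact preprocessing and its 37 index definitions are not reproduced here):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Power of a 1-D signal within [f_lo, f_hi) Hz, summed over rFFT bins."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return spec[mask].sum()

fs = 250
t = np.arange(fs * 4) / fs                      # a 4 s EEG segment
alpha_wave = np.sin(2 * np.pi * 10 * t)         # 10 Hz rhythm: inside alpha
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = {name: band_power(alpha_wave, fs, lo, hi)
          for name, (lo, hi) in bands.items()}
```

An involvement index is then typically a ratio of such band powers (e.g. beta over alpha plus theta), so phase-to-phase comparisons reduce to comparing a handful of scalars per channel.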
25 pages, 5507 KB  
Article
A Cheonjiin Layout Mental Speller: Developing a Simple and Cost-Effective EEG-Based Brain–Computer Interface System
by Ji Won Ahn, Gi Yeon Yu, Seong-Wan Kim, Young-Seek Seok, Kyung-Min Byun and Seung Ho Choi
Sensors 2026, 26(7), 2265; https://doi.org/10.3390/s26072265 - 7 Apr 2026
Abstract
A brain–computer interface (BCI) enables direct communication between the brain and external devices by translating neural activity into executable control commands. Among electroencephalography (EEG)-based paradigms, steady-state visual evoked potential (SSVEP) is widely adopted due to its high signal-to-noise ratio, robustness, and minimal calibration requirements. While SSVEP-based spellers have been extensively investigated, many existing systems rely on high-channel-density EEG recordings and computationally complex processing pipelines, and are primarily designed for alphabetic input structures. In this study, we present an SSVEP-based Korean speller that integrates the Cheonjiin keyboard layout to support intuitive composition of Hangul syllables. The proposed system adopts a simple configuration, employing only five visual stimulation frequencies (6.67–12 Hz) and two occipital EEG channels (O1 and O2), with real-time frequency recognition performed using canonical correlation analysis (CCA) within a 1.5 s sliding window. EEG signals were acquired at 200 Hz using an OpenBCI Ganglion board, band-pass filtered (5–45 Hz), and processed with harmonic sinusoidal reference templates for multi-frequency classification. The proposed interface generates five control commands (up, down, left, right, and select), enabling directional cursor navigation and character confirmation on a 4 × 4 virtual Cheonjiin keyboard. Experimental validation with three healthy participants demonstrated an average classification accuracy of approximately 82% and an information transfer rate (ITR) of 31.2 bits/min. Frequency-domain analysis revealed clear spectral peaks at the stimulation frequencies and their harmonics, indicating reliable SSVEP responses. 
The proposed system combines this simple two-channel configuration with a Korean language-specific input structure, demonstrating that reliable SSVEP-based communication can be achieved without computationally intensive algorithms or high-cost EEG acquisition equipment. Full article
(This article belongs to the Section Electronic Sensors)
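The frequency-recognition step, CCA between a short EEG window and sinusoidal reference templates at each stimulation frequency, can be sketched as follows. The CCA here is the standard QR-based formulation and the window, noise level, and frequency set are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between centred data blocks X (T, p)
    and Y (T, q): orthonormalise each block (QR), then take the top
    singular value of the cross-product of the two bases."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n, harmonics=2):
    """Sin/cos templates at `freq` and its harmonics, shape (n, 2*harmonics)."""
    t = np.arange(n) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

# 1.5 s window at 200 Hz; two noisy "occipital" channels driven at 8 Hz
fs, n = 200, 300
t = np.arange(n) / fs
rng = np.random.default_rng(5)
eeg = np.column_stack([np.sin(2 * np.pi * 8 * t), np.cos(2 * np.pi * 8 * t)])
eeg = eeg + 0.5 * rng.standard_normal((n, 2))
scores = {f: cca_max_corr(eeg, ssvep_reference(f, fs, n))
          for f in (6.67, 8.0, 10.0, 12.0)}
picked = max(scores, key=scores.get)       # frequency with highest correlation
```

The classifier is simply an argmax over the per-frequency canonical correlations, which is why a two-channel, five-frequency system can run in real time on modest hardware.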
30 pages, 4178 KB  
Article
An Intelligent Evaluation Algorithm for Pilot Flight Training Ability Based on Multimodal Information Fusion
by Heming Zhang, Changyuan Wang and Pengbo Wang
Sensors 2026, 26(7), 2245; https://doi.org/10.3390/s26072245 - 4 Apr 2026
Abstract
Intelligent-assisted assessment of pilot flight training ability is a method of automating the evaluation of pilots’ flight skills using artificial intelligence. Currently, using AI to assist or replace human instructors in flight skill assessment has become a mainstream research direction in the field of intelligent aviation. Existing flight skill assessment methods suffer from limitations in data types and insufficient assessment accuracy. To address these issues, we evaluate and predict pilot performance in simulated flight missions based on physiological signals. Following the “OODA loop” theory, we established a multimodal dataset including pilot eye movement, electroencephalogram (EEG), electrocardiogram (ECG), electrodermal signaling (EDS), heart rate, respiration, and flight attitude data. This dataset records changes in physiological rhythms and flight behaviors during pilots’ flight training at different difficulty levels. To enhance the signal-to-noise ratio, we propose an enhanced wavelet fuzzy thresholding denoising algorithm utilizing LSTM optimization. We address the problem of isolated features across different time frames in multimodal data modeling by introducing a multi-feature fusion algorithm based on STFT. Furthermore, by combining a high-efficiency sub-attention mechanism with a Transformer network, we construct a multi-classification network for intelligent-assisted assessment of pilot flight training ability, further improving the output accuracy of each category. Experiments show that our designed algorithm can achieve a classification accuracy of up to 85% on the dataset (5-fold cross-validation), which meets the requirements for auxiliary assessment of flight capabilities. Full article
(This article belongs to the Section Intelligent Sensors)
30 pages, 23210 KB  
Article
Multiscale Cosine Convolution Neural Network for Robust and Interpretable Epileptic EEG Detection
by Jiale Chen, Weidong Zhou and Guoyang Liu
Biosensors 2026, 16(4), 203; https://doi.org/10.3390/bios16040203 - 2 Apr 2026
Viewed by 416
Abstract
The accurate detection of epileptic seizures from the electroencephalogram (EEG) is essential for clinical diagnosis and for reducing the burden on clinicians, but remains challenging owing to limited detection performance and model interpretability. In this study, we propose a Multiscale Cosine Convolutional Heterogeneous Two-Stream Cosine Convolution Network (MCC-HTSCC) to overcome these limitations. First, raw EEG signals are fed into the Multiscale Cosine Convolution (MCC) module, where multiscale temporal features are extracted by cosine convolutional layers with varying kernel lengths. The extracted temporal features are then processed by spatial convolutional layers to obtain comprehensive spatiotemporal representations. These spatiotemporal features are fused and fed into the Heterogeneous Two-Stream Cosine Convolution (HTSCC) module, which comprises a deep and a shallow subnetwork that perform hierarchical feature extraction and classification. Extensive evaluations on the publicly available CHB-MIT dataset and a clinically collected SH-SDU dataset yielded accuracies of 98.52% and 94.56%, sensitivities of 97.98% and 88.09%, and specificities of 98.50% and 95.89%, respectively. Furthermore, the cosine convolution operators reduce the learnable parameters of our model by approximately 18.12% relative to the same model with traditional convolution operators, making it more suitable for embedded deployment. By employing Gradient-Weighted Class Activation Mapping (Grad-CAM), we further provide interpretability and transparency in model decision making, highlighting the substantial potential of MCC-HTSCC for effective patient-specific epilepsy monitoring and diagnostics. Full article
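The MCC module is described only at a high level here; as a rough illustration of why cosine-parameterized kernels save parameters, the sketch below builds convolution kernels of several lengths from just two scalars each (frequency and phase) instead of one free weight per tap. All kernel lengths and frequencies are made-up values, not taken from the paper.

```python
import numpy as np

def cosine_kernel(length, freq, phase, fs=256):
    """A conv kernel defined by only two scalars (freq, phase) rather than
    `length` independent weights -- the parameter saving behind cosine
    convolution."""
    t = np.arange(length) / fs
    return np.cos(2 * np.pi * freq * t + phase)

def multiscale_cosine_conv(x, scales, fs=256):
    """Convolve one EEG channel with cosine kernels of several lengths."""
    feats = []
    for length, freq in scales:
        k = cosine_kernel(length, freq, phase=0.0, fs=fs)
        feats.append(np.convolve(x, k, mode="same"))
    return np.stack(feats)  # (n_scales, n_samples)

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)                  # one raw EEG window
scales = [(32, 4.0), (64, 10.0), (128, 20.0)]  # (kernel length, center Hz)
F = multiscale_cosine_conv(x, scales)          # shape (3, 1024)
```

In a trainable network the frequencies and phases would be learned parameters; varying the kernel length, as above, is what makes the feature extraction multiscale.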
(This article belongs to the Section Biosensors and Healthcare)
34 pages, 1485 KB  
Systematic Review
Sensor-Driven Machine Learning for Cognitive State and Performance Risk Assessment in eSports: A Systematic Review
by Abhineet Rajendra Kulkarni and Pranav Madhav Kuber
Electronics 2026, 15(7), 1465; https://doi.org/10.3390/electronics15071465 - 1 Apr 2026
Viewed by 631
Abstract
Competitive eSports impose substantial cognitive workload, yet performance evaluation still emphasizes post-match statistics without considering players' cognitive states. We reviewed 30 papers that recorded physiological signals with sensors and applied machine learning (ML) to predict cognitive states and/or game performance. Cardiovascular monitoring (heart rate variability, HRV) was the most prevalent modality (20/30 studies), followed by oculometry (10), electrodermal activity (EDA, 9), and electroencephalography (EEG, 5); however, no standardized protocols (device, pre-processing, feature subset) were observed across HRV studies despite HRV being the most common measure. The best outcomes per construct (measure, result) were: mental workload (pupillometry, ~82%), stress/arousal (EDA, p < 0.001), cognitive fatigue (pupil diameter/EEG, ~88%), expertise (EEG, ~92%), and tilt (EDA/HRV/eye-tracking, ~82–87%). Notably, current studies used small, gender-imbalanced samples, and the ML studies often lacked cross-validation. Only 2 of 30 studies examined flow state, a mental state of optimal performance characterized by total immersion and effortless execution; interestingly, HRV decreased during stress/workload but increased during flow, suggesting context-dependent autonomic regulation. To address this gap, a new framework for flow detection is presented. This review will be of interest to game developers, eSports players, and coaches, and the reported findings may help improve player experience and game performance. Full article
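Since HRV is the review's most common measure, a minimal sketch of two standard time-domain HRV features (SDNN and RMSSD) may be useful background; the RR-interval series below is synthetic, and the computation shown is the textbook definition rather than any one reviewed study's pipeline.

```python
import numpy as np

def hrv_features(rr_ms):
    """Two standard time-domain HRV features from RR intervals in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat variability
    return sdnn, rmssd

rr = [812, 795, 830, 780, 805, 790, 820]  # synthetic RR series (ms)
sdnn, rmssd = hrv_features(rr)            # both in ms
```

The lack of standardization the review notes would show up exactly here: studies differ in recording device, artifact correction before `np.diff`, and which feature subset (SDNN, RMSSD, frequency-domain ratios, etc.) they feed to the ML model.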
13 pages, 1626 KB  
Article
Enhanced Sensitivity and Altered EEG Patterns During General Anesthesia in BTBR Mice, a Model of Autism
by Yeonsu Kim, Seounghun Lee, Seong-Eun Kim, Yeojung Kim, Xianshu Ju, Yulim Lee, Tao Zhang, Juyeon Kim, Sungho Choi, Jun Young Heo, Woosuk Chung and Jiho Park
Brain Sci. 2026, 16(4), 391; https://doi.org/10.3390/brainsci16040391 - 1 Apr 2026
Viewed by 417
Abstract
Background/Objectives: Alterations in excitation/inhibition (E/I) balance, involving both inhibitory and excitatory signaling, have been implicated in the pathophysiology of autism spectrum disorder (ASD). Volatile anesthetics, including sevoflurane, act on multiple molecular and network targets, and anesthetic sensitivity may therefore differ in ASD. This study investigated whether sevoflurane sensitivity is altered in BTBR T+Itpr3tf/J (BTBR) mice, a widely used mouse model of ASD. Methods: Sevoflurane sensitivity was compared between BTBR mice and C57BL/6J (B6) control mice using behavioral and electroencephalographic (EEG) analyses. The minimum alveolar concentration required to abolish nociceptive responses (MACsevo) and the sevoflurane concentration associated with recovery of the righting reflex (RRsevo) were measured. Dose-dependent EEG changes, including burst suppression and theta power distribution, were also evaluated. Results: MACsevo did not differ significantly between BTBR and B6 mice. However, RRsevo was significantly lower in BTBR mice (1.10 ± 0.10%) compared with B6 mice (1.65 ± 0.13%; p < 0.001). EEG analyses demonstrated that burst suppression occurred at lower sevoflurane concentrations in BTBR mice (2.0%) than in B6 mice (2.4%). In addition, topographical mapping revealed distinct theta power dynamics between the two strains during anesthesia. Conclusions: BTBR mice exhibit increased sensitivity to sevoflurane during emergence from anesthesia and show distinct EEG patterns compared with control mice. These findings suggest altered anesthetic responsiveness in a mouse model of ASD and support the possibility that network-level neurophysiological differences may influence anesthetic responses. Further studies are needed to clarify whether similar alterations are present across other ASD models and human ASD populations. Full article
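Burst suppression of the kind compared between strains above is commonly quantified as a burst suppression ratio: the fraction of the record spent in near-flat (suppressed) EEG. The sketch below uses assumed amplitude and duration thresholds and a synthetic signal, not the study's actual criteria or data.

```python
import numpy as np

def burst_suppression_ratio(eeg, fs, thresh_uv=5.0, min_sup_s=0.5):
    """Fraction of samples inside suppression episodes, defined as stretches
    where |EEG| stays below `thresh_uv` for at least `min_sup_s` seconds.
    Both thresholds are illustrative assumptions."""
    suppressed = np.abs(eeg) < thresh_uv
    min_len = int(min_sup_s * fs)
    total = run = 0
    for s in suppressed:
        if s:
            run += 1
        else:
            if run >= min_len:   # count only sufficiently long flat runs
                total += run
            run = 0
    if run >= min_len:
        total += run
    return total / len(eeg)

fs = 250
t = np.arange(fs * 4) / fs
burst = 50 * np.sin(2 * np.pi * 10 * t[: fs * 2])  # 2 s of high-amplitude bursting
sup = np.zeros(fs * 2)                             # 2 s of flat suppression
bsr = burst_suppression_ratio(np.concatenate([burst, sup]), fs)  # ~0.5
```

A dose-response curve of this ratio against anesthetic concentration is one way the "burst suppression at 2.0% vs. 2.4% sevoflurane" comparison between strains could be made quantitative.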
(This article belongs to the Section Behavioral Neuroscience)