Search Results (73)

Search Parameters:
Keywords = EEG system architecture

23 pages, 2939 KB  
Article
ADG-SleepNet: A Symmetry-Aware Multi-Scale Dilation-Gated Temporal Convolutional Network with Adaptive Attention for EEG-Based Sleep Staging
by Hai Sun and Zhanfang Zhao
Symmetry 2025, 17(9), 1461; https://doi.org/10.3390/sym17091461 - 5 Sep 2025
Viewed by 332
Abstract
The increasing demand for portable health monitoring has highlighted the need for automated sleep staging systems that are both accurate and computationally efficient. However, most existing deep learning models for electroencephalogram (EEG)-based sleep staging suffer from parameter redundancy, fixed dilation rates, and limited generalization, restricting their applicability in real-time and resource-constrained scenarios. In this paper, we propose ADG-SleepNet, a novel lightweight symmetry-aware multi-scale dilation-gated temporal convolutional network enhanced with adaptive attention mechanisms for EEG-based sleep staging. ADG-SleepNet features a structurally symmetric, parallel multi-branch architecture utilizing various dilation rates to comprehensively capture multi-scale temporal patterns in EEG signals. The integration of adaptive gating and channel attention mechanisms enables the network to dynamically adjust the contribution of each branch based on input characteristics, effectively breaking architectural symmetry when necessary to prioritize the most discriminative features. Experimental results on the Sleep-EDF-20 and Sleep-EDF-78 datasets demonstrate that ADG-SleepNet achieves accuracy rates of 87.1% and 85.1%, and macro F1 scores of 84.0% and 81.1%, respectively, outperforming several state-of-the-art lightweight models. These findings highlight the strong generalization ability and practical potential of ADG-SleepNet for EEG-based health monitoring applications.
(This article belongs to the Section Computer)
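As a rough illustration of the mechanism this abstract describes, the sketch below combines parallel dilated temporal convolutions with an adaptive branch gate and squeeze-and-excitation-style channel attention. It is a minimal PyTorch sketch under assumed layer sizes and dilation rates, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DilationGatedBlock(nn.Module):
    # Parallel dilated branches with an adaptive gate and channel attention.
    def __init__(self, channels=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        # Structurally symmetric branches, one per dilation rate.
        self.branches = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations])
        # Adaptive gate: input-dependent softmax weight per branch.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, len(dilations)), nn.Softmax(dim=1))
        # Squeeze-and-excitation-style channel attention.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (batch, channels, time)
        w = self.gate(x)                       # (batch, n_branches)
        out = sum(w[:, i:i + 1, None] * torch.relu(b(x))
                  for i, b in enumerate(self.branches))
        return out * self.se(out)[:, :, None]  # channel reweighting

x = torch.randn(4, 32, 3000)                   # one 30 s epoch at 100 Hz
print(DilationGatedBlock()(x).shape)           # torch.Size([4, 32, 3000])
```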

17 pages, 2167 KB  
Article
Interpretable EEG Emotion Classification via CNN Model and Gradient-Weighted Class Activation Mapping
by Yuxuan Zhao, Linjing Cao, Yidao Ji, Bo Wang and Wei Wu
Brain Sci. 2025, 15(8), 886; https://doi.org/10.3390/brainsci15080886 - 20 Aug 2025
Viewed by 492
Abstract
Background/Objectives: Electroencephalography (EEG)-based emotion recognition plays an important role in affective computing and brain–computer interface applications. However, existing methods often face the challenge of achieving high classification accuracy while maintaining physiological interpretability. Methods: In this study, we propose a convolutional neural network (CNN) model with a simple architecture for EEG-based emotion classification. The model achieves classification accuracies of 95.21% for low/high arousal, 94.59% for low/high valence, and 93.01% for quaternary classification tasks on the DEAP dataset. To further improve model interpretability and support practical applications, Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to identify the EEG electrode regions that contribute most to the classification results. Results: The visualization reveals that electrodes located in the right prefrontal cortex and left parietal lobe are the most influential, which is consistent with findings from emotional lateralization theory. Conclusions: This provides a physiological basis for optimizing electrode placement in wearable EEG-based emotion recognition systems. The proposed method combines high classification performance with interpretability and provides guidance for the design of efficient and portable affective computing systems.
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
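Grad-CAM itself is a short computation: gradients of the chosen class score are average-pooled into per-feature-map weights, and the weighted maps are summed through a ReLU. The toy network and input shapes below are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

net = nn.Sequential(                            # toy CNN over channel x time maps
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((8, 8)))
head = nn.Linear(8 * 8 * 8, 4)                  # four emotion classes

x = torch.randn(1, 1, 32, 128)                  # 32 electrodes x 128 samples
feats = net(x)                                  # (1, 8, 8, 8) feature maps
feats.retain_grad()                             # keep gradients for Grad-CAM
logits = head(feats.flatten(1))
logits[0, logits.argmax()].backward()           # backprop the top class score

weights = feats.grad.mean(dim=(2, 3), keepdim=True)  # pooled gradient weights
cam = torch.relu((weights * feats).sum(dim=1))  # (1, 8, 8) class activation map
print(cam.squeeze(0))
```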

26 pages, 663 KB  
Article
Multi-Scale Temporal Fusion Network for Real-Time Multimodal Emotion Recognition in IoT Environments
by Sungwook Yoon and Byungmun Kim
Sensors 2025, 25(16), 5066; https://doi.org/10.3390/s25165066 - 14 Aug 2025
Viewed by 698
Abstract
This paper introduces EmotionTFN (Emotion-Multi-Scale Temporal Fusion Network), a novel hierarchical temporal fusion architecture that addresses key challenges in IoT emotion recognition by processing diverse sensor data while maintaining accuracy across multiple temporal scales. The architecture integrates physiological signals (EEG, PPG, and GSR), visual, and audio data using hierarchical temporal attention across short-term (0.5–2 s), medium-term (2–10 s), and long-term (10–60 s) windows. Edge computing optimizations, including model compression, quantization, and adaptive sampling, enable deployment on resource-constrained devices. Extensive experiments on MELD, DEAP, and G-REx datasets demonstrate 94.2% accuracy on discrete emotion classification and 0.087 mean absolute error on dimensional prediction, outperforming the best baseline (87.4%). The system maintains sub-200 ms latency on IoT hardware while achieving a 40% improvement in energy efficiency. Real-world deployment validation over four weeks achieved 97.2% uptime and user satisfaction scores of 4.1/5.0 while ensuring privacy through local processing.
(This article belongs to the Section Internet of Things)
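A minimal sketch of the hierarchical idea: pool features over short, medium, and long windows and let an attention layer weight the scales. Window lengths (in samples) and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, dim=64, windows=(64, 640, 3840)):  # ~0.5 s / 5 s / 30 s at 128 Hz
        super().__init__()
        self.windows = windows
        self.attn = nn.Linear(dim, 1)           # scores one summary per scale

    def forward(self, x):                       # x: (batch, time, dim)
        summaries = []
        for w in self.windows:
            w = min(w, x.size(1))
            summaries.append(x[:, -w:, :].mean(dim=1))  # pool the latest w samples
        s = torch.stack(summaries, dim=1)       # (batch, n_scales, dim)
        a = torch.softmax(self.attn(s), dim=1)  # attention over temporal scales
        return (a * s).sum(dim=1)               # fused (batch, dim)

feats = torch.randn(2, 3840, 64)
print(MultiScaleFusion()(feats).shape)          # torch.Size([2, 64])
```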

20 pages, 1206 KB  
Article
Multilayer Neural-Network-Based EEG Analysis for the Detection of Epilepsy, Migraine, and Schizophrenia
by İbrahim Dursun, Mehmet Akın, M. Ufuk Aluçlu and Betül Uyar
Appl. Sci. 2025, 15(16), 8983; https://doi.org/10.3390/app15168983 - 14 Aug 2025
Viewed by 456
Abstract
The early detection of neurological and psychiatric disorders is critical for optimizing patient outcomes and improving the efficacy of healthcare delivery. This study presents a novel multiclass machine learning (ML) framework designed to classify epilepsy, migraine, and schizophrenia simultaneously using electroencephalography (EEG) signals. Unlike conventional approaches that predominantly rely on binary classification (e.g., healthy vs. diseased cohorts), this work addresses a significant gap in the literature by introducing a unified artificial neural network (ANN) architecture capable of discriminating among three distinct neurological and psychiatric conditions. The proposed methodology involves decomposing raw EEG signals into constituent frequency subbands to facilitate robust feature extraction. These discriminative features were subsequently classified using a multilayer ANN, achieving performance metrics of 95% sensitivity, 96% specificity, and a 95% F1-score. To enhance clinical applicability, the model was optimized for potential integration into real-time diagnostic systems, thereby supporting the development of a rapid, reliable, and scalable decision support tool. The results underscore the viability of EEG-based multiclass models as a promising diagnostic aid for neurological and psychiatric disorders. By consolidating the detection of multiple conditions within a single computational framework, this approach offers a scalable and efficient alternative to traditional binary classification paradigms.
(This article belongs to the Special Issue AI-Based Biomedical Signal Processing—2nd Edition)
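The subband pipeline described above follows a conventional pattern: band-pass each epoch into the classic EEG bands, take band power as features, and classify with a small feed-forward network. Band edges, sampling rate, and layer sizes below are common defaults, not the paper's exact setup, and the data are synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neural_network import MLPClassifier

FS = 256                                        # assumed sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def bandpower_features(epoch):                  # epoch: (n_samples,) raw EEG
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        sub = filtfilt(b, a, epoch)             # subband signal
        feats.append(np.mean(sub ** 2))         # mean power in the band
    return np.array(feats)

rng = np.random.default_rng(0)
X = np.array([bandpower_features(rng.standard_normal(FS * 2))
              for _ in range(60)])              # 60 two-second epochs
y = rng.integers(0, 3, size=60)                 # epilepsy / migraine / schizophrenia
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000).fit(X, y)
print(clf.score(X, y))
```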

29 pages, 2939 KB  
Article
Automated Sleep Stage Classification Using PSO-Optimized LSTM on CAP EEG Sequences
by Manjur Kolhar, Manahil Mohammed Alfuraydan, Abdulaziz Alshammary, Khalid Alharoon, Abdullah Alghamdi, Ali Albader, Abdulmalik Alnawah and Aryam Alanazi
Brain Sci. 2025, 15(8), 854; https://doi.org/10.3390/brainsci15080854 - 11 Aug 2025
Viewed by 610
Abstract
The automatic classification of sleep stages and Cyclic Alternating Pattern (CAP) subtypes from electroencephalogram (EEG) recordings remains a significant challenge in computational sleep research because of the short duration of CAP events and the inherent class imbalance in clinical datasets. Background/Objectives: The research introduces a domain-specific deep learning system that employs an LSTM network optimized through a PSO-Hyperband hybrid hyperparameter tuning method. Methods: The research enhances EEG-based sleep analysis through the implementation of hybrid optimization methods within an LSTM architecture that addresses CAP sequence classification requirements without requiring architectural changes. Results: The developed model demonstrates strong performance on the CAP Sleep Database, achieving 97% accuracy for REM, 96% accuracy for stage S0, and ROC AUC scores exceeding 0.92 across challenging CAP subtypes (A1–A3). Model transparency is improved through the application of SHAP-based interpretability techniques, which highlight the role of spectral and morphological EEG features in classification outcomes. Conclusions: The proposed framework demonstrates resistance to class imbalance and better discrimination between visually similar CAP subtypes. The results demonstrate how hybrid optimization methods improve the performance, generalizability, and interpretability of deep learning models for EEG-based sleep microstructure analysis.
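The PSO half of the tuner can be shown in miniature: particles move through hyperparameter space under inertia plus pulls toward personal and global bests. The objective below is a stand-in for a real validation loss, and all coefficients and bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def val_loss(params):                           # placeholder for a real LSTM's
    hidden, log_lr = params                     # validation loss
    return (hidden - 96) ** 2 / 1e4 + (log_lr + 3) ** 2

n, dims, iters = 12, 2, 30
lo, hi = np.array([16, -5.0]), np.array([256, -1.0])  # hidden units, log10(lr)
pos = rng.uniform(lo, hi, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_f = pos.copy(), np.apply_along_axis(val_loss, 1, pos)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.apply_along_axis(val_loss, 1, pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (hidden, log10 lr):", gbest)        # converges near [96, -3]
```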

17 pages, 1738 KB  
Article
Multimodal Fusion Multi-Task Learning Network Based on Federated Averaging for SDB Severity Diagnosis
by Songlu Lin, Renzheng Tang, Yuzhe Wang and Zhihong Wang
Appl. Sci. 2025, 15(14), 8077; https://doi.org/10.3390/app15148077 - 20 Jul 2025
Viewed by 734
Abstract
Accurate sleep staging and sleep-disordered breathing (SDB) severity prediction are critical for the early diagnosis and management of sleep disorders. However, real-world polysomnography (PSG) data often suffer from modality heterogeneity, label scarcity, and non-independent and identically distributed (non-IID) characteristics across institutions, posing significant challenges for model generalization and clinical deployment. To address these issues, we propose a federated multi-task learning (FMTL) framework that simultaneously performs sleep staging and SDB severity classification from seven multimodal physiological signals, including EEG, ECG, respiration, etc. The proposed framework is built upon a hybrid deep neural architecture that integrates convolutional layers (CNN) for spatial representation, bidirectional GRUs for temporal modeling, and multi-head self-attention for long-range dependency learning. A shared feature extractor is combined with task-specific heads to enable joint diagnosis, while the FedAvg algorithm is employed to facilitate decentralized training across multiple institutions without sharing raw data, thereby preserving privacy and addressing non-IID challenges. We evaluate the proposed method across three public datasets (APPLES, SHHS, and HMC) treated as independent clients. For sleep staging, the model achieves accuracies of 85.3% (APPLES), 87.1% (SHHS_rest), and 79.3% (HMC), with Cohen’s Kappa scores exceeding 0.71. For SDB severity classification, it obtains macro-F1 scores of 77.6%, 76.4%, and 79.1% on APPLES, SHHS_rest, and HMC, respectively. These results demonstrate that our unified FMTL framework effectively leverages multimodal PSG signals and federated training to deliver accurate and scalable sleep disorder assessment, paving the way for the development of a privacy-preserving, generalizable, and clinically applicable digital sleep monitoring system.
(This article belongs to the Special Issue Machine Learning in Biomedical Applications)
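The FedAvg step itself is compact: after local training, the server replaces each weight tensor with a dataset-size-weighted average of the clients' tensors, so raw data never leaves a site. The model and client sizes below are placeholders, not the paper's architecture.

```python
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))

clients = {"APPLES": 1200, "SHHS": 5000, "HMC": 150}  # assumed samples per site
models = {name: make_model() for name in clients}     # stand-ins for locally
                                                      # trained client copies
total = sum(clients.values())
global_state = {
    key: sum((n / total) * models[name].state_dict()[key]
             for name, n in clients.items())
    for key in models["APPLES"].state_dict()}

server = make_model()
server.load_state_dict(global_state)            # broadcast for the next round
print(server)
```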

34 pages, 3704 KB  
Article
Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration
by Óscar Wladimir Gómez-Morales, Sofia Escalante-Escobar, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Appl. Sci. 2025, 15(14), 8036; https://doi.org/10.3390/app15148036 - 18 Jul 2025
Viewed by 573
Abstract
Motor Imagery (MI) classification plays a crucial role in enhancing the performance of brain–computer interface (BCI) systems, thereby enabling advanced neurorehabilitation and the development of intuitive brain-controlled technologies. However, MI classification using electroencephalography (EEG) is hindered by spatiotemporal variability and the limited interpretability of deep learning (DL) models. To mitigate these challenges, dropout techniques are employed as regularization strategies. Nevertheless, the removal of critical EEG channels, particularly those from the sensorimotor cortex, can result in substantial spatial information loss, especially under limited training data conditions. This issue, compounded by high EEG variability in subjects with poor performance, hinders generalization and reduces the interpretability and clinical trust in MI-based BCI systems. This study proposes a novel framework integrating channel dropout—a variant of Monte Carlo dropout (MCD)—with class activation maps (CAMs) to enhance robustness and interpretability in MI classification. This integration represents a significant step forward by offering, for the first time, a dedicated solution to concurrently mitigate spatiotemporal uncertainty and provide fine-grained neurophysiologically relevant interpretability in motor imagery classification, particularly demonstrating refined spatial attention in challenging low-performing subjects. We evaluate three DL architectures (ShallowConvNet, EEGNet, TCNet Fusion) on a 52-subject MI-EEG dataset, applying channel dropout to simulate structural variability and LayerCAM to visualize spatiotemporal patterns. Results demonstrate that among the three evaluated deep learning models for MI-EEG classification, TCNet Fusion achieved the highest peak accuracy of 74.4% using 32 EEG channels. At the same time, ShallowConvNet recorded the lowest peak at 72.7%, indicating TCNet Fusion’s robustness in moderate-density montages. Incorporating MCD notably improved model consistency and classification accuracy, especially in low-performing subjects where baseline accuracies were below 70%; EEGNet and TCNet Fusion showed accuracy improvements of up to 10% compared to their non-MCD versions. Furthermore, LayerCAM visualizations enhanced with MCD transformed diffuse spatial activation patterns into more focused and interpretable topographies, aligning more closely with known motor-related brain regions and thereby boosting both interpretability and classification reliability across varying subject performance levels. Our approach offers a unified solution for uncertainty-aware and interpretable MI classification.
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)
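Monte Carlo dropout at inference reduces to keeping dropout stochastic across repeated forward passes and reading the spread of the outputs as uncertainty. The sketch assumes PyTorch's nn.Dropout1d (available since 1.12) to zero whole EEG channels; the toy network stands in for the evaluated architectures.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Dropout1d(p=0.2),                        # zeroes entire EEG channels
    nn.Conv1d(32, 16, kernel_size=16), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))

net.eval()
for m in net.modules():                         # re-enable only the dropout
    if isinstance(m, nn.Dropout1d):
        m.train()

x = torch.randn(1, 32, 512)                     # 32 channels x 512 samples
with torch.no_grad():
    probs = torch.stack([net(x).softmax(dim=1) for _ in range(30)])
print("mean:", probs.mean(0))                   # averaged prediction
print("std: ", probs.std(0))                    # per-class uncertainty
```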

19 pages, 26396 KB  
Article
Development of a Networked Multi-Participant Driving Simulator with Synchronized EEG and Telemetry for Traffic Research
by Poorendra Ramlall, Ethan Jones and Subhradeep Roy
Systems 2025, 13(7), 564; https://doi.org/10.3390/systems13070564 - 10 Jul 2025
Cited by 1 | Viewed by 647
Abstract
This paper presents a multi-participant driving simulation framework designed to support traffic experiments involving the simultaneous collection of vehicle telemetry and cognitive data. The system integrates motion-enabled driving cockpits, high-fidelity steering and pedal systems, immersive visual displays (monitor or virtual reality), and the Assetto Corsa simulation engine. To capture cognitive states, dry-electrode EEG headsets are used alongside a custom-built software tool that synchronizes EEG signals with vehicle telemetry across multiple drivers. The primary contribution of this work is the development of a modular, scalable, and customizable experimental platform with robust data synchronization, enabling the coordinated collection of neural and telemetry data in multi-driver scenarios. The synchronization software developed through this study is freely available to the research community. This architecture supports the study of human–human interactions by linking driver actions with corresponding neural activity across a range of driving contexts. It provides researchers with a powerful tool to investigate perception, decision-making, and coordination in dynamic, multi-participant traffic environments.
(This article belongs to the Special Issue Modelling and Simulation of Transportation Systems)
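Aligning streams sampled at different rates is commonly done with a nearest-timestamp merge, sketched below with pandas; the column names, rates, and 20 ms tolerance are assumptions, not details of the authors' released tool.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
eeg = pd.DataFrame({"t": np.arange(0, 2, 1 / 256),        # 256 Hz EEG stream
                    "Fz": rng.standard_normal(512)})
telemetry = pd.DataFrame({"t": np.arange(0, 2, 1 / 60),   # 60 Hz simulator stream
                          "speed": np.linspace(0, 30, 120),
                          "steering": rng.standard_normal(120)})

# Attach to each EEG sample the most recent telemetry sample within 20 ms.
merged = pd.merge_asof(eeg, telemetry, on="t",
                       direction="backward", tolerance=0.02)
print(merged.head())
```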

40 pages, 2250 KB  
Review
Comprehensive Comparative Analysis of Lower Limb Exoskeleton Research: Control, Design, and Application
by Sk Hasan and Nafizul Alam
Actuators 2025, 14(7), 342; https://doi.org/10.3390/act14070342 - 9 Jul 2025
Viewed by 1704
Abstract
This review provides a comprehensive analysis of recent advancements in lower limb exoskeleton systems, focusing on applications, control strategies, hardware architecture, sensing modalities, human–robot interaction, evaluation methods, and technical innovations. The study spans systems developed for gait rehabilitation, mobility assistance, terrain adaptation, pediatric use, and industrial support. Applications range from sit-to-stand transitions and post-stroke therapy to balance support and real-world navigation. Control approaches vary from traditional impedance and fuzzy logic models to advanced data-driven frameworks, including reinforcement learning, recurrent neural networks, and digital twin-based optimization. These controllers support personalized and adaptive interaction, enabling real-time intent recognition, torque modulation, and gait phase synchronization across different users and tasks. Hardware platforms include powered multi-degree-of-freedom exoskeletons, passive assistive devices, compliant joint systems, and pediatric-specific configurations. Innovations in actuator design, modular architecture, and lightweight materials support increased usability and energy efficiency. Sensor systems integrate EMG, EEG, IMU, vision, and force feedback, supporting multimodal perception for motion prediction, terrain classification, and user monitoring. Human–robot interaction strategies emphasize safe, intuitive, and cooperative engagement. Controllers are increasingly user-specific, leveraging biosignals and gait metrics to tailor assistance. Evaluation methodologies include simulation, phantom testing, and human–subject trials across clinical and real-world environments, with performance measured through joint tracking accuracy, stability indices, and functional mobility scores. Overall, the review highlights the field’s evolution toward intelligent, adaptable, and user-centered systems, offering promising solutions for rehabilitation, mobility enhancement, and assistive autonomy in diverse populations. Following a detailed review of current developments, strategic recommendations are made to enhance and evolve existing exoskeleton technologies.
(This article belongs to the Section Actuators for Robotics)

20 pages, 2409 KB  
Article
Spatio-Temporal Deep Learning with Adaptive Attention for EEG and sEMG Decoding in Human–Machine Interaction
by Tianhao Fu, Zhiyong Zhou and Wenyu Yuan
Electronics 2025, 14(13), 2670; https://doi.org/10.3390/electronics14132670 - 1 Jul 2025
Viewed by 627
Abstract
Electroencephalography (EEG) and surface electromyography (sEMG) signals are widely used in human–machine interaction (HMI) systems due to their non-invasive acquisition and real-time responsiveness, particularly in neurorehabilitation and prosthetic control. However, existing deep learning approaches often struggle to capture both fine-grained local patterns and long-range spatio-temporal dependencies within these signals, which limits classification performance. To address these challenges, we propose a lightweight deep learning framework that integrates adaptive spatial attention with multi-scale temporal feature extraction for end-to-end EEG and sEMG signal decoding. The architecture includes two core components: (1) an adaptive attention mechanism that dynamically reweights multi-channel time-series features based on spatial relevance, and (2) a multi-scale convolutional module that captures diverse temporal patterns through parallel convolutional filters. The proposed method achieves classification accuracies of 79.47% on the BCI-IV 2a EEG dataset (9 subjects, 22 channels) for motor intent decoding and 85.87% on the NinaPro DB2 sEMG dataset (40 subjects, 12 channels) for gesture recognition. Ablation studies confirm the effectiveness of each module, while comparative evaluations demonstrate that the proposed framework outperforms existing state-of-the-art methods across all tested scenarios. Together, these results demonstrate that our model not only achieves strong performance but also maintains a lightweight and resource-efficient design for EEG and sEMG decoding.
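The two core modules map onto a few lines each: an attention layer that reweights electrode channels from their summary statistics, then parallel convolutions with different kernel sizes concatenated along the feature axis. Channel counts and kernel sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttnMultiScale(nn.Module):
    def __init__(self, in_ch=22, out_ch=16, kernels=(7, 31, 63)):
        super().__init__()
        self.attn = nn.Linear(in_ch, in_ch)     # one score per electrode
        self.convs = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernels])

    def forward(self, x):                       # x: (batch, channels, time)
        a = torch.sigmoid(self.attn(x.mean(dim=2)))  # (batch, channels)
        x = x * a[:, :, None]                   # adaptive spatial reweighting
        return torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)

x = torch.randn(8, 22, 1000)                    # BCI-IV 2a-style: 22 channels
print(SpatialAttnMultiScale()(x).shape)         # torch.Size([8, 48, 1000])
```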

41 pages, 2631 KB  
Systematic Review
Brain-Computer Interfaces and AI Segmentation in Neurosurgery: A Systematic Review of Integrated Precision Approaches
by Sayantan Ghosh, Padmanabhan Sindhujaa, Dinesh Kumar Kesavan, Balázs Gulyás and Domokos Máthé
Surgeries 2025, 6(3), 50; https://doi.org/10.3390/surgeries6030050 - 26 Jun 2025
Cited by 2 | Viewed by 1836
Abstract
Background: BCI and AI-driven image segmentation are revolutionizing precision neurosurgery by enhancing surgical accuracy, reducing human error, and improving patient outcomes. Methods: This systematic review explores the integration of AI techniques—particularly DL and CNNs—with neuroimaging modalities such as MRI, CT, EEG, and ECoG for automated brain mapping and tissue classification. Eligible clinical and computational studies, primarily published between 2015 and 2025, were identified via PubMed, Scopus, and IEEE Xplore. The review follows PRISMA guidelines and is registered with the OSF (registration number: J59CY). Results: AI-based segmentation methods have demonstrated Dice similarity coefficients exceeding 0.91 in glioma boundary delineation and tumor segmentation tasks. Concurrently, BCI systems leveraging EEG and SSVEP paradigms have achieved information transfer rates surpassing 22.5 bits/min, enabling high-speed neural decoding with sub-second latency. We critically evaluate real-time neural signal processing pipelines and AI-guided surgical robotics, emphasizing clinical performance and architectural constraints. Integrated systems improve targeting precision and postoperative recovery across select neurosurgical applications. Conclusions: This review consolidates recent advancements in BCI and AI-driven medical imaging, identifies barriers to clinical adoption—including signal reliability, latency bottlenecks, and ethical uncertainties—and outlines research pathways essential for realizing closed-loop, intelligent neurosurgical platforms.
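The Dice similarity coefficient quoted above is twice the overlap of the two masks divided by the sum of their sizes; a small reference implementation over binary segmentation masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64)); a[16:48, 16:48] = 1     # predicted tumor mask
b = np.zeros((64, 64)); b[20:52, 16:48] = 1     # ground-truth mask
print(round(dice(a, b), 3))                     # 0.875
```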

31 pages, 3895 KB  
Article
Enhanced Pilot Attention Monitoring: A Time-Frequency EEG Analysis Using CNN–LSTM Networks for Aviation Safety
by Quynh Anh Nguyen, Nam Anh Dao and Long Nguyen
Information 2025, 16(6), 503; https://doi.org/10.3390/info16060503 - 17 Jun 2025
Viewed by 568
Abstract
Despite significant technological advancements in aviation safety systems, human-operator condition monitoring remains a critical challenge, with more than 75% of aircraft incidents stemming from attention-related perceptual failures. This study addresses a fundamental question in sensor-based condition monitoring: how can temporal- and frequency-domain EEG sensor data be optimally integrated to detect precursors of system failure in human–machine interfaces? We propose a three-stage diagnostic framework that mirrors industrial condition monitoring approaches. First, raw EEG sensor signals undergo preprocessing into standardized one-second epochs. Second, a novel hybrid feature-extraction methodology combines time- and frequency-domain features to create comprehensive sensor signatures of neural states. Finally, our dual-architecture CNN–LSTM model processes spatial patterns via CNNs while capturing temporal degradation signals via LSTMs, enabling robust classification in noisy operational environments. Our contributions include (1) a multimodal data fusion approach for EEG sensors that provides a more comprehensive representation of operator conditions, and (2) an artificial intelligence architecture that balances spatial and temporal analysis for the predictive maintenance of attention states. When validated on aviation-related EEG datasets, our condition monitoring system achieved significantly higher diagnostic accuracy across various noise conditions compared to existing approaches. The practical applications extend beyond theoretical improvement, offering a pathway to implement more reliable human–machine interface monitoring in critical systems, potentially preventing catastrophic failures by detecting condition anomalies before they propagate through the system.
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
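The dual architecture reduces to a familiar skeleton: a CNN summarizes each one-second epoch spatially, and an LSTM carries temporal context across the epoch sequence. All sizes below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CnnLstm(nn.Module):
    def __init__(self, ch=19, feat=32, classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(ch, feat, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))            # one feature vector per epoch
        self.lstm = nn.LSTM(feat, 64, batch_first=True)
        self.head = nn.Linear(64, classes)      # attentive vs. degraded state

    def forward(self, x):                       # x: (batch, epochs, ch, samples)
        b, e = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).squeeze(-1).view(b, e, -1)
        out, _ = self.lstm(f)                   # temporal context across epochs
        return self.head(out[:, -1])            # classify from the last state

x = torch.randn(4, 10, 19, 128)                 # 10 one-second epochs at 128 Hz
print(CnnLstm()(x).shape)                       # torch.Size([4, 2])
```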

15 pages, 329 KB  
Article
Classification of Electroencephalography Motor Execution Signals Using a Hybrid Neural Network Based on Instantaneous Frequency and Amplitude Obtained via Empirical Wavelet Transform
by Patryk Zych, Kacper Filipek, Agata Mrozek-Czajkowska and Piotr Kuwałek
Sensors 2025, 25(11), 3284; https://doi.org/10.3390/s25113284 - 23 May 2025
Viewed by 724
Abstract
Brain–computer interfaces (BCIs) have garnered significant interest due to their potential to enable communication and control for individuals with limited or no ability to interact with technologies in a conventional way. By applying electrical signals generated by brain cells, BCIs eliminate the need for physical interaction with external devices. This study investigates the performance of traditional classifiers—specifically, linear discriminant analysis (LDA) and support vector machines (SVMs)—in comparison with a hybrid neural network model for EEG-based gesture classification. The dataset comprised EEG recordings of seven distinct gestures performed by 33 participants. Binary classification tasks were conducted using both raw windowed EEG signals and features extracted via bandpower and the empirical wavelet transform (EWT). The hybrid neural network architecture demonstrated higher classification accuracy compared to the standard classifiers. These findings suggest that combining feature extraction with deep learning models offers a promising approach for improving EEG gesture recognition in BCI systems.
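As a hedged sketch of the classical-baseline side of this comparison: instantaneous amplitude and frequency are computed here with the Hilbert transform (a stand-in for the empirical wavelet transform, which is not reproduced) and fed to LDA and SVM classifiers on synthetic data.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

FS = 250                                        # assumed sampling rate (Hz)

def inst_features(sig):
    analytic = hilbert(sig)
    amp = np.abs(analytic)                      # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) * FS / (2 * np.pi)    # instantaneous frequency
    return [amp.mean(), amp.std(), freq.mean(), freq.std()]

rng = np.random.default_rng(1)
X = np.array([inst_features(rng.standard_normal(FS)) for _ in range(80)])
y = rng.integers(0, 2, size=80)                 # binary gesture labels
for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    print(type(clf).__name__, clf.fit(X, y).score(X, y))
```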

17 pages, 4523 KB  
Article
Predicting Activity in Brain Areas Associated with Emotion Processing Using Multimodal Behavioral Signals
by Lahoucine Kdouri, Youssef Hmamouche, Amal El Fallah Seghrouchni and Thierry Chaminade
Multimodal Technol. Interact. 2025, 9(4), 31; https://doi.org/10.3390/mti9040031 - 31 Mar 2025
Viewed by 1283
Abstract
Artificial agents are expected to increasingly interact with humans and to demonstrate multimodal adaptive emotional responses. Such social integration requires both perception and production mechanisms, thus enabling a more realistic approach to emotional alignment than existing systems. Indeed, existing emotion recognition methods rely on behavioral signals, predominantly facial expressions, as well as non-invasive brain recordings, such as Electroencephalograms (EEGs) and functional Magnetic Resonance Imaging (fMRI), to identify humans’ emotions, but accurate labeling remains a challenge. This paper introduces a novel approach examining how behavioral and physiological signals can be used to predict activity in emotion-related regions of the brain. To this end, we propose a multimodal deep learning network that processes two categories of signals recorded alongside brain activity during conversations: two behavioral signals (video and audio) and one physiological signal (blood pulse). Our network enables (1) the prediction of brain activity from these multimodal inputs, and (2) the assessment of our model’s performance depending on the nature of the interlocutor (human or robot) and the brain region of interest. Results demonstrate that the proposed architecture outperforms existing models in the anterior insula and hypothalamus regions, for interactions with either a human or a robot. An ablation study evaluating subsets of input modalities indicates that brain activity prediction degrades when one or two modalities are omitted; however, it also reveals that the physiological signal (blood pulse) alone achieves prediction levels similar to those of the full model, further underscoring the importance of somatic markers in the central nervous system’s processing of social emotions.
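A late-fusion skeleton of the kind of network described: one encoder per modality (video, audio, blood pulse), concatenated and regressed onto a region's activity. Feature dimensions are assumptions; the real model operates on richer temporal inputs.

```python
import torch
import torch.nn as nn

class FusionRegressor(nn.Module):
    def __init__(self, dims=(128, 40, 8)):      # video, audio, blood-pulse features
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 32), nn.ReLU()) for d in dims])
        self.head = nn.Linear(32 * len(dims), 1)  # predicted regional activity

    def forward(self, mods):                    # list of (batch, dim) tensors
        z = torch.cat([enc(m) for enc, m in zip(self.encoders, mods)], dim=1)
        return self.head(z).squeeze(-1)

mods = [torch.randn(16, d) for d in (128, 40, 8)]
print(FusionRegressor()(mods).shape)            # torch.Size([16])
```

Zeroing one of the input tensors approximates the modality-ablation comparison described above.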

22 pages, 20778 KB  
Article
Enhancing EEG-Based Emotion Detection with Hybrid Models: Insights from DEAP Dataset Applications
by Badr Mouazen, Ayoub Benali, Nouh Taha Chebchoub, El Hassan Abdelwahed and Giovanni De Marco
Sensors 2025, 25(6), 1827; https://doi.org/10.3390/s25061827 - 14 Mar 2025
Cited by 1 | Viewed by 3624
Abstract
Emotion detection using electroencephalogram (EEG) signals is a rapidly evolving field with significant applications in mental health diagnostics, affective computing, and human–computer interaction. However, existing approaches often face challenges related to accuracy, interpretability, and real-time feasibility. This study leverages the DEAP dataset to explore and evaluate various machine learning and deep learning techniques for emotion recognition, aiming to address these challenges. To ensure reproducibility, we have made our code publicly available. Extensive experimentation was conducted using K-Nearest Neighbors (KNN), Support Vector Machines (SVMs), Decision Tree (DT), Random Forest (RF), Bidirectional Long Short-Term Memory (BiLSTM), Gated Recurrent Units (GRUs), Convolutional Neural Networks (CNNs), autoencoders, and transformers. Our hybrid approach achieved a peak accuracy of 85–95%, demonstrating the potential of advanced neural architectures in decoding emotional states from EEG signals. While this accuracy is slightly lower than some state-of-the-art methods, our approach offers advantages in computational efficiency and real-time applicability, making it suitable for practical deployment. Furthermore, we employed SHapley Additive exPlanations (SHAP) to enhance model interpretability, offering deeper insights into the contribution of individual features to classification decisions. A comparative analysis with existing methods highlights the novelty and advantages of our approach, particularly in terms of accuracy, interpretability, and computational efficiency. A key contribution of this study is the development of a real-time emotion detection system, which enables instantaneous classification of emotional states from EEG signals. We provide a detailed analysis of its computational efficiency and compare it with existing methods, demonstrating its feasibility for real-world applications. Our findings highlight the effectiveness of hybrid deep learning models in improving accuracy, interpretability, and real-time processing capabilities. These contributions have significant implications for applications in neurofeedback, mental health monitoring, and affective computing. Future work will focus on expanding the dataset, testing the system on a larger and more diverse participant pool, and further optimizing the system for broader clinical and industrial applications.
(This article belongs to the Section Biomedical Sensors)
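The SHAP step is straightforward to demonstrate on a tree model; the sketch below computes per-feature attributions for a synthetic regression stand-in (the study applies SHAP to its own hybrid classifiers, which are not reproduced here).

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))               # e.g., five band-power features
y = X[:, 2] + 0.5 * X[:, 0]                     # synthetic arousal score

model = RandomForestRegressor(n_estimators=50).fit(X, y)
values = shap.TreeExplainer(model).shap_values(X)  # (n_samples, n_features)
print(np.abs(values).mean(axis=0))              # mean |SHAP| per feature
```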
