Sensing Brain Activity Using EEG and Machine Learning

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (20 March 2024) | Viewed by 8846

Special Issue Editor


Guest Editor
Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, 6000 Koper, Slovenia
Interests: medical image processing and analysis; computer vision; EEG signal analysis; machine learning; deep learning; system dynamics analysis

Special Issue Information

Dear Colleagues,

Understanding brain activity is challenging due to its high structural and functional complexity, as well as high inter- and intra-subject variability. One of the most promising approaches to sense and study it is in the spatiotemporal domain using electroencephalography (EEG) and machine learning (ML) techniques. The applied ML techniques address the specifics of EEG data and sensed neural processes, including noise, artefacts, volume conduction, brain connectivity, limited spatial resolution, and high temporal resolution. This Special Issue aims to collect papers presenting recent research on brain activity sensing, analysis, and recognition using machine learning techniques on EEG data, including but not limited to:

  • Feature-based ML approaches;
  • Artificial neural network architectures;
  • Reinforcement learning;
  • System dynamics analysis;
  • Statistical approaches in modelling;
  • Applications of graph theory.

The Special Issue also welcomes various applications of machine learning to EEG analysis, such as:

  • Clinical diagnostics;
  • Emotion recognition;
  • Attention recognition;
  • Brain activity classification;
  • Brain–computer interfaces (BCI);
  • Artefact removal;
  • Brain connectivity analysis.

Dr. Peter Rogelj
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • EEG
  • machine learning
  • classification
  • feature extraction
  • artificial neural networks
  • deep learning

Published Papers (8 papers)


Editorial

Jump to: Research

2 pages, 128 KiB  
Editorial
Editorial for the Special Issue “Sensing Brain Activity Using EEG and Machine Learning”
by Peter Rogelj
Sensors 2024, 24(8), 2535; https://doi.org/10.3390/s24082535 - 15 Apr 2024
Viewed by 420
Abstract
Sensing brain activity to reveal, analyze and recognize brain activity patterns has become a topic of great interest and ongoing research [...] Full article
(This article belongs to the Special Issue Sensing Brain Activity Using EEG and Machine Learning)

Research

Jump to: Editorial

13 pages, 5765 KiB  
Article
Automated Seizure Detection Based on State-Space Model Identification
by Zhuo Wang, Michael R. Sperling, Dale Wyeth and Allon Guez
Sensors 2024, 24(6), 1902; https://doi.org/10.3390/s24061902 - 16 Mar 2024
Viewed by 652
Abstract
In this study, we developed a machine learning model for automated seizure detection using system identification techniques on EEG recordings. System identification builds mathematical models from a time series signal and uses a small number of parameters to represent the entirety of time domain signal epochs. Such parameters were used as features for the classifiers in our study. We analyzed 69 seizure and 55 non-seizure recordings and an additional 10 continuous recordings from Thomas Jefferson University Hospital, alongside a larger dataset from the CHB-MIT database. By dividing EEGs into epochs (1 s, 2 s, 5 s, and 10 s) and employing fifth-order state-space dynamic systems for feature extraction, we tested various classifiers, with the decision tree and 1 s epochs achieving the highest performance: 96.0% accuracy, 92.7% sensitivity, and 97.6% specificity based on the Jefferson dataset. Moreover, as the epoch length increased, the accuracy dropped to 94.9%, with a decrease in sensitivity to 91.5% and specificity to 96.7%. Accuracy for the CHB-MIT dataset was 94.1%, with 87.6% sensitivity and 97.5% specificity. The subject-specific cases showed improved results, with an average of 98.3% accuracy, 97.4% sensitivity, and 98.4% specificity. The average false detection rate per hour was 0.5 ± 0.28 in the 10 continuous EEG recordings. This study suggests that using a system identification technique, specifically, state-space modeling, combined with machine learning classifiers, such as decision trees, is an effective and efficient approach to automated seizure detection. Full article
(This article belongs to the Special Issue Sensing Brain Activity Using EEG and Machine Learning)
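The feature-extraction idea above, fitting a compact dynamic model to each short epoch and using its parameters as classifier features, can be sketched as follows. This is an illustrative simplification that fits an autoregressive model by least squares in place of full fifth-order state-space identification; `ar_features` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def ar_features(epoch, order=5):
    """Fit an order-`order` autoregressive model to a 1-D EEG epoch by least
    squares and return its coefficients as a compact feature vector (a
    simplified stand-in for the paper's fifth-order state-space models)."""
    x = np.asarray(epoch, dtype=float)
    # Lagged design matrix: each row holds the `order` previous samples.
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
epoch = rng.standard_normal(256)   # one 1-s epoch at an assumed 256 Hz
feats = ar_features(epoch)         # 5 parameters summarize the whole epoch
```

The resulting low-dimensional vectors (a handful of parameters per channel and epoch) are what a shallow classifier such as a decision tree would then be trained on.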

21 pages, 2870 KiB  
Article
Integrating EEG and Machine Learning to Analyze Brain Changes during the Rehabilitation of Broca’s Aphasia
by Vanesa Močilnik, Veronika Rutar Gorišek, Jakob Sajovic, Janja Pretnar Oblak, Gorazd Drevenšek and Peter Rogelj
Sensors 2024, 24(2), 329; https://doi.org/10.3390/s24020329 - 5 Jan 2024
Viewed by 1074
Abstract
The fusion of electroencephalography (EEG) with machine learning is transforming rehabilitation. Our study introduces a neural network model proficient in distinguishing pre- and post-rehabilitation states in patients with Broca’s aphasia, based on brain connectivity metrics derived from EEG recordings during verbal and spatial working memory tasks. The Granger causality (GC), phase-locking value (PLV), weighted phase-lag index (wPLI), mutual information (MI), and complex Pearson correlation coefficient (CPCC) across the delta, theta, and low- and high-gamma bands were used (excluding GC, which spanned the entire frequency spectrum). Across eight participants, employing leave-one-out validation for each, we evaluated the intersubject prediction accuracy across all connectivity methods and frequency bands. GC, MI theta, and PLV low-gamma emerged as the top performers, achieving 89.4%, 85.8%, and 82.7% accuracy in classifying verbal working memory task data. Intriguingly, measures designed to eliminate volume conduction exhibited the poorest performance in predicting rehabilitation-induced brain changes. This observation, coupled with variations in model performance across frequency bands, implies that different connectivity measures capture distinct brain processes involved in rehabilitation. The results of this paper contribute to current knowledge by presenting a clear strategy of utilizing limited data to achieve valid and meaningful results of machine learning on post-stroke rehabilitation EEG data, and they show that the differences in classification accuracy likely reflect distinct brain processes underlying rehabilitation after stroke. Full article
(This article belongs to the Special Issue Sensing Brain Activity Using EEG and Machine Learning)
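Of the connectivity measures listed above, the phase-locking value (PLV) is the simplest to state: the magnitude of the mean unit phasor of the instantaneous phase difference between two channels. A minimal sketch, using an FFT-based Hilbert transform so it needs only NumPy (illustrative, not the authors' pipeline):

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT-based Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value between two equal-length signals, in [0, 1]."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 1, 500, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)   # same frequency, constant phase lag
locked = plv(a, b)                     # constant lag -> PLV close to 1
```

Note that PLV, unlike the weighted phase-lag index, does not discount zero-lag coupling, which is exactly why the volume-conduction question raised in the abstract matters when comparing these measures.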

13 pages, 5950 KiB  
Article
Temporal Electroencephalography Traits Dissociating Tactile Information and Cross-Modal Congruence Effects
by Yusuke Ozawa and Natsue Yoshimura
Sensors 2024, 24(1), 45; https://doi.org/10.3390/s24010045 - 21 Dec 2023
Viewed by 762
Abstract
To explore whether temporal electroencephalography (EEG) traits can dissociate the physical properties of touching objects and the congruence effects of cross-modal stimuli, we applied a machine learning approach to two major temporal domain EEG traits, event-related potential (ERP) and somatosensory evoked potential (SEP), for each anatomical brain region. During a task in which participants had to identify one of two material surfaces as a tactile stimulus, a photo image that matched (‘congruent’) or mismatched (‘incongruent’) the material they were touching was given as a visual stimulus. Electrical stimulation was applied to the median nerve of the right wrist to evoke SEP while the participants touched the material. The classification accuracies using ERP extracted in reference to the tactile/visual stimulus onsets were significantly higher than chance levels in several regions in both congruent and incongruent conditions, whereas SEP extracted in reference to the electrical stimulus onsets resulted in no significant classification accuracies. Further analysis based on current source signals estimated using EEG revealed brain regions showing significant accuracy across conditions, suggesting that tactile-based object recognition information is encoded in the temporal domain EEG trait and broader brain regions, including the premotor, parietal, and somatosensory areas. Full article
(This article belongs to the Special Issue Sensing Brain Activity Using EEG and Machine Learning)
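The ERP analysis described above starts by cutting fixed windows around each stimulus onset before classification. A minimal epoching sketch (`epochs_around` is a hypothetical helper with assumed window defaults, not the authors' code):

```python
import numpy as np

def epochs_around(signal, onsets, fs, tmin=-0.1, tmax=0.4):
    """Cut fixed windows around stimulus-onset samples, the usual first step
    for ERP-based classification. `tmin`/`tmax` are in seconds relative to
    each onset (assumed values, for illustration only)."""
    lo, hi = int(tmin * fs), int(tmax * fs)
    return np.stack([signal[s + lo:s + hi] for s in onsets])

fs = 500                                   # assumed sampling rate
sig = np.random.default_rng(1).standard_normal(5000)
eps = epochs_around(sig, onsets=[600, 1600, 2600], fs=fs)
# eps now holds one 0.5-s window per stimulus, ready for feature extraction
```

The same epoching step, referenced to tactile/visual onsets versus electrical-stimulus onsets, is what distinguishes the ERP and SEP analyses compared in the abstract.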

17 pages, 1079 KiB  
Article
Classification of Targets and Distractors in an Audiovisual Attention Task Based on Electroencephalography
by Steven Mortier, Renata Turkeš, Jorg De Winne, Wannes Van Ransbeeck, Dick Botteldooren, Paul Devos, Steven Latré, Marc Leman and Tim Verdonck
Sensors 2023, 23(23), 9588; https://doi.org/10.3390/s23239588 - 3 Dec 2023
Viewed by 862
Abstract
Within the broader context of improving interactions between artificial intelligence and humans, the question has arisen regarding whether auditory and rhythmic support could increase attention for visual stimuli that do not stand out clearly from an information stream. To this end, we designed an experiment inspired by pip-and-pop but more appropriate for eliciting attention and P3a-event-related potentials (ERPs). In this study, the aim was to distinguish between targets and distractors based on the subject’s electroencephalography (EEG) data. We achieved this objective by employing different machine learning (ML) methods for both individual-subject (IS) and cross-subject (CS) models. Finally, we investigated which EEG channels and time points were used by the model to make its predictions using saliency maps. We were able to successfully perform the aforementioned classification task for both the IS and CS scenarios, reaching classification accuracies up to 76%. In accordance with the literature, the model primarily used the parietal–occipital electrodes between 200 ms and 300 ms after the stimulus to make its prediction. The findings from this research contribute to the development of more effective P300-based brain–computer interfaces. Furthermore, they validate the EEG data collected in our experiment. Full article
(This article belongs to the Special Issue Sensing Brain Activity Using EEG and Machine Learning)
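Since the saliency maps pointed to parietal-occipital electrodes at 200-300 ms, a classic hand-crafted counterpart is the mean amplitude in that post-stimulus window. A sketch of such a feature, not the authors' ML models (`window_mean_feature` and the array shapes are assumptions):

```python
import numpy as np

def window_mean_feature(epochs, fs, t0=0.2, t1=0.3):
    """Mean amplitude per trial and channel in the 200-300 ms post-stimulus
    window -- a classic P300-style feature matching the interval the saliency
    maps highlighted (illustrative only)."""
    a, b = int(t0 * fs), int(t1 * fs)
    return epochs[..., a:b].mean(axis=-1)

fs = 500                                                     # assumed rate
epochs = np.random.default_rng(2).standard_normal((40, 32, 350))
feats = window_mean_feature(epochs, fs)   # one feature per trial and channel
```

Feeding such window means from the parietal-occipital channels into a linear classifier is a common baseline against which deep models for P300 detection are judged.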

14 pages, 13284 KiB  
Article
EEG-Based Emotion Recognition with Consideration of Individual Difference
by Yuxiao Xia and Yinhua Liu
Sensors 2023, 23(18), 7749; https://doi.org/10.3390/s23187749 - 8 Sep 2023
Viewed by 1243
Abstract
Electroencephalograms (EEGs) are often used for emotion recognition through trained EEG-to-emotion models. The training samples are EEG signals recorded while participants receive external induction, labeled with the corresponding emotions. Individual differences, such as emotion degree and time response, exist under the same external emotional inductions. These differences can lead to a decrease in the accuracy of emotion classification models in practical applications. The brain-based emotion recognition model proposed in this paper is able to sufficiently consider these individual differences. The proposed model comprises an emotion classification module and an individual difference module (IDM). The emotion classification module captures the spatial and temporal features of the EEG data, while the IDM introduces personalized adjustments to specific emotional features by accounting for participant-specific variations as a form of interference. This approach aims to enhance the classification performance of EEG-based emotion recognition for diverse participants. The results of our comparative experiments indicate that the proposed method obtains a maximum accuracy of 96.43% for binary classification on DEAP data. Furthermore, it performs better in scenarios with significant individual differences, where it reaches a maximum accuracy of 98.92%. Full article
(This article belongs to the Special Issue Sensing Brain Activity Using EEG and Machine Learning)

21 pages, 9671 KiB  
Article
A Modified Aquila-Based Optimized XGBoost Framework for Detecting Probable Seizure Status in Neonates
by Khondoker Mirazul Mumenin, Prapti Biswas, Md. Al-Masrur Khan, Ali Saleh Alammary and Abdullah-Al Nahid
Sensors 2023, 23(16), 7037; https://doi.org/10.3390/s23167037 - 9 Aug 2023
Cited by 2 | Viewed by 1090
Abstract
Electroencephalography (EEG) is increasingly being used in pediatric neurology and provides opportunities to diagnose various brain illnesses more accurately and precisely. It is thought to be one of the most effective tools for identifying newborn seizures, especially in Neonatal Intensive Care Units (NICUs). However, EEG interpretation is time-consuming and requires specialists with extensive training. It can be challenging and time-consuming to distinguish between seizures since they might have a wide range of clinical characteristics and etiologies. Technological advancements such as the Machine Learning (ML) approach for the rapid and automated diagnosis of newborn seizures have increased in recent years. This work proposes a novel optimized ML framework to overcome the limitations of conventional seizure detection techniques. Moreover, we modified a novel meta-heuristic optimization algorithm (MHOA), named Aquila Optimization (AO), to develop an optimized model to make our proposed framework more efficient and robust. To conduct a comparison-based study, we also examined the performance of our optimized model with that of other classifiers, including the Decision Tree (DT), Random Forest (RF), and Gradient Boosting Classifier (GBC). This framework was validated on a public dataset of Helsinki University Hospital, where EEG signals were collected from 79 neonates. Our proposed model achieved encouraging results, showing a 93.38% Accuracy Score, 93.9% Area Under the Curve (AUC), 92.72% F1 score, 65.17% Kappa, 93.38% sensitivity, and 77.52% specificity. Thus, it outperforms most of the present shallow ML architectures by showing improvements in accuracy and AUC scores. We believe that these results indicate a major advance in the detection of newborn seizures, which will benefit the medical community by increasing the reliability of the detection process. Full article
(This article belongs to the Special Issue Sensing Brain Activity Using EEG and Machine Learning)
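The role of the modified Aquila Optimizer above is to search the classifier's hyperparameter space against a validation score. The loop structure can be sketched with a plain random search standing in for the metaheuristic and a toy objective standing in for cross-validated loss; the real AO adds guided exploration/exploitation phases, and none of the names below come from the paper:

```python
import numpy as np

def random_search(objective, bounds, iters=200, seed=0):
    """Toy stand-in for a metaheuristic hyperparameter optimizer: sample
    candidate vectors uniformly inside `bounds`, keep the best scorer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best_x, best_f = None, np.inf
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Toy objective standing in for cross-validated classifier loss over a
# (learning-rate-like, depth-like) pair of hyperparameters.
obj = lambda x: (x[0] - 0.1) ** 2 + (x[1] - 6.0) ** 2
best_x, best_f = random_search(obj, bounds=[(0.01, 0.3), (2, 10)])
```

Whatever the search strategy, the key design choice is the same: the optimizer only ever sees a scalar validation score, so the classifier (XGBoost in the paper) can be swapped without changing the outer loop.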

21 pages, 515 KiB  
Article
Motor Imagery Classification Based on EEG Sensing with Visual and Vibrotactile Guidance
by Luka Batistić, Diego Sušanj, Domagoj Pinčić and Sandi Ljubic
Sensors 2023, 23(11), 5064; https://doi.org/10.3390/s23115064 - 25 May 2023
Cited by 2 | Viewed by 1786
Abstract
Motor imagery (MI) is a technique of imagining the performance of a motor task without actually using the muscles. When employed in a brain–computer interface (BCI) supported by electroencephalographic (EEG) sensors, it can be used as a successful method of human–computer interaction. In this paper, the performance of six different classifiers, namely linear discriminant analysis (LDA), support vector machine (SVM), random forest (RF), and three classifiers from the family of convolutional neural networks (CNN), is evaluated using EEG MI datasets. The study investigates the effectiveness of these classifiers on MI, guided by a static visual cue, dynamic visual guidance, and a combination of dynamic visual and vibrotactile (somatosensory) guidance. The effect of filtering passband during data preprocessing was also investigated. The results show that the ResNet-based CNN significantly outperforms the competing classifiers on both vibrotactile and visually guided data when detecting different directions of MI. Preprocessing the data using low-frequency signal features proves to be a better solution to achieve higher classification accuracy. It has also been shown that vibrotactile guidance has a significant impact on classification accuracy, with the associated improvement particularly evident for architecturally simpler classifiers. These findings have important implications for the development of EEG-based BCIs, as they provide valuable insight into the suitability of different classifiers for different contexts of use. Full article
(This article belongs to the Special Issue Sensing Brain Activity Using EEG and Machine Learning)
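The preprocessing finding above, that low-frequency passbands helped, amounts to band-limiting each channel before feature extraction. A crude zero-phase band-pass via the FFT illustrates the idea (an assumed implementation, not the paper's filter; practical pipelines typically use IIR/FIR filters instead):

```python
import numpy as np

def fft_bandpass(x, fs, lo, hi):
    """Zero out FFT bins outside [lo, hi] Hz -- a crude band-pass, enough to
    illustrate restricting MI features to a low-frequency band."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, n=len(x))

fs = 250                                       # assumed sampling rate
t = np.arange(0, 2, 1 / fs)                    # 2 s of signal
low = np.sin(2 * np.pi * 4 * t)                # 4 Hz component (kept)
raw = low + 0.5 * np.sin(2 * np.pi * 40 * t)   # plus 40 Hz interference
clean = fft_bandpass(raw, fs, 0.5, 8.0)        # only the 4 Hz part survives
```

Comparing classifier accuracy on such low-passed data versus the broadband signal is exactly the passband experiment the abstract describes.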
