Search Results (576)

Search Parameters:
Keywords = neural–computer interfaces

43 pages, 4722 KB  
Article
Data-Driven Modeling and Coupled Simulation Method for Fuze Exterior Ballistic Dynamics
by Siyu Xin, Yongping Hao, Jiayi Zhang and Hui Zhang
Electronics 2026, 15(8), 1619; https://doi.org/10.3390/electronics15081619 - 13 Apr 2026
Viewed by 116
Abstract
To address the strong nonlinearity of aerodynamic loads during projectile exterior ballistic flight and the difficulty in accurately modeling fuze dynamic responses, this paper proposes a data-driven modeling and simulation method for fuze exterior ballistic dynamics. A high-fidelity aerodynamic database covering a range of Mach numbers and angles of attack is constructed based on CFD (Computational Fluid Dynamics) simulations. An MLP (Multilayer Perceptron) neural network is then employed to develop an aerodynamic surrogate model, enabling continuous representation of aerodynamic loads within the given sample space. The results show that, within the data coverage range, the proposed model is able to capture the nonlinear variation in aerodynamic parameters and shows improved prediction accuracy compared with the polynomial fitting method. Specifically, for typical aerodynamic parameters, the RMSE (Root Mean Square Error) is reduced from 5.758 to 0.223, the MAE (Mean Absolute Error) is reduced to 0.099, and the R2 (Coefficient of Determination) approaches 1. On this basis, the aerodynamic surrogate model is embedded into a six-degree-of-freedom projectile–fuze exterior ballistic dynamics model via the secondary development interface of ADAMS 2020 (Automated Dynamic Analysis of Mechanical Systems), enabling coupled simulation between aerodynamic loads and multibody dynamics. Comparison with firing table data indicates that, under typical operating conditions, the relative deviation of ballistic parameters is generally better than 94%, demonstrating that the proposed method can reasonably reproduce the projectile exterior ballistic characteristics. Furthermore, based on the coupled dynamics model, the dynamic response characteristics of the fuze moving body during the exterior ballistic phase are analyzed. The results indicate that the axial forward overload of the moving body increases significantly with the initial nutation angle, and the variation in the axial projection of gravity induced by nutation plays an important role in its transient response. The proposed approach provides a useful reference for the dynamic response analysis and safety evaluation of fuzes. Full article
(This article belongs to the Section Artificial Intelligence)
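
For readers who want a concrete picture of the surrogate-modeling step described above, the sketch below fits a small MLP regressor to synthetic (Mach number, angle of attack) → coefficient data and reports RMSE, MAE, and R². It is not the authors' code: the data-generating function, network size, and library (scikit-learn) are illustrative assumptions standing in for the CFD database.

```python
# Illustrative sketch (not the authors' code): fit an MLP surrogate that maps
# (Mach number, angle of attack) to an aerodynamic coefficient, then report
# RMSE / MAE / R^2 as in the abstract. Synthetic data stands in for the CFD database.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
mach = rng.uniform(0.5, 3.0, 2000)           # Mach number samples
alpha = rng.uniform(-10.0, 10.0, 2000)       # angle of attack [deg]
X = np.column_stack([mach, alpha])
# Hypothetical nonlinear "drag coefficient" used only to generate training data
y = 0.3 + 0.05 * mach**2 + 0.002 * alpha**2 + 0.01 * np.sin(mach * alpha)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

model = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                     max_iter=2000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)

pred = model.predict(scaler.transform(X_te))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAE :", mean_absolute_error(y_te, pred))
print("R2  :", r2_score(y_te, pred))
```
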
25 pages, 5507 KB  
Article
A Cheonjiin Layout Mental Speller: Developing a Simple and Cost-Effective EEG-Based Brain–Computer Interface System
by Ji Won Ahn, Gi Yeon Yu, Seong-Wan Kim, Young-Seek Seok, Kyung-Min Byun and Seung Ho Choi
Sensors 2026, 26(7), 2265; https://doi.org/10.3390/s26072265 - 7 Apr 2026
Viewed by 389
Abstract
A brain–computer interface (BCI) enables direct communication between the brain and external devices by translating neural activity into executable control commands. Among electroencephalography (EEG)-based paradigms, steady-state visual evoked potential (SSVEP) is widely adopted due to its high signal-to-noise ratio, robustness, and minimal calibration requirements. While SSVEP-based spellers have been extensively investigated, many existing systems rely on high-channel-density EEG recordings and computationally complex processing pipelines, and are primarily designed for alphabetic input structures. In this study, we present an SSVEP-based Korean speller that integrates the Cheonjiin keyboard layout to support intuitive composition of Hangul syllables. The proposed system adopts a simple configuration, employing only five visual stimulation frequencies (6.67–12 Hz) and two occipital EEG channels (O1 and O2), with real-time frequency recognition performed using canonical correlation analysis (CCA) within a 1.5 s sliding window. EEG signals were acquired at 200 Hz using an OpenBCI Ganglion board, band-pass filtered (5–45 Hz), and processed with harmonic sinusoidal reference templates for multi-frequency classification. The proposed interface generates five control commands (up, down, left, right, and select), enabling directional cursor navigation and character confirmation on a 4 × 4 virtual Cheonjiin keyboard. Experimental validation with three healthy participants demonstrated an average classification accuracy of approximately 82% and an information transfer rate (ITR) of 31.2 bits/min. Frequency-domain analysis revealed clear spectral peaks at the stimulation frequencies and their harmonics, indicating reliable SSVEP responses. The proposed system employs a simple two-channel configuration integrated with a Korean language-specific input structure, demonstrating that reliable SSVEP-based communication can be realized without computationally intensive algorithms or high-cost EEG acquisition systems. These findings demonstrate that reliable SSVEP-based communication can be achieved using a low-channel configuration without reliance on high-cost EEG equipment. Full article
(This article belongs to the Section Electronic Sensors)
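
The frequency-recognition step described above (CCA against harmonic sine/cosine reference templates inside a 1.5 s window) is a standard SSVEP technique; the sketch below shows one minimal way to implement it. The exact five stimulation frequencies, the two-channel layout, and the synthetic test window are assumptions, not the authors' implementation.

```python
# Minimal sketch of standard CCA-based SSVEP frequency recognition with harmonic
# reference templates (not the authors' implementation). Window length and sampling
# rate follow the abstract; the EEG array and frequency set are synthetic/assumed.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 200                                     # sampling rate [Hz]
WINDOW_S = 1.5                               # sliding-window length [s]
STIM_FREQS = [6.67, 7.5, 8.57, 10.0, 12.0]   # assumed 5 stimulation frequencies
N_HARMONICS = 2

def reference_templates(freq, n_samples, fs, n_harmonics):
    """Sine/cosine harmonics of one stimulation frequency, shape (n_samples, 2*n_harmonics)."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_window(eeg_window):
    """eeg_window: (n_samples, n_channels) band-passed EEG (e.g. O1, O2)."""
    scores = []
    for f in STIM_FREQS:
        Y = reference_templates(f, eeg_window.shape[0], FS, N_HARMONICS)
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg_window, Y)
        scores.append(abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]))
    return STIM_FREQS[int(np.argmax(scores))], scores

# Synthetic 1.5 s window dominated by a 10 Hz response on two channels
n = int(FS * WINDOW_S)
t = np.arange(n) / FS
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 10 * t + 0.3)])
eeg += 0.5 * np.random.default_rng(0).standard_normal(eeg.shape)
print(classify_window(eeg)[0])   # expected: 10.0
```
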

23 pages, 1751 KB  
Article
The Use of EEG in the Study of Emotional States and Visual Word Recognition with or Without Musical Stimulus in University Students with Dyslexia
by Pavlos Christodoulides, Dimitrios Peschos and Victoria Zakopoulou
Brain Sci. 2026, 16(4), 396; https://doi.org/10.3390/brainsci16040396 - 6 Apr 2026
Viewed by 328
Abstract
This study investigated neural oscillatory dynamics underlying visual word recognition in university students with dyslexia using a portable brain–computer interface (BCI) EEG system. The sample included university students with dyslexia (N = 12) and matched controls (N = 14) who completed auditory discrimination and visual word recognition tasks, with and without musical accompaniment. Through these experimental conditions, the researchers assessed (a) the cortical activation across frequency bands, (b) the modulatory effect of background music, and (c) the relationship between emotional states and brain activity. Results revealed significant group differences in oscillatory patterns, with reduced β- and γ-band activity in the left occipito-temporal cortex among participants with dyslexia, confirming disrupted temporal coordination in posterior reading networks. Compensatory right-hemisphere activation was observed, particularly under musical conditions, accompanied by increased α-band power and reduced δ activity, indicating enhanced attentional engagement and reduced cognitive fatigue. Emotional assessment using the DASS-21 revealed higher stress and anxiety scores in the dyslexic group, suggesting that affective factors may modulate oscillatory dynamics. The presence of background music appeared to attenuate these effects, supporting improved emotional regulation and cognitive focus. These findings demonstrate that dyslexia reflects a distributed disruption in neural synchrony and cross-frequency coupling, influenced by both cognitive and affective mechanisms. The integration of portable EEG technology with rhythmic auditory stimulation offers new insights into the neurophysiological and emotional aspects of dyslexia, highlighting the potential of rhythm- and music-based approaches for both diagnostic and therapeutic applications. Full article
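
The per-band power comparison the study reports (δ/θ/α/β/γ) is commonly computed from a Welch power spectral density; the snippet below is a generic sketch of that computation, with an assumed sampling rate and band edges and a synthetic signal rather than the study's recordings.

```python
# Illustrative band-power computation with Welch's method (scipy), the kind of
# per-band feature the study compares between groups. The EEG segment is synthetic;
# the sampling rate and band edges are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256                                     # assumed sampling rate [Hz]
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs=FS):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band
    return powers

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / FS)                 # 10 s synthetic occipito-temporal channel
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
print(band_powers(eeg))                      # alpha power should dominate
```
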

21 pages, 2193 KB  
Article
Electroencephalography-Based Brain–Computer Interface System Using Tongue Movement Imagery for Wheelchair Control
by Theerat Saichoo, Nannaphat Siribunyaphat, Bukhoree Sahoh, M. Arif Efendi and Yunyong Punsawad
Sensors 2026, 26(7), 2211; https://doi.org/10.3390/s26072211 - 2 Apr 2026
Viewed by 489
Abstract
Brain–computer interfaces (BCIs) are essential in assistive technologies to restore mobility in individuals with motor impairments. Although electroencephalography (EEG)-based brain-controlled wheelchairs have been extensively studied, most tongue-controlled systems rely on physical tongue movements, intraoral devices, or limited offline commands, which reduces the usability and comfort. This study introduces an EEG-based tongue motor imagery (MI) BCI for intuitive and entirely mental wheelchair control. By leveraging preserved motor function and the cortical representation of the tongue, the system enables natural four-directional control through imagined tongue movements. Six imagined tongue actions—touching the left and right mouth corners, the upper and lower lips, and producing left and right cheek bulges—were designed to elicit alpha-band event-related desynchronization (ERD) patterns over the tongue motor cortex. EEG data were collected from 15 healthy participants using a 14-channel consumer-grade EMOTIV EPOC X headset. Alpha-band ERD features were extracted and classified using linear discriminant analysis, support vector machine, naïve Bayes, and artificial neural networks (ANNs). Simpler command sets yielded the highest accuracy: two-class tasks achieved 76.19%, while the performance decreased with increasing task complexity. The ANN achieved superior results in multi-class scenarios. The proposed tongue MI method offers initial support for developing a BCI control strategy for assistive technology; however, further improvements in classification techniques, user training, and real-time validation are needed to improve the robustness and practical usability. Full article
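
As a rough illustration of the pipeline the abstract outlines (alpha-band event-related desynchronization features followed by a linear classifier), the sketch below computes per-channel ERD relative to a baseline window and cross-validates an LDA classifier on synthetic two-class trials. The sampling rate, window lengths, and data are assumptions, not the authors' recordings or code.

```python
# Generic alpha-band ERD feature + LDA pipeline sketch. ERD is the relative power
# change in a task window versus a baseline window; trials here are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 128                                              # assumed sampling rate [Hz]
b, a = butter(4, [8, 13], btype="bandpass", fs=FS)    # alpha band

def erd_features(trial, baseline_samples):
    """trial: (n_samples, n_channels). Returns per-channel ERD in percent."""
    alpha = filtfilt(b, a, trial, axis=0) ** 2        # instantaneous alpha power
    p_base = alpha[:baseline_samples].mean(axis=0)
    p_task = alpha[baseline_samples:].mean(axis=0)
    return (p_task - p_base) / p_base * 100.0

# Synthetic two-class dataset: 14 channels, 3 s trials (1 s baseline + 2 s task)
rng = np.random.default_rng(0)
n_trials, n_ch, n_samp, n_base = 60, 14, 3 * FS, FS
X, y = [], []
for k in range(n_trials):
    label = k % 2
    trial = rng.standard_normal((n_samp, n_ch))
    if label == 1:                                    # attenuate alpha in task window -> ERD
        trial[n_base:, :7] *= 0.6
    X.append(erd_features(trial, n_base))
    y.append(label)
X, y = np.array(X), np.array(y)

print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```
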

22 pages, 2650 KB  
Article
Design and Implementation of an Eyewear-Integrated Infrared Eye-Tracking System
by Carlo Pezzoli, Marco Brando Mario Paracchini, Daniele Maria Crafa, Marco Carminati, Luca Merigo, Tommaso Ongarello and Marco Marcon
Sensors 2026, 26(7), 2065; https://doi.org/10.3390/s26072065 - 26 Mar 2026
Viewed by 483
Abstract
Eye-tracking is a key enabling technology for smart eyewear, supporting hands-free interaction, accessibility, and context-aware human–machine interfaces under strict constraints on size, power consumption, and computational complexity. While camera-based solutions provide high accuracy, their integration into lightweight and low-power wearable platforms remains challenging. This paper is a feasibility study for the design, simulation, and experimental evaluation of a photosensor oculography (PSOG) eye-tracking system that is fully integrated into an eyewear frame, based on near-infrared (NIR) emitters and photodiodes. The proposed approach combines simulation-driven optimization of the optical constellation, a multi-frequency modulation and demodulation scheme enabling parallel source discrimination and robust ambient-light rejection, and a resource-efficient signal acquisition pipeline suitable for embedded implementation. Eye rotations in azimuth and elevation are inferred from differential reflectance patterns of ocular regions (sclera, iris, and pupil) using lightweight regression techniques, including shallow neural networks and Gaussian process regression, selected to balance estimation accuracy with computational and power constraints. System performance is evaluated using a controllable artificial-eye platform under defined geometric and illumination conditions, enabling repeatable assessment of gaze-estimation accuracy and algorithmic behavior. Sub-degree errors are achieved in this controlled setting, demonstrating the feasibility and potential effectiveness of the proposed architecture. Practical considerations for translation to real-world smart eyewear, including human-subject validation, anatomical variability, calibration strategies, and embedded deployment, are discussed and identified as directions for future work. By detailing the optical design methodology, modulation strategy, and algorithmic trade-offs, this work clarifies the distinct contributions of the proposed PSOG system relative to existing frame-integrated and camera-free eye-tracking approaches, and provides a foundation for further development toward wearable and augmented-reality applications. Full article
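
To make the regression stage concrete, the sketch below maps a vector of simulated photodiode readings to azimuth/elevation gaze angles with Gaussian process regression, one of the two lightweight regressors the abstract mentions. The sensor geometry and reflectance model are invented purely for illustration and do not reflect the actual optical constellation.

```python
# Toy sketch of the regression stage only (not the authors' system): photodiode
# readings -> gaze angles via Gaussian process regression on synthetic data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_photodiodes = 500, 6
gaze = rng.uniform(-20, 20, size=(n_samples, 2))      # [azimuth, elevation] in degrees

# Hypothetical sensor geometry: each photodiode responds to a different gaze direction
directions = rng.uniform(-1, 1, size=(n_photodiodes, 2))
readings = np.cos(np.deg2rad(gaze @ directions.T))
readings += 0.01 * rng.standard_normal((n_samples, n_photodiodes))

X_tr, X_te, y_tr, y_te = train_test_split(readings, gaze, test_size=0.2, random_state=0)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                               normalize_y=True).fit(X_tr, y_tr)
err = np.abs(gpr.predict(X_te) - y_te)
print("mean absolute gaze error [deg]:", err.mean(axis=0))
```
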

45 pages, 2643 KB  
Article
From Complexity Theory to Computational Wisdom: Enhancing EEG–Neurotransmitter Models Through Sophimatics for Brain Data Analysis
by Gerardo Iovane and Giovanni Iovane
Algorithms 2026, 19(3), 237; https://doi.org/10.3390/a19030237 - 22 Mar 2026
Viewed by 327
Abstract
The analysis of brain data through electroencephalography (EEG) has become essential in neuroscience, affective computing, and brain–computer interfaces. Recent work associates EEG features with artificial neurotransmitter models, simulating emotions and rational–emotional decision-making using complexity theory. However, current methods face limitations: (1) linear temporal representations lacking memory and anticipation, (2) limited contextual adaptation, (3) difficulty with paradoxical affective states, and (4) absence of ethical reasoning in decision-making. We present a framework based on Sophimatics, using complex time (t = t_real + i·t_imag, with t ∈ ℂ), where t_real represents chronology and t_imag encodes experiential dimensions including memory depth and anticipatory imagination. The Super Time Cognitive Neural Network (STCNN) architecture enables the parallel processing of objective time sequences and subjective cognitive experiences. Our Sophimatics-assisted EEG analysis achieves: (1) two-dimensional temporal coherence integrating past experiences and future projections, (2) context-sensitive adaptation via ontological knowledge graphs, (3) interpretable symbolic reasoning compatible with clinical psychology, (4) mechanisms for resolving affective paradoxes, and (5) ethical constraints ensuring value-based decision-making. Across three case studies (emotion recognition, meditation-induced transitions, and brain–computer interface decision support), integrated Sophimatics models outperform traditional machine learning (15–22% accuracy improvement) and complexity theory models (8–14% improvement), while offering greater cognitive richness and immunity to incomplete data. Results establish a post-generative AI framework with computational wisdom: relationally interactive, ethically informed, and temporally consistent with human cognitive and affective life. The framework outlines paths toward next-generation neuromorphic systems achieving genuine understanding beyond pattern recognition. Full article

32 pages, 7914 KB  
Article
UAV Target Detection and Tracking Integrating a Dynamic Brain–Computer Interface
by Jun Wang, Zanyang Li, Lirong Yan, Muhammad Imtiaz, Hang Li, Muhammad Usman Shoukat, Jianatihan Jinsihan, Benjun Feng, Yi Yang, Fuwu Yan, Shumo He and Yibo Wu
Drones 2026, 10(3), 222; https://doi.org/10.3390/drones10030222 - 21 Mar 2026
Viewed by 637
Abstract
To address the inherent limitations in the robustness of fully autonomous unmanned aerial vehicle (UAV) visual perception and the high cognitive workload associated with manual control, this paper proposes a human-in-the-loop brain–computer interface (BCI) control framework. The system integrates steady-state visual evoked potential (SSVEP) with deep learning techniques to create a spatio-temporally dynamic interaction paradigm, enabling real-time alignment between visual targets and frequency stimuli. At the perception level, an enhanced YOLOv11 network incorporating partial convolution (PConv) and shape intersection over union (Shape-IoU) loss is developed and coupled with the DeepSort multi-object tracking algorithm. This configuration ensures high-speed execution on edge computing platforms while maintaining stable stimulus coverage over dynamic targets, thus providing a robust visual induction environment for EEG decoding. At the neural decoding level, an enhanced task-discriminant component analysis (TDCA-V) algorithm is introduced to improve signal detection stability within non-stationary flight conditions. Experimental results demonstrate that within the predefined fixation task window, the system achieves 100% success in maintaining target identity (ID). The BCI system achieved an average command recognition accuracy of 91.48% within a 1.0 s time window, with the TDCA-V algorithm significantly outperforming traditional spatial filtering methods in dynamic scenarios. These findings demonstrate the system’s effectiveness in decoupling human cognitive intent from machine execution, providing a robust solution for human–machine collaborative control. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))

21 pages, 20926 KB  
Article
Research on Neuro-Acoustic Human–Machine Collaborative Inter-Domain Global Attention Fusion for Underwater Acoustic Target Recognition
by Jiaqi Zhang, Zhangsong Shi, Huihui Xu, Zhe Rao, Songxue Bai and Junfeng Gao
J. Mar. Sci. Eng. 2026, 14(6), 578; https://doi.org/10.3390/jmse14060578 - 20 Mar 2026
Viewed by 264
Abstract
To enhance the adaptability of current underwater acoustic target recognition technology in complex marine environments and improve the performance of human–machine collaborative operations, this study proposes a human–machine collaborative underwater acoustic target recognition method based on brain–computer interface technology. The method pairs acoustic signals with synchronized neural features recorded from the human brain and introduces an inter-domain global attention fusion module to explore how features at different depths can be fused, enhancing the joint feature representation by exploiting the potential complementary information between modalities. The experimental results show that the proposed network model enhances feature discrimination and yields a more stable recognition model. Compared to a single feature, the human–machine collaborative fusion-feature model exhibits stronger classification performance, with an average classification accuracy of 96.4444%. This method can alleviate the limitations of single-mode underwater acoustic target recognition, combine the complementary advantages of humans and machines to achieve effective human–machine cooperation, and provide new insights for future underwater recognition technology and marine research. Full article
(This article belongs to the Section Ocean Engineering)
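
The inter-domain attention fusion idea can be illustrated with plain scaled dot-product attention: acoustic feature tokens attend to synchronized EEG feature tokens, and the attended context is concatenated with the acoustic features before classification. This NumPy sketch is conceptual only; the dimensions, random projections, and fusion layout are assumptions rather than the paper's module.

```python
# Conceptual cross-modal attention fusion sketch (not the paper's module).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
acoustic = rng.standard_normal((8, 64))    # 8 acoustic feature tokens, 64-dim
eeg = rng.standard_normal((16, 64))        # 16 EEG feature tokens, 64-dim

# Hypothetical learned projections (random here) for queries, keys and values
Wq, Wk, Wv = (rng.standard_normal((64, 32)) for _ in range(3))
context, attn = scaled_dot_product_attention(acoustic @ Wq, eeg @ Wk, eeg @ Wv)

fused = np.concatenate([acoustic, context], axis=-1)        # joint human-machine feature
print(fused.shape, attn.shape)                              # (8, 96) (8, 16)
```
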

21 pages, 20116 KB  
Article
Hierarchical Data-Driven and PSO-Based Energy Management of Hybrid Energy Storage Systems in DC Microgrids
by Sujatha Banka and D. V. Ashok Kumar
Automation 2026, 7(2), 50; https://doi.org/10.3390/automation7020050 - 13 Mar 2026
Viewed by 367
Abstract
In the era of renewable-dominated grids, the integration of dynamic loads such as EV charging stations has increased operational challenges manifold, particularly in DC microgrids (DC MGs). Traditional battery-dominated grid energy management strategies (EMSs) are often not capable of handling fast transients due to the limitations of battery electrochemistry. To overcome this limitation, a hierarchical hybrid energy management strategy is proposed that combines data-driven and metaheuristic algorithms. The optimization framework consists of particle swarm optimization (PSO) and a neural network (NN) implemented in the central controller of a 4-bus ring-main DC MG. An efficient decoupling of fast and slow storage dynamics is performed, where the supercapacitor (SC) is optimized using the NN and the battery is optimized using PSO. This selective optimization reduces the computational overhead on the PSO, making it more feasible for real-time implementation. The hybrid PSO-Neural EMS framework is initially designed in MATLAB and further validated on a real-time hardware setup. Robustness of the control scheme is verified with various case studies, such as renewable intermittency, dynamic loading, and partial shading scenarios. Effective optimization of the SC is observed in both transient and heavy-load scenarios. LabVIEW interfacing is used for MODBUS-based interaction with PV emulators and DC-DC converters. Full article
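
As a generic illustration of the metaheuristic half of such an EMS, the sketch below runs a plain particle swarm optimization over a battery power-reference profile against a hand-made cost (tracking plus smoothness). The cost terms, bounds, and PSO hyperparameters are illustrative assumptions, not the paper's formulation.

```python
# Minimal PSO sketch in NumPy for optimizing a battery power reference profile.
import numpy as np

rng = np.random.default_rng(0)
HORIZON = 24                                  # one reference value per step
net_load = 2.0 * np.sin(np.linspace(0, 2 * np.pi, HORIZON)) + 1.0   # kW, synthetic

def cost(p_batt):
    grid = net_load - p_batt                  # power left for the grid/supercapacitor
    return np.sum(grid**2) + 0.1 * np.sum(np.diff(p_batt)**2)   # tracking + smoothness

n_particles, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration coefficients
x = rng.uniform(-3, 3, (n_particles, HORIZON))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, -3, 3)                 # respect battery power bounds
    costs = np.array([cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best cost:", pbest_cost.min())
```
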

24 pages, 50347 KB  
Article
Analysis Model of Load Transfer Method Based on Domain Decomposition Physics-Informed Neural Networks
by Xiaoru Jia, Keshen Zhang, Junwei Liu, Wenchang Shang, Yahui Zhang, Yuxing Ding and Guangyu Qi
Buildings 2026, 16(6), 1114; https://doi.org/10.3390/buildings16061114 - 11 Mar 2026
Viewed by 244
Abstract
The load transfer method is important for the settlement prediction of axially loaded piles, but in multi-layered complex soils, it lacks analytical solutions. Traditional numerical methods such as the finite element method suffer from strong dependence on mesh generation, time-consuming iterative calculations, and high computational costs for back-analysis. This paper proposes a load transfer analysis model based on a Domain Decomposition Physics-Informed Neural Network. A multi-subnet parallel architecture is adopted to simulate multi-layered soils, solving the problem of inter-layer stress–strain discontinuity through interface coupling and gradient continuity constraints; a non-dimensionalization system and a hard constraint mechanism are introduced to enhance training efficiency and physical consistency; and a two-stage analysis framework comprising surrogate model forward analysis and field data inversion is established. Numerical experimental results indicate that the forward analysis of this model is in high agreement with FEM simulation results, and computational efficiency is improved by six orders of magnitude; based on a small amount of field static load test data, multi-layer soil parameters are accurately inverted, achieving more precise pile settlement prediction than FEM. Comparative analysis validates the effectiveness of the domain decomposition multi-subnet over a single network, demonstrating extensibility to hyperbolic and exponential multi-soil constitutive models. Full article
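
The domain-decomposition idea (one subnetwork per soil layer, coupled by interface continuity terms) can be sketched compactly in PyTorch. The snippet below uses a simplified linear load-transfer law, u''(z) = k_i·u(z), with illustrative stiffnesses and boundary conditions; it shows how the PDE residual, boundary, and interface-continuity losses are assembled, not the paper's actual constitutive models, non-dimensionalization, or architecture.

```python
# Conceptual domain-decomposition PINN sketch (not the paper's model): two subnets,
# per-layer physics residuals, and interface continuity of the solution and its gradient.
import torch

torch.manual_seed(0)

def make_subnet():
    return torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

net1, net2 = make_subnet(), make_subnet()         # one subnet per soil layer
k1, k2 = 4.0, 16.0                                # illustrative layer stiffness parameters
z_if = torch.tensor([[0.5]], requires_grad=True)  # layer interface depth (normalized)
z_top = torch.zeros(1, 1)                         # pile head
z_tip = torch.tensor([[1.0]], requires_grad=True) # pile tip

def d_dz(u, z):
    return torch.autograd.grad(u, z, grad_outputs=torch.ones_like(u), create_graph=True)[0]

def pde_residual(net, z, k):                      # simplified load transfer: u'' = k * u
    u = net(z)
    return ((d_dz(d_dz(u, z), z) - k * u) ** 2).mean()

opt = torch.optim.Adam(list(net1.parameters()) + list(net2.parameters()), lr=1e-3)
for step in range(2000):
    z1 = (0.5 * torch.rand(64, 1)).requires_grad_(True)        # collocation, layer 1
    z2 = (0.5 + 0.5 * torch.rand(64, 1)).requires_grad_(True)  # collocation, layer 2

    loss_pde = pde_residual(net1, z1, k1) + pde_residual(net2, z2, k2)
    loss_bc = ((net1(z_top) - 1.0) ** 2).mean()                # prescribed head settlement
    loss_bc = loss_bc + (d_dz(net2(z_tip), z_tip) ** 2).mean() # zero gradient at the tip

    u1, u2 = net1(z_if), net2(z_if)                            # interface coupling:
    loss_if = ((u1 - u2) ** 2).mean()                          # continuity of settlement
    loss_if = loss_if + ((d_dz(u1, z_if) - d_dz(u2, z_if)) ** 2).mean()  # gradient continuity

    loss = loss_pde + loss_bc + 10.0 * loss_if
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final composite loss:", float(loss))
```
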

31 pages, 6044 KB  
Review
From Physical Replacement to Biological Symbiosis: Evolutionary Paradigms and Future Prospects of Auditory Reconstruction Brain–Computer Interfaces
by Li Shang, Juntao Liu, Shiya Lv, Longhui Jiang, Yu Liu, Sihan Hua, Jinping Luo and Xinxia Cai
Micromachines 2026, 17(3), 343; https://doi.org/10.3390/mi17030343 - 11 Mar 2026
Viewed by 826
Abstract
Auditory Brain–Computer Interfaces (BCIs) constitute the vital intervention for profound sensorineural hearing loss where the auditory nerve is compromised, yet their clinical efficacy remains restricted by substantial biological bottlenecks and limited spectral resolution. This review critically examines the evolutionary paradigm of auditory restoration, tracing the transition from static physical replacement to dynamic biological symbiosis. We systematically analyze physiological barriers across cochlear, brainstem, and cortical levels, elucidating how rigid interfaces provoke chronic tissue responses and why linear encoding protocols fail in distorted central tonotopy. The article synthesizes emerging methodologies in material science, demonstrating how soft, bio-integrated electronics and biomimetic topologies effectively address mechanical impedance mismatches. Furthermore, the trajectory of neural encoding is evaluated, highlighting the paradigm shift from traditional envelope extraction to deep learning-driven non-linear mapping and adaptive closed-loop neuromodulation. Finally, the potential of high-resolution modulation techniques, including optogenetics and sonogenetics, alongside AI-facilitated intent perception for active listening, is assessed. It is concluded that future neuroprostheses must evolve into symbiotic systems capable of seamlessly integrating with neural plasticity to enable high-fidelity cognitive reconstruction. Full article
(This article belongs to the Section B:Biology and Biomedicine)

9 pages, 924 KB  
Proceeding Paper
Multi-Class Electroencephalography Motor Imagery Classification of Limb Movements Using Convolutional Neural Network
by Yean Ling Chan, Yiqi Tew, Ching Pang Goh and Choon Kit Chan
Eng. Proc. 2026, 128(1), 20; https://doi.org/10.3390/engproc2026128020 - 11 Mar 2026
Viewed by 302
Abstract
We classified essential motor actions, dorsal and plantar flexion (lower limb), and arm movement (upper limb) from electroencephalography (EEG)-based brain–computer interface (BCI) signals, using a convolutional neural network (CNN). Unlike previous research that studied upper- or lower-limb motor imagery in isolation, we integrated both categories in a unified framework to explore a broader range of movements and applications. These motor actions are fundamental to daily activities such as walking, running, maintaining balance, lifting, reaching, and exercising. Upper limb EEG data were provided by INTI International University, whereas lower limb data were obtained from a publicly available dataset, recorded using 16-channel Emotiv and OpenBCI systems, respectively, each with distinct sampling rates and signal formats. To improve signal quality and facilitate joint model training, all signals were downsampled to 125 Hz, standardized to 16 channels, segmented using sliding windows, normalized via StandardScaler, and labelled according to action class. The processed data were used to train a CNN model configured with a kernel size of 3 and rectified linear unit activation functions. Training was terminated early at epoch 11 using an early stopping strategy, resulting in approximately 67% accuracy for both training and validation sets. Although this accuracy is moderate for deep learning, it is a promising outcome for EEG-based multi-class motor imagery classification, given the challenges posed by limited data availability, low inter-class feature discriminability, and the inherently noisy nature of non-invasive EEG signals. The results of this study underscore the potential of CNN-based models for future real-time BCI applications. Future work will expand the dataset, refine the deep learning architecture, and improve the signal preprocessing techniques; integration with prosthetic devices will also be needed to validate the system in practical scenarios. Full article
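
A generic version of the training setup described above (1D CNN with kernel size 3, ReLU activations, and early stopping) can be written in a few lines of Keras; the sketch below uses synthetic 16-channel, 125 Hz windows in place of the real datasets, and the layer sizes, window length, and number of classes are assumptions rather than the authors' configuration.

```python
# Generic 1D-CNN sketch for windowed multi-class EEG classification with early stopping.
import numpy as np
import tensorflow as tf

N_CH, FS, WIN_S, N_CLASSES = 16, 125, 2, 3
n_samples = FS * WIN_S

# Synthetic stand-in for preprocessed, standardized EEG windows
rng = np.random.default_rng(0)
X = rng.standard_normal((300, n_samples, N_CH)).astype("float32")
y = rng.integers(0, N_CLASSES, 300)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_samples, N_CH)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                              restore_best_weights=True)
model.fit(X, y, validation_split=0.2, epochs=50, batch_size=32,
          callbacks=[early_stop], verbose=0)
print(model.evaluate(X, y, verbose=0))
```
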

26 pages, 1839 KB  
Article
EEG-TriNet++: A Transformer-Guided Meta-Learning Framework for Robust and Generalizable Motor Imagery Classification
by Ahmed Tibermacine, Ilyes Naidji, Imad Eddine Tibermacine, Lahcene Mamen, Abdelaziz Rabehi and Mustapha Habib
Bioengineering 2026, 13(3), 307; https://doi.org/10.3390/bioengineering13030307 - 6 Mar 2026
Cited by 1 | Viewed by 855
Abstract
Motor imagery (MI) classification using EEG signals is central to brain–computer interfaces but remains challenging due to low signal-to-noise ratio, non-stationarity, and high inter-subject variability. We introduce EEG-TriNet++, a multi-branch deep learning architecture that enhances both classification accuracy and cross-subject generalization. The model integrates three complementary components: convolutional spatial–spectral encoders for channel-wise and frequency-specific patterns, bidirectional LSTMs to model temporal dynamics, and a Transformer head for global relational reasoning. A patchwise tokenization strategy and neural architecture search optimize the trade-off between efficiency and representational capacity. To address individual differences, a model-agnostic meta-learning (MAML) module enables rapid adaptation to new users with limited data. Evaluated on two public MI datasets under within-subject and leave-one-subject-out (LOSO) protocols, EEG-TriNet++ achieves 79.1% and 78.6% accuracy in within-subject tasks, and 72.4% and 71.3% in LOSO settings. Ablation studies validate the contribution of each module, and comparisons with state-of-the-art methods demonstrate consistent performance gains under identical conditions. Full article
(This article belongs to the Section Biosignal Processing)
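
The meta-learning component can be illustrated with a deliberately tiny first-order variant applied to a logistic-regression "model": an inner gradient step adapts the meta-weights to a subject's support trials, and the query-set gradient evaluated at the adapted weights updates the meta-weights. Everything below (synthetic tasks, learning rates, the first-order approximation, the linear model) is a simplification for illustration only, not EEG-TriNet++ or its Transformer backbone.

```python
# First-order MAML sketch in NumPy: inner adaptation on a support set, outer meta-update
# from the query set. Each synthetic "task" stands in for a new subject.
import numpy as np

rng = np.random.default_rng(0)
D, ALPHA, BETA = 8, 0.1, 0.01            # feature dim, inner LR, meta LR

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, X, y):               # gradient of mean logistic loss
    return X.T @ (sigmoid(X @ w) - y) / len(y)

base_w = rng.standard_normal(D)          # shared structure across "subjects"

def sample_task():
    w_true = base_w + 0.3 * rng.standard_normal(D)    # subject-specific variation
    X = rng.standard_normal((40, D))
    y = (X @ w_true + 0.5 * rng.standard_normal(40) > 0).astype(float)
    return (X[:20], y[:20]), (X[20:], y[20:])         # (support set, query set)

w_meta = np.zeros(D)
for step in range(3000):                              # meta-training over many tasks
    (Xs, ys), (Xq, yq) = sample_task()
    w_adapted = w_meta - ALPHA * grad_logloss(w_meta, Xs, ys)     # inner adaptation step
    w_meta -= BETA * grad_logloss(w_adapted, Xq, yq)              # first-order outer update

(Xs, ys), (Xq, yq) = sample_task()                    # a new, unseen "subject"
w_new = w_meta - ALPHA * grad_logloss(w_meta, Xs, ys)
acc = ((sigmoid(Xq @ w_new) > 0.5) == yq.astype(bool)).mean()
print("query accuracy after one adaptation step:", acc)
```
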

18 pages, 4834 KB  
Article
Syntax–Semantics–Numeracy Fusion for Improving Math Word Problem Representation and Solving
by Zihan Feng, Hao Ming and Xinguo Yu
Symmetry 2026, 18(3), 434; https://doi.org/10.3390/sym18030434 - 2 Mar 2026
Viewed by 306
Abstract
Most pre-trained language representation models are designed to encode contextualized semantic information for general language processing tasks. However, they are insufficient for math word problem (MWP) solving, which requires not only linguistic syntax and semantic understanding but also numerical reasoning. In this work, we introduce SSN4Solver, a deep neural solver that improves MWP-solving performance by symmetrically fusing syntax, semantics, and numeracy representations within its contextual encoder. Our approach jointly captures syntactic structures from dependency trees, semantic features from part-of-speech tags, and the attributes and relations of numerical entities. By treating these heterogeneous information sources in a balanced and aligned manner, SSN4Solver constructs a rich, multi-faceted representation for MWP solving without introducing substantial computational overhead, empowering human–computer interaction (HCI) applications such as adaptive educational interfaces and intelligent tutoring systems. Extensive experiments demonstrate that SSN4Solver outperforms existing baseline models. In addition, a visualization scheme is designed to elucidate how the three types of representations contribute to the solving process. SSN4Solver thus offers a scalable solution, contributing to the development of HCI systems that are both intelligent and mathematically effective. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Human-Computer Interaction)
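
The three information sources the solver fuses (dependency syntax, POS-based semantics, and numeracy attributes of quantities) can be pulled from a sentence with an off-the-shelf parser; the sketch below uses spaCy purely as an illustration of what those raw features look like, requires the en_core_web_sm model to be downloaded first, and is unrelated to SSN4Solver's actual encoder.

```python
# Illustration of the raw syntax / semantics / numeracy cues for one math word problem.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
problem = "Tom has 3 apples and buys 5 more apples. How many apples does Tom have now?"
doc = nlp(problem)

features = []
for tok in doc:
    features.append({
        "text": tok.text,
        "pos": tok.pos_,              # semantic cue (part-of-speech tag)
        "dep": tok.dep_,              # syntactic cue (dependency relation)
        "head": tok.head.text,        # parent in the dependency tree
        "is_number": tok.like_num,    # numeracy cue
    })

numbers = [f for f in features if f["is_number"]]
print(numbers)   # typically "3" and "5" as nummod children of "apples"
```
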

15 pages, 1404 KB  
Article
A Deep Learning-Based Decision Support System for Cholelithiasis in MRI Data
by Ebru Hasbay, Caglar Cengizler, Mahmut Ucar, Nagihan Durgun, Hayriye Ulkucan Disli and Deniz Bolat
J. Clin. Med. 2026, 15(5), 1891; https://doi.org/10.3390/jcm15051891 - 2 Mar 2026
Viewed by 343
Abstract
Background: Cholelithiasis can lead to significant complications if not diagnosed and treated promptly. Recent advances in deep learning and the improved ability of computer systems to detect clinically significant textural and morphological patterns in magnetic resonance imaging (MRI) can help reduce the time and resources required for the radiological evaluation of the gallbladder and cholelithiasis. Objective: To detect cholelithiasis, a support system with a graphical user interface for magnetic resonance (MR) images of the gallbladder was implemented to reduce the manual effort and time required to identify gallstones. Method: A commonly used deep learning model for pixel-level mask generation and instance segmentation, Mask Region Based Convolutional Neural Network (Mask R-CNN), was modified, trained, and evaluated to provide a robust pipeline for automated analysis. The primary aim was to automatically locate and label the gallbladder in T2-weighted axial MR images to detect gallstones and highlight the visual characteristics of the target region, thereby supporting radiologists. All automation was designed to operate on a single optimal slice instead of the entire volume. While this approach limits generalisability, it offers a practical starting point for method development. This setup reflects a feasibility-oriented design, rather than a comprehensive diagnostic capability. The dataset included 788 axial MR images from different patients. Each image was labeled and segmented by an experienced radiologist to train and test the models at the image level. Results: The proposed model with squeeze and excitation (SE) modification improved classification accuracy, and at the image level, stone detection improved in terms of accuracy, precision, and specificity, although recall and F1 scores slightly decreased. Conclusions: The results show that the modified Mask R-CNN model can detect gallstones with up to 0.89 accuracy, supporting the clinical applicability of the proposed method. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
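
For orientation, the snippet below shows plain torchvision Mask R-CNN inference: loading pretrained weights, running one image tensor, and thresholding detections by score. It is a generic sketch on a random tensor, not the modified SE-augmented model or the gallbladder-specific training and data described in the paper.

```python
# Off-the-shelf Mask R-CNN inference sketch with torchvision (COCO weights as a placeholder).
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")   # downloads pretrained weights
model.eval()

image = torch.rand(3, 512, 512)                    # stand-in for a normalized MR slice
with torch.no_grad():
    output = model([image])[0]                     # dict with boxes, labels, scores, masks

keep = output["scores"] > 0.5
print("detections:", int(keep.sum()))
print("boxes:", output["boxes"][keep].shape, "masks:", output["masks"][keep].shape)
```
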