Search Results (265)

Search Parameters:
Keywords = multimodal physiological signals

36 pages, 884 KB  
Review
Real-Time Cognitive State Monitoring via Physiological Signals in Commercial Aviation: A Systematic Literature Review with Reasoned Snowballing Expansion
by Giacomo Belloni and Petru Lucian Curșeu
Safety 2026, 12(2), 56; https://doi.org/10.3390/safety12020056 - 20 Apr 2026
Abstract
Aviation safety depends critically on pilots’ mental and cognitive states, particularly in high-stakes and complex operational environments where human errors cause most safety events today. This paper reviews current advances in real-time monitoring of commercial pilots’ cognitive states through physiological and neurophysiological signals and identifies methods applicable to enhance aviation safety and efficiency. In an increasingly complex and congested system, it is essential to investigate the relationships between pilots’ mental workload, stress, startle effect, and physiological parameters to highlight cognitive overload or deficiencies in real time. This systematic literature review was conducted according to PRISMA 2020 guidelines, using Google Scholar, Scopus, and PubMed, and identified 26 eligible studies. A targeted backward citation search screened 17 additional records, and two studies were added to the initial set. Twenty-eight records were therefore included and the review highlights a range of biometric indicators of pilots’ mental states with varying degrees of validity and operational applicability. Collectively, these studies offer a clear overview of state-of-the-art approaches, while also evidencing constraints related to intrusiveness and real-world feasibility. Physiological monitoring holds strong promise for enhancing pilot performance and safety by detecting early signs of overload and stress. However, its integration into operational aviation remains limited. Future research should prioritise longitudinal, in situ evaluations, multimodal data fusion, and pilot-centred design to ensure practical applicability, non-intrusiveness, and regulatory compliance, ultimately bridging the gap between academic research and cockpit reality. Full article

27 pages, 4304 KB  
Review
Towards Intelligent Pain Monitoring Systems: A Survey of Recent Technologies and Methods
by Atif Naseer, Nahla Tayyib and Sidra Rashid
Sensors 2026, 26(8), 2447; https://doi.org/10.3390/s26082447 - 16 Apr 2026
Abstract
Pain is a profoundly stressful experience that significantly impacts an individual’s daily life. In many situations, people can express the intensity of pain via some observable physical actions like crying or shouting. However, in cases where the patient is non-communicative, they cannot convey their feelings through these actions. In both scenarios, automatically monitoring pain intensity using technology presents a considerable challenge. In the literature, researchers have presented numerous techniques for automatic pain monitoring using multiple approaches. This technological survey paper aims to provide an overview of current advancements in the field of automatic pain monitoring. In this paper, we present a taxonomy that summarizes our survey on the utilization of technology areas for monitoring pain automatically. Those technologies are based on Internet of Things (IoT), computer vision, and multimodal techniques. These technologies utilize various modalities, including physiological signals, facial expressions, vocalizations, and behavioral patterns, to detect and quantify pain. The paper discusses the advantages and limitations of each modality, as well as the challenges faced in developing accurate and reliable pain monitoring systems. Additionally, the paper surveys the current state of research in this field, including the development of machine learning algorithms and wearable devices for pain monitoring. Overall, this paper provides a comprehensive overview of the current state of automatic pain monitoring technology and highlights areas for future research and development. This paper also creates a keyword map that will serve as a valuable resource for researchers, enabling them to refine their investigations by identifying frequently used terms and emerging trends within each domain. Full article
(This article belongs to the Section Biomedical Sensors)

17 pages, 892 KB  
Article
Artificial Intelligence for Biomedical Diagnostics: Diagnostic Accuracy and Reliability of Multimodal Large Language Models in Electrocardiogram Interpretation
by Henrik Stelling, Armin Kraus, Gerrit Grieb, David Breidung and Ibrahim Güler
Life 2026, 16(4), 681; https://doi.org/10.3390/life16040681 - 16 Apr 2026
Abstract
The electrocardiogram (ECG) is a central tool in cardiovascular diagnostics, yet interpretation requires expertise and remains subject to variability. Multimodal large language models (MLLMs) have shown emerging capabilities in medical image analysis, but their performance in ECG interpretation remains insufficiently characterized. This study evaluated the diagnostic accuracy and inter-run reliability of five MLLMs across ECG interpretation tasks. Thirteen standard 12-lead ECGs were presented to five models (ChatGPT-5.3, Gemini 3.1 Pro, Claude Opus 4.6, Grok 4.1, and ERNIE 5.0) across five independent runs per case, yielding 2275 task-level assessments. Six categorical interpretation tasks (rhythm, electrical axis, PR/P-wave morphology, QRS duration, ST/T-wave morphology, and QTc interval) were compared with expert-consensus ground truth, while heart rate estimation was evaluated using mean absolute error (MAE). Overall categorical accuracy ranged from 52.3% to 64.9%. QRS duration classification achieved the highest accuracy (66.2–90.8%), whereas ST/T-wave assessment showed the lowest performance (20.0–41.5%). Heart rate MAE ranged from 14.8 to 46.7 bpm. A dissociation between diagnostic accuracy and inter-run reliability was observed across models. These findings indicate that current MLLMs do not achieve clinically reliable ECG interpretation performance and highlight the importance of assessing diagnostic accuracy and inter-run reliability when evaluating artificial intelligence systems in biomedical diagnostics. Full article

18 pages, 1217 KB  
Article
Detect and Repair: Robust Self-Supervised Wearable Sensing Under Missing Modalities
by Aboul Hassane Cisse and Shoya Ishimaru
Sensors 2026, 26(8), 2419; https://doi.org/10.3390/s26082419 - 15 Apr 2026
Abstract
Wearable sensor systems are being increasingly deployed in real-world environments to monitor human activities and cognitive states. However, such systems frequently suffer from degraded or missing sensor modalities due to occlusions, energy constraints, or hardware failures. In this work, we introduce CognifySSL v2.0, a self-supervised learning framework designed to detect and repair missing modalities in real time under simulated real-world missing-modality conditions. The model combines contrastive and masked modeling objectives across multiple physiological and motion signals (e.g., IMU, ECG, EDA) using a fusion architecture with dropout simulation. Evaluation on WESAD demonstrated effective multimodal detection and reconstruction under missing-modality conditions, while experiments on MobiAct validated unimodal robustness and representation learning under sensor dropout. We released our code and interactive visualization dashboard to support reproducibility and future research on robust multimodal fusion. Full article
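The missing-modality conditions described above are typically simulated by randomly zeroing whole sensor channels during training. A minimal modality-dropout sketch in plain Python (hypothetical channel names and window values, not the authors' CognifySSL code):

```python
import random

def simulate_modality_dropout(sample, p_drop, rng):
    """Zero out whole modalities with probability p_drop, returning the
    corrupted sample and a mask marking which modalities survived."""
    corrupted, present = {}, {}
    for modality, signal in sample.items():
        if rng.random() < p_drop:            # modality lost (e.g. sensor off-body)
            corrupted[modality] = [0.0] * len(signal)
            present[modality] = False
        else:
            corrupted[modality] = list(signal)
            present[modality] = True
    return corrupted, present

# Hypothetical short windows for three WESAD-style channels.
sample = {"imu": [0.1, 0.2], "ecg": [0.8, 0.7], "eda": [0.3, 0.3]}
corrupted, present = simulate_modality_dropout(sample, 0.5, random.Random(0))
```

A "detect" head can then be trained against `present` as ground truth, and a "repair" decoder against the original, uncorrupted signal.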
24 pages, 10466 KB  
Article
Fusion of RR Interval Dynamics and HRV Multidomain Signatures Using Multimodal Neural Models for Metabolic Syndrome Classification
by Miguel A. Mejia, Oscar J. Suarez, Gilberto Perpiñan and Leiner Barba Jimenez
Med. Sci. 2026, 14(2), 197; https://doi.org/10.3390/medsci14020197 - 14 Apr 2026
Abstract
Background: Metabolic syndrome (MetS) leads to alterations in cardiac autonomic control that can be detected from electrocardiogram (ECG)-derived markers, particularly when the cardiovascular system is challenged during an oral glucose tolerance test (OGTT). Methods: In this paper, we present an automated framework for MetS identification using RR intervals and heart rate variability (HRV) features extracted from 12-lead ECG recordings acquired during the five OGTT stages in 40 male participants (15 with MetS, 10 controls, and 15 endurance-trained marathon runners). RR intervals were first derived using a multilead Pan-Tompkins approach with fusion-based validation. From these RR series, HRV descriptors were computed from time-domain statistics (RR mean, SDNN, rMSSD, pNN50), spectral indices (VLF, LF, HF, LF/HF), and nonlinear measures (SD1, SD2, SampEn, DFA-α1). Conventional HRV analysis revealed pronounced physiological differences between groups: MetS subjects exhibited reduced parasympathetic activity, reflected by lower rMSSD and SD1, lower HF power, and higher LF/HF ratios, whereas marathoners showed greater vagal modulation, higher HF power, and increased signal complexity. Healthy controls showed an intermediate autonomic profile. Using RR sequences and HRV descriptors (256 samples per stage), we trained three multimodal classifiers: a CNN-MLP model with a softmax output, a CNN-MLP model with an SVM head, and a CNN + LSTM-MLP + SVM architecture. Results: All models achieved strong discriminative performance, with accuracies ranging from 0.92 to 0.95, F1-macro values from 0.92 to 0.95, and macro-AUC values from 0.96 to 0.97. The CNN-MLP model achieved the best overall performance, whereas the CNN + LSTM-MLP + SVM model showed strong class discrimination, particularly for endurance athletes, while maintaining competitive recall for MetS. 
Conclusions: These findings support the feasibility of ECG-based autonomic assessment as a complementary non-invasive approach for early metabolic risk detection in clinical and preventive cardiometabolic screening settings. Full article
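The time-domain HRV descriptors named above (RR mean, SDNN, rMSSD, pNN50) follow standard definitions computable directly from an RR-interval series. A minimal sketch in plain Python (illustrative RR values, not the study's data):

```python
import math
import statistics

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV features from RR intervals in milliseconds."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return {
        "RR_mean": statistics.fmean(rr_ms),
        # SDNN: standard deviation of all RR intervals
        "SDNN": statistics.stdev(rr_ms),
        # rMSSD: root mean square of successive differences
        "rMSSD": math.sqrt(statistics.fmean(d * d for d in diffs)),
        # pNN50: percentage of successive differences exceeding 50 ms
        "pNN50": 100.0 * sum(abs(d) > 50 for d in diffs) / len(diffs),
    }

feats = hrv_time_domain([800, 810, 790, 820, 805, 760])
```

Lower rMSSD and pNN50 on such features are the parasympathetic-reduction markers the abstract reports for the MetS group.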

32 pages, 1704 KB  
Systematic Review
A Systematic Review of How Cardiopulmonary Bypass Parameters Influence Electroencephalogram Signals
by Han Bao, Jiaying Wang, Ziru Cui, Min Zhu, Wenyi Chen, Liwei Zhou, Georg Northoff, Tao Tao and Pengmin Qin
Brain Sci. 2026, 16(4), 412; https://doi.org/10.3390/brainsci16040412 - 13 Apr 2026
Abstract
Background: Cardiopulmonary bypass (CPB) is an essential technique for cardiac surgery but significantly increases the risk of perioperative neurological complications. Electroencephalography (EEG) enables real-time monitoring of brain function and provides sensitive biomarkers for early detection of cerebral injury. However, a systematic synthesis of how CPB-related physiological, pharmacological, and technical factors influence EEG signals, and how these insights can be integrated into clinical decision-making, is still lacking. Objective: To systematically review the effects of temperature management, mean arterial pressure (MAP), hemodilution, anesthetic agents, embolization, and systemic inflammatory response during CPB on EEG parameters (including frequency bands, Bispectral Index (BIS), quantitative EEG metrics such as burst suppression ratio (BSR), spectral edge frequency (SEF), etc.), and to evaluate the associations between EEG changes and postoperative delirium (POD) and stroke. Methods: Following the PRISMA 2020 guidelines, we searched PubMed, Web of Science, and related databases for original English-language articles published between February 1974 and September 2025. Inclusion criteria: adult patients (≥18 years) undergoing cardiac surgery with CPB and intraoperative EEG monitoring (raw or processed). Exclusion criteria: reviews, case reports, animal studies, pediatric populations, and articles with inaccessible full texts. Two reviewers independently screened the literature and extracted data; a narrative synthesis was performed. Results: Fifty-one studies were included. Main findings: (1) Hypothermia: BIS decreases linearly with temperature (≈1.12 units/°C); electrocerebral silence occurs during deep hypothermic circulatory arrest; EEG recovery dynamics during rewarming predict POD. 
(2) MAP and cerebral perfusion: The rate of MAP decline (≥0.66 mmHg/s) is a stronger predictor of EEG abnormalities than the absolute MAP value; under fixed pump flow, some patients exhibit coexisting cerebral overperfusion and metabolic suppression. (3) Hemodilution: Maintaining hemoglobin ≥9.4 g/dL prevents EEG slowing; a drop below 9.2 g/dL significantly increases the risk of slowing. A ≥10% decrease in regional cerebral oxygen saturation (rSO2) is associated with a 1.5-fold increased risk of burst suppression. (4) Anesthetic agents: Propofol maintains flow-metabolism coupling, and BSR reflects deep anesthesia better than BIS; sevoflurane and isoflurane impair autoregulation and suppress EEG. (5) Embolization and inflammation: EEG epileptiform discharges increase the risk of POD five-fold; a decrease in LIR predicts stroke (AUC 0.771) and POD (AUC 0.779); persistent EEG changes increase the risk of POD 2.65-fold. Conclusions: CPB-related factors affect EEG signals through distinct mechanisms, and specific EEG patterns (slowing, burst suppression, asymmetry, epileptiform discharges) are significantly associated with postoperative neurological complications. Multimodal monitoring (EEG + cerebral oximetry + hemodynamics) with clear intervention thresholds facilitates individualized brain protection. Future interventional studies using real-time EEG feedback are needed to confirm improvements in long-term neurological outcomes. Full article
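The linear BIS-temperature relationship reported above (≈1.12 BIS units per °C) lends itself to a quick back-of-the-envelope estimate. A hedged sketch in plain Python (the slope is the review's pooled figure, not a patient-specific constant):

```python
def expected_bis_drop(t_start_c, t_end_c, slope_per_c=1.12):
    """Approximate BIS decrease when cooling from t_start_c to t_end_c (deg C),
    using the ~1.12 units/degree slope reported in the review."""
    return slope_per_c * (t_start_c - t_end_c)

# Cooling from normothermia (36 C) to moderate hypothermia (28 C):
drop = expected_bis_drop(36.0, 28.0)  # roughly 9 BIS units
```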

25 pages, 854 KB  
Systematic Review
Hybrid Machine Learning Architectures for Emergency Triage: A Systematic Review of Predictive Performance and the Complexity Gradient
by Junaid Ullah, R. Kanesaraj Ramasamay and Venushini Rajendran
BioMedInformatics 2026, 6(2), 21; https://doi.org/10.3390/biomedinformatics6020021 - 10 Apr 2026
Abstract
Background: Emergency triage systems using machine learning traditionally rely on structured tabular data (vital signs), creating a “contextual blind spot” that ignores diagnostic information embedded in unstructured clinical narratives. Hybrid AI models that fuse tabular and text data may improve predictive discrimination, but the magnitude and conditions under which fusion adds value remain unclear. Methods: Five databases (PubMed, Scopus, Web of Science, IEEE Xplore, ACM Digital Library) were searched from 1 January 2015 to 15 December 2025. Eligible studies employed Hybrid AI models integrating structured and unstructured emergency department data with quantitative baseline comparisons. Twenty-five studies (N ≈ 4.8 million encounters) met inclusion criteria. We extracted marginal performance gains (ΔAUC), calibration metrics, and demographic reporting. Synthesis followed SWiM principles with subgroup meta-regression testing our novel “Complexity Gradient” hypothesis. Results: Hybrid models demonstrated superior discrimination compared to tabular baselines, with effect magnitude dependent on clinical task complexity. Low-complexity tasks (tachycardia prediction) showed minimal gains (median ΔAUC + 0.036, IQR: 0.02–0.05), while high-complexity tasks (hypoxia, sepsis) demonstrated substantial improvement (median ΔAUC + 0.111, IQR: 0.09–0.13). Meta-regression confirmed complexity significantly moderated effect size (R2 = 0.42, p = 0.003). Only 12% (3/25) of studies reported calibration metrics (Brier scores: 0.089–0.142). Zero studies stratified performance by race/ethnicity; 88% (22/25) failed to report training data demographics. Discussion: The complexity gradient framework explains when multimodal fusion adds predictive value: tasks where diagnostic signal resides in narrative features (temporality, negation) rather than physiological measurements. However, systematic absence of calibration reporting and fairness auditing prevents clinical deployment. 
Seventy-two percent of studies had high risk of bias in the analysis domain due to retrospective designs without temporal validation. Conclusions: Hybrid triage models show promise for complex diagnostic tasks but require mandatory calibration reporting and demographic performance stratification before clinical implementation. We propose minimum reporting standards including Brier scores, race-stratified metrics, and temporal validation protocols. Full article
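The marginal gains (ΔAUC) synthesized above are the difference between the hybrid model's AUC and the tabular baseline's on the same encounters. A minimal rank-based (Mann-Whitney) AUC sketch in plain Python (toy labels and scores, not data from the reviewed studies):

```python
def auc(labels, scores):
    """Mann-Whitney AUC: probability a positive case outranks a negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [0, 0, 1, 1]
baseline_scores = [0.1, 0.4, 0.35, 0.8]  # tabular-only model
hybrid_scores = [0.2, 0.3, 0.6, 0.9]     # tabular + text fusion
delta_auc = auc(y, hybrid_scores) - auc(y, baseline_scores)
```

Per the review's complexity-gradient finding, such ΔAUC values cluster near +0.04 for simple targets and near +0.11 for complex ones like sepsis.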

27 pages, 1880 KB  
Article
Hierarchical Acoustic Encoding Distress in Pigs: Disentangling Individual, Developmental, and Emotional Effects with Subject-Wise Validation
by Irenilza de Alencar Nääs, Danilo Florentino Pereira, Alexandra Ferreira da Silva Cordeiro and Nilsa Duarte da Silva Lima
Animals 2026, 16(8), 1148; https://doi.org/10.3390/ani16081148 - 9 Apr 2026
Abstract
Automated pig-welfare monitoring needs scalable, non-invasive signals that work across ages and individuals. A key methodological contribution of this study is the use of subject-wise validation, which ensures generalization to unseen animals and prevents inflated accuracy caused by growth-related and individual ‘voice’ differences. Vocalizations can help, but growth and individual “voice” differences can confound distress patterns and overstate accuracy without subject-wise validation. In our study, we explicitly accounted for individual variability by including animal identity as a random effect in mixed models and by using grouped cross-validation, where models were tested only on pigs not seen during training. This approach ensures that the reported accuracy reflects generalization across different individuals rather than memorization of specific vocal signatures. We analyzed 2221 vocal samples from 40 pigs (20 males, 20 females) recorded across four growth phases (farrowing, nursery, growing, finishing) under six conditions (pain, hunger, thirst, cold stress, heat stress, normal). Acoustic features extracted in Praat included energy, duration, intensity, pitch, and formants (F1–F4). Using blockwise variance decomposition, we quantified contributions of distress exposure, growth phase, and sex, and estimated the additional variance explained by animal identity. Distress exposure dominated intensity and spectral traits, particularly Formant 2, whereas the growth phase produced systematic shifts in duration and pitch. Animal identity added a modest but consistent increment in explained variance (~+0.02–0.03 R2 beyond sex, phase, and distress). For prediction, we used 5-fold cross-validation grouped by animal. A Random Forest achieved a modest balanced accuracy of 0.609 and macro-F1 of 0.597; pain was most separable (recall 0.825), while other states showed moderate recall, indicating overlap. 
These results support hierarchical acoustic encoding of distress and establish a benchmark for precision welfare monitoring. Furthermore, they highlight that resolving complex physiological overlaps, such as heat stress and resource competition, requires a shift from unimodal acoustic models to multimodal Precision Livestock Farming (PLF) systems that integrate bioacoustics with continuous environmental and behavioral data streams. Full article
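Subject-wise validation as used above means every fold's test animals are entirely absent from that fold's training set. A minimal grouped-split sketch in plain Python (equivalent in spirit to scikit-learn's GroupKFold; toy animal IDs, not the study's pigs):

```python
def grouped_folds(group_ids, k):
    """Yield (train_idx, test_idx) pairs in which no group (animal)
    appears on both sides of any split."""
    unique = sorted(set(group_ids))
    for fold in range(k):
        test_groups = set(unique[fold::k])  # round-robin group assignment
        test_idx = [i for i, g in enumerate(group_ids) if g in test_groups]
        train_idx = [i for i, g in enumerate(group_ids) if g not in test_groups]
        yield train_idx, test_idx

pigs = ["p1", "p1", "p2", "p2", "p3", "p3", "p4", "p4"]
splits = list(grouped_folds(pigs, 2))
```

Scoring only on held-out animals is what prevents the model from "memorizing" individual vocal signatures and inflating accuracy.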

20 pages, 2366 KB  
Article
Multimodal Machine Learning Framework for Driver Mental Workload Classification: A Comparative and Interpretable Approach
by Xiaojun Shao, Xiaoxiang Ma, Feng Chen and Xiaodong Pan
Appl. Sci. 2026, 16(7), 3581; https://doi.org/10.3390/app16073581 - 7 Apr 2026
Abstract
Understanding and monitoring driver mental workload is essential for improving road safety. This study proposes a multimodal machine learning framework to classify drivers’ mental workload using eye movement metrics, physiological signals, and driving behavior features. A driving simulator experiment was conducted with 26 participants under two workload levels induced by a secondary auditory task. Seven feature combinations and six classification algorithms were evaluated. The results showed that eye metrics were the most informative modality, and that feature selection had a greater impact on classification performance than algorithm choice. A support vector machine with optimized features was selected as the final model based on performance and stability, achieving an accuracy of 87.8% and an AUC of 0.95. To improve model transparency, SHapley Additive exPlanations (SHAP) was applied, highlighting key predictors such as blink rate and heart rate, and uncovering synergistic effects between visual and physiological variables. The model was further validated in a tunnel entrance scenario, where it identified increased workload associated with steeper longitudinal slopes. These findings emphasize the importance of multimodal data integration—particularly eye movements—for assessing mental workload. Future applications should prioritize feature diversity over algorithm complexity to enhance real-world implementation in workload monitoring systems. Full article
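SHAP attributions like those above rank features by their contribution to a model's output; a cheaper, model-agnostic cousin is permutation importance, which measures the accuracy lost when one feature is shuffled. A toy sketch in plain Python (hypothetical feature names and a stand-in linear rule, not the study's SVM or the SHAP library itself):

```python
import random

FEATURES = ["blink_rate", "heart_rate", "noise"]  # hypothetical names

def predict(row):
    """Stand-in classifier: high workload when blink + heart signal is strong."""
    return 1 if 0.7 * row[0] + 0.3 * row[1] > 0.5 else 0

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, col, repeats=20, seed=0):
    """Mean accuracy drop after shuffling one feature column."""
    rng = random.Random(seed)
    base = accuracy(X, y)
    drops = []
    for _ in range(repeats):
        shuffled = [row[col] for row in X]
        rng.shuffle(shuffled)
        Xp = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
        drops.append(base - accuracy(Xp, y))
    return sum(drops) / repeats

X = [[0.9, 0.8, 0.1], [0.1, 0.2, 0.9], [0.8, 0.9, 0.5], [0.2, 0.1, 0.4]]
y = [predict(r) for r in X]                  # labels consistent with the rule
imp_noise = permutation_importance(X, y, 2)  # unused feature: zero importance
```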

30 pages, 8434 KB  
Review
AI-Assisted Molecular Biosensors: Design Strategies for Wearable and Real-Time Monitoring
by Sishi Zhu, Jie Zhang, Xuming He, Lijun Ding, Xiao Luo and Weijia Wen
Int. J. Mol. Sci. 2026, 27(7), 3305; https://doi.org/10.3390/ijms27073305 - 6 Apr 2026
Abstract
Artificial intelligence (AI) has become a transformative tool in the field of molecular biosensing, enabling data-driven optimization in sensor design, signal processing, and real-time monitoring. AI promotes the discovery of biomarkers, the design of high-affinity receptors, and the rational engineering of sensing materials, thereby enhancing sensitivity, specificity, and detection accuracy. In the development of biosensors, AI-assisted strategies have accelerated the identification of novel molecular targets, guided the design of proteins and aptamers with enhanced binding performance, and optimized plasmonic and nanophotonic structures through forward prediction and inverse design frameworks. The integration of artificial intelligence has significantly enhanced the performance of various biosensing platforms, including optical, electrochemical, and microfluidic biosensors. It also enabled automatic feature extraction, noise reduction, dimensionality reduction, and multimodal data fusion, overcoming the challenges posed by complex signals, environmental interference, and device variations. These capabilities are particularly crucial for wearable molecular biosensors, as low signal strength, motion artifacts, and fluctuations in physiological conditions impose strict requirements on robustness and real-time reliability. This review systematically summarizes the latest advancements in AI-assisted molecular biosensors, highlighting representative sensing strategies and algorithms for wearable and real-time monitoring, and discusses the current challenges and future development opportunities of intelligent biosensing technologies. Full article
(This article belongs to the Special Issue Biosensors: Emerging Technologies and Real-Time Monitoring)

30 pages, 4178 KB  
Article
An Intelligent Evaluation Algorithm for Pilot Flight Training Ability Based on Multimodal Information Fusion
by Heming Zhang, Changyuan Wang and Pengbo Wang
Sensors 2026, 26(7), 2245; https://doi.org/10.3390/s26072245 - 4 Apr 2026
Abstract
Intelligent-assisted assessment of pilot flight training ability is a method of automating the evaluation of pilots’ flight skills using artificial intelligence. Currently, using AI to assist or replace human instructors in flight skill assessment has become a mainstream research direction in the field of intelligent aviation. Existing flight skill assessment methods suffer from limitations in data types and insufficient assessment accuracy. To address these issues, we evaluate and predict pilot performance in simulated flight missions based on physiological signals. Following the “OODA loop” theory, we established a multimodal dataset including pilot eye movement, electroencephalogram (EEG), electrocardiogram (ECG), electrodermal signaling (EDS), heart rate, respiration, and flight attitude data. This dataset records changes in physiological rhythms and flight behaviors during pilots’ flight training at different difficulty levels. To enhance the signal-to-noise ratio, we propose an enhanced wavelet fuzzy thresholding denoising algorithm utilizing LSTM optimization. We address the problem of isolated features across different time frames in multimodal data modeling by introducing a multi-feature fusion algorithm based on STFT. Furthermore, by combining a high-efficiency sub-attention mechanism with a Transformer network, we construct a multi-classification network for intelligent-assisted assessment of pilot flight training ability, further improving the output accuracy of each category. Experiments show that our designed algorithm can achieve a classification accuracy of up to 85% on the dataset (5-fold cross-validation), which meets the requirements for auxiliary assessment of flight capabilities. Full article
(This article belongs to the Section Intelligent Sensors)

45 pages, 8329 KB  
Article
HRV-Based Multimodal Physiological Signal Monitoring Using Wearable Biosensors in Human–Computer Interaction: Cognitive Load in Real-Time Strategy Games
by Yunlong Shi, Muyesaier Kuerban, Yiyang Jin, Chaoyue Wang and Lu Chen
Sensors 2026, 26(7), 2181; https://doi.org/10.3390/s26072181 - 1 Apr 2026
Abstract
Real-time strategy (RTS) games provide a cognitively demanding and ecologically valid context for investigating workload dynamics in human–computer interaction (HCI). This multimodal study (HRV, NASA-TLX, behavior, interviews) examined multitasking, visual complexity, and decision pressure in 36 novice RTS players. High multitasking significantly increased subjective workload (total raw-TLX: from 22.50 ± 14.65 to 36.47 ± 20.19, p < 0.001) and prolonged completion time (from 317.17 ± 37.26 s to 354.92 ± 50.70 s, p < 0.001). Decision pressure elevated subjective workload (total raw-TLX: from 20 to 28, p = 0.008) without affecting performance. Although HRV did not consistently differentiate experimental conditions at the group level, it showed stable individual-level associations with perceived workload—both in expected directions (e.g., LF power positively correlated with total raw-TLX across four experiments, r = 0.28–0.53, all p < 0.05) and in inverse relationships that deviate from conventional stress models (e.g., stress index negatively correlated with total raw-TLX, r = −0.34 to −0.40, all p < 0.01). These findings suggest that autonomic responses in complex interactive environments may reflect dynamic engagement processes rather than uniform stress activation, supporting multimodal cognitive load assessment and offering transferable insights for interface design and workload evaluation in demanding HCI contexts. Full article
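The individual-level associations reported above are Pearson correlations between an HRV index (e.g. LF power) and raw-TLX scores. A minimal sketch of the coefficient in plain Python (toy numbers, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

lf_power = [120.0, 180.0, 150.0, 240.0]  # hypothetical LF values
tlx = [20.0, 31.0, 26.0, 41.0]           # hypothetical raw-TLX totals
r = pearson_r(lf_power, tlx)
```

A positive r here mirrors the study's LF-to-TLX pattern; the sign flips for indices such as their stress index, which correlated negatively with workload.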
(This article belongs to the Special Issue Human–Computer Interaction in Sensor Systems)
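The individual-level associations reported in this abstract (e.g., LF power vs. raw-TLX totals, r = 0.28–0.53) are per-participant Pearson correlations. A minimal sketch of that computation, with hypothetical values in place of the study's actual data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical per-participant values, NOT data from the study:
lf_power = [310.0, 455.0, 280.0, 520.0, 390.0, 610.0]   # HRV LF power (ms^2)
tlx_total = [18.0, 31.0, 20.0, 38.0, 25.0, 44.0]        # raw NASA-TLX totals
r = pearson_r(lf_power, tlx_total)  # a positive r mirrors the reported trend
```

A significance test (the p-values the abstract cites) would additionally require the t-distribution, e.g., via `scipy.stats.pearsonr`.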

25 pages, 1110 KB  
Review
Piezoelectric Biomaterials for Osteochondral Tissue Engineering: Advances, Mechanisms, and Translational Prospects
by Hao Wang and Yunfeng Li
J. Funct. Biomater. 2026, 17(4), 173; https://doi.org/10.3390/jfb17040173 - 1 Apr 2026
Abstract
Piezoelectric biomaterials have attracted considerable interest in osteochondral tissue engineering owing to their inherent ability to produce electrical signals in response to mechanical stimuli without external power, thereby closely mimicking the physiological electrical microenvironment required for tissue regeneration. This review comprehensively summarizes recent insights into biological piezoelectricity from the molecular to the macroscopic level, highlighting its interplay with streaming potentials and its regulatory roles in bone and cartilage regeneration. We critically analyze recent advances in major piezoelectric material systems, including ceramics, polymers, and composite scaffolds, with emphasis on their structural characteristics, bioactive performance, and suitability for tissue-specific repair. Among them, polymer-based composite and hybrid piezoelectric scaffolds appear particularly promising for the development of flexible, high-performance osteochondral repair platforms, as they offer a more favorable balance between mechanical compliance, electromechanical output, and biological adaptability. Despite encouraging preclinical findings, significant challenges remain, including biocompatibility, controlled degradation kinetics, and the precise modulation of electrical cues for specific biological contexts. To address these barriers, future research should focus on optimizing scaffold design, integrating responsive and multimodal stimulation strategies, and establishing standardized protocols for preclinical evaluation and clinical translation. Overall, piezoelectric biomaterials hold substantial potential for the development of innovative regenerative therapies for complex osteochondral defects. Full article
(This article belongs to the Special Issue Advanced Biomaterials and Biomechanics Studies in Tissue Engineering)
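The mechanism this review builds on is the direct piezoelectric effect: mechanical stress generates electric displacement without an external power source. In the standard linear constitutive (strain-charge, d-form) description:

```latex
D_i = d_{ijk}\,T_{jk} + \varepsilon^{T}_{ij}\,E_j,
\qquad
S_{ij} = s^{E}_{ijkl}\,T_{kl} + d_{kij}\,E_k
```

where $D$ is electric displacement, $T$ mechanical stress, $E$ electric field, $S$ strain, $d$ the piezoelectric coefficients, $\varepsilon^{T}$ the permittivity at constant stress, and $s^{E}$ the elastic compliance at constant field. The first relation captures the scaffold behavior discussed above: an applied load ($T$) produces charge ($D$) even with no applied field.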

16 pages, 1966 KB  
Article
A Novel System for Physiological Signal Monitoring and Health-Informed Electrotactile Feedback for First Responders
by Bojan Jorgovanović, Vojin Ilić, Nikola Jorgovanović, Marina Peña-Díaz, Goran Bijelić, Jovana Malešević, Miloš Kostić and Matija Štrbac
Sensors 2026, 26(7), 2054; https://doi.org/10.3390/s26072054 - 25 Mar 2026
Abstract
Ensuring the safety and effectiveness of first responder teams during critical missions requires real-time health monitoring and responsive intervention systems. This study presents a novel system comprising a multimodal wearable device integrated with a remote command centre, designed to support the physiological monitoring and guidance of first responders in the field. The wearable device includes three main components: a physiological and biochemical signal acquisition unit, an electrotactile stimulation unit, and a wireless communication interface. The acquisition unit continuously samples heart rate, body temperature, and biochemical markers from sweat, transmitting these data wirelessly to the remote command centre. The transmitted physiological data can be analyzed at the command centre and, based on the inferred first responder condition, appropriate feedback commands can be issued back to the corresponding wearer. The commands are then executed by the electrotactile stimulation unit on the wearable device. Initial laboratory testing confirmed the system’s ability to generate accurate electrochemical readings and to assess dehydration through changes in bulk ionic conductivity. Electrochemical impedance spectroscopy showed good agreement with a commercial potentiostat. Heart rate and temperature readings demonstrated satisfactory accuracy, with minor, removable artifacts. Field trials with first responders validated continuous signal transmission and electrotactile feedback with over 80% success. These results confirm the system’s robustness and modularity, supporting its application in operational environments. Full article
(This article belongs to the Section Electronic Sensors)
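The monitoring loop described above (vitals sampled on the wearable, analyzed at the command centre, a feedback command returned to the stimulation unit) can be sketched as a simple rule-based triage. The packet fields, thresholds, and command names below are illustrative assumptions, not the authors' actual protocol:

```python
from dataclasses import dataclass

@dataclass
class VitalsPacket:
    """One wireless update from the wearable (fields are hypothetical)."""
    responder_id: str
    heart_rate_bpm: float
    body_temp_c: float
    sweat_conductivity_ms_cm: float  # bulk ionic conductivity, hydration proxy

def command_for(pkt, hr_limit=180.0, temp_limit=39.0, conductivity_floor=4.0):
    """Command-centre side: map a vitals packet to an electrotactile command.
    Thresholds, and the direction of the conductivity rule, are placeholders
    for whatever a deployed system would calibrate per device and wearer."""
    if pkt.heart_rate_bpm > hr_limit or pkt.body_temp_c > temp_limit:
        return "STIM_ALERT_REST"       # prompt the wearer to slow down
    if pkt.sweat_conductivity_ms_cm < conductivity_floor:
        return "STIM_ALERT_HYDRATE"    # dehydration inferred from conductivity
    return "NO_STIM"

pkt = VitalsPacket("FR-07", heart_rate_bpm=142.0, body_temp_c=37.4,
                   sweat_conductivity_ms_cm=5.2)
cmd = command_for(pkt)  # nominal values: no stimulation command issued
```

In the real system the returned command would be executed by the electrotactile stimulation unit on the wearable; here it is just a string.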

18 pages, 1085 KB  
Article
Self-Learning Multimodal Emotion Recognition Based on Multi-Scale Dilated Attention
by Xiuli Du and Luyao Zhu
Brain Sci. 2026, 16(4), 350; https://doi.org/10.3390/brainsci16040350 - 25 Mar 2026
Abstract
Background/Objectives: Emotions can be recognized through external behavioral cues and internal physiological signals. Owing to the inherently complex psychological and physiological nature of emotions, models relying on a single modality often suffer from limited robustness. This study aims to improve emotion recognition performance by effectively integrating electroencephalogram (EEG) signals and facial expressions through a multimodal framework. Methods: We propose a multimodal emotion recognition model that employs a Multi-Scale Dilated Attention Convolution (MSDAC) network tailored for facial expression recognition, integrates an EEG emotion recognition method based on three-dimensional features, and adopts a self-learning decision-level fusion strategy. MSDAC incorporates Multi-Scale Dilated Convolutions and a Dual-Branch Attention (D-BA) module to capture discontinuous facial action units. For EEG processing, raw signals are converted into a multidimensional time–frequency–spatial representation to preserve temporal, spectral, and spatial information. To overcome the limitations of conventional feature-concatenation or fixed-weight fusion approaches, a self-learning weight fusion mechanism is introduced at the decision level to adaptively adjust modality contributions. Results: The facial analysis branch achieved average accuracies of 74.1% on FER2013, 99.69% on CK+, and 98.05% (valence)/96.15% (arousal) on DEAP. On the DEAP dataset, the complete multimodal model reached 98.66% accuracy for valence and 97.49% for arousal classification. Conclusions: The proposed framework enhances emotion recognition by improving facial feature extraction and enabling adaptive multimodal fusion, demonstrating the effectiveness of combining EEG and facial information for robust emotion analysis. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
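Decision-level fusion with learnable modality weights, as opposed to fixed-weight averaging, can be sketched as a softmax-normalized convex combination of each branch's class probabilities. The logits and probabilities below are illustrative; in the full model the weight logits would be trained jointly with the network rather than set by hand:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(p_eeg, p_face, weight_logits):
    """Decision-level fusion: convex combination of the two branches'
    class probabilities; softmax keeps the weights positive and summing to 1."""
    w_eeg, w_face = softmax(weight_logits)
    return [w_eeg * a + w_face * b for a, b in zip(p_eeg, p_face)]

# Hypothetical branch outputs for a 2-class (e.g., high/low valence) decision
p_eeg  = [0.7, 0.3]   # EEG branch class probabilities
p_face = [0.4, 0.6]   # facial-expression branch class probabilities
fused = fuse(p_eeg, p_face, weight_logits=[0.5, -0.5])  # EEG weighted more
```

Because the weights enter through a differentiable softmax, a cross-entropy loss on `fused` can backpropagate into `weight_logits`, which is what makes the fusion "self-learning" rather than fixed.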
