Search Results (3,162)

Search Parameters:
Keywords = human motion

25 pages, 1688 KB  
Article
A Data-Driven Framework for Modeling Car-Following Behavior Using Conditional Transfer Entropy and Dynamic Mode Decomposition
by Poorendra Ramlall and Subhradeep Roy
Appl. Sci. 2025, 15(17), 9700; https://doi.org/10.3390/app15179700 - 3 Sep 2025
Abstract
Accurate modeling of car-following behavior is essential for understanding traffic dynamics and enabling predictive control in intelligent transportation systems. This study presents a novel data-driven framework that combines information-theoretic input selection via conditional transfer entropy (CTE) with dynamic mode decomposition with control (DMDc) for identifying and forecasting car-following dynamics. In the first step, CTE is employed to identify the specific vehicles that exert directional influence on a given subject vehicle, thereby systematically determining the relevant control inputs for modeling its behavior. In the second step, DMDc is applied to estimate and predict the dynamics by reconstructing the closed-form expression of the dynamical system governing the subject vehicle’s motion. Unlike conventional machine learning models that typically seek a single generalized representation across all drivers, our framework develops individualized models that explicitly preserve driver heterogeneity. Using both synthetic data from multiple traffic models and real-world naturalistic driving datasets, we demonstrate that DMDc accurately captures nonlinear vehicle interactions and achieves high-fidelity short-term predictions. Analysis of the estimated system matrices reveals that DMDc naturally approximates kinematic relationships, further reinforcing its interpretability. Importantly, this is the first study to apply DMDc to model and predict car-following behavior using real-world driving data. The proposed framework offers a computationally efficient and interpretable tool for traffic behavior analysis, with potential applications in adaptive traffic control, autonomous vehicle planning, and human-driver modeling. Full article
(This article belongs to the Section Transportation and Future Mobility)
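The DMDc step described in the abstract has a compact core: stack state and input snapshots and recover the system matrices by least squares. The sketch below is ours, not the paper's implementation — the two-state "follower" system, its matrices, and the random input signal are invented stand-ins for real car-following data.

```python
import numpy as np

# Dynamic mode decomposition with control (DMDc), minimal form:
# snapshots obey x_{k+1} = A x_k + B u_k, and [A B] is recovered by
# least squares over the stacked snapshot matrix Omega = [X; U].
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])   # hypothetical follower dynamics
B_true = np.array([[0.0], [0.2]])             # hypothetical lead-vehicle influence

x = np.zeros((2, 51))
u = rng.normal(size=(1, 50))                  # lead-vehicle input signal
for k in range(50):
    x[:, k + 1] = A_true @ x[:, k] + B_true[:, 0] * u[0, k]

X, Xp = x[:, :-1], x[:, 1:]
Omega = np.vstack([X, u])                     # stacked state/input snapshots
AB = Xp @ np.linalg.pinv(Omega)               # least-squares DMDc fit
A_est, B_est = AB[:, :2], AB[:, 2:]
```

On noiseless linear data the fit is exact; real trajectories would call for rank truncation via the SVD before the pseudoinverse.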

23 pages, 3668 KB  
Article
Graph-Driven Micro-Expression Rendering with Emotionally Diverse Expressions for Lifelike Digital Humans
by Lei Fang, Fan Yang, Yichen Lin, Jing Zhang and Mincheol Whang
Biomimetics 2025, 10(9), 587; https://doi.org/10.3390/biomimetics10090587 - 3 Sep 2025
Abstract
Micro-expressions, characterized by brief and subtle facial muscle movements, are essential for conveying nuanced emotions in digital humans, yet existing rendering techniques often produce rigid or emotionally monotonous animations due to the inadequate modeling of temporal dynamics and action unit interdependencies. This paper proposes a graph-driven framework for micro-expression rendering that generates emotionally diverse and lifelike expressions. We employ a 3D-ResNet-18 backbone network to perform joint spatio-temporal feature extraction from facial video sequences, enhancing sensitivity to transient motion cues. Action units (AUs) are modeled as nodes in a symmetric graph, with edge weights derived from empirical co-occurrence probabilities and processed via a graph convolutional network to capture structural dependencies and symmetric interactions. This symmetry is justified by the inherent bilateral nature of human facial anatomy, where AU relationships are based on co-occurrence and facial anatomy analysis (as per the FACS), which are typically undirected and symmetric. Human faces are symmetric, and such relationships align with the design of classic spectral GCNs for undirected graphs, assuming that adjacency matrices are symmetric to model non-directional co-occurrences effectively. Predicted AU activations and timestamps are interpolated into continuous motion curves using B-spline functions and mapped to skeletal controls within a real-time animation pipeline (Unreal Engine). Experiments on the CASME II dataset demonstrate superior performance, achieving an F1-score of 77.93% and an accuracy of 84.80% (k-fold cross-validation, k = 5), outperforming baselines in temporal segmentation. Subjective evaluations confirm that the rendered digital human exhibits improvements in perceptual clarity, naturalness, and realism. 
This approach bridges micro-expression recognition and high-fidelity facial animation, enabling more expressive virtual interactions through curve extraction from AU values and timestamps. Full article
(This article belongs to the Section Bioinspired Sensorics, Information Processing and Control)
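The symmetric-adjacency argument above can be made concrete with a single spectral GCN layer: self-loops are added, the adjacency is symmetrically normalized, and features are propagated. The three-node AU graph, its co-occurrence weights, and the feature dimensions below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# One spectral GCN layer over a symmetric AU co-occurrence graph:
# H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
A = np.array([[0.0, 0.6, 0.1],
              [0.6, 0.0, 0.4],
              [0.1, 0.4, 0.0]])          # undirected co-occurrence weights
assert np.allclose(A, A.T)               # symmetry, as the paper assumes

A_hat = A + np.eye(3)                    # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt # symmetric normalization

rng = np.random.default_rng(1)
H = rng.normal(size=(3, 4))              # per-AU input features
W = rng.normal(size=(4, 2))              # learnable layer weights
H_out = np.maximum(A_norm @ H @ W, 0.0)  # ReLU activation
```

The normalized adjacency stays symmetric, which is exactly the property the classic spectral formulation relies on.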

11 pages, 420 KB  
Article
Sensor-Agnostic, LSTM-Based Human Motion Prediction Using sEMG Data
by Bon Ho Koo, Ho Chit Siu and Lonnie G. Petersen
Sensors 2025, 25(17), 5474; https://doi.org/10.3390/s25175474 - 3 Sep 2025
Abstract
The use of surface electromyography (sEMG) for conventional motion classification and prediction has had limitations due to sensor hardware differences. With the popularization of deep learning-based approaches to the application of motion prediction, this study explores the effects that different hardware sensor platforms have on the performance of a deep learning neural network trained to predict the one-degree-of-freedom (DoF) angular trajectory of a human. Two different sEMG sensor platforms were used to collect raw data from subjects conducting exercises, which was used to train a neural network designed to predict the future angular trajectory of the arm. The results show that the raw data originating from different sensor hardware with different configurations (including the communication method, data acquisition unit (DAQ) usage, electrode configuration, buffering method, preprocessing method, and experimental variables like the sampling frequency) produced bi-LSTM networks that performed similarly. This points to the hardware-agnostic nature of such deep learning networks. Full article
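One way to picture the training setup: raw sEMG is cut into fixed-length windows, each paired with a joint angle some horizon ahead — the input/target shape a bi-LSTM would consume. The window length, horizon, sampling rate, and toy signals below are our assumptions, not the study's protocol.

```python
import numpy as np

# Cut overlapping sEMG windows paired with a future joint-angle target.
def make_windows(signal, angle, win=200, horizon=50, step=50):
    xs, ys = [], []
    for start in range(0, len(signal) - win - horizon, step):
        xs.append(signal[start:start + win])
        ys.append(angle[start + win + horizon])  # angle `horizon` samples ahead
    return np.array(xs), np.array(ys)

t = np.linspace(0, 10, 2000)                     # hypothetical 200 Hz timeline
emg = np.abs(np.sin(2 * np.pi * 1.0 * t))        # toy rectified sEMG envelope
angle = 30 * np.sin(2 * np.pi * 0.5 * t)         # toy elbow angle (degrees)

X, y = make_windows(emg, angle)
```

Because different sensor platforms only change how `signal` is produced, the downstream network sees identically shaped tensors — one intuition for the hardware-agnostic result.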

22 pages, 3983 KB  
Article
System Integration of Multi-Source Wearable Sensors for Non-Invasive Blood Lactate Estimation: A Data Fusion Approach
by Jingjie Wu, Zhixuan Chen and Lixin Sun
Processes 2025, 13(9), 2810; https://doi.org/10.3390/pr13092810 - 2 Sep 2025
Abstract
Blood lactate (BLa) concentration is a pivotal biomarker of exercise intensity and physiological stress, which provides insights into athletic performance and recovery. However, traditional lactate measurement requires invasive blood sampling, which presents significant limitations, including procedural discomfort, infection risks, and impracticality for continuous monitoring. Though non-invasive measurements of BLa concentration have emerged, most rely on a single physiological indicator like heart rate and sweat rate, and their accuracy and reliability remain limited. To address these limitations, this study proposes an innovative multi-sensor fusion framework for non-invasive estimation of BLa. By leveraging the inherent multisystem and multidimensional coordination of human physiology during exercise, the framework integrates a range of physiological signals (e.g., heart rate variability and respiratory entropy) and biomechanical signals (e.g., motion data). We proposed a stacking ensemble model that leverages the complementary strengths of these signals and achieved exceptional predictive performance with near-perfect correlation (R2 = 0.9661) while maintaining high precision (MAE = 0.1816 mmol/L) and robustness (RMSE = 0.5891 mmol/L). Furthermore, the model’s exceptional capability extends to blood lactate threshold detection with 98.15% classification accuracy, which is a critical metric for training intensity optimization. This approach provides a robust, non-invasive solution for continuous exercise intensity monitoring, demonstrating significant potential for optimizing athletic performance through real-time physiological assessment and data-driven training modulation. Full article
(This article belongs to the Section AI-Enabled Process Engineering)
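A stacking ensemble of the kind the abstract describes can be sketched in a few lines: base models each produce a lactate estimate, and a linear meta-model is fit on their predictions. The features, base models, and synthetic target below are invented for illustration only.

```python
import numpy as np

# Stacking sketch: two crude base regressors, then a least-squares meta-model.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))              # fused physiological/biomechanical features
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.2    # synthetic lactate target (mmol/L)

pred_a = X @ np.array([1.2, 0.0, 0.0])     # hypothetical base model A
pred_b = X @ np.array([0.0, -0.9, 0.1])    # hypothetical base model B
P = np.column_stack([pred_a, pred_b, np.ones(len(y))])

w, *_ = np.linalg.lstsq(P, y, rcond=None)  # meta-learner weights
y_hat = P @ w
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

In practice the meta-model would be trained on out-of-fold base predictions to avoid leakage; that bookkeeping is omitted here.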

34 pages, 1965 KB  
Article
Smartphone-Based Markerless Motion Capture for Accessible Rehabilitation: A Computer Vision Study
by Bruno Cunha, José Maçães and Ivone Amorim
Sensors 2025, 25(17), 5428; https://doi.org/10.3390/s25175428 - 2 Sep 2025
Abstract
Physical rehabilitation is crucial for injury recovery, offering pain relief and faster healing. However, traditional methods rely heavily on in-person professional feedback, which can be time-consuming, expensive, and prone to human error, limiting accessibility and effectiveness. As a result, patients are often encouraged to perform exercises at home; however, due to the lack of professional guidance, motivation dwindles and adherence becomes a challenge. To address this, this paper proposes a smartphone-based solution that enables patients to receive exercise feedback independently. This paper reviews current Computer Vision systems for assessing rehabilitation exercises and introduces an intelligent system designed to assist patients in their recovery. Our proposed system uses motion tracking based on Computer Vision, analyzing videos recorded with a smartphone. With accessibility as a priority, the system is evaluated against the advanced Qualysis Motion Capture System using a dataset labeled by expert physicians. The framework focuses on human pose detection and movement quality assessment, aiming to reduce recovery times, minimize human error, and make rehabilitation more accessible. This proof-of-concept study was conducted as a pilot evaluation involving 15 participants, consistent with earlier work in the field, and serves to assess feasibility before scaling to larger datasets. This innovative approach has the potential to transform rehabilitation, providing accurate feedback and support to patients without the need for in-person supervision or specialized equipment. Full article
(This article belongs to the Special Issue Feature Papers in Biomedical Sensors 2025)
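One building block of video-based exercise feedback is turning pose-estimator keypoints into joint angles. The sketch below computes the angle at a middle keypoint (e.g. hip–knee–ankle); the coordinates are illustrative, not study data.

```python
import numpy as np

# Angle (degrees) at keypoint b formed by the segments b->a and b->c.
def joint_angle(a, b, c):
    u, v = np.asarray(a) - b, np.asarray(c) - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

straight = joint_angle([0, 2], [0, 1], [0, 0])  # collinear keypoints -> 180 deg
bent = joint_angle([1, 1], [0, 0], [1, 0])      # 45 deg between segments
```

Comparing such angles frame-by-frame against a reference recording is one simple route to automated movement-quality scoring.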

23 pages, 2203 KB  
Review
Gait Analysis in Multiple Sclerosis: A Scoping Review of Advanced Technologies for Adaptive Rehabilitation and Health Promotion
by Anna Tsiakiri, Spyridon Plakias, Georgios Giarmatzis, Georgia Tsakni, Foteini Christidi, Marianna Papadopoulou, Daphne Bakalidou, Konstantinos Vadikolias, Nikolaos Aggelousis and Pinelopi Vlotinou
Biomechanics 2025, 5(3), 65; https://doi.org/10.3390/biomechanics5030065 - 2 Sep 2025
Abstract
Background/Objectives: Multiple sclerosis (MS) often leads to gait impairments, even in early stages, and can affect autonomy and quality of life. Traditional assessment methods, while widely used, have been criticized because they lack sensitivity to subtle gait changes. This scoping review aims to map the landscape of advanced gait analysis technologies—both wearable and non-wearable—and evaluate their application in detecting, characterizing, and monitoring possible gait dysfunction in individuals with MS. Methods: A systematic search was conducted across PubMed and Scopus databases for peer-reviewed studies published in the last decade. Inclusion criteria focused on original human research using technological tools for gait assessment in individuals with MS. Data from 113 eligible studies were extracted and categorized based on gait parameters, technologies used, study design, and clinical relevance. Results: Findings highlight a growing integration of advanced technologies such as inertial measurement units, 3D motion capture, pressure insoles, and smartphone-based tools. Studies primarily focused on spatiotemporal parameters, joint kinematics, gait variability, and coordination, with many reporting strong correlations to MS subtype, disability level, fatigue, fall risk, and cognitive load. Real-world and dual-task assessments emerged as key methodologies for detecting subtle motor and cognitive-motor impairments. Digital gait biomarkers, such as stride regularity, asymmetry, and dynamic stability, demonstrated high potential for early detection and monitoring. Conclusions: Advanced gait analysis technologies can provide a multidimensional, sensitive, and ecologically valid approach to evaluating and detecting motor function in MS. Their clinical integration supports personalized rehabilitation, early diagnosis, and long-term disease monitoring. Future research should focus on standardizing metrics, validating digital biomarkers, and leveraging AI-driven analytics for real-time, patient-centered care. Full article
(This article belongs to the Section Gait and Posture Biomechanics)
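Stride regularity, one of the digital biomarkers named above, is commonly computed as the normalized autocorrelation of a trunk-acceleration signal evaluated at the stride period. The sketch below is a generic illustration with a synthetic signal, not a method from any reviewed study.

```python
import numpy as np

# Stride regularity as the normalized autocorrelation at the stride lag.
rng = np.random.default_rng(3)
fs, stride_period = 100, 1.0                  # Hz, seconds (hypothetical)
t = np.arange(0, 20, 1 / fs)
accel = np.sin(2 * np.pi * t / stride_period) + 0.1 * rng.normal(size=t.size)

x = accel - accel.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]
ac /= ac[0]                                   # normalize so lag 0 == 1
lag = int(stride_period * fs)
stride_regularity = ac[lag]                   # near 1 for highly regular gait
```

Asymmetry indices follow the same pattern, comparing the autocorrelation at the step lag (half a stride) with that at the stride lag.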

17 pages, 3688 KB  
Article
Feature-Based Modeling of Subject-Specific Lower Limb Skeletons from Medical Images
by Sentong Wang, Itsuki Fujita, Koun Yamauchi and Kazunori Hase
Biomechanics 2025, 5(3), 63; https://doi.org/10.3390/biomechanics5030063 - 1 Sep 2025
Abstract
Background/Objectives: In recent years, 3D shape models of the human body have been used for various purposes. In principle, CT and MRI tomographic images are necessary to create such models. However, CT imaging and MRI generally impose heavy physical and financial burdens on the person being imaged, the model creator, and the hospital where the imaging facility is located. To reduce these burdens, the purpose of this study was to propose a method of creating individually adapted models by using simple X-ray images, which provide relatively little information and can therefore be easily acquired, and by transforming an existing base model. Methods: From medical images, anatomical feature values and scanning feature values that use the points that compose the contour line that can represent the shape of the femoral knee joint area were acquired, and deformed by free-form deformation. Free-form deformations were automatically performed to match the feature values using optimization calculations based on the confidence region method. The accuracy of the deformed model was evaluated by the distance between surfaces of the deformed model and the node points of the reference model. Results: Deformation and evaluation were performed for 13 cases, with a mean error of 1.54 mm and a maximum error of 12.88 mm. In addition, the deformation using scanning feature points was more accurate than the deformation using anatomical feature points. Conclusions: This method is useful because it requires only the acquisition of feature points from two medical images to create the model, and overall average accuracy is considered acceptable for applications in biomechanical modeling and motion analysis. Full article
(This article belongs to the Section Injury Biomechanics and Rehabilitation)

26 pages, 10383 KB  
Review
Flexible and Wearable Tactile Sensors for Intelligent Interfaces
by Xu Cui, Wei Zhang, Menghui Lv, Tianci Huang, Jianguo Xi and Zuqing Yuan
Materials 2025, 18(17), 4010; https://doi.org/10.3390/ma18174010 - 27 Aug 2025
Abstract
Rapid developments in intelligent interfaces across service, healthcare, and industry have led to unprecedented demands for advanced tactile perception systems. Traditional tactile sensors often struggle with adaptability on curved surfaces and lack sufficient feedback for delicate interactions. Flexible and wearable tactile sensors are emerging as a revolutionary solution, driven by innovations in flexible electronics and micro-engineered materials. This paper reviews recent advancements in flexible tactile sensors, focusing on their mechanisms, multifunctional performance and applications in health monitoring, human–machine interactions, and robotics. The first section outlines the primary transduction mechanisms of piezoresistive (resistance changes), capacitive (capacitance changes), piezoelectric (piezoelectric effect), and triboelectric (contact electrification) sensors while examining material selection strategies for performance optimization. Next, we explore the structural design of multifunctional flexible tactile sensors and highlight potential applications in motion detection and wearable systems. Finally, a detailed discussion covers specific applications of these sensors in health monitoring, human–machine interactions, and robotics. This review examines their promising prospects across various fields, including medical care, virtual reality, precision agriculture, and ocean monitoring. Full article
(This article belongs to the Special Issue Advances in Flexible Electronics and Electronic Devices)

15 pages, 968 KB  
Article
Validity of AI-Driven Markerless Motion Capture for Spatiotemporal Gait Analysis in Stroke Survivors
by Balsam J. Alammari, Brandon Schoenwether, Zachary Ripic, Neva Kirk-Sanchez, Moataz Eltoukhy and Lauri Bishop
Sensors 2025, 25(17), 5315; https://doi.org/10.3390/s25175315 - 27 Aug 2025
Abstract
Gait recovery after stroke is a primary goal of rehabilitation; it is therefore imperative to develop technologies that accurately identify gait impairments after stroke. Markerless motion capture (MMC) is an emerging technology that has been validated in healthy individuals. Our study aims to evaluate the validity of MMC against an instrumented walkway system (IWS) commonly used to evaluate gait in stroke survivors. Nineteen participants performed three comfortable speed (CS) and three fastest speed (FS) walking trials simultaneously recorded with the IWS and the MMC system, KinaTrax (Human Version 8.2, KinaTrax Inc., Boca Raton, FL, USA). Pearson’s correlation coefficient and intraclass correlation coefficient (ICC (3,1), 95% CI) were used to evaluate the agreement and consistency between systems. Furthermore, Bland–Altman plots were used to estimate bias and Limits of Agreement (LoA). For both CS and FS, agreements between MMC and IWS were good to excellent in all parameters except for non-paretic single-limb support time (SLS), which revealed moderate agreement during CS. Additionally, stride width and paretic SLS showed poor agreement in both conditions. Biases eliminated systematic errors, with variable LoAs in all parameters during both conditions. Findings indicated high validity of MMC in measuring spatiotemporal gait parameters in stroke survivors. Further validity work is warranted. Full article
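The Bland–Altman statistics used in this study reduce to two quantities: the bias (mean difference between systems) and the 95% limits of agreement. The sketch below uses made-up step lengths, not study data.

```python
import numpy as np

# Bland-Altman bias and 95% limits of agreement between two gait systems.
iws = np.array([0.42, 0.55, 0.61, 0.47, 0.52, 0.58])  # instrumented walkway (m)
mmc = np.array([0.44, 0.54, 0.63, 0.46, 0.53, 0.60])  # markerless capture (m)

diff = mmc - iws
bias = diff.mean()                      # systematic offset between systems
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
```

Plotting `diff` against the pairwise means, with horizontal lines at `bias`, `loa_low`, and `loa_high`, reproduces the familiar Bland–Altman figure.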

25 pages, 4202 KB  
Article
Real-Time Paddle Stroke Classification and Wireless Monitoring in Open Water Using Wearable Inertial Nodes
by Vladut-Alexandru Dobra, Ionut-Marian Dobra and Silviu Folea
Sensors 2025, 25(17), 5307; https://doi.org/10.3390/s25175307 - 26 Aug 2025
Abstract
This study presents a low-cost wearable system for monitoring and classifying paddle strokes in open-water environments. Building upon our previous work in controlled aquatic and dryland settings, the proposed system consists of ESP32-based embedded nodes equipped with MPU6050 accelerometer–gyroscope sensors. These nodes communicate via the ESP-NOW protocol in a master–slave architecture. With minimal hardware modifications, the system implements gesture classification using Dynamic Time Warping (DTW) to distinguish between left and right paddle strokes. The collected data, including stroke type, count, and motion similarity, are transmitted in real time to a local interface for visualization. Field experiments were conducted on a calm lake using a paddleboard, where users performed a series of alternating strokes. In addition to gesture recognition, the study includes empirical testing of ESP-NOW communication range in the open lake environment. The results demonstrate reliable wireless communication over distances exceeding 100 m with minimal packet loss, confirming the suitability of ESP-NOW for low-latency data transfer in open-water conditions. The system achieved over 80% accuracy in stroke classification and sustained more than 3 h of operational battery life. This approach demonstrates the feasibility of real-time, wearable-based motion tracking for water sports in natural environments, with potential applications in kayaking, rowing, and aquatic training systems. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition: 3rd Edition)
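Dynamic Time Warping, the matcher named in the abstract, aligns two sequences of different lengths by minimizing cumulative pointwise cost. The implementation below is a plain textbook version, and the gyro-like stroke templates are invented for illustration.

```python
import numpy as np

# Classic O(n*m) dynamic time warping distance between 1-D sequences.
def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

left_template = [0, 1, 2, 1, 0]     # hypothetical left-stroke gyro trace
right_template = [0, -1, -2, -1, 0]
sample = [0, 1, 1, 2, 1, 0]         # time-warped left-like stroke

label = "left" if dtw(sample, left_template) < dtw(sample, right_template) else "right"
```

Classifying a stroke then amounts to a nearest-template search, which is cheap enough to run on an ESP32-class microcontroller.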

18 pages, 2756 KB  
Article
Triboelectric-Enhanced Piezoelectric Nanogenerator with Pressure-Processed Multi-Electrospun Fiber-Based Polymeric Layer for Wearable and Flexible Electronics
by Inkyum Kim, Jonghyeon Yun, Geunchul Kim and Daewon Kim
Polymers 2025, 17(17), 2295; https://doi.org/10.3390/polym17172295 - 25 Aug 2025
Abstract
A triboelectricity-enhanced piezoelectric nanogenerator (PENG) based on pressure-processed multi-electrospun polymeric layers is herein developed for efficient vibrational energy harvesting. The hybridization of piezoelectric and triboelectric mechanisms through electrospinning has been utilized to enhance electrical output by increasing contact areas and promoting alignment within piezoelectric materials. A multi-layer structure comprising alternating poly (vinylidene fluoride) (PVDF) and poly (hexamethylene adipamide) (PA 6/6) exhibits superior electrical performance. A lateral Janus configuration, providing distinct positive and negative triboelectric polarities, has further optimized device efficiency. This approach introduces a novel operational mechanism, enabling superior performance compared to conventional methods. The fiber-based architecture ensures exceptional flexibility, low weight, and a high surface-to-volume ratio, enabling enhanced energy harvesting. Experimentally, the PENG achieved an open-circuit voltage of 14.59 V, a short-circuit current of 205.7 nA, and a power density of 7.5 mW m−2 at a resistance of 30 MΩ with a five-layer structure subjected to post-processing under pressure. A theoretical model has mathematically elucidated the output results. Long-term durability (over 345,600 cycles) has confirmed its robustness. Demonstrations of practical applications include monitoring human joint motion and respiratory activity. These results highlight the potential of the proposed triboelectricity-enhanced PENG for vibrational energy harvesting in flexible and wearable electronic systems. Full article
(This article belongs to the Special Issue Advances in Polymer Composites for Nanogenerator Applications)

26 pages, 2959 KB  
Article
A Non-Invasive Gait-Based Screening Approach for Parkinson’s Disease Using Time-Series Analysis
by Hui Chen, Tee Connie, Vincent Wei Sheng Tan, Michael Kah Ong Goh, Nor Izzati Saedon, Ahmad Al-Khatib and Mahmoud Farfoura
Symmetry 2025, 17(9), 1385; https://doi.org/10.3390/sym17091385 - 25 Aug 2025
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder that severely impacts motor function, necessitating early detection for effective management. However, current diagnostic methods are expensive and resource-intensive, limiting their accessibility. This study proposes a non-invasive, gait-based screening approach for PD using time-series analysis of video-derived motion data. Gait patterns indicative of PD are analyzed using videos containing walking sequences of PD subjects. The video data are processed via computer vision and human pose estimation techniques to extract key body points. Classification is performed using K-Nearest Neighbors (KNN) and Long Short-Term Memory (LSTM) networks in conjunction with time-series techniques, including Dynamic Time Warping (DTW), Bag of Patterns (BoP), and Symbolic Aggregate Approximation (SAX). KNN classifies based on similarity measures derived from these methods, while LSTM captures complex temporal dependencies. Additionally, Shapelet-based Classification is independently explored for its ability to serve as a self-contained classifier by extracting discriminative motion patterns. On a self-collected dataset (43 instances: 8 PD and 35 healthy), DTW-based classification achieved 88.89% accuracy for both KNN and LSTM. On an external dataset (294 instances: 150 healthy and 144 PD with varying severity), KNN and LSTM achieved 71.19% and 57.63% accuracy, respectively. The proposed approach enhances PD detection through a cost-effective, non-invasive methodology, supporting early diagnosis and disease monitoring. By integrating machine learning with clinical insights, this study demonstrates the potential of AI-driven solutions in advancing PD screening and management. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Image Processing and Computer Vision)
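Of the time-series techniques listed, Symbolic Aggregate Approximation (SAX) is the most compact to sketch: z-normalize, average over fixed segments, then map each segment mean to a letter via Gaussian breakpoints. The segment count, alphabet size, and input signal below are our choices, not the paper's settings.

```python
import numpy as np

# SAX: z-normalize, piecewise-aggregate, map means to symbols.
def sax(series, n_segments=8, alphabet="abcd"):
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()
    paa = x.reshape(n_segments, -1).mean(axis=1)   # piecewise aggregate approximation
    # Breakpoints splitting N(0,1) into four equiprobable regions.
    breakpoints = np.array([-0.6745, 0.0, 0.6745])
    return "".join(alphabet[np.searchsorted(breakpoints, v)] for v in paa)

word = sax(np.sin(np.linspace(0, 2 * np.pi, 64)))  # gait-like periodic signal
```

The resulting symbolic word feeds directly into bag-of-patterns counts or string-distance measures for the KNN classifier.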

26 pages, 30652 KB  
Article
Hybrid ViT-RetinaNet with Explainable Ensemble Learning for Fine-Grained Vehicle Damage Classification
by Ananya Saha, Mahir Afser Pavel, Md Fahim Shahoriar Titu, Afifa Zain Apurba and Riasat Khan
Vehicles 2025, 7(3), 89; https://doi.org/10.3390/vehicles7030089 - 25 Aug 2025
Abstract
Efficient and explainable vehicle damage inspection is essential due to the increasing complexity and volume of vehicular incidents. Traditional manual inspection approaches are not time-effective, prone to human error, and lead to inefficiencies in insurance claims and repair workflows. Existing deep learning methods, such as CNNs, often struggle with generalization, require large annotated datasets, and lack interpretability. This study presents a robust and interpretable deep learning framework for vehicle damage classification, integrating Vision Transformers (ViTs) and ensemble detection strategies. The proposed architecture employs a RetinaNet backbone with a ViT-enhanced detection head, implemented in PyTorch using the Detectron2 object detection technique. It is pretrained on COCO weights and fine-tuned through focal loss and aggressive augmentation techniques to improve generalization under real-world damage variability. The proposed system applies the Weighted Box Fusion (WBF) ensemble strategy to refine detection outputs from multiple models, offering improved spatial precision. To ensure interpretability and transparency, we adopt numerous explainability techniques—Grad-CAM, Grad-CAM++, and SHAP—offering semantic and visual insights into model decisions. A custom vehicle damage dataset with 4500 images has been built, consisting of approximately 60% curated images collected through targeted web scraping and crawling covering various damage types (such as bumper dents, panel scratches, and frontal impacts), along with 40% COCO dataset images to support model generalization. Comparative evaluations show that Hybrid ViT-RetinaNet achieves superior performance with an F1-score of 84.6%, mAP of 87.2%, and 22 FPS inference speed. In an ablation analysis, WBF, augmentation, transfer learning, and focal loss significantly improve performance, with focal loss increasing F1 by 6.3% for underrepresented classes and COCO pretraining boosting mAP by 8.7%. 
Additional architectural comparisons demonstrate that the full hybrid configuration not only maintains competitive accuracy but also reaches up to 150 FPS, making it well suited for real-time use cases. Robustness tests under challenging conditions, including real-world visual disturbances (smoke, fire, motion blur, varying lighting, and occlusions) and artificial noise (Gaussian and salt-and-pepper), confirm the model's generalization ability. This work contributes a scalable, explainable, and high-performance solution for real-world vehicle damage diagnostics. Full article
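The focal loss credited above with a 6.3% F1 gain on underrepresented classes works by down-weighting well-classified examples so training concentrates on hard, rare ones. A minimal NumPy sketch of the standard binary form (the function name and the gamma = 2, alpha = 0.25 defaults are the commonly used values, not taken from this paper):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy examples via (1 - pt)^gamma
    so training focuses on hard, underrepresented ones.
    p: predicted probability of the positive class; y: 0/1 label."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)           # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)   # class-balance weight
    return -at * (1 - pt) ** gamma * np.log(pt)
```

With gamma = 2, a confidently correct prediction (pt = 0.9) contributes roughly three orders of magnitude less loss than a badly wrong one (pt = 0.1), which is what shifts gradient signal toward rare damage classes.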
22 pages, 3881 KB  
Article
A Novel Fish Pose Estimation Method Based on Semi-Supervised Temporal Context Network
by Yuanchang Wang, Ming Wang, Jianrong Cao, Chen Wang, Zhen Wu and He Gao
Biomimetics 2025, 10(9), 566; https://doi.org/10.3390/biomimetics10090566 - 25 Aug 2025
Viewed by 417
Abstract
Underwater biomimetic robotic fish are emerging as vital platforms for ocean exploration tasks such as environmental monitoring, biological observation, and seabed investigation, particularly in areas inaccessible to humans. Central to their effectiveness is high-precision fish pose estimation, which enables detailed analysis of swimming [...] Read more.
Underwater biomimetic robotic fish are emerging as vital platforms for ocean exploration tasks such as environmental monitoring, biological observation, and seabed investigation, particularly in areas inaccessible to humans. Central to their effectiveness is high-precision fish pose estimation, which enables detailed analysis of swimming patterns and ecological behavior, while informing the design of agile, efficient bio-inspired robots. To address the widespread scarcity of high-quality motion datasets in this domain, this study presents a custom-built dual-camera experimental platform that captures multi-view sequences of carp exhibiting three representative swimming behaviors—straight swimming, backward swimming, and turning—resulting in a richly annotated dataset. To overcome key limitations in existing pose estimation methods, including heavy reliance on labeled data and inadequate modeling of temporal dependencies, a novel Semi-supervised Temporal Context-Aware Network (STC-Net) is proposed. STC-Net incorporates two innovative unsupervised loss functions—temporal continuity loss and pose plausibility loss—to leverage both annotated and unannotated video frames, and integrates a Bi-directional Convolutional Recurrent Neural Network to model spatio-temporal correlations across adjacent frames. These enhancements are architecturally compatible and computationally efficient, preserving end-to-end trainability. Experimental results on the proposed dataset demonstrate that STC-Net achieves a keypoint detection RMSE of 9.71, providing a robust and scalable solution for biological pose estimation under complex motion scenarios. Full article
(This article belongs to the Special Issue Bionic Robotic Fish: 2nd Edition)
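The temporal continuity idea above can be illustrated in a few lines: because a fish's pose changes smoothly between adjacent video frames, an unsupervised penalty on frame-to-frame keypoint displacement lets unlabeled footage constrain the network. A hypothetical NumPy sketch (STC-Net's exact loss is not given here; the array shapes and the squared-displacement form are assumptions for illustration):

```python
import numpy as np

def temporal_continuity_loss(keypoints):
    """Illustrative temporal continuity penalty: mean squared
    frame-to-frame displacement of predicted keypoints.
    keypoints: array of shape (T, K, 2) -- T frames, K keypoints, (x, y)."""
    diffs = np.diff(keypoints, axis=0)           # (T-1, K, 2) displacements
    return np.mean(np.sum(diffs ** 2, axis=-1))  # mean squared jump per keypoint
```

A smooth trajectory (small inter-frame jumps) incurs a low loss while a physically implausible, jittery prediction is penalized heavily, which is how unannotated frames contribute a training signal in a semi-supervised setting.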
21 pages, 6280 KB  
Article
Advancing Remote Life Sensing for Search and Rescue: A Novel Framework for Precise Vital Signs Detection via Airborne UWB Radar
by Yu Jing, Yili Yan, Zhao Li, Fugui Qi, Tao Lei, Jianqi Wang and Guohua Lu
Sensors 2025, 25(17), 5232; https://doi.org/10.3390/s25175232 - 22 Aug 2025
Viewed by 505
Abstract
Non-contact, bio-radar-based detection of survivors' vital signs to identify their life state is critical for field search and rescue. However, when transportation is interrupted, rescue workers and equipment cannot reach the disaster area promptly. In this paper, [...] Read more.
Non-contact, bio-radar-based detection of survivors' vital signs to identify their life state is critical for field search and rescue. However, when transportation is interrupted, rescue workers and equipment cannot reach the disaster area promptly. In this paper, we report a hovering airborne radar for non-contact vital signs detection that overcomes this challenge. The airborne radar system supports a wireless data link, enabling remote control and communication over distances of up to 3 km. In addition, a novel framework based on blind source separation is proposed for vital signal extraction. First, range migration caused by platform motion is compensated for by envelope alignment. Then, the respiratory waveform of the human target is extracted with the joint approximate diagonalization of eigenmatrices (JADE) algorithm. Finally, the heartbeat signal is recovered by suppressing respiratory harmonics with a feedback notch filter. Field experiment results demonstrate that the proposed method precisely extracts vital signals with strong robustness and adaptability in cluttered environments. The work provides a technical basis for remote, high-resolution vital signs detection to meet the increasing demands of actual rescue applications. Full article
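The harmonic-suppression step above can be sketched with a second-order IIR notch, a common form of feedback notch filter: zeros on the unit circle at the harmonic frequency null it, while nearby poles (the feedback path) keep the stop-band narrow so the weaker heartbeat component passes. A minimal NumPy illustration (the paper's exact filter design is not given here; the frequencies, sampling rate, and pole radius below are assumed values):

```python
import numpy as np

def notch_filter(x, f0, fs, r=0.98):
    """Second-order IIR notch: attenuates a narrow band around f0 Hz,
    e.g. a respiratory harmonic masking the heartbeat signal.
    x: input samples; fs: sampling rate in Hz; r: pole radius (< 1,
    closer to 1 means a narrower notch)."""
    w0 = 2 * np.pi * f0 / fs
    b = [1.0, -2.0 * np.cos(w0), 1.0]        # zeros on the unit circle at f0
    a = [1.0, -2.0 * r * np.cos(w0), r * r]  # poles just inside: feedback path
    y = np.zeros(len(x))
    for n in range(len(x)):                  # Direct Form I difference equation
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    return y
```

For example, notching an assumed 0.3 Hz respiratory component removes it almost entirely in steady state, while a tone at an assumed 1.2 Hz heartbeat frequency passes with near-unity gain.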