Search Results (339)

Search Parameters:
Keywords = human motion prediction

19 pages, 3886 KB  
Article
3D Human Motion Prediction via the Decoupled Spatiotemporal Clue
by Mingrui Xu, Zheming Gu and Erping Li
Electronics 2025, 14(21), 4162; https://doi.org/10.3390/electronics14214162 (registering DOI) - 24 Oct 2025
Abstract
Human motion exhibits high-dimensional and stochastic characteristics, posing significant challenges for modeling and prediction. Existing approaches typically employ coupled spatiotemporal frameworks to generate future poses. However, the intrinsic nonlinearity of joint interactions over time, compounded by high-dimensional noise, often obscures meaningful motion features. Notably, while adjacent joints demonstrate strong spatial correlations, their temporal trajectories frequently remain independent, adding further complexity to modeling efforts. To address these issues, we propose a novel framework for human motion prediction via the decoupled spatiotemporal clue (DSC), which explicitly disentangles and models spatial and temporal dependencies. Specifically, DSC comprises two core components: (i) a spatiotemporal decoupling module that dynamically identifies critical joints and their hierarchical relationships using graph attention combined with separable convolutions for efficient motion decomposition; and (ii) a pose generation module that integrates local motion denoising with global dynamics modeling through a spatiotemporal transformer that independently processes spatial and temporal correlations. Experiments on the widely used human motion datasets H3.6M and AMASS demonstrate the superiority of DSC, which achieves 13% average improvement in long-term prediction over state-of-the-art methods. Full article
(This article belongs to the Collection Computer Vision and Pattern Recognition Techniques)

17 pages, 1734 KB  
Review
Why Humans Prefer Phylogenetically Closer Species: An Evolutionary, Neurocognitive, and Cultural Synthesis
by Antonio Ragusa
Biology 2025, 14(10), 1438; https://doi.org/10.3390/biology14101438 - 18 Oct 2025
Viewed by 181
Abstract
Humans form deep attachments to some nonhuman animals, yet these attachments are unequally distributed across the tree of life. Drawing on evolutionary biology, comparative cognition, neuroscience, and cultural anthropology, this narrative review explains why empathy and affective preference are typically stronger for phylogenetically closer species—especially mammals—than for distant taxa such as reptiles, fish, or arthropods. We synthesize evidence that signal recognizability (faces, gaze, vocal formants, biological motion) and predictive social cognition facilitate mind attribution to mammals; conserved neuroendocrine systems (e.g., oxytocin) further amplify affiliative exchange, particularly in domesticated dyads (e.g., dog–human). Ontogenetic learning and media narratives magnify these effects, while fear modules and disgust shape responses to some distant taxa. Notwithstanding this average gradient, boundary cases—cephalopods, cetaceans, parrots—show that perceived agency, sociality, and communicative transparency can overcome phylogenetic distance. We discuss measurement (behavioral, psychophysiological, neuroimaging), computational accounts in predictive-processing terms, and implications for animal welfare and conservation. Pragmatically, calibrated anthropomorphism, hands-on education, and messaging that highlights agency, parental care, or ecological function reliably broaden concern for under-represented taxa. Recognizing both evolved priors and cultural plasticity enables more equitable and effective science communication and policy. Expanding empathy beyond its ancestral anchors is not only an ethical imperative but a One Health necessity: safeguarding all species means safeguarding the integrity of our shared planetary life. Full article

17 pages, 3783 KB  
Article
A Dual-Task Improved Transformer Framework for Decoding Lower Limb Sit-to-Stand Movement from sEMG and IMU Data
by Xiaoyun Wang, Changhe Zhang, Zidong Yu, Yuan Liu and Chao Deng
Machines 2025, 13(10), 953; https://doi.org/10.3390/machines13100953 - 16 Oct 2025
Viewed by 216
Abstract
Recent advances in exoskeleton-assisted rehabilitation have highlighted the significance of lower limb movement intention recognition through deep learning. However, discrete motion phase classification and continuous real-time joint kinematics estimation are typically handled as independent tasks, leading to temporal misalignment or delayed assistance during dynamic movements. To address this issue, this study presents iTransformer-DTL, a dual-task learning framework with an improved Transformer designed to identify end-to-end locomotion modes and predict joint trajectories during sit-to-stand transitions. Employing a learnable query mechanism and a non-autoregressive decoding approach, the proposed iTransformer-DTL can produce the complete output sequence at once, without relying on any previously generated elements. The proposed framework has been tested with a dataset of lower limb movements involving seven healthy individuals and seven stroke patients. The experimental results indicate that the proposed framework achieves satisfactory performance in dual tasks. An average angle prediction Mean Absolute Error (MAE) of 3.84° and a classification accuracy of 99.42% were obtained in the healthy group, while 4.62° MAE and 99.01% accuracy were achieved in the stroke group. These results suggest that iTransformer-DTL could support adaptable rehabilitation exoskeleton controllers, enhancing human–robot interactions. Full article
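The Mean Absolute Error figure reported above can be illustrated with a minimal sketch (not the authors' code; the joint-angle values below are invented):

```python
# Hypothetical illustration of MAE over predicted joint angles, the
# metric the abstract reports (e.g., 3.84 degrees for the healthy group).
def mean_absolute_error(predicted, actual):
    """MAE in the same units as the inputs (here, degrees)."""
    assert len(predicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

pred = [10.2, 35.7, 61.9, 88.4]   # predicted knee angles (deg), made-up values
true = [12.0, 34.0, 60.0, 90.0]   # measured angles (deg), made-up values
print(round(mean_absolute_error(pred, true), 3))   # → 1.75
```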

16 pages, 5302 KB  
Article
A Parallel Network for Continuous Motion Estimation of Finger Joint Angles with Surface Electromyographic Signals
by Chuang Lin and Shengshuo Zhou
Appl. Sci. 2025, 15(20), 11078; https://doi.org/10.3390/app152011078 - 16 Oct 2025
Viewed by 226
Abstract
The implementation of surface electromyographic (sEMG) signals in the interaction between human beings and machines is an important line of research. In the system of human–machine interaction, continuous-motion-estimation-based control plays an important role because it is more natural and intuitive than pattern recognition-based control. In this paper, we propose a parallel network consisting of a CNN with a multi-head attention mechanism and a BiLSTM (bidirectional long short-term memory) network to improve the accuracy of continuous motion estimation. The proposed network is evaluated in the Ninapro dataset. Six finger movements of 10 subjects were tested in the Ninapro DB2 dataset to evaluate the performance of the neural network and calculate the PCC (Pearson Correlation Coefficient) between the predicted joint angle sequence and the actual joint angle sequence. The experimental results show that the average accuracy (PCC) of the proposed network reaches 0.87 ± 0.02, which is significantly better than that of the BiLSTM network (0.79 ± 0.04, p < 0.05), CNN-Attention (0.80 ± 0.01, p < 0.05), CNN (0.70 ± 0.03, p < 0.05), CNN-BiLSTM (0.83 ± 0.02, p < 0.05), and TCN (0.76 ± 0.05, p < 0.05). It is worth noting that in this work, we extract multiple features from the raw sEMG signals and fuse them. We found that better continuous estimation accuracy can be achieved using multi-feature sEMG data. The model proposed in this paper skillfully integrates the convolutional neural network, multi-head attention mechanism, and bidirectional long short-term memory network, and its performance has good stability and accuracy. The model realizes more natural and accurate human–computer interaction. Full article
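The Pearson Correlation Coefficient used to score the network can be sketched as follows (illustrative only, with made-up angle sequences; not the authors' pipeline):

```python
import math

# Sketch: PCC between a predicted and an actual joint-angle sequence,
# the accuracy measure reported in the abstract (0.87 for the proposed net).
def pcc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

angles_true = [0.0, 15.0, 30.0, 45.0, 30.0, 15.0]  # illustrative flexion cycle
angles_pred = [2.0, 14.0, 29.0, 47.0, 28.0, 16.0]
print(round(pcc(angles_true, angles_pred), 3))
```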

16 pages, 10962 KB  
Article
Exploratory Proof-of-Concept: Predicting the Outcome of Tennis Serves Using Motion Capture and Deep Learning
by Gustav Durlind, Uriel Martinez-Hernandez and Tareq Assaf
Mach. Learn. Knowl. Extr. 2025, 7(4), 118; https://doi.org/10.3390/make7040118 - 14 Oct 2025
Viewed by 395
Abstract
Tennis serves heavily impact match outcomes, yet analysis by coaches is limited by human vision. The design of an automated tennis serve analysis system could facilitate enhanced performance analysis. As serve location and serve success are directly correlated, predicting the outcome of a serve could provide vital information for performance analysis. This article proposes a tennis serve analysis system powered by Machine Learning, which classifies the outcome of serves as “in”, “out” or “net”, and predicts the coordinate outcome of successful serves. Additionally, this work details the collection of three-dimensional spatio-temporal data on tennis serves, using marker-based optoelectronic motion capture. The classification uses a Stacked Bidirectional Long Short-Term Memory architecture, whilst a 3D Convolutional Neural Network architecture is harnessed for serve coordinate prediction. The proposed method achieves 89% accuracy for tennis serve classification, outperforming the current state-of-the-art whilst performing finer-grain classification. The results achieve an accuracy of 63% in predicting the serve coordinates, with a mean absolute error of 0.59 and a root mean squared error of 0.68, exceeding the current state-of-the-art with a new method. The system contributes towards the long-term goal of designing a non-invasive tennis serve analysis system that functions in training and match conditions. Full article

25 pages, 18664 KB  
Article
Study on Lower Limb Motion Intention Recognition Based on PO-SVMD-ResNet-GRU
by Wei Li, Mingsen Wang, Daxue Sun, Zhuoda Jia and Zhengwei Yue
Processes 2025, 13(10), 3252; https://doi.org/10.3390/pr13103252 - 13 Oct 2025
Viewed by 261
Abstract
This study aims to enhance the accuracy of human lower limb motion intention recognition based on surface electromyography (sEMG) signals and proposes a signal denoising method based on Sequential Variational Mode Decomposition (SVMD) optimized by the Parrot Optimization (PO) algorithm and a joint motion angle prediction model combining Residual Network (ResNet) with Gated Recurrent Unit (GRU) for the two aspects of signal processing and predictive modeling, respectively. First, for the two motion conditions of level walking and stair climbing, sEMG signals from the rectus femoris, vastus lateralis, semitendinosus, and biceps femoris, as well as the motion angles of the hip and knee joints, were simultaneously collected from five healthy subjects, yielding a total of 400 gait cycle data points. The sEMG signals were denoised using the method combining PO-SVMD with wavelet thresholding. Compared with denoising methods such as Empirical Mode Decomposition, Partial Ensemble Empirical Mode Decomposition, Independent Component Analysis, and wavelet thresholding alone, the signal-to-noise ratio (SNR) of the proposed method was increased to a maximum of 23.42 dB. Then, the gait cycle information was divided into training and testing sets at a 4:1 ratio, and five models—ResNet-GRU, Transformer-LSTM, CNN-GRU, ResNet, and GRU—were trained and tested individually using the processed sEMG signals as input and the hip and knee joint movement angles as output. Finally, the root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R2) were used as evaluation metrics for the test results. The results show that for both motion conditions, the evaluation metrics of the ResNet-GRU model in the test results are superior to those of the other four models. The optimal evaluation metrics for level walking are 2.512 ± 0.415°, 1.863 ± 0.265°, and 0.979 ± 0.007, respectively, while the optimal evaluation metrics for stair climbing are 2.475 ± 0.442°, 2.012 ± 0.336°, and 0.98 ± 0.009, respectively. The method proposed in this study achieves improvements in both signal processing and predictive modeling, providing a new method for research on lower limb motion intention recognition. Full article
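The SNR figure used to compare the denoising methods can be illustrated with a small sketch (signal values are invented; this is not the paper's code):

```python
import math

# Illustration: signal-to-noise ratio in dB, the metric used above to
# compare denoising methods (the paper reports up to 23.42 dB).
def snr_db(clean, denoised):
    """SNR = 10 * log10(signal power / residual-noise power)."""
    p_signal = sum(s * s for s in clean)
    p_noise = sum((s - d) ** 2 for s, d in zip(clean, denoised))
    return 10 * math.log10(p_signal / p_noise)

clean = [0.0, 1.0, 0.5, -0.5, -1.0, 0.0]          # made-up reference signal
noisy = [0.05, 0.98, 0.46, -0.53, -1.02, 0.04]    # made-up denoised output
print(round(snr_db(clean, noisy), 2))
```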

28 pages, 13934 KB  
Article
Integration of Industrial Internet of Things (IIoT) and Digital Twin Technology for Intelligent Multi-Loop Oil-and-Gas Process Control
by Ali Saleh Allahloh, Mohammad Sarfraz, Atef M. Ghaleb, Abdulmajeed Dabwan, Adeeb A. Ahmed and Adel Al-Shayea
Machines 2025, 13(10), 940; https://doi.org/10.3390/machines13100940 - 13 Oct 2025
Viewed by 469
Abstract
The convergence of Industrial Internet of Things (IIoT) and digital twin technology offers new paradigms for process automation and control. This paper presents an integrated IIoT and digital twin framework for intelligent control of a gas–liquid separation unit with interacting flow, pressure, and differential pressure loops. A comprehensive dynamic model of the three-loop separator process is developed, linearized, and validated. Classical stability analyses using the Routh–Hurwitz criterion and Nyquist plots are employed to ensure stability of the control system. Decentralized multi-loop proportional–integral–derivative (PID) controllers are designed and optimized using the Integral Absolute Error (IAE) performance index. A digital twin of the separator is implemented to run in parallel with the physical process, synchronized via a Kalman filter to real-time sensor data for state estimation and anomaly detection. The digital twin also incorporates structured singular value (μ) analysis to assess robust stability under model uncertainties. The system architecture is realized with low-cost hardware (Arduino Mega 2560, MicroMotion Coriolis flowmeter, pneumatic control valves, DAC104S085 digital-to-analog converter, and ENC28J60 Ethernet module) and software tools (Proteus VSM 8.4 for simulation, VB.Net 2022 version based human–machine interface, and ML.Net 2022 version for predictive analytics). Experimental results demonstrate improved control performance with reduced overshoot and faster settling times, confirming the effectiveness of the IIoT–digital twin integration in handling loop interactions and disturbances. The discussion includes a comparative analysis with conventional control and outlines how advanced strategies such as model predictive control (MPC) can further augment the proposed approach. This work provides a practical pathway for applying IIoT and digital twins to industrial process control, with implications for enhanced autonomy, reliability, and efficiency in oil and gas operations. Full article
(This article belongs to the Special Issue Digital Twins Applications in Manufacturing Optimization)
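The PID-plus-IAE tuning idea named in the abstract can be sketched on a toy loop. Everything below (the first-order plant, gains, setpoint) is invented for illustration and is not the paper's separator model:

```python
# Hedged sketch: a discrete PID controller scored with the Integral
# Absolute Error (IAE) index. Plant, gains, and setpoint are invented.
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.1, steps=200):
    y, integral, prev_err, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u)          # toy first-order plant: dy/dt = -y + u
        prev_err = err
        iae += abs(err) * dt        # IAE accumulates |error| over time
    return iae

# A well-tuned loop should accumulate less absolute error than a sluggish one.
print(simulate_pid(2.0, 1.0, 0.05) < simulate_pid(0.5, 0.1, 0.0))
```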

29 pages, 8202 KB  
Article
Continuous Lower-Limb Joint Angle Prediction Under Body Weight-Supported Training Using AWDF Model
by Li Jin, Liuyi Ling, Zhipeng Yu, Liyu Wei and Yiming Liu
Fractal Fract. 2025, 9(10), 655; https://doi.org/10.3390/fractalfract9100655 - 11 Oct 2025
Viewed by 361
Abstract
Exoskeleton-assisted bodyweight support training (BWST) has demonstrated enhanced neurorehabilitation outcomes in which joint motion prediction serves as the critical foundation for adaptive human–machine interactive control. However, joint angle prediction under dynamic unloading conditions remains unexplored. This study introduces an adaptive wavelet-denoising fusion (AWDF) model to predict lower-limb joint angles during BWST. Utilizing a custom human-tracking bodyweight support system, time series data of surface electromyography (sEMG), and inertial measurement unit (IMU) from ten adults were collected across graded bodyweight support levels (BWSLs) ranging from 0% to 40%. Systematic comparative experiments evaluated joint angle prediction performance among five models: the sEMG-based model, kinematic fusion model, wavelet-enhanced fusion model, late fusion model, and the proposed AWDF model, tested across prediction time horizons of 30–150 ms and BWSL gradients. Experimental results demonstrate that increasing BWSLs prolonged gait cycle duration and modified muscle activation patterns, with a concomitant decrease in the fractal dimension of sEMG signals. Extended prediction time degraded joint angle estimation accuracy, with 90 ms identified as the optimal tradeoff between system latency and prediction advancement. Crucially, this study reveals an enhancement in prediction performance with increased BWSLs. The proposed AWDF model demonstrated robust cross-condition adaptability for hip and knee angle prediction, achieving average root mean square errors (RMSE) of 1.468° and 2.626°, Pearson correlation coefficients (CC) of 0.983 and 0.973, and adjusted R2 values of 0.992 and 0.986, respectively. This work establishes the first computational framework for BWSL-adaptive joint prediction, advancing human–machine interaction in exoskeleton-assisted neurorehabilitation. Full article
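The adjusted R² values reported alongside RMSE and CC can be illustrated as follows (a sketch with invented hip-angle data, not the authors' evaluation code):

```python
# Illustration: coefficient of determination R^2 and its adjusted form,
# one of the three metrics reported above for hip/knee angle prediction.
def r2_adjusted(actual, predicted, n_predictors=1):
    n = len(actual)
    mean_y = sum(actual) / n
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    ss_tot = sum((y - mean_y) ** 2 for y in actual)
    r2 = 1 - ss_res / ss_tot
    # The adjustment penalizes model complexity relative to sample size.
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

hip_true = [5.0, 12.0, 20.0, 27.0, 33.0, 38.0]   # made-up hip angles (deg)
hip_pred = [5.5, 11.0, 20.5, 26.0, 34.0, 37.5]
print(round(r2_adjusted(hip_true, hip_pred), 3))
```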

17 pages, 2421 KB  
Article
Muscle Strength Estimation of Key Muscle–Tendon Units During Human Motion Using ICA-Enhanced sEMG Signals and BP Neural Network Modeling
by Hongyan Liu, Jongchul Park, Junghee Lee and Dandan Wang
Sensors 2025, 25(20), 6273; https://doi.org/10.3390/s25206273 - 10 Oct 2025
Viewed by 320
Abstract
Accurately predicting the muscle strength of key muscle–tendon units during human motion is vital for understanding movement mechanisms, optimizing exercise training, evaluating rehabilitation progress, and advancing prosthetic control technologies. Traditional prediction methods often suffer from low accuracy and high computational complexity. To address these challenges, this study employs independent component analysis (ICA) to predict the muscle strength of tendon units in primary moving parts of the human body. The proposed method had the highest accuracy in localization, at 98% when the sample size was 20. When the sample size was 100, the proposed method had the shortest localization time, with a localization time of 0.025 s. The accuracy of muscle strength prediction based on backpropagation neural network for key muscle–tendon units in human motion was the highest, with an accuracy of 99% when the sample size was 100. The method can effectively optimize the accuracy and efficiency of muscle strength prediction for key muscle–tendon units in human motion and reduce computational complexity. Full article

33 pages, 3660 KB  
Review
Converging Extended Reality and Robotics for Innovation in the Food Industry
by Seongju Woo, Youngjin Kim and Sangoh Kim
AgriEngineering 2025, 7(10), 322; https://doi.org/10.3390/agriengineering7100322 - 1 Oct 2025
Viewed by 881
Abstract
Extended Reality (XR) technologies—including Virtual Reality, Augmented Reality, and Mixed Reality—are increasingly applied in the food industry to simulate sensory environments, support education, and influence consumer behavior, while robotics addresses labor shortages, hygiene, and efficiency in production. This review uniquely synthesizes their convergence through digital twin frameworks, combining XR’s immersive simulations with robotics’ precision and scalability. A systematic literature review and keyword co-occurrence analysis of over 800 titles revealed research clusters around consumer behavior, nutrition education, sensory experience, and system design. In parallel, robotics has expanded beyond traditional pick-and-place tasks into areas such as precision cleaning, chaotic mixing, and digital gastronomy. The integration of XR and robotics offers synergies including risk-free training, predictive task validation, and enhanced human–robot interaction but faces hurdles such as high hardware costs, motion sickness, and usability constraints. Future research should prioritize interoperability, ergonomic design, and cross-disciplinary collaboration to ensure that XR–robotics systems evolve not merely as tools, but as a paradigm shift in redefining the human–food–environment relationship. Full article

24 pages, 18326 KB  
Article
A Human Intention and Motion Prediction Framework for Applications in Human-Centric Digital Twins
by Usman Asad, Azfar Khalid, Waqas Akbar Lughmani, Shummaila Rasheed and Muhammad Mahabat Khan
Biomimetics 2025, 10(10), 656; https://doi.org/10.3390/biomimetics10100656 - 1 Oct 2025
Viewed by 705
Abstract
In manufacturing settings where humans and machines collaborate, understanding and predicting human intention is crucial for enabling the seamless execution of tasks. This knowledge is the basis for creating an intelligent, symbiotic, and collaborative environment. However, current foundation models often fall short in directly anticipating complex tasks and producing contextually appropriate motion. This paper proposes a modular framework that investigates strategies for structuring task knowledge and engineering context-rich prompts to guide Vision–Language Models in understanding and predicting human intention in semi-structured environments. Our evaluation, conducted across three use cases of varying complexity, reveals a critical tradeoff between prediction accuracy and latency. We demonstrate that a Rolling Context Window strategy, which uses a history of frames and the previously predicted state, achieves a strong balance of performance and efficiency. This approach significantly outperforms single-image inputs and computationally expensive in-context learning methods. Furthermore, incorporating egocentric video views yields a substantial 10.7% performance increase in complex tasks. For short-term motion forecasting, we show that the accuracy of joint position estimates is enhanced by using historical pose, gaze data, and in-context examples. Full article
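The Rolling Context Window strategy described above can be sketched as a bounded frame history plus the previously predicted state. The class, prompt layout, and action labels below are all invented for illustration; the paper's actual prompt engineering is not shown here:

```python
from collections import deque

# Hedged sketch of a rolling context window: keep only the last N frame
# descriptions plus the previous prediction when building the next prompt.
class RollingContext:
    def __init__(self, max_frames=4):
        self.frames = deque(maxlen=max_frames)  # oldest frames drop off
        self.last_prediction = "unknown"

    def update(self, frame_description, prediction):
        self.frames.append(frame_description)
        self.last_prediction = prediction

    def build_prompt(self, query):
        history = "; ".join(self.frames)
        return (f"Recent frames: {history}. "
                f"Previous predicted state: {self.last_prediction}. {query}")

ctx = RollingContext(max_frames=2)
ctx.update("worker reaches toward bin", "pick part")
ctx.update("worker grasps part", "transfer part")
ctx.update("worker turns toward fixture", "place part")
print(len(ctx.frames))   # → 2 (the oldest frame has been evicted)
```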

22 pages, 8860 KB  
Article
Generating Multi-View Action Data from a Monocular Camera Video by Fusing Human Mesh Recovery and 3D Scene Reconstruction
by Hyunsu Kim and Yunsik Son
Appl. Sci. 2025, 15(19), 10372; https://doi.org/10.3390/app151910372 - 24 Sep 2025
Viewed by 608
Abstract
Multi-view data, captured from various perspectives, is crucial for training view-invariant human action recognition models, yet its acquisition is hindered by spatio-temporal constraints and high costs. This study aims to develop the Pose Scene EveryWhere (PSEW) framework, which automatically generates temporally consistent, multi-view 3D human action data from a single monocular video. The proposed framework first predicts 3D human parameters from each video frame using a deep learning-based Human Mesh Recovery (HMR) model. Subsequently, it applies tracking, linear interpolation, and Kalman filtering to refine temporal consistency and produce naturalistic motion. The refined human meshes are then reconstructed into a virtual 3D scene by estimating a stable floor plane for alignment, and finally, novel-view videos are rendered using user-defined virtual cameras. As a result, the framework successfully generated multi-view data with realistic, jitter-free motion from a single video input. To assess fidelity to the original motion, we used Root Mean Square Error (RMSE) and Mean Per Joint Position Error (MPJPE) as metrics, achieving low average errors in both 2D (RMSE: 0.172; MPJPE: 0.202) and 3D (RMSE: 0.145; MPJPE: 0.206) space. PSEW provides an efficient, scalable, and low-cost solution that overcomes the limitations of traditional data collection methods, offering a remedy for the scarcity of training data for action recognition models. Full article
(This article belongs to the Special Issue Advanced Technologies Applied for Object Detection and Tracking)
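The MPJPE fidelity metric used above can be illustrated with a minimal sketch (joint coordinates are invented; this is not the PSEW evaluation code):

```python
import math

# Illustration of Mean Per Joint Position Error (MPJPE): the average
# Euclidean distance between corresponding joints of two 3D poses.
def mpjpe(pose_a, pose_b):
    dists = [math.dist(a, b) for a, b in zip(pose_a, pose_b)]
    return sum(dists) / len(dists)

original  = [(0.0, 0.0, 0.0), (0.2, 0.5, 0.1), (0.2, 1.0, 0.1)]  # made-up joints
generated = [(0.0, 0.1, 0.0), (0.25, 0.5, 0.1), (0.2, 1.0, 0.2)]
print(round(mpjpe(original, generated), 3))   # → 0.083
```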

17 pages, 3464 KB  
Article
A Novel Hand Motion Intention Recognition Method That Decodes EMG Signals Based on an Improved LSTM
by Tian-Ao Cao, Hongyou Zhou, Zhengkui Chen, Yiwei Dai, Min Fang, Chengze Wu, Lurong Jiang, Yanyun Dai and Jijun Tong
Symmetry 2025, 17(10), 1587; https://doi.org/10.3390/sym17101587 - 23 Sep 2025
Viewed by 462
Abstract
Electromyography (EMG) signals reflect hand motion intention and exhibit a certain degree of amplitude symmetry. Nowadays, recognition of hand motion intention based on EMG has enriched its burgeoning promotion in various applications, such as rehabilitation, prostheses, and intelligent supply chains. For instance, the motion intentions of humans can be conveyed to logistics equipment, thereby improving the level of intelligence in a supply chain. To enhance the recognition accuracy of multiple hand motion intentions, this paper proposes a hand motion intention recognition method that decodes EMG signals based on improved long short-term memory (LSTM). Firstly, we performed preprocessing and utilized overlapping sliding windows on EMG segments. Secondly, we chose LSTM and improved it so as to capture features and enable prediction of hand motion intention. Specifically, we introduced the optimal key hyperparameter combination in the LSTM model using a genetic algorithm (GA). We found that our proposed method achieved relatively high accuracy in detecting hand motion intention, with average accuracies of 92.0% (five gestures) and 89.7% (seven gestures), while the highest accuracy reached 100.0% (seven gestures). Our paper may provide a way to predict the motion intention of the human hand for intention communication. Full article
(This article belongs to the Section Computer)
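The overlapping sliding-window segmentation mentioned in the abstract can be sketched in a few lines (window length and step are illustrative, not the paper's settings):

```python
# Minimal sketch of overlapping sliding windows over an EMG stream,
# the preprocessing step applied before feature extraction above.
def sliding_windows(signal, window_size, step):
    """Return overlapping windows; step < window_size gives overlap."""
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]

emg = list(range(10))              # stand-in for one EMG channel
wins = sliding_windows(emg, window_size=4, step=2)
print(wins)   # → [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```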

26 pages, 2120 KB  
Article
Continuous Vibration-Driven Virtual Tactile Motion Perception Across Fingertips
by Mehdi Adibi
Sensors 2025, 25(18), 5918; https://doi.org/10.3390/s25185918 - 22 Sep 2025
Viewed by 577
Abstract
Motion perception is a fundamental function of the tactile system, essential for object exploration and manipulation. While human studies have largely focused on discrete or pulsed stimuli with staggered onsets, many natural tactile signals are continuous and rhythmically patterned. Here, we investigate whether phase differences between “simultaneously” presented, “continuous” amplitude-modulated vibrations can induce the perception of motion across fingertips. Participants reliably perceived motion direction at modulation frequencies up to 1 Hz, with discrimination performance systematically dependent on the phase lag between vibrations. Critically, trial-level confidence reports revealed the lowest certainty for anti-phase (180°) conditions, consistent with stimulus ambiguity as predicted by the mathematical framework. I propose two candidate computational mechanisms for tactile motion processing. The first is a conventional cross-correlation computation over the envelopes; the second is a probabilistic model based on the uncertain detection of temporal reference points (e.g., envelope peaks) within threshold-defined windows. This model, despite having only a single parameter (uncertainty width determined by an amplitude discrimination threshold), accounts for both the non-linear shape and asymmetries of observed psychometric functions. These results demonstrate that the human tactile system can extract directional information from distributed phase-coded signals in the absence of spatial displacement, revealing a motion perception mechanism that parallels arthropod systems but potentially arises from distinct perceptual constraints. The findings underscore the feasibility of sparse, phase-coded stimulation as a lightweight and reproducible method for conveying motion cues in wearable, motion-capable haptic devices. Full article
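The first candidate mechanism, cross-correlation over the envelopes, can be sketched as follows. The modulation frequency, phase lag, and direction labels are invented for illustration; this is not the author's model code:

```python
import math

# Sketch: cross-correlate two (mean-removed) amplitude envelopes and take
# the lag with the highest mean product. A positive best lag means
# envelope B trails envelope A, i.e., apparent motion from A toward B.
def best_lag(env_a, env_b, max_lag):
    def mean_corr(lag):
        pairs = [(env_a[i], env_b[i + lag])
                 for i in range(len(env_a)) if 0 <= i + lag < len(env_b)]
        return sum(a * b for a, b in pairs) / len(pairs)
    return max(range(-max_lag, max_lag + 1), key=mean_corr)

t = [0.05 * k for k in range(200)]
phase = 0.5                                  # envelope B lags by 0.5 rad
env_a = [math.sin(x) for x in t]             # mean-removed envelopes
env_b = [math.sin(x - phase) for x in t]
lag = best_lag(env_a, env_b, max_lag=20)
print("A leads B" if lag > 0 else "B leads A")
```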

25 pages, 1689 KB  
Article
A Data-Driven Framework for Modeling Car-Following Behavior Using Conditional Transfer Entropy and Dynamic Mode Decomposition
by Poorendra Ramlall and Subhradeep Roy
Appl. Sci. 2025, 15(17), 9700; https://doi.org/10.3390/app15179700 - 3 Sep 2025
Viewed by 720
Abstract
Accurate modeling of car-following behavior is essential for understanding traffic dynamics and enabling predictive control in intelligent transportation systems. This study presents a novel data-driven framework that combines information-theoretic input selection via conditional transfer entropy (CTE) with dynamic mode decomposition with control (DMDc) for identifying and forecasting car-following dynamics. In the first step, CTE is employed to identify the specific vehicles that exert directional influence on a given subject vehicle, thereby systematically determining the relevant control inputs for modeling its behavior. In the second step, DMDc is applied to estimate and predict the dynamics by reconstructing the closed-form expression of the dynamical system governing the subject vehicle’s motion. Unlike conventional machine learning models that typically seek a single generalized representation across all drivers, our framework develops individualized models that explicitly preserve driver heterogeneity. Using both synthetic data from multiple traffic models and real-world naturalistic driving datasets, we demonstrate that DMDc accurately captures nonlinear vehicle interactions and achieves high-fidelity short-term predictions. Analysis of the estimated system matrices reveals that DMDc naturally approximates kinematic relationships, further reinforcing its interpretability. Importantly, this is the first study to apply DMDc to model and predict car-following behavior using real-world driving data. The proposed framework offers a computationally efficient and interpretable tool for traffic behavior analysis, with potential applications in adaptive traffic control, autonomous vehicle planning, and human-driver modeling. Full article
(This article belongs to the Section Transportation and Future Mobility)
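The core of the DMDc step, fitting a linear map from the current state and control input to the next state, can be shown in one dimension. The coefficients, the use of speeds as state/input, and the data are all invented for illustration; the paper works with full system matrices and real trajectories:

```python
import math

# Toy sketch of the DMDc idea: fit x_{k+1} = a*x_k + b*u_k by least
# squares, with x the follower's speed and u the leader's speed.
a_true, b_true = 0.9, 0.1                             # invented "true" dynamics
u = [10 + 2 * math.sin(0.3 * k) for k in range(50)]   # leader speed input
x = [0.0]
for k in range(49):
    x.append(a_true * x[k] + b_true * u[k])

# Least squares via the 2x2 normal equations.
xs, us, ys = x[:-1], u[:-1], x[1:]
sxx = sum(v * v for v in xs)
suu = sum(v * v for v in us)
sxu = sum(p * q for p, q in zip(xs, us))
sxy = sum(p * q for p, q in zip(xs, ys))
suy = sum(p * q for p, q in zip(us, ys))
det = sxx * suu - sxu * sxu
a_hat = (sxy * suu - sxu * suy) / det
b_hat = (sxx * suy - sxu * sxy) / det
print(round(a_hat, 6), round(b_hat, 6))   # recovers 0.9 and 0.1
```

Because the simulated data are exactly linear and noise-free, the fit recovers the generating coefficients to floating-point precision; real driving data would yield an approximation.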
