Search Results (756)

Search Parameters:
Keywords = kinematic features

24 pages, 4042 KB  
Article
Memory Cueing and Augmented Sensory Feedback in Virtual Reality as an Assistive Technology for Enhancing Hand Motor Performance
by Zachary Marvin, Sophie Dewil, Yu Shi, Noam Y. Harel and Raviraj Nataraj
Technologies 2026, 14(4), 217; https://doi.org/10.3390/technologies14040217 - 8 Apr 2026
Abstract
Neurological injuries and disorders affecting hand motor control can severely impair the ability to perform activities of daily living and substantially reduce quality of life. Technologies such as virtual reality (VR) are increasingly used to address fundamental challenges in therapy, including motivation and engagement; further, programmable features of digital interfaces offer additional opportunities to personalize and optimize motor training. In this proof-of-concept study, we developed and evaluated a novel VR-based training framework to support improved dexterity and hand function using physiological (sensory-driven) and cognitive (memory) cues designed to promote greater task-relevant neural engagement. The proposed approach leverages the integration of augmented sensory feedback (ASF) with memory-anchored cues for motor learning of target hand gestures. Using a within-subjects design, thirteen neurotypical adults completed four training conditions: (1) control (baseline gesture-matching in VR), (2) visual ASF (enhanced visualization and feedback of gesture accuracy), (3) memory-anchored cues (associating gestures with semantically meaningful entities, loosely analogous to American Sign Language), and (4) hybrid multimodal (visual ASF + memory-anchored cues). Training with the hybrid condition produced the fastest skill acquisition (9.3 trials to reach an 80% accuracy threshold) and the steepest initial learning slope (1.86 ± 0.12%/trial), with all conditions differing significantly in initial slope (all p < 0.002). Post-training assessment showed that the hybrid condition achieved the highest gesture accuracy (95.2%), greatest normalized post-training accuracy gain (14.3% above baseline), fastest execution time to target gesture (1.14 s), and lowest variability in gestural kinematics (SD = 3.9%). 
Both ASF and memory-anchored cue conditions each also independently outperformed the control condition on gesture accuracy (both p ≤ 0.002), with omnibus ANOVAs indicating significant condition effects across metrics. Together, these findings suggest that pairing ASF cues with memory-based cognitive scaffolding can yield additive benefits for motor skill acquisition and stability. Pending validation in clinical populations, such approaches may inform the design of VR-based motor training frameworks for rehabilitation. Full article

49 pages, 675 KB  
Review
Automated Assembly of Large-Scale Aerospace Components: A Structured Narrative Survey of Emerging Technologies
by Kuai Zhou, Wenmin Chu, Peng Zhao, Xiaoxu Ji and Lulu Huang
Sensors 2026, 26(8), 2294; https://doi.org/10.3390/s26082294 - 8 Apr 2026
Abstract
Large-scale aerospace components (e.g., wings, fuselage sections, wing boxes, and rocket segments) feature large dimensions, low stiffness, complex interfaces, and strict assembly tolerances. Traditional rigid tooling and manual alignment struggle to meet the demands of high precision, efficiency, and flexibility in modern aerospace manufacturing. This paper presents a structured literature review on the automated assembly of large-scale aerospace components, summarizing advances in three core domains: pose adjustment and positioning mechanisms, digital measurement technologies, and trajectory planning and control. Particular emphasis is placed on two cross-cutting themes: measurement uncertainty analysis and flexible assembly, which are critical for high-quality docking. The review classifies pose adjustment mechanisms into four categories (NC positioners, parallel kinematic machines, industrial robots, and novel mechanisms) and digital measurement into five branches (vision metrology, large-scale metrology, measurement field construction, uncertainty analysis, and auxiliary techniques). It also outlines five trajectory planning and control routes, covering traditional methods, multi-sensor fusion, digital twins, flexible assembly, and emerging intelligent approaches. The analysis reveals that current research suffers from fragmentation among mechanism design, metrology, and control, with insufficient integration of uncertainty propagation and flexible deformation modeling. Future systems will rely on heterogeneous equipment collaboration, uncertainty-aware closed-loop control, high-fidelity flexible modeling, and digital twin-driven decision-making. This review provides a unified framework and a technical reference for developing reliable, flexible, and scalable automated assembly systems for next-generation aerospace structures. Full article
(This article belongs to the Section Sensors and Robotics)

25 pages, 11063 KB  
Article
Tac-Mamba: A Pose-Guided Cross-Modal State Space Model with Trust-Aware Gating for mmWave Radar Human Activity Recognition
by Haiyi Wu, Kai Zhao, Wei Yao and Yong Xiong
Electronics 2026, 15(7), 1535; https://doi.org/10.3390/electronics15071535 - 7 Apr 2026
Abstract
Millimeter-wave (mmWave) radar point clouds offer a privacy-preserving solution for Human Activity Recognition (HAR), but their inherent sparsity and noise limit single-modal performance. While multimodal fusion mitigates this issue, existing methods often suffer from severe negative transfer during visual degradation and incur high computational costs, unsuitable for edge devices. To address these challenges, we propose Tac-Mamba, a lightweight cross-modal state space model. First, we introduce a topology-guided distillation scheme that uses a Spatial Mamba teacher to extract structural priors from visual skeletons. These priors are then explicitly distilled into a Point Transformer v3 (PTv3) radar student with a modality dropout strategy. We also developed a Trust-Aware Cross-Modal Attention (TACMA) module to prevent negative transfer. It evaluates the reliability of visual features through a SiLU-activated cross-modal bilinear interaction, smoothly degrading to a pure radar-driven fallback projection when visual inputs are corrupted. Finally, a Lightweight Temporal Mamba Block (LTMB) with a Zero-Parameter Cross-Gating (ZPCG) mechanism captures long-range kinematic dependencies with linear complexity. Experiments on the public MM-Fi dataset under strict cross-environment protocols demonstrate that Tac-Mamba achieves competitive accuracies of 95.37% (multimodal) and 87.54% (radar-only) with only 0.86M parameters and 1.89 ms inference latency. These results highlight the model’s exceptional robustness to modality missingness and its feasibility for edge deployment. Full article
(This article belongs to the Section Artificial Intelligence)

17 pages, 2678 KB  
Article
A Novel Workflow to Estimate Limb Orientation from Wearable Sensors to Monitor Infant Motor Development
by David Song, William J. Kaiser, Sitaram Vangala and Rujuta B. Wilson
Sensors 2026, 26(7), 2274; https://doi.org/10.3390/s26072274 - 7 Apr 2026
Abstract
Background: Wearable sensors have gained increasing popularity as an objective method for remotely monitoring infant movement in naturalistic settings. Over the first year of life, infants generate a wide range of motions, from goal-directed to spontaneous movement. These include linear movements, such as kicks, and orientation changes, such as postural transitions. Many sensor processing pipelines emphasize capturing linear movements through movement-generated acceleration while focusing less on information about orientation embedded in the gravitational part of the data. Here, we introduce a complementary gravity-referenced approach that extracts the gravitational component of accelerometer signals to estimate limb orientation, extending the reliable quantification of rich and detailed aspects of infant movement. Infant orientation has demonstrated clinical relevance, including associations with later neuromotor outcomes, and it can be used to chart infant motor development, motivating the development of objective methods to quantify orientation from sensor data. Methods: Wearable sensors (Opal APDM) were used to longitudinally evaluate infant motor activity recorded in sessions conducted at 3, 6, 9, and 12 months of age. We extracted data from a 5 min segment that has simultaneous video recordings. From these datasets, applying the gravity-referenced method, we computed pitch, roll, and yaw, angles that collectively describe limb orientation. We then quantified orientation variability using axis-specific circular standard deviations (SDs) for pitch, roll, and yaw and a multi-axis composite measure based on generalized variance. Results: Axis-specific circular SDs for pitch, roll, and yaw, as well as the composite generalized variance, increased significantly from 3 to 12 months (p ≤ 0.01 for each metric). Composite variability was strongly associated with Mullen gross motor outcomes at 9 and 12 months of age (r = 0.55, p < 0.001). 
Conclusions: Overall, gravity-referenced pitch, roll, and yaw provide rich orientation features that increased as infants develop more postural transitions. Furthermore, the orientation features correlated with standardized measures of infant motor function. These orientation metrics can complement traditional linear kinematic measures and improve our ability to granularly track infant motor development in the first year of life. Full article
(This article belongs to the Section Wearables)
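The gravity-referenced pipeline this abstract describes, low-pass filtering the accelerometer signal to isolate the gravitational component, converting it to orientation angles, then summarizing variability with circular statistics, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the filter order, cutoff frequency, and synthetic data are assumptions, and yaw is omitted because gravity alone cannot observe rotation about the vertical axis (the paper presumably uses additional sensor channels for that).

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import circstd

def gravity_orientation(accel, fs, cutoff_hz=0.5):
    """Estimate pitch and roll from the gravitational component of a
    3-axis accelerometer signal (N x 3, m/s^2). The gravity estimate is
    obtained with a low-pass filter; order and cutoff are illustrative."""
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    g = filtfilt(b, a, accel, axis=0)                # gravity estimate
    gx, gy, gz = g[:, 0], g[:, 1], g[:, 2]
    pitch = np.arctan2(-gx, np.sqrt(gy**2 + gz**2))  # tilt about the y-axis
    roll = np.arctan2(gy, gz)                        # tilt about the x-axis
    return pitch, roll

# Synthetic "limb at rest" data: gravity on the z-axis plus sensor noise.
rng = np.random.default_rng(0)
accel = rng.normal(loc=[0.0, 0.0, 9.81], scale=0.3, size=(1000, 3))
pitch, roll = gravity_orientation(accel, fs=100.0)

# Axis-specific circular standard deviation as an orientation-variability metric.
pitch_sd = circstd(pitch)
roll_sd = circstd(roll)
```

With a still sensor, both circular SDs come out near zero; applied to longitudinal infant recordings, the same statistics would grow as postural transitions become more frequent, which is the trend the study reports.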

21 pages, 5751 KB  
Article
A Hybrid VMD-Transformer-BiLSTM Framework with Cross-Attention Fusion for Aileron Fault Diagnosis in UAVs
by Yang Song, Weihang Zheng, Xiaoyu Zhang and Rong Guo
Sensors 2026, 26(7), 2256; https://doi.org/10.3390/s26072256 - 6 Apr 2026
Abstract
Aileron fault diagnosis in fixed-wing unmanned aerial vehicles (UAVs) faces significant challenges due to strong noise, multi-modal coupling, and limited fault samples. This paper presents a hybrid fault diagnosis framework that integrates variational mode decomposition (VMD) with a cross-attention-based feature fusion mechanism. First, residual signals are generated from UAV kinematic models and decomposed into multi-scale intrinsic mode functions (IMFs) using VMD to extract multiscale frequency-localized features. An integrated framework is then constructed, where Transformer encoders capture the global features and bidirectional long short-term memory (BiLSTM) networks extract local temporal dynamics. To effectively combine these complementary features, a cross-attention fusion module is designed to focus on the discriminative time-frequency features. Furthermore, a hybrid pooling strategy integrating max pooling and attention pooling is introduced to enhance classification robustness. Experiments on the AirLab failure and anomaly (ALFA) dataset demonstrate that the proposed method achieves 95.12% accuracy with improved fault separability, outperforming VMD + BiLSTM (87.66%), VMD + Transformer (86.89%), Transformer + BiLSTM (84.83%), Transformer (72.24%), CNN + LSTM (94.05%), and HDMTL (94.86%). Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

24 pages, 17819 KB  
Article
GT-TD3: A Kinematics-Aware Graph-Transformer Framework for Stable Trajectory Tracking of High-Degree-of-Freedom (DOF) Manipulators
by Hanwen Miao, Haoran Hou, Zhaopeng Zhu, Zheng Chao and Rui Zhang
Machines 2026, 14(4), 397; https://doi.org/10.3390/machines14040397 - 5 Apr 2026
Abstract
Accurate trajectory tracking of redundant manipulators is difficult because the controller must simultaneously model local couplings between adjacent joints and global dependencies across the whole kinematic chain. Existing reinforcement learning methods typically employ multilayer perceptrons, which do not explicitly exploit manipulator structure and therefore show limited stability and representation ability in high-dimensional continuous control tasks. This paper proposes GT-TD3, a Graph Transformer-enhanced-Twin Delayed Deep Deterministic Policy Gradient framework, for redundant manipulator trajectory tracking. The proposed actor first converts the raw system state into joint-level node features and uses a graph neural network to extract local kinematic coupling information. A Transformer is then employed to capture long-range dependencies among joints. To strengthen the use of structural priors, topology- and distance-related bias terms are incorporated into the attention mechanism, enabling the network to encode manipulator structure during global feature learning. Experiments on a 7-DoF KUKA iiwa manipulator in PyBullet demonstrate that GT-TD3 outperforms MLP, pure GNN, and pure Transformer baselines in tracking performance. The proposed method achieves more stable training, faster convergence, and smoother and more accurate end-effector motion. The results show that the integration of local graph modeling and structure-aware global attention provides an effective solution for high-precision trajectory tracking of redundant manipulators. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

28 pages, 5258 KB  
Article
Dual-View Entropy-Driven AIS–Sonar Fusion for Surface and Underwater Target Discrimination
by Xiaoshuang Zhang, Jiayi Che, Xiaodan Xiong, Yucheng Zhang, Xinbo He, Mengsha Deng and Dezhi Wang
J. Mar. Sci. Eng. 2026, 14(7), 675; https://doi.org/10.3390/jmse14070675 - 4 Apr 2026
Abstract
Distinguishing surface targets from underwater targets in complex marine environments is challenging when relying solely on physical sonar features. To address the high uncertainty inherent in single-modal features and the conflicts arising from heterogeneous data, we propose a Dual-View Entropy-Driven Negation Dempster–Shafer (DVE-NDS) fusion method that integrates AIS kinematic priors with passive sonar signals. First, a heterogeneous recognition framework is constructed. LOFAR and DEMON features are extracted via convolutional neural networks (CNNs), while a Negation Basic Probability Assignment (Negation BPA) strategy is introduced to transform AIS spatiotemporal mismatches into effective "negation support" for non-cooperative underwater targets. Instead of relying on a single conflict coefficient, the proposed method jointly considers evidence self-information and inter-source consistency. Evidence quality is quantified using improved Deng entropy and negation belief entropy, while mutual trust is evaluated via the Jousselme distance. Heterogeneous evidence is weighted and corrected by generated coupling weights, effectively suppressing low-quality evidence and sharpening decision boundaries. Simulation results confirm that DVE-NDS improves macro-F1 over classical fusion, indicating the framework’s potential for handling conflicting evidence, though the current validation remains simulation-based and should be regarded as a methodological proof-of-concept. Full article
(This article belongs to the Special Issue Emerging Computational Methods in Intelligent Marine Vehicles)

34 pages, 3911 KB  
Article
PAD-Guided Multimodal Hybrid Contrastive Emotion Recognition upon STEM-E2VA Dataset
by Shufei Duan, Wenjie Zhang, Liangqi Li, Ting Zhu, Fangyu Zhao, Fujiang Li and Huizhi Liang
Multimodal Technol. Interact. 2026, 10(4), 38; https://doi.org/10.3390/mti10040038 - 2 Apr 2026
Abstract
There are still challenges in speech emotion recognition, as the representation capability of single-modal information is limited, there are difficulties in capturing continuous emotional transitions in discrete emotion annotations, and the issues of modal structural differences and cross-sample alignment in multimodal fusion methods persist. To address these, this study undertakes work from both data and model perspectives. For data, a Chinese multimodal database STEM-E2VA was constructed, synchronously collecting four modalities of data: articulatory kinematics, acoustics, glottal signals, and videos. This covers seven discrete emotion categories and employs PAD continuous annotation. By integrating discrete and continuous dimensional annotations, it better represents the distinction between strong and weak emotions under the same discrete emotion label. Concurrently, to process the biases in PAD annotations, we employed the SCL-90 psychological questionnaire to analyze annotators’ cognitive and emotional perceptions, thereby ensuring data reliability. For model, this paper proposes a multimodal supervised contrastive fusion network incorporating PAD perception. It employs a PAD-enhanced hybrid contrastive loss function to optimize intra-modal and inter-modal feature alignment. Utilizing a cross-attention mechanism combined with a GRU–Transformer network for temporal feature extraction, it achieves deep fusion of multimodal information, reducing inter-modal discrepancies and cross-class confusion. Experiments demonstrate that the proposed method achieves 85.47% accuracy in discrete sentiment recognition on STEM-E2VA, with a substantial reduction in RMSE for PAD dimension prediction. It also exhibits excellent generalization capability on IEMOCAP, providing a novel framework for integrating discrete and continuous sentiment representations. Full article

30 pages, 17575 KB  
Article
Optimal Cooperative Guidance Algorithm for Active Defense of EWA Under Dual Fighter Escort
by Yali Yang, Jiajin Li, Xiaoping Wang and Guorong Huang
Mathematics 2026, 14(7), 1187; https://doi.org/10.3390/math14071187 - 2 Apr 2026
Abstract
This paper investigates an optimal cooperative guidance strategy for the active defense of an early-warning aircraft (EWA) escorted by two fighters against an incoming missile. The proposed framework extends classical three-body defense models (Target–Missile–Interceptor) into a more realistic four-body engagement (Target–Missile–Interceptor 1–Interceptor 2), allowing explicit coordination among multiple defenders. By projecting the 3D engagement kinematics onto two orthogonal 2D planes—a validated simplification for typical aerial combat geometries—a tractable dynamic model is obtained. Within this model, an analytical cooperative guidance law is derived using optimal control theory and the calculus of variations, minimizing a multi-objective cost function that combines miss distance, control effort, intercept geometry, and coordination terms. Extensive Monte Carlo simulations across 23 attack directions and multiple initial ranges demonstrate that the proposed method achieves an interception success rate of 99%, with an average miss distance below 5 m. Robustness tests further confirm stable performance under target maneuver uncertainty, sensor noise, and modeling deviations. The algorithm features closed-form control commands with low computational complexity, enabling real-time onboard implementation. Full article
(This article belongs to the Section E2: Control Theory and Mechanics)

20 pages, 5717 KB  
Article
An Improved YOLOv10 and DeepSORT Algorithm for Pedestrian Detection and Tracking in Crowd Navigation
by Shihang Hu and Changyong Li
Algorithms 2026, 19(4), 274; https://doi.org/10.3390/a19040274 - 1 Apr 2026
Abstract
In indoor crowd navigation, quickly and accurately acquiring the kinematic data of pedestrians within a robot’s field of view is a crucial factor determining success. Existing indoor pedestrian tracking methods have limitations in accuracy and real-time performance. To address these issues, a lightweight pedestrian tracking method based on an improved YOLOv10s and DeepSORT is proposed. In the detection stage, a CPNGhostNetV2 module incorporating Ghost Convolution and attention mechanisms is first designed to replace the original C2f module in YOLOv10s. This achieves a lightweight design while effectively preserving global feature information. Secondly, the GSConv module is introduced to further reduce computational load and model parameters. Finally, the Focal Loss function is introduced to enhance the detection capability of the YOLOv10s model in dense scenes. In the tracking stage, a novel trajectory management mechanism is proposed to reduce the ID-switching problem under occlusion conditions. The experimental results show that the improved YOLOv10s reduces computational complexity by 33.9% and parameters by 17.4% compared to the original model. It also improves mAP@50 by 0.6%. The improved DeepSORT algorithm achieves a 7.0% increase in MOTA, a 1.4% increase in MOTP, and a 24.8% reduction in ID-switch counts compared to the original YOLOv10-DeepSORT. It outperforms traditional algorithms in terms of accuracy, real-time performance, and computational efficiency, demonstrating promising application prospects. Full article

22 pages, 5412 KB  
Article
Design and Verification of 6-DOF Robotic Arm for Captive Trajectory System Applications in Wind Tunnel
by Sadia Sadiq, Muhammad Umer Sohail, Muhammad Wasim, Farooq Kifayat Ullah and Zeashan Khan
Automation 2026, 7(2), 58; https://doi.org/10.3390/automation7020058 - 1 Apr 2026
Abstract
Accurate prediction of store trajectories at the point of release from an unmanned/manned aircraft is an essential requirement for safety and precision. A Captive Trajectory System (CTS) is a well-known feature of wind-tunnel testing used to simulate the dynamics of store separation. To accurately replicate real-world aerodynamic conditions based on measured forces and moments, it utilizes a six-degree-of-freedom (6-DOF) robotic arm controlled by a closed-loop control system that solves the store’s equations of motion. In this study, a wing–pylon–store configuration is used as a sample case, and published experimental trajectories are used as input. A 6-DOF robotic arm named ROBO-S is designed to follow these trajectories in a CTS setup. The kinematic analysis of ROBO-S is then performed. The Denavit–Hartenberg (DH) method is used for the calculation of forward kinematics, whereas geometric techniques are used for inverse kinematics calculations. A simulation of kinematic analysis is performed in MATLAB R2021a. The mechanical design of ROBO-S is carried out in PTC CREO 9.0. MATLAB simulations confirm that the robotic arm can follow the trajectory obtained from published experimental results. To demonstrate the feasibility of the design, the robotic arm is fabricated using 3D printing. The results demonstrate the potential of the developed system in accurately following trajectories for wind-tunnel testing applications. Full article
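The Denavit–Hartenberg forward-kinematics step mentioned in this abstract follows a standard recipe: build one homogeneous transform per link from its DH parameters, then chain them into a base-to-end-effector pose. A minimal sketch of that recipe is below; the two-link planar arm and its link lengths are hypothetical, not ROBO-S's actual DH table.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows, joint_angles):
    """Chain per-link transforms; dh_rows holds (d, a, alpha) per joint."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_rows, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 2-link planar arm: link lengths 0.5 m and 0.3 m.
dh = [(0.0, 0.5, 0.0), (0.0, 0.3, 0.0)]
T = forward_kinematics(dh, [np.pi / 2, -np.pi / 2])
end_effector_xyz = T[:3, 3]   # -> (0.3, 0.5, 0.0) for these angles
```

For a 6-DOF arm like the one in the paper, `dh` would simply hold six rows; the inverse-kinematics side, solved geometrically by the authors, is not sketched here.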

36 pages, 2965 KB  
Article
Fourier-Encoded Plücker Line Fields for Globally Bounded Inverse Velocity Mapping of Axisymmetric Parallel Mechanisms
by Yinghao Yuan and Jiang Liu
Machines 2026, 14(4), 370; https://doi.org/10.3390/machines14040370 - 27 Mar 2026
Abstract
To address inverse-velocity amplification and numerical instability of axisymmetric parallel mechanisms near dead-point regions, this paper proposes a low-dimensional feature representation and stable inverse-solving framework based on Fourier-encoded Plücker line fields. The limb axes are first represented by normalized Plücker line vectors, and the discrete rod-axis set is lifted to a circumferential continuous line field. A compact feature vector composed of first-order Fourier coefficients is then constructed, from which the continuous feature coefficients and the corresponding feature Jacobian are derived in closed form. Under constant-length constraints, feasible sensitivity and worst-case gain are introduced to characterize local inverse amplification, and a weighted damped KKT inverse solver is formulated to obtain globally bounded inverse solutions for feature velocities. Numerical results show that, in the ideal axisymmetric model, higher-order harmonics remain at numerical-residual levels and the first-order truncation stays dominant, while the most unfavorable amplification location is governed by the trough of feasible sensitivity. For fully reachable targets, the proposed solver reduces the peak generalized velocity by about 4.32%. For targets containing unreachable components, the damped KKT inverse introduces only a small additional residual while keeping the velocity bounded. Additional tests under mild geometric perturbations show that non-ideal errors mainly affect low-order fitting accuracy and higher-order spectral leakage, whereas the peak worst-case gain and the peak-shaving ratio remain largely stable. These results demonstrate that the proposed framework provides a unified description for inverse velocity mapping of axisymmetric parallel mechanisms with analytical interpretability, global boundedness, and robustness under mild geometric imperfections. Full article
(This article belongs to the Special Issue Mechanical Design of Parallel Manipulators)

14 pages, 5398 KB  
Article
MLISB-RTK: Machine Learning Based on Inter-System Biases to Improve the Performance of RTK in Complex Environments
by Ruwei Zhang, Wenhao Zhao, Xiaowei Shao and Mingzhe Li
Sensors 2026, 26(7), 2080; https://doi.org/10.3390/s26072080 - 27 Mar 2026
Abstract
In challenging environments, there often exist problems of false alarms and missed detections in real-time kinematic (RTK) ambiguity resolution, which significantly reduce the reliability and availability of position information. To address these issues, a machine-learning method is proposed to conduct a correctness check on RTK ambiguity fixing, aiming to reduce the occurrences of false alarms and missed detections. The inter-system differential RTK model is adopted. Compared with the traditional RTK model, this model can provide an effective feature, namely the differential inter-system biases (DISB), to improve the accuracy of machine-learning classification. This is because when the RTK ambiguity is correctly fixed, the DISB usually appears as a stable constant. In addition to DISB, features that are strongly related to ambiguity fixing, such as the ratio value, DOP value, and residuals, are also comprehensively utilized. This method is verified by an open-source, large-scale, and diverse GNSS/SINS dataset—SmartPNT-POS. The experimental results show that, compared with the traditional method of relying solely on the empirical ratio value for ambiguity fixing verification, the missed detection probability of this method is reduced by 2%, the false-alarm probability is decreased by 29%, and the positioning accuracy is improved by approximately 7%. Moreover, compared with other features, the DISB feature provides the highest contribution rate in the machine-learning classification model. Full article
(This article belongs to the Section Navigation and Positioning)
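The DISB-stability idea from this abstract can be illustrated with a simplified, rule-based stand-in: the paper trains a classifier on these features, whereas the sketch below simply gates a fix on DISB stability plus the classical quality metrics. The function name and all thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ambiguity_fix_check(disb_window, ratio, pdop, resid_rms,
                        disb_std_max=0.05, ratio_min=2.0,
                        pdop_max=4.0, resid_max=0.03):
    """Accept an ambiguity fix only if the recent DISB estimates are
    stable (approximately constant, per the paper's observation) AND
    the classical metrics (ratio test, DOP, residual RMS) are in bounds.
    All thresholds are hypothetical."""
    disb_stable = float(np.std(disb_window)) < disb_std_max
    return bool(disb_stable
                and ratio >= ratio_min
                and pdop <= pdop_max
                and resid_rms <= resid_max)

# A stable DISB window passes; a wandering one is rejected even though
# the ratio test alone would have accepted it.
print(ambiguity_fix_check([0.31, 0.30, 0.31, 0.30],
                          ratio=3.5, pdop=1.8, resid_rms=0.01))  # True
print(ambiguity_fix_check([0.10, 0.40, -0.20, 0.30],
                          ratio=3.5, pdop=1.8, resid_rms=0.01))  # False
```

In a trained classifier, DISB stability would enter as one feature among the ratio, DOP, and residual features rather than as a hard threshold; the hard-threshold form is used here only to keep the example self-contained.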
19 pages, 4281 KB  
Article
Effect of Front and Rear Walls on Granular Flow Characteristics During Silo Discharge
by Yiyang Hu, Yingyi Chen, Xiaodong Yang, Hui Guo, Yan Gao, Chang Su and Xiaoxing Liu
Processes 2026, 14(7), 1062; https://doi.org/10.3390/pr14071062 - 26 Mar 2026
Abstract
This work investigated the influence of thickness-direction boundary conditions on the flow characteristics of granular material in a quasi-two-dimensional silo using the discrete element method (DEM). Two types of boundary conditions were considered in the thickness direction: wall conditions and periodic boundary conditions. The simulation results indicate that under wall conditions, velocity waves propagate upward, manifested as bubble-like sub-flow zones in the velocity field, and particle motion in the upper bed region exhibits a clear stick–slip feature. In contrast, under periodic boundary conditions, particle motion displays a resonant mode. Further statistical analysis reveals that, despite the distinct macroscopic motion modes under the two boundary conditions, the probability distributions of the particles' vertical fluctuating velocities share similar characteristics: both are fat-tailed and asymmetric and deviate from a Gaussian distribution. Additionally, under wall conditions, the horizontal distributions of particle vertical velocity conform to the kinematic model throughout the bed, whereas under periodic boundary conditions, the horizontal distributions in the upper bed region display plug-flow characteristics. In summary, these results demonstrate that thickness-direction boundary conditions play a crucial role in determining the flow characteristics of granular assemblies in silos.
(This article belongs to the Special Issue Discrete Element Method (DEM) and Its Engineering Applications)
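The fat-tail and asymmetry test applied to the vertical fluctuating velocities can be reproduced with standard moment statistics (skewness and excess kurtosis, both zero for a Gaussian). The synthetic Laplace sample below merely stands in for DEM velocity data; it is not from the paper.

```python
import numpy as np

def skewness(v):
    """Third standardized moment; 0 for a symmetric distribution."""
    d = np.asarray(v, float) - np.mean(v)
    return float(np.mean(d**3) / np.mean(d**2) ** 1.5)

def excess_kurtosis(v):
    """Fourth standardized moment minus 3; 0 for a Gaussian,
    positive for fat-tailed distributions."""
    d = np.asarray(v, float) - np.mean(v)
    return float(np.mean(d**4) / np.mean(d**2) ** 2 - 3.0)

rng = np.random.default_rng(0)
gaussian = rng.normal(size=100_000)      # reference fluctuations
fat_tailed = rng.laplace(size=100_000)   # Laplace: excess kurtosis = 3

print(excess_kurtosis(gaussian))    # near 0
print(excess_kurtosis(fat_tailed))  # near 3 -> clearly non-Gaussian
```

A distribution of fluctuating velocities with significantly positive excess kurtosis, as reported for both boundary conditions, indicates that large velocity fluctuations occur far more often than a Gaussian model would predict.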
19 pages, 34223 KB  
Article
A Real-Time Multi-Modal Computer Vision Framework for Automated Autism Spectrum Disorder Screening
by Lehel Dénes-Fazakas, Ioan Catalin Mateas, Alexandru George Berciu, László Szilágyi, Levente Kovács and Eva-H. Dulf
Electronics 2026, 15(6), 1287; https://doi.org/10.3390/electronics15061287 - 19 Mar 2026
Abstract
Background: Early detection of autism spectrum disorder (ASD) is imperative for improving long-term developmental outcomes. Conventional screening, however, depends on time-consuming, expert-driven behavioral assessments with limited scalability. Automated video-based analysis offers a noninvasive, objective way to extract behavioral biomarkers from naturalistic recordings. Methods: A modular multimodal framework was developed that integrates motion-based video analysis and facial feature extraction for ASD versus typically developing (TD) classification. The system processes RGB videos, skeleton/stickman representations, and motion trajectory streams. A comprehensive set of kinematic features was extracted, including joint trajectories, velocity and acceleration profiles, posture variability, movement smoothness, and bilateral asymmetry. Repetitive stereotypical behaviors were characterized by frequency-domain analysis (FFT) within the 0.3–7.0 Hz band. Facial expression features derived from normalized face crops and landmark-based morphological descriptors were integrated as complementary modalities. Feature-level fusion was performed after z-score normalization, and classification used a Random Forest model with stratified 5-fold cross-validation. GPU acceleration enabled near-real-time inference. Results: The motion-based ComplexVideos pipeline achieved a cross-validated accuracy of 94.2 ± 2.1% with an area under the ROC curve (AUC) of 0.93. Skeleton-based KinectStickman inputs showed moderate performance (60–80% accuracy), while facial-only models reached approximately 60%. Multimodal feature fusion improved classification robustness and reduced false negatives, surpassing single-modality models. Mean inference time remained below one second per video frame under standard operating conditions. Conclusions: These results demonstrate that integrating multimodal cues, including motion and facial features, enables effective and efficient video-based ASD screening. The proposed framework offers a scalable, extensible, and computationally efficient solution that can support early screening in clinical and remote assessment settings.
(This article belongs to the Special Issue Computer Vision and Machine Learning for Biometric Systems)
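The frequency-domain step described in this abstract, detecting repetitive movement via FFT power in the 0.3–7.0 Hz band, can be sketched as a band-power-fraction feature. The frame rate, signal, and function name below are assumptions for illustration, not details from the paper.

```python
import numpy as np

def band_power_fraction(signal, fs, f_lo=0.3, f_hi=7.0):
    """Fraction of (non-DC) spectral power inside [f_lo, f_hi] Hz.
    High values flag periodic, repetitive movement in that band."""
    x = np.asarray(signal, float) - np.mean(signal)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    total = power[1:].sum()                     # exclude the DC bin
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(power[band].sum() / total) if total > 0 else 0.0

fs = 30.0                          # assumed video frame rate, Hz
t = np.arange(0, 10, 1.0 / fs)
repetitive = np.sin(2 * np.pi * 2.0 * t)   # 2 Hz rhythmic joint motion
noise = np.random.default_rng(1).normal(size=t.size)

print(band_power_fraction(repetitive, fs))  # close to 1.0
print(band_power_fraction(noise, fs))       # well below 1.0
```

In a full pipeline of this kind, such a scalar would be computed per joint-velocity channel, z-score normalized, and concatenated with the other kinematic and facial features before classification.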