Search Results (656)

Search Parameters:
Keywords = robot calibration

19 pages, 10903 KB  
Article
Robot-Driven Calibration and Accuracy Assessment of Meta Quest 3 Inside-Out Tracking Using a TECHMAN TM5-900 Collaborative Robot
by Josep Lopez-Xarbau, Marco Antonio Rodriguez-Fernandez, Marcos Faundez-Zanuy, Jordi Calvo-Sanz and Juan Jose Garcia-Tirado
Sensors 2026, 26(8), 2285; https://doi.org/10.3390/s26082285 - 8 Apr 2026
Abstract
We present a systematic evaluation of the positional and rotational tracking accuracy of the Meta Quest 3 mixed-reality headset using a TECHMAN TM5-900 collaborative robot (±0.05 mm repeatability) as a highly repeatable robot-driven reference. The headset was rigidly attached to the robot’s tool flange and subjected to single-axis translational motions (200 mm along X, Y, and Z) and rotational motions (Roll ± 65°, Pitch ± 85°, and Yaw ± 85°). Each test was repeated three times, and the resulting trajectories were averaged to improve statistical robustness. Both data sources were integrated into a single Python-based application running on the same computer. The headset streamed its data via UDP, while the robot, implemented as an ROS2 node, published its data to the same host. This configuration enabled simultaneous acquisition of both streams, ensuring temporal consistency without the need for offline interpolation. All comparisons were performed in a relative reference frame, thereby avoiding the need for absolute hand–eye calibration. Coordinate-frame alignment was achieved using Singular Value Decomposition (SVD)-based rigid-body Procrustes analysis. Over 2848 synchronized samples spanning 151.46 s, the Meta Quest 3 achieved a mean translational RMSE of 0.346 mm (3D RMSE = 0.621 mm) and a mean rotational RMSE of 0.143°, with Pearson correlation coefficients greater than 0.9999 on all axes. These results show sub-millimeter positional tracking and sub-degree rotational tracking under controlled conditions, supporting the potential of the Meta Quest 3 for precision-oriented mixed-reality applications in industrial and research settings. Full article
(This article belongs to the Section Sensors and Robotics)
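The coordinate-frame alignment step this abstract describes (SVD-based rigid-body Procrustes analysis) is, in essence, the Kabsch algorithm. A minimal sketch, assuming paired, already-synchronized 3-D samples; the authors' full pipeline is not reproduced here:

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    (Kabsch / SVD-based Procrustes, reflection-guarded)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def rmse_3d(P, Q, R, t):
    """3-D RMSE of the aligned residuals, the metric reported above."""
    resid = P @ R.T + t - Q
    return np.sqrt(np.mean(np.sum(resid ** 2, axis=1)))
```

After alignment, per-axis and rotational RMSE follow the same residual computation on the respective components.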

20 pages, 4923 KB  
Article
Vision-Based Robotic System for Selective Weed Detection and Control in Precision Agriculture
by Rubén O. Hernández-Terrazas, Juan M. Xicoténcatl-Pérez, Julio C. Ramos-Fernández, Marco A. Márquez-Vera, José G. Benítez-Morales, Eucario G. Pérez-Pérez, Jorge A. Ruiz-Vanoye, Ocotlán Diaz-Parra, Francisco R. Trejo-Macotela and Alejandro Fuentes-Penna
Agriculture 2026, 16(7), 810; https://doi.org/10.3390/agriculture16070810 - 5 Apr 2026
Abstract
Precision agriculture is a key technology for addressing challenges such as increasing food demand, labour shortages, and the environmental impact of intensive agrochemical use. In this context, selective weed management remains a critical issue due to its direct effect on crop productivity and sustainability. This article presents a simulation-based framework for the design and evaluation of an agricultural robotic module for the detection, classification, and selective intervention of weeds. The proposed system integrates convolutional neural networks and the kinematic model of a 2DOF robot manipulator with 5 links for weed classification and treatment. The system is evaluated in a virtual environment, where camera calibration, perception accuracy, and the performance of the kinematic model are analysed. Quantitative results include detection accuracy, localization error, and intervention success rate under simulated field conditions. The results demonstrate selective weed management and the feasibility of simulation for developing weed control systems, while also identifying the main challenges for real-world deployment. Full article
(This article belongs to the Section Agricultural Technology)

32 pages, 9172 KB  
Article
Design, Modeling, Self-Calibration and Grasping Method for Modular Cable-Driven Parallel Robots
by Wanlin Mai, Yonghe Wang, Zhiquan Yang, Bin Zhu, Lin Liu and Jianqing Peng
Sensors 2026, 26(7), 2204; https://doi.org/10.3390/s26072204 - 2 Apr 2026
Abstract
Cable-driven parallel robots (CDPRs) are attractive for large-space manipulation because of their lightweight structure, large workspace, and reconfigurability. However, existing systems still face three practical challenges: limited modularity of the mechanical architecture, repeated calibration after reconfiguration, and insufficient integration between visual perception and grasp execution. To address these issues, this paper presents a modular cable-driven parallel robot (MCDPR), together with its kinematic modeling, vision-based self-calibration, and visual grasping methods. First, a modular mechanical architecture is developed in which the drive, sensing, and cable-guiding functions are integrated to support rapid assembly/disassembly, convenient debugging, and cable anti-slack operation. Second, a pulley-considered multilayer kinematic model is established, and a vision-based self-calibration method is proposed to identify the structural parameters after assembly using onboard sensing and AprilTag observations, thereby reducing the number of recalibrations required during robot operation after reconfiguration. Third, a vision-guided bin-picking method is developed by combining RGB-D perception, coordinate transformation, and the calibrated robot model. Simulation and prototype experiments are conducted to validate the proposed system. A software/hardware combined validation framework is established, in which the CoppeliaSim-based simulation and the hardware prototype are used together to verify the proposed design and methods. In simulation, self-calibration reduces the Euclidean grasping position error from 0.371 mm to 0.048 mm and the orientation error from 0.071° to 0.004°. In experiments, the relative position error is reduced by 58.33% after self-calibration. Full article
(This article belongs to the Special Issue Motor Control and Remote Handling in Robotic Applications)
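At the core of any CDPR kinematic model is the map from platform pose to cable lengths. The sketch below is the ideal pulley-free version; the paper's pulley-considered multilayer model is more involved, and the anchor/attachment coordinates used in the test are illustrative:

```python
import numpy as np

def cable_lengths(p, R, anchors, attachments):
    """Ideal CDPR inverse kinematics: cable i runs from fixed frame anchor
    a_i to platform attachment b_i (given in the platform frame), so
    l_i = ||a_i - (p + R @ b_i)||. Pulley geometry is omitted here."""
    world_pts = p + attachments @ R.T   # attachment points in the world frame
    return np.linalg.norm(anchors - world_pts, axis=1)
```

Self-calibration then amounts to adjusting the assumed anchor/attachment parameters until predicted and observed geometry (e.g. AprilTag poses) agree.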

23 pages, 2866 KB  
Article
A Cloud–Robot–Wearable System for Bilateral Reaching Rehabilitation: Affected-Side Identification and Quality Quantification
by Chia-Hau Chen, Li-Hsien Tang, Chang-Hsin Yeh, Eric Hsiao-Kuang Wu and Shih-Ching Yeh
Electronics 2026, 15(7), 1459; https://doi.org/10.3390/electronics15071459 - 1 Apr 2026
Abstract
Therapist shortages make home-based rehabilitation an essential component of post-stroke care, yet patients often exhibit reduced adherence when functional gains are difficult to quantify and interpret. This study presents a cloud-enabled assessment framework centered on a dynamic reaching task for upper-limb rehabilitation in individuals with mild stroke. The proposed system combines wearable sensing and Internet of Things (IoT) connectivity to stream kinematic data to the cloud for near real-time analysis, and integrates a force-feedback rehabilitation robot to deliver motion guidance during training. The pipeline proceeds in three stages. First, smoothness-related kinematic descriptors are extracted and fed into a deep multi-class classifier to discriminate the affected side (left, right, or healthy). Second, movement quality is modeled using a Gaussian Mixture Model (GMM) trained on IoT-acquired trajectories to quantify performance via probabilistic similarity. Third, a calibrated scoring function transforms GMM log-likelihood into a normalized 0–1 quality index, producing visual reports that support interpretable feedback for patients and therapists. The framework is validated using motion data collected from stroke patients at Taipei Veterans General Hospital. Experimental results demonstrate that the neural network multi-classifier achieved an F1-score of 0.95. Incorporating robot-derived interaction signals further improved classification performance by approximately 5%. For movement quality assessment, the derived scores showed a significant positive correlation (Pearson correlation = 0.632, p = 0.02) with therapist-defined gold reference standards for right-affected patients. Additionally, integrating robot force-feedback signals and AIoT-enabled dynamic streams improved score accuracy by 8% and score responsiveness by 10%. 
These quantitative outcomes substantiate the efficacy of combining IoT-driven sensing and robot-assisted training for objective, interpretable, and remotely deployable motor assessment. Full article
(This article belongs to the Section Computer Science & Engineering)
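The second and third pipeline stages (GMM likelihood scoring, then calibration to a 0-1 index) can be sketched as follows. The logistic calibration around a reference log-likelihood is a placeholder assumption; the abstract does not specify the paper's calibrated scoring function:

```python
import numpy as np

def gmm_logpdf(x, weights, means, sigmas):
    """Log-density of a diagonal-covariance Gaussian mixture at points x
    (rows); weights sum to 1, sigmas are per-dimension std deviations."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    d = x.shape[1]
    comps = []
    for w, mu, s in zip(weights, means, sigmas):
        log_norm = -0.5 * d * np.log(2.0 * np.pi) - np.sum(np.log(s))
        mahal = -0.5 * np.sum(((x - mu) / s) ** 2, axis=1)
        comps.append(np.log(w) + log_norm + mahal)
    return np.logaddexp.reduce(np.stack(comps), axis=0)

def quality_index(traj, weights, means, sigmas, ll_ref, scale=1.0):
    """Map a trajectory's mean log-likelihood under the GMM to a 0-1 score.
    The logistic mapping around ll_ref is an illustrative stand-in for the
    paper's calibrated scoring function."""
    ll = gmm_logpdf(traj, weights, means, sigmas).mean()
    return 1.0 / (1.0 + np.exp(-(ll - ll_ref) / scale))
```

Trajectories resembling the training distribution score near the midpoint or above; atypical (impaired) movements fall toward zero.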

26 pages, 16104 KB  
Article
Multi-Slot Attention with State Guidance for Egocentric Robotic Manipulation
by Sofanit Wubeshet Beyene and Ji-Hyeong Han
Electronics 2026, 15(7), 1365; https://doi.org/10.3390/electronics15071365 - 25 Mar 2026
Abstract
Visual perception is fundamental to robotic manipulation for recognizing objects, goals, and contextual details. Third-person cameras provide global views but can miss contact-rich interactions and require calibration. Wrist-mounted egocentric cameras reduce these limitations but introduce occlusion, motion blur, and partial observability, which complicate visuomotor learning. Furthermore, existing perception modules that rely solely on pixels or fuse imagery with proprioception as flat vectors do not explicitly model structured scene representations in dynamic egocentric views. To address these challenges, a multi-slot attention fusion encoder for egocentric manipulation is introduced. Learnable slot queries extract localized visual features from image tokens, and Feature-wise Linear Modulation (FiLM) conditions each slot on the robot’s joint states, producing a structured slot-based latent representation that adapts to viewpoint and configuration changes without requiring object labels or external camera priors. The resulting structured slot-based latent representation is used as input to a Soft Actor–Critic (SAC) agent, which achieves a higher mean cumulative return than pixel-only CNN/DrQ and state-only baselines on a ManiSkill3 egocentric manipulation task. Probing experiments and real-camera evaluation further show that the learned representation remains stable under egocentric viewpoint shifts and partial occlusions, indicating robustness in practical manipulation settings. Full article
(This article belongs to the Section Artificial Intelligence)
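FiLM conditioning, as applied here to each slot, is a per-channel scale-and-shift whose parameters are computed from the joint state. A numpy sketch with illustrative (untrained) linear projections standing in for the paper's learned conditioning network:

```python
import numpy as np

def film_condition(slots, joint_state, Wg, bg, Wb, bb):
    """FiLM: gamma and beta are projections of the robot joint state,
    applied channel-wise to every slot. The weight matrices here are
    random stand-ins, not learned parameters."""
    gamma = joint_state @ Wg + bg    # (d,) per-channel scale
    beta = joint_state @ Wb + bb     # (d,) per-channel shift
    return gamma * slots + beta      # broadcasts over the slot axis

rng = np.random.default_rng(0)
k, d, s = 4, 8, 6                    # num slots, feature dim, state dim
slots = rng.normal(size=(k, d))
state = rng.normal(size=s)
out = film_condition(slots, state,
                     rng.normal(size=(s, d)), np.zeros(d),
                     rng.normal(size=(s, d)), np.zeros(d))
```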

27 pages, 53719 KB  
Article
A Numerical Investigation into the Thrust Characteristics of the RAS-HA-X25 Autonomous Underwater Vehicle Through CFD-Based Simulation
by Aleksander Grm, Marko Peljhan, Roman Kamnik, Matej Dobrevski, Dominik Majcen and Andrej Androjna
J. Mar. Sci. Eng. 2026, 14(7), 600; https://doi.org/10.3390/jmse14070600 - 24 Mar 2026
Abstract
The rapid development of Autonomous Underwater Vehicles (AUVs) has increased the demand for propulsion systems that balance thrust density, hydrodynamic efficiency, and acoustic discretion. This study presents a comprehensive numerical investigation of the performance of the Blue Robotics T500 thruster, embedded within the RAS-HA-X25 AUV’s internal conduit. Using transient Computational Fluid Dynamics (CFD) within the OpenFOAM framework, this research assesses the propulsive characteristics of the thruster across six distinct outlet geometries, including convergent jet nozzles and multi-lobed “daisy” configurations. To improve computational efficiency for parametric design, a calibrated actuator disc model was developed and validated against resolved-rotor simulations, revealing a 15% discrepancy attributed to tip leakage and hub vortex effects. Results show that at the operational advance ratio (J=0.167), the 60 mm convergent nozzle is the optimal configuration for maximising thrust, achieving a peak net thrust of 42 N. In contrast, the daisy-type lobed geometries, while causing a 50% reduction in absolute thrust compared to a standard cylindrical pipe, significantly homogenise the exit-plane velocity distribution and reduce swirl intensity. These findings indicate that lobed terminations provide a viable mechanism for reducing hydroacoustic signatures, offering a strategic “stealth” advantage for low-observable underwater platforms where acoustic discretion is prioritised over pure thrust density. This study establishes a robust methodology for optimising embedded propulsion modules in next-generation autonomous and hybrid underwater vehicles. Full article

29 pages, 8910 KB  
Article
Field Evaluation of a Robotic Apple Harvester with Negative-Pressure Driven End-Effectors on a Simplified 4-DoF Manipulator
by Guangrui Hu, Jianguo Zhou, Shiwei Wen, Ning Chen, Chen Chen, Fangmin Cheng, Yu Chen and Jun Chen
Agriculture 2026, 16(7), 717; https://doi.org/10.3390/agriculture16070717 - 24 Mar 2026
Abstract
Apple picking is an inherently labor-intensive, time-consuming, and costly task, and robotic harvesting represents a potential alternative to address this challenge. This study presents the development and field evaluation of an integrated robotic system for apple harvesting, which combines machine vision, a dual four-degree-of-freedom (DoF) manipulator, and a mobile platform. The harvesting mechanism employed a streamlined 4-DoF manipulator driven by closed-loop stepper motors, incorporating a differential gear mechanism to execute yaw and pitch motions. Trajectory planning utilized linear interpolation with a harmonic acceleration/deceleration profile to ensure smooth end-effector movement. Fruit detection and localization within the canopy were performed by a stereo vision system running a lightweight deep neural network, achieving a mean hand-eye calibration accuracy of 4.7 ± 2.7 mm. Three negative-pressure driven soft end-effector designs—a suction soft end-effector (SSE), a grasping soft end-effector (GSE), and a suction-grasping soft end-effector (SGSE)—were assessed for their harvesting performance. Field trials conducted in a commercial spindle orchard demonstrated that the GSE achieved the highest performance, with a harvesting success rate of 80.80% among reachable fruits, a full-process success rate (from detection to collection) of 61.59%, an overall fruit damage rate of 10.89%, and an average single-fruit cycle time of 5.27 s. In contrast, the SSE and SGSE showed lower success rates (49.21% and 64.71%, respectively). This work provides a practical robotic harvesting solution. It validates the feasibility of a zoned, multi-manipulator harvesting strategy and delivers comparative data to guide the development of more efficient and robust harvesting robots. Full article
(This article belongs to the Section Agricultural Technology)

30 pages, 1741 KB  
Article
Inverse Analytical Formula for the Correction of Severe Barrel Lens Distortion Modelled by a Depressed Radial Distortion Polynomial
by Guy Blanchard Ikokou, Moreblessings Shoko and Naa Dedei Tagoe
Sensors 2026, 26(6), 1896; https://doi.org/10.3390/s26061896 - 17 Mar 2026
Abstract
Accurate correction of radial lens distortion is a fundamental requirement in computer vision and photogrammetry, as geometric inaccuracies directly affect 3D reconstruction, mapping, and geospatial measurements, particularly in high-precision imaging systems. In this study, we propose a fully analytical, non-iterative method for truncated inverse modeling of radial lens distortion, applicable to general radial distortion polynomials that contain constant terms. Unlike classical truncated Lagrange series reversion, which relies on recursive expansion and combinatorial series construction, the proposed formulation determines inverse distortion coefficients directly through a system of constrained algebraic inverse polynomials. This enables deterministic computation of inverse parameters without iterative refinement, numerical root finding, or combinatorial complexity. The method was evaluated using ultra-wide-angle smartphone camera imagery exhibiting severe barrel distortion modeled by an eighth-degree depressed radial distortion polynomial. Its performance was compared with a commonly used iterative inverse modeling approach. The analytical formulation demonstrated improved numerical stability and substantially reduced reprojection errors when correcting highly nonlinear distortion profiles, achieving sub-pixel accuracy in image rectification. In contrast, the iterative approach exhibited instability and significantly larger reprojection errors under identical conditions. These results demonstrate that the proposed framework provides a general, robust, and repeatable solution for inverse radial distortion modeling, particularly for high-order polynomial models. The method offers clear practical advantages for camera calibration pipelines in photogrammetry, remote sensing, robotics, and other applications requiring high-fidelity imaging. Full article
(This article belongs to the Section Optical Sensors)
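For context, the "commonly used iterative inverse" this abstract compares against is typically a fixed-point iteration on the forward distortion model. A sketch for a simple two-coefficient radial model; the paper's eighth-degree depressed polynomial and its analytical inverse are not reproduced here:

```python
import numpy as np

def undistort_radius(r_d, k1, k2, iters=20):
    """Invert r_d = r_u * (1 + k1*r_u**2 + k2*r_u**4) by fixed-point
    iteration -- the common iterative baseline, NOT the paper's
    non-iterative analytical formula."""
    r_u = np.asarray(r_d, dtype=float)
    for _ in range(iters):
        r_u = r_d / (1.0 + k1 * r_u ** 2 + k2 * r_u ** 4)
    return r_u
```

For mild distortion this converges quickly, but as the abstract notes, such iteration can become unstable for severe, high-order profiles.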

19 pages, 9029 KB  
Article
AIM-SEEM: Adapting SEEM for Open-Vocabulary Terrain Segmentation Across Arbitrary Imaging Modalities
by Yuqian Wang, Xuefu Xiang, Yongcun Wu, Yong Zhang and Xinyue Li
Sensors 2026, 26(6), 1869; https://doi.org/10.3390/s26061869 - 16 Mar 2026
Abstract
Terrain segmentation performance directly affects the reliability of robotic environmental perception and decision making, yet most existing methods are built upon the assumptions of fixed sensing configurations and closed label sets. As a result, they struggle to meet real world outdoor requirements where modalities can be dynamically available and semantic classes continually expand. This paper systematically studies open-vocabulary terrain segmentation under arbitrary imaging modality combinations and proposes a unified foundation model-based framework named AIM-SEEM (SEEM for Arbitrary Imaging Modalities). Built upon Segment Everything Everywhere All at Once (SEEM), AIM-SEEM performs stable input side adaptation and controlled fusion of heterogeneous modalities, maximizing the reuse of pre-trained visual priors to accommodate different modality types and counts. Furthermore, to address the distribution shifts and the resulting vision–text alignment degradation caused by modality extension, a vision-guided text calibration mechanism is introduced to preserve open-vocabulary segmentation capability under multi-modality combination inputs. Experiments on two benchmarks under three evaluation settings, including full-modality, modality-agnostic, and open-vocabulary, show that AIM-SEEM consistently outperforms prior methods. Full article
(This article belongs to the Section Sensors and Robotics)

24 pages, 8894 KB  
Article
An Improved Robust ESKF Fusion Positioning Method with a Novel UWB-VIO Initialization
by Changqiang Wang, Biao Li, Yuzuo Duan, Xin Sui, Zhengxu Shi, Song Gao, Zhe Zhang and Ji Chen
Sensors 2026, 26(6), 1804; https://doi.org/10.3390/s26061804 - 12 Mar 2026
Abstract
Visual–inertial odometry (VIO) often struggles with illumination variations, sparse visual features, and inertial drift in complex indoor settings, leading to scale uncertainties and accumulated errors. To address these issues, this paper proposes a new UWB–VIO initialization method combined with an enhanced Robust error-state Kalman filter (Robust ESKF) fusion technique for mobile robot localization. During initialization, common problems include scale drift and heading inconsistency. To solve these, a direction-consistent constrained initialization model is developed. By jointly optimizing the scale factor and yaw angle, this model ensures consistent alignment between the visual–inertial and ultra-wideband (UWB) coordinate frames. This approach removes the need for external calibration and independent coordinate transformation, which are typically required by traditional methods. In the fusion process, an improved residual-weighted robust filtering mechanism is employed to minimize the impact of abnormal UWB ranging data and noise interference. This mechanism adaptively suppresses outliers caused by UWB multipath reflections and non-line-of-sight (NLOS) propagation, thereby reducing VIO drift and improving the overall robustness and stability of the localization system. Experiments conducted in narrow-corridor environments, where both UWB and visual sensors are affected by interference, demonstrate that the proposed method significantly reduces trajectory drift and attitude jumps, resulting in better positioning accuracy and trajectory continuity. Compared to conventional UWB–VIO fusion algorithms, the proposed method enhances average localization accuracy by over 50% and maintains stable estimation even in severe multipath interference conditions, demonstrating high precision and strong robustness. Full article
(This article belongs to the Section Navigation and Positioning)
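The abstract does not specify the exact residual-weighting function; a standard choice for this kind of robust filtering is a Huber-style weight that leaves inlier residuals untouched and down-weights outliers (e.g. NLOS or multipath UWB ranges). A hedged sketch:

```python
import numpy as np

def huber_weight(residual, sigma, delta=1.345):
    """Huber-style weight for a measurement residual: 1.0 for normalized
    residuals |z| <= delta, decaying as delta/|z| beyond. An illustrative
    stand-in for the paper's residual-weighted robust mechanism."""
    z = np.abs(residual) / sigma
    return np.where(z <= delta, 1.0, delta / np.maximum(z, 1e-12))
```

In a (ES)KF update, the weight is typically applied by inflating the measurement-noise covariance by 1/w, so suspect ranges pull the state estimate less.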

27 pages, 8343 KB  
Article
Modeling Human–Robot Impact Dynamics in Collaborative Applications
by Alessio Caneschi, Matteo Bottin and Giulio Rosati
Actuators 2026, 15(3), 165; https://doi.org/10.3390/act15030165 - 12 Mar 2026
Abstract
This study presents an integrated experimental and modeling framework to investigate human–robot collision dynamics involving a collaborative manipulator (KUKA LBR iiwa 14 R820). A dedicated impact test prototype was developed to reproduce controlled contact scenarios between the robot and human body analogues under various dynamic conditions. The experimental setup enables the acquisition of synchronized force, velocities, and displacement signals during contact events. These data are used to calibrate and validate a set of contact models, ranging from classical formulations such as Hertz and Hunt–Crossley to more recent supervised machine learning models. The proposed methodology allows a quantitative assessment of model accuracy and physical consistency in replicating real collision phenomena. Furthermore, the effective mass of the robot along its kinematic chain is estimated to compute impact energy and predict the interaction severity according to ISO 10218-1/2:2025 safety limits. The results highlight the trade-off between model complexity and predictive capability, offering alternative guidelines for collision severity evaluation in collaborative robotics applications. Full article
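Of the classical contact models mentioned, Hunt–Crossley extends Hertzian stiffness with penetration-rate damping. A minimal sketch in one common form, F = k*delta^n + lam*delta^n*delta_dot; the parameter values used in the test are illustrative, not the paper's calibrated ones:

```python
def hunt_crossley_force(delta, delta_dot, k, lam, n=1.5):
    """Hunt-Crossley contact force: Hertzian term k*delta**n plus
    penetration-rate damping lam*delta**n*delta_dot. k, lam, n would be
    calibrated from impact data; values here are illustrative."""
    if delta <= 0.0:
        return 0.0          # no penetration, no contact force
    dn = delta ** n
    return k * dn + lam * dn * delta_dot
```

The damping term makes the loading force exceed the unloading force at equal penetration, reproducing the hysteresis seen in real impacts.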

16 pages, 1335 KB  
Article
Derivation-Based Calibration of IMUs Using Savitzky–Golay Filters
by Diogo Vieira, Miguel Oliveira and Rafael Arrais
Sensors 2026, 26(6), 1788; https://doi.org/10.3390/s26061788 - 12 Mar 2026
Abstract
For any robotic application, accurate sensor calibration is crucial. In the case of mobile platforms or flying drones, a sensor commonly utilized is the Inertial Measurement Unit (IMU). Current approaches to the calibration of IMU-equipped robotic systems focus on sensor-to-sensor calibration, meaning a second sensor is necessary for the calibration process. Furthermore, a great number of those rely on integrating the sensor measurements to obtain its pose, which leads to integration errors. In this work, we present a method for the extrinsic calibration of IMUs in robotic systems, which avoids the errors originating from IMU integration by instead taking a derivative approach using Savitzky–Golay filters. The proposed calibration method estimates the transformation between an IMU sensor and its parent frame in the system’s kinematic chain by minimizing the differences between derived linear accelerations and angular velocities and those measured by the sensor. Simulated data is used to establish a ground truth against which the calibration results are compared. Results indicate that the proposed method achieves a higher accuracy than the alternatives it is compared against, while also showing the method can be applied to industrial-grade IMUs. Full article
(This article belongs to the Section Sensors and Robotics)
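The derivative step the abstract describes can be sketched as a Savitzky–Golay first derivative: fit a low-order polynomial by least squares in each centered window and read off its slope at the midpoint (equivalent at interior samples to scipy.signal.savgol_filter with deriv=1, delta=dt). A minimal numpy version, not the authors' implementation:

```python
import numpy as np

def savgol_deriv(y, window, polyorder, dt):
    """Savitzky-Golay first derivative of a uniformly sampled signal.
    window must be odd and > polyorder; edges are left as NaN."""
    half = window // 2
    t = np.arange(-half, half + 1) * dt
    A = np.vander(t, polyorder + 1, increasing=True)  # columns 1, t, t^2, ...
    c = np.linalg.pinv(A)[1]   # row extracting the linear (slope) coefficient
    out = np.full(len(y), np.nan)
    for i in range(half, len(y) - half):
        out[i] = c @ y[i - half:i + half + 1]
    return out
```

Applying this to pose trajectories yields linear accelerations and angular velocities to compare against IMU readings, avoiding the drift of double integration.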

23 pages, 2347 KB  
Article
Tolerance Analysis and Experimental Validation of ROMI—A High-Precision Linear Delta Robot for Microsurgery
by Xiaoyu Huang, Jiazhe Tang, Elizabeth Rendon-Morales and Rodrigo Aviles-Espinosa
Designs 2026, 10(2), 31; https://doi.org/10.3390/designs10020031 - 11 Mar 2026
Abstract
In this paper we present the design of a tolerance analysis-based closed-loop system and a compensation framework applied to high-precision linear Delta robots. It considers the modelling of static and dynamic errors propagation arising from the structural tolerances and the end-effector’s positioning. This approach is combined with a closed-loop control system implemented using high-resolution optical encoders. The model is applied to the ROMI robot, a high-precision experimental Delta robot designed for microsurgical applications. Our simulation results reveal a theoretical home position error (the centre of the robot’s platform) of 1.9 mm, which is effectively compensated through kinematic calibration and a tolerance analysis-based closed-loop system. The proposed framework is evaluated experimentally through proof-of-concept experiments mimicking a microsurgical resection task conducted on a human peripheral nerve sample. The results from executing micrometre scale parallelogram and circular trajectories showed error reduction rates of 92.3% and 51.2% respectively, after five trajectory iterations. These findings confirm that manufacturing-induced errors can be consistently compensated using the proposed methodology, thus eliminating the need for ultra-high-precision machined components. This work establishes a practical and scalable pathway for designing more affordable high-precision robotic systems suitable for microsurgical and other high-precision applications. Full article

25 pages, 12476 KB  
Article
Hybrid Neuro-Symbolic State-Space Modeling for Industrial Robot Calibration via Adaptive Wavelet Networks and PSO
by He Mao, Zhouyi Lai and Zhibin Li
Biomimetics 2026, 11(3), 171; https://doi.org/10.3390/biomimetics11030171 - 2 Mar 2026
Abstract
The absolute positioning accuracy of industrial manipulators is frequently bottlenecked by the interplay of geometric tolerances and complex, unmodeled non-geometric parameter drifts. Traditional static kinematic models, predicated on rigid-body assumptions, often struggle to characterize these state-dependent dynamic behaviors. To bridge this gap, this study introduces a PSO-Driven Neuro-Symbolic State-Space Framework incorporating Adaptive Wavelet Networks, drawing inspiration from two biological principles: the collective swarm intelligence observed in bird flocking and fish schooling, and the localized receptive field structure of mammalian visual cortex neurons. By reformulating calibration as a latent state estimation problem, we model kinematic parameters as stochastic states. Crucially, the observation model fuses symbolic Denavit–Hartenberg (D–H) predictions with an Adaptive Wavelet Network (AWNN). The AWNN utilizes Mexican Hat kernels, whose morphology mirrors the center-surround antagonism of cortical receptive fields, and leverages their precise time–frequency localization to effectively learn complex, configuration-dependent residuals. The framework employs a robust decoupled strategy. First, Particle Swarm Optimization (PSO) executes meta-optimization to autonomously determine hyperparameters, thereby mitigating initialization sensitivity. Second, a recursive inference engine estimates the hybrid states. Third, a global batch optimization refines the symbolic parameters against a frozen non-geometric error field. Experimental validation on an ABB IRB 120 robot (400 datasets) yielded a test RMSE of 0.73 mm. Compared to the standard Levenberg–Marquardt method, our approach reduced the RMSE by 40.16% and the maximum error by 35.71% (down to 0.99 mm). Moreover, it outperforms the state-of-the-art RPSO-DCFNN baseline by 12.05% while maintaining high computational efficiency (convergence within 20.15 s). 
These findings underscore the superiority of the proposed bio-inspired state-space fusion strategy for high-precision industrial applications. Full article
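The abstract's core modeling idea, a wavelet network whose Mexican Hat units capture configuration-dependent residual errors, can be sketched minimally. This is an illustrative reconstruction, not the authors' implementation: the function names, the unit count, and the use of a radial (norm-based) wavelet argument are assumptions made for the example.

```python
import numpy as np

def mexican_hat(t):
    """Mexican Hat (Ricker) wavelet: (1 - t^2) * exp(-t^2 / 2)."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def awnn_residual(q, centers, scales, weights):
    """Sketch of an adaptive-wavelet-network residual model.

    q       : (d,) joint configuration
    centers : (k, d) wavelet translation parameters
    scales  : (k,) dilation parameters
    weights : (k,) output weights

    Returns a scalar non-geometric error estimate for this configuration,
    to be added to the symbolic D-H prediction.
    """
    # Radial argument of each wavelet unit, scaled by its dilation
    r = np.linalg.norm(q - centers, axis=1) / scales
    return float(weights @ mexican_hat(r))

# Toy usage with hypothetical values (6-DoF arm, 8 wavelet units)
rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, size=6)
centers = rng.uniform(-np.pi, np.pi, (8, 6))
scales = np.full(8, 1.5)
weights = rng.normal(0.0, 0.1, 8)
err = awnn_residual(q, centers, scales, weights)
```

In the paper's decoupled scheme, parameters such as `centers` and `scales` would be hyperparameters tuned by PSO, while the recursive inference engine estimates the hybrid states; the sketch above only shows the forward evaluation of one residual.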
14 pages, 32973 KB  
Article
High Frequency Ultrasonic Condition Monitoring Framework Based on Edge-Computing and Telemetry Stack Approach
by Geoffrey Spencer, Pedro M. B. Torres, Vítor H. Pinto and Gil Gonçalves
Machines 2026, 14(3), 270; https://doi.org/10.3390/machines14030270 - 28 Feb 2026
Viewed by 567
Abstract
This paper presents initial developments towards a high-frequency condition monitoring framework designed for Autonomous Mobile Robots (AMRs) in Smart Factory environments. The proposed approach focuses on data acquisition and edge-level processing specifically in the ultrasound range (>20 kHz), using Micro-Electro-Mechanical System (MEMS) sensors. The system integrates real-time data acquisition, embedded fixed-point frequency-domain processing via a 1024-point FFT, and an Industrial Internet-of-Things (IIoT) infrastructure based on the TIG (Telegraf, InfluxDB, and Grafana) stack for data aggregation and remote visualization. To ensure timing precision at a sampling rate of 160 kHz, a software-based calibration routine is implemented to compensate for microcontroller overhead. Furthermore, the architecture’s alignment with IEEE 1451 principles is discussed to support interoperable and scalable sensor integration. Experimental results validate the reliable acquisition and processing of ultrasonic signals up to 80 kHz using controlled acoustic sources. This work provides a foundational infrastructure for condition-based monitoring, enabling future development of automated anomaly detection for mechanical components, such as bearings, which exhibit early-stage fault signatures in the ultrasonic spectrum. Full article
(This article belongs to the Special Issue Design and Manufacture of Advanced Machines, Volume II)
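The numbers quoted in the abstract fit together: a 160 kHz sampling rate gives an 80 kHz Nyquist limit (matching the validated upper bound), and a 1024-point FFT yields 156.25 Hz per bin. A minimal sketch, using a synthetic 40 kHz tone as a stand-in for an ultrasonic MEMS capture (the tone frequency and floating-point FFT are assumptions; the paper's embedded pipeline is fixed-point):

```python
import numpy as np

FS = 160_000   # sampling rate (Hz), from the paper
N_FFT = 1024   # FFT length, from the paper

nyquist = FS / 2      # 80 kHz, the validated upper bound
bin_hz = FS / N_FFT   # 156.25 Hz frequency resolution per bin

# Synthetic 40 kHz tone spanning exactly one FFT frame
t = np.arange(N_FFT) / FS
signal = np.sin(2 * np.pi * 40_000 * t)

# Magnitude spectrum; 40 kHz falls exactly on bin 40000 / 156.25 = 256
spectrum = np.abs(np.fft.rfft(signal))
peak_bin = int(np.argmax(spectrum[1:])) + 1
peak_hz = peak_bin * bin_hz
```

On an embedded target the same arithmetic determines whether bearing fault signatures in the ultrasonic band can be resolved; a coarser 156.25 Hz bin width trades frequency resolution for the short 6.4 ms frame that real-time processing requires.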