Search Results (1,852)

Search Parameters:
Keywords = human–robot interaction

21 pages, 5104 KB  
Article
Trust Isn’t Binary: Analysis of User Sentiment for Assistive Human–Robot Interaction
by Randyll Pandohie, Edgard M. Maboudou-Tchao, Nihad Habizada, Morris Beato and Aman Behal
Machines 2026, 14(5), 488; https://doi.org/10.3390/machines14050488 (registering DOI) - 27 Apr 2026
Abstract
Understanding how users perceive assistive robotic systems is critical for their successful adoption, particularly in rehabilitation settings where both patients and clinicians influence decision-making. While prior work has focused on technical performance and overall usability, affective responses such as trust, control, and perceived independence are often captured using coarse, single-score measures that overlook important nuances. This study analyzes focus group discussions with individuals with spinal cord injury to examine how users evaluate different aspects of assistive robot design. A hybrid aspect-based sentiment analysis approach is applied, combining lexicon-based and transformer-based methods to capture both interpretable and context-sensitive sentiment. The analysis separates sentiment across key dimensions, including independence, functionality, safety, control, cost, and data sharing. Participants expressed consistently positive views toward independence and functional support, while responses related to safety, control, and data sharing were more conditional. In particular, trust emerged as something that depends on transparency, user control, and the ability to override system behavior, rather than a fixed attitude toward the technology. These findings suggest that successful assistive robotic systems must balance autonomy with user authority and provide clear, adaptable mechanisms for control and data governance.
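The lexicon half of such a hybrid aspect-based pipeline can be sketched in a few lines. The aspect keywords, lexicon entries, and context-window size below are hypothetical illustrations, not values from the paper; in the hybrid approach described, a transformer-based classifier would supplement this simple lexicon lookup with context-sensitive scoring.

```python
# Minimal aspect-based sentiment sketch: score each aspect by the average
# polarity of opinion words occurring near an aspect keyword in a sentence.
# Lexicon and aspect keyword sets are illustrative placeholders.
LEXICON = {"helpful": 1.0, "independent": 1.0, "safe": 0.5,
           "worried": -1.0, "unsafe": -1.0, "confusing": -0.5}

ASPECTS = {"independence": {"independence", "independent", "myself"},
           "safety": {"safe", "unsafe", "safety"},
           "control": {"control", "override"}}

def aspect_sentiment(sentence: str, window: int = 4) -> dict:
    """Average lexicon polarity of tokens within `window` positions of an aspect keyword."""
    tokens = sentence.lower().replace(",", " ").replace(".", " ").split()
    scores = {}
    for aspect, keywords in ASPECTS.items():
        hits = [i for i, t in enumerate(tokens) if t in keywords]
        vals = [LEXICON[t]
                for i in hits
                for t in tokens[max(0, i - window): i + window + 1]
                if t in LEXICON]
        if vals:
            scores[aspect] = sum(vals) / len(vals)
    return scores
```

A sentence such as "I am worried it is unsafe to control" then yields negative scores for the safety and control aspects while leaving independence unscored, which is the kind of per-dimension separation the study reports.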

50 pages, 1956 KB  
Review
Combinations of Generative Artificial Intelligence and Robotics in K-12 and Higher Education: A Review
by Jim Prentzas and Ariadni Binopoulou
Electronics 2026, 15(9), 1835; https://doi.org/10.3390/electronics15091835 (registering DOI) - 26 Apr 2026
Abstract
Artificial Intelligence (AI) and robotics are two major technological fields frequently integrated into education, and both offer advantages to educational settings at all levels. The emergence of generative AI and the growing popularity of related tools have accelerated the integration of AI into education. A question of interest is how generative AI can be combined with robotics in education so as to benefit from the advantages of both technologies. This paper reviews work on the combination of generative AI and robotics in K-12 and higher education. Scopus was used to search for relevant work; after an exhaustive search, fifty-four relevant papers were retrieved and analyzed. Trends in this combination are highlighted, taking into consideration learning, teaching, robot functionality and capabilities of generative AI tools, teaching subjects, sample size, and educational levels. Five main types of generative AI and robotics combinations are discerned, and the overall benefits and challenges of the combination are analyzed. To the best of the authors’ knowledge, no other review discusses this subject in this specific context.
24 pages, 3856 KB  
Article
Human–Robot Interaction: External Force Estimation and Variable Admittance Control Incorporating Passivity
by Jun Wan, Zihao Zhou, Nuo Yun, Kehong Wang and Xiaoyong Zhang
Robotics 2026, 15(5), 84; https://doi.org/10.3390/robotics15050084 - 22 Apr 2026
Abstract
In the context of Industry 5.0, human–robot collaboration increasingly demands intuitive, safe, and sensorless interaction for tasks such as hand-guided teaching and concurrent manipulation. However, conventional admittance control systems are prone to instability due to abrupt changes in human arm stiffness and their reliance on accurate dynamic models. To address these challenges, this paper proposes a sensorless external force estimation and variable admittance control method that models robot dynamic uncertainties and interaction forces as normally distributed stochastic quantities. An improved particle swarm optimization algorithm is introduced to calibrate the variance parameters, enhancing estimation accuracy and robustness. Furthermore, an energy-based variable admittance control strategy is developed, which preserves system passivity by adaptively adjusting inertia and damping gains based on real-time energy variations. The proposed method was validated on a redundant robot platform. Experimental results show that the external force and torque estimation errors remain below 3 N and 3 N·m, respectively, with lower detection delays and errors than those of a first-order generalized momentum observer in collision detection. Variable admittance experiments demonstrate that the system maintains passivity and stable interaction even under sudden arm stiffness changes. The approach is well suited for industrial applications requiring safe, sensorless, and compliant human–robot collaboration.
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)
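The core admittance relation underlying such a controller can be sketched for a single axis. The damping adaptation rule below is a generic energy-motivated placeholder, not the paper's law: an energy budget drains while the human injects power into the robot, and damping is raised once the budget is empty, which keeps the interaction port dissipative. All gains and values are illustrative.

```python
# One-axis admittance control sketch: the robot renders M*dv/dt + D*v = f_ext,
# integrated with explicit Euler at the control rate.
def admittance_step(v, f_ext, M=2.0, D=10.0, dt=0.002):
    """Return the next commanded velocity for one control tick."""
    a = (f_ext - D * v) / M
    return v + a * dt

def adapt_damping(E, f_ext, v, dt, D_min=10.0, D_max=50.0):
    """Placeholder energy-budget rule (not the paper's): drain the budget by
    the power the human injects (f_ext*v > 0); switch to high damping once
    the budget is exhausted."""
    E -= max(f_ext * v, 0.0) * dt
    return (D_min if E > 0.0 else D_max), max(E, 0.0)
```

At equilibrium the damping balances the external force (D·v = f_ext), so a constant push settles at v = f_ext/D; raising D when the energy budget empties slows the rendered motion rather than letting it accelerate.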

18 pages, 2817 KB  
Article
Ultrathin Temporary Tattoo Electrodes Enable Prolonged Skin-Conformable EMG Sensing for Hip Exoskeleton Control
by Michele Foggetti, Marina Galliani, Andrea Pergolini, Aliria Poliziani, Emilio Trigili, Francesco Greco, Nicola Vitiello, Laura M. Ferrari and Simona Crea
Sensors 2026, 26(9), 2587; https://doi.org/10.3390/s26092587 - 22 Apr 2026
Abstract
Conventional gel electrodes are the gold standard for surface electromyography (sEMG), yet their bulkiness, stiffness, and limited gel lifetime prevent seamless day-long integration with wearable robots. We integrated ultrathin skin-conformal temporary tattoo electrodes with a powered unilateral hip exoskeleton and compared their signal quality against gel electrodes during treadmill walking. In this pilot study, five healthy participants completed three consecutive walking blocks at fixed speed: (1) using gel electrodes; (2) using tattoo electrodes to compare signal quality; and (3) using the same tattoo electrodes (not repositioned) after eight hours of wear, to simulate a full day of typical device use and to evaluate potential degradation in signal quality over time. Electrodes were positioned on muscles not covered by the exoskeleton interface (tibialis anterior and gastrocnemius medialis), as well as on muscles located beneath the exoskeleton cuff, which were potentially subject to motion artifacts due to the application of external forces by the exoskeleton (rectus femoris and biceps femoris, BF). Across all muscles, for both gel and tattoo electrodes, the root mean square error (RMSE) between normalized sEMG envelopes and the biological activation profile was 0.069 ± 0.048, and Pearson’s correlation coefficient (ρ) was 0.844 ± 0.091. Re-testing the same tattoo electrode pair after eight hours confirmed day-long stability without the need for recalibration. Statistical analysis revealed no significant differences in signal quality between the two electrode types across all muscles, including when assistive forces were applied (RMSE, all p ≥ 0.3125; ρ, all p ≥ 0.1250), and no degradation after eight hours (RMSE and ρ: all p ≥ 0.0626, uncorrected). Finally, in a proof-of-concept session, BF activity measured with tattoo electrodes proved reliable for driving hip-extension assistance in real time. Collectively, these results show that tattoo electrodes deliver signal quality comparable to gel electrodes while offering a low-profile, skin-conformal interface and day-long usability, making them a promising option for enhancing EMG-based control in wearable robots.
(This article belongs to the Special Issue Advancing Medical Robotics Through Soft Sensing)
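The two agreement metrics reported above (RMSE between normalized envelopes and Pearson's ρ) are standard quantities; in plain Python they reduce to the following, with envelope extraction and normalization omitted:

```python
import math

def rmse(a, b):
    """Root mean square error between two equal-length signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)
```

Low RMSE indicates amplitude agreement between tattoo and gel envelopes, while ρ near 1 indicates the activation time course is preserved; the paper reports both because either alone can be misleading.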
25 pages, 5544 KB  
Article
Retrofitting a Legacy Industrial Robot Through Monocular Computer Vision-Based Human-Arm Posture Tracking and 3-DoF Robot-Axis Control (A1–A3)
by Paúl A. Chasi-Pesantez, Eduardo J. Astudillo-Flores, Valeria A. Dueñas-López, Jorge O. Ordoñez-Ordoñez, Eldad Holdengreber and Luis Fernando Guerrero-Vásquez
Robotics 2026, 15(4), 82; https://doi.org/10.3390/robotics15040082 - 21 Apr 2026
Abstract
This paper presents a low-cost retrofitting pipeline for a legacy industrial robot that uses a single RGB webcam and monocular 2D keypoint tracking to estimate human-arm posture angles θ^(h) and map them to robot-axis joint targets q_cmd^(r) for A1–A3 on a KUKA KR5-2 ARC HW, while keeping the wrist orientation (A4–A6) fixed. Rather than targeting full six-DoF manipulation, the main contribution is an experimental characterization of how far monocular 2D posture-to-axis mapping can be used reliably for coarse placement and safeguarded low-speed demonstrations on a legacy robot platform. Vision-side accuracy was evaluated per axis against goniometer-based reference angles θ_ref^(h), showing low errors for A2–A3 within the tested range and larger errors for A1 due to monocular yaw/depth ambiguity and occlusions. The study also analyzes failure modes during simultaneous multi-joint motion, where performance degrades notably, especially for A2 and A3, and reports practical mitigation directions such as improved viewpoints, multi-view/depth sensing, and stricter dropout handling. Runtime behavior is additionally characterized through a loop timing budget, with an end-to-end latency of 185.44 ms and an effective loop frequency of 5.39 Hz, which is consistent with low-speed online operation within the demonstrated scope. The system was implemented in a fenced industrial cell with restricted access and emergency stop; no collaborative operation is claimed.
(This article belongs to the Special Issue Artificial Vision Systems for Robotics)
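The two steps of such a posture-to-axis mapping (an interior angle from three 2D keypoints, then a linear map into a robot axis range) admit a compact generic sketch. The angle ranges below are made up for illustration; the paper's own calibration against goniometer references would determine the real mapping.

```python
import math

def joint_angle(a, b, c):
    """Interior angle at keypoint b (degrees) formed by 2D points a-b-c,
    e.g. shoulder-elbow-wrist for an elbow flexion estimate."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def map_to_axis(theta_h, h_range=(0.0, 150.0), q_range=(-90.0, 90.0)):
    """Linearly map a human posture angle into a robot-axis command range,
    clamped at the range ends. Ranges are illustrative placeholders."""
    t = (theta_h - h_range[0]) / (h_range[1] - h_range[0])
    t = max(0.0, min(1.0, t))
    return q_range[0] + t * (q_range[1] - q_range[0])
```

The A1 errors the paper reports are inherent to this construction: a yaw rotation of the arm changes the 3D pose but barely changes the projected 2D keypoints, so no 2D angle formula can recover it reliably.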

37 pages, 4888 KB  
Review
Robotics in Precision Agriculture: Task-, Platform-, and Evaluation-Oriented Review
by Natheer Almtireen and Mutaz Ryalat
Robotics 2026, 15(4), 81; https://doi.org/10.3390/robotics15040081 - 20 Apr 2026
Abstract
Robotics is increasingly positioned as an enabling technology for precision agriculture, where management actions must be spatially and temporally targeted under constraints on labour, input use, safety, and environmental impact. This review synthesises studies on agricultural field robotics and organises the literature along four complementary axes: task (monitoring, weeding, spraying, and harvesting), platform (UGV, UAV, gantry/fixed-structure, greenhouse robot, and hybrid systems), autonomy-stack module (perception, localisation, planning, control, actuation, safety, and human–robot interaction), and evaluation setting (lab, greenhouse, open-field single season, and open-field multi-season/multi-site). Across these dimensions, this review analyses how platform constraints shape sensing geometry, actuation capability, localisation reliability, energy/endurance, supervision burden, and safety requirements. It further examines enabling technologies that recur across tasks, including vision and multimodal perception under occlusion and illumination variability, localisation and mapping under weak or denied GNSS, uncertainty-aware planning in deformable and partially observed environments, and compliant end-effectors for contact-rich operations. Beyond cataloguing systems, this paper emphasises evaluation practice by synthesising core task-relevant metrics, comparing laboratory and field validation settings, and proposing a reporting checklist and benchmark ladder to improve reproducibility and cross-study comparability. This review identifies recurring bottlenecks in domain shift, long-term autonomy, calibration robustness, crop-safe actuation, and safety assurance near humans, and it concludes with a staged research roadmap linking near-term evaluation reform to longer-term credible multi-site autonomy. Overall, this paper provides a structured framework for interpreting agricultural robotic systems not only by application but also by deployment context, system maturity, and evaluation credibility.
(This article belongs to the Special Issue Perception and AI for Field Robotics)

28 pages, 10998 KB  
Article
Introducing Brain–Computer Interfaces in Factories and Fabrication Lines for the Inclusion of Disabled Workers–Industry 5.0—A Modern Challenge and Opportunity
by Marian-Silviu Poboroniuc, Zoltán Nochta, Martin Klepal, Nina Hunter, Danut-Constantin Irimia, Alina Georgiana Baciu, Kelaja Schert, Tim Piotrowski and Alexandru Mitocaru
Multimodal Technol. Interact. 2026, 10(4), 41; https://doi.org/10.3390/mti10040041 - 17 Apr 2026
Abstract
Flexible factories and adaptive fabrication lines offer a testbed for advanced multimodal interaction concepts that can support the inclusion of disabled workers in Industry 5.0 manufacturing systems. The study synthesizes interdisciplinary data from ergonomics, industrial automation, and EU regulatory frameworks to establish a conceptual model for human-machine interaction. Building on conceptual modeling and a structured literature analysis, the study proposes a six-step integration framework that links task demands, worker capabilities, and interaction modalities within human-in-the-loop manufacturing environments. Although no empirical case study was conducted in this phase, an exemplary application is presented for a semi-automated bike wheel manufacturing process. Detailed machine-based assembly line flows and simulated process data were utilized for illustrative purposes to depict the process and validate the proposed Capability–Task Matching Matrix. The results operationalize the human-centric vision of Industry 5.0 by providing a structured methodology for the inclusion of disabled workers within fabrication environments. The findings are organized into two primary components: the conceptual development of the Integration Approach and its practical application to a semi-automated industrial use-case. Finally, a particular focus is placed on Brain–Computer Interfaces (BCIs) as an emerging interaction channel that enables non-muscular control, attention monitoring, and neuroadaptive feedback, complementing conventional interfaces rather than replacing them. The framework is illustrated through application to the same semi-automated bicycle wheel assembly line, where BCI-supported interaction, augmented interfaces, and robotic assistance are mapped to specific production tasks and assessed in terms of feasibility and technological maturity. Drawing on the paper’s results, an explanatory 10-year roadmap outlines the feasibility and phased deployment of BCI solutions. It aligns technological advances with European regulations and a vision for a fully inclusive manufacturing enterprise.

25 pages, 778 KB  
Review
Towards a Capability Taxonomy for Autonomous Robots in Affective Human–Robot Interaction
by Yunjia Sun and Tao Wang
Electronics 2026, 15(8), 1696; https://doi.org/10.3390/electronics15081696 - 17 Apr 2026
Abstract
Autonomous robots are increasingly integrated into social contexts, making affective human–robot interaction (HRI) critical for their effectiveness and acceptance. However, existing research remains dispersed across domains and techniques, lacking a unified framework to characterize core robotic capabilities. To address this gap, we adopt a capability-oriented perspective and conduct a comprehensive literature review, through which we propose a structured taxonomy of capabilities for robots in affective HRI. The taxonomy comprises five core dimensions: perception (recognizing human internal states), strategy (planning responses based on human states and context), expression (conveying robot lifelikeness and social presence), sustainability (maintaining effective and reliable operation over time), and ethics (ensuring behavior within ethical constraints). By organizing diverse research efforts into a structured framework, this taxonomy provides a systematic foundation for designing socially competent robots and guiding future research.
(This article belongs to the Special Issue Affective Computing in Human–Robot Interaction)

28 pages, 3548 KB  
Article
Edge Computing Approach to AI-Based Gesture for Human–Robot Interaction and Control
by Nikola Ivačko, Ivan Ćirić and Miloš Simonović
Computers 2026, 15(4), 241; https://doi.org/10.3390/computers15040241 - 14 Apr 2026
Abstract
This paper presents an edge-deployable, vision-based framework for human–robot interaction built around an xArm collaborative robot, a single RGB camera mounted on the robot wrist, and lightweight AI-based perception modules. The system enables intuitive, contact-free control by combining hand understanding and object detection within a unified perception–decision–control pipeline. Hand landmarks are extracted using MediaPipe Hands, from which continuous hand trajectories, static gestures, and dynamic gestures are derived. Task objects are detected using a YOLO-based model, and both hand and object observations are mapped into the robot workspace using ArUco-based planar calibration. To ensure stable robot motion, the hand control signal is smoothed using low-pass and Kalman filtering, while dynamic gestures such as waving are recognized using a lightweight LSTM classifier. The complete pipeline runs locally on edge hardware, specifically the NVIDIA Jetson Orin Nano and the Raspberry Pi 5 with a Hailo AI accelerator. Experimental evaluation includes trajectory stability, gesture recognition reliability, and runtime performance on both platforms. Results show that filtering significantly reduces hand-tracking jitter, gesture recognition provides stable command states for control, and both edge devices support real-time operation, with the Jetson achieving consistently lower runtime than the Raspberry Pi. The proposed system demonstrates the feasibility of low-cost edge AI solutions for responsive and practical human–robot interaction in collaborative industrial environments.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
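Of the two smoothers mentioned, the Kalman stage applied to a single hand coordinate reduces to a scalar constant-position filter. The noise covariances below are illustrative defaults, not the paper's tuning; in practice one such filter would run per tracked axis of the fingertip trajectory.

```python
def kalman_1d(measurements, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Scalar constant-position Kalman filter for smoothing a noisy
    coordinate track. q = process noise, r = measurement noise
    (illustrative values); returns the filtered sequence."""
    x, p, out = x0, p0, []
    for z in measurements:
        p += q                  # predict: uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update toward the measurement
        p *= (1.0 - k)          # posterior uncertainty shrinks
        out.append(x)
    return out
```

Lowering q relative to r makes the steady-state gain small, which suppresses pixel jitter at the cost of a little lag; that trade-off is exactly why the paper pairs filtering with a low control-loop latency budget.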

21 pages, 3949 KB  
Article
From Biological Analogs to Robotic Embodiment: A Systematic Biomimetic Translation Framework Mediated by Traditional Craft
by Junbo Li, Fan Wu and Congrong Xiao
Biomimetics 2026, 11(4), 266; https://doi.org/10.3390/biomimetics11040266 - 12 Apr 2026
Abstract
This study investigates the effective translation of complex biological principles into viable engineering solutions within the field of biomimetic design. A critical challenge in current research is the “fuzzy front end” bridging initial biological observations and practical engineering applications. This gap primarily stems from the lack of intermediary models capable of abstracting complex biomechanical data into manufacturable mechanical paradigms. To address this, we propose a systematic biomimetic translation framework that redefines traditional crafts as “Empirically Optimized Biological Analogues” (EOBAs), serving as a logical bridge between biological inspiration and engineering realization. This study contributes by integrating the Analytic Hierarchy Process (AHP) with the Fuzzy Comprehensive Evaluation (FCE) method to construct a quantitative assessment system. This system evaluates translation feasibility, engineering innovation potential, semantic interaction characteristics, and prototype manufacturability. Applying this framework to four intangible cultural heritages in Guangdong, combined with comprehensive expert and public evaluations, revealed that the Guangdong Lion Dance exhibits the highest biomimetic translation potential in terms of morphological clarity and dynamic behavioral characteristics. Consequently, we extracted the core principle of “embodied kinematics for communication” and developed a conceptual multi-segment biomimetic robotic prototype designated as “Kine-Lion”. Ultimately, this research provides a structured methodological reference for biomimetic robotic design, demonstrating that culturally abstracted biological behaviors can be systematically decoded into functional robotic structures. These findings indicate broad application prospects in the domains of human–robot interaction and biomimetic engineering.
(This article belongs to the Special Issue Biomimetic Innovations for Human-Machine Interaction: 2nd Edition)
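The AHP weighting step of such an assessment system can be sketched independently of the fuzzy evaluation it is paired with. The sketch below uses the common geometric-mean (row) approximation rather than the principal-eigenvector method, and the comparison matrix in the usage note is invented, not the paper's data.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean method:
    weight_i is proportional to the geometric mean of row i of the
    pairwise comparison matrix. pairwise[i][j] states how strongly
    criterion i outranks criterion j on the Saaty 1-9 scale
    (a reciprocal matrix is assumed)."""
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]
```

For example, if "translation feasibility" is judged three times as important as "manufacturability", the matrix [[1, 3], [1/3, 1]] yields weights of 0.75 and 0.25; these weights then scale the fuzzy membership scores in the combined AHP-FCE evaluation.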

19 pages, 3273 KB  
Article
A Comprehensive Analysis of Human–Machine Interaction: Teaching Pendant vs. Gesture Control in Industrial Robotics
by Robert Kristof, Valentin Ciupe, Erwin-Christian Lovasz and Ghadeer Ismael
Actuators 2026, 15(4), 210; https://doi.org/10.3390/act15040210 - 8 Apr 2026
Abstract
In collaborative robotics, efficiency and user experience play a central role. This study looks at how perceived performance differs from measured performance when comparing two ways of controlling industrial robots: traditional teaching pendants and wearable EMG-based gesture control. A Myo Armband was used as an accessible 8-channel EMG platform, and three experiments were carried out on a Universal Robots UR10e to test pick-and-place tasks and precision positioning. Time and accuracy data were gathered together with blind feedback from 13 participants through a multi-criteria analysis framework. Even though the teaching pendant turned out to be more accurate in every scenario, 85% of participants still rated gesture control higher in overall satisfaction. These results point to a notable gap between what users perceive and how they actually perform and suggest that user experience deserves more weight in the design of future robot control interfaces.
(This article belongs to the Special Issue Actuation and Sensing of Intelligent Soft Robots—2nd Edition)

18 pages, 2109 KB  
Article
PAGF: Short-Horizon Forecasting of 3D Facial Landmarks
by Mingzhu Yan, Ye Yuan, Jian Liu and Fangyan Yang
Mathematics 2026, 14(7), 1222; https://doi.org/10.3390/math14071222 - 6 Apr 2026
Abstract
Short-term facial landmark forecasting is important for anticipatory facial behavior in human–robot interaction, yet models trained with pointwise reconstruction losses often suffer from mean reversion, producing low-error predictions with weakened motion dynamics. To address this issue, we propose a peak-aware gated recurrent unit (GRU) framework that separates forecasting into peak planning and peak-conditioned trajectory generation. The planning stage estimates the timing and intensity of a salient motion peak within the forecast horizon together with a global motion direction, and the generation stage produces short-horizon landmark displacements through temporal gating and structured motion composition. The model is trained with reconstruction loss, peak supervision, peak-integrity regularization, and correlation-based temporal-shape regularization. Experiments on the MEAD dataset using 3D facial landmarks under a subject-independent protocol show a clear distortion–dynamics trade-off. Compared with static and sequence-to-sequence baselines, the proposed method better preserves peak-related facial dynamics while maintaining competitive 24-step prediction accuracy.

20 pages, 6648 KB  
Article
Sensorless Collision Detection and Classification in Collaborative Robots Using Stacked GRU Networks
by Jong Hyeok Lee, Minjae Hong and Kyu Min Park
Actuators 2026, 15(4), 206; https://doi.org/10.3390/act15040206 - 4 Apr 2026
Abstract
The increasing deployment of collaborative robots in industrial manufacturing environments has enabled close human–robot collaboration, making rapid and reliable collision detection essential for worker safety. This paper presents a learning-based framework for real-time detection and classification of hard and soft collisions using stacked Gated Recurrent Unit (GRU) networks. A two-stage pipeline is introduced, in which collision detection and collision type classification are performed sequentially using separate models, and its performance is validated through extensive experiments on a collision dataset collected from a six-joint collaborative robot executing random point-to-point motions. Without requiring joint torque sensors, unmodeled joint friction is implicitly compensated through learning for both detection and classification. Compared to our previous work, the proposed method achieves improved detection performance, and its robustness is further demonstrated through systematic generalization experiments under simulated dynamic model uncertainties. In addition, the classification model accurately distinguishes between hard and soft collisions, providing a basis for differentiated post-collision reaction strategies. Overall, the proposed sensorless collision detection and classification framework provides a practical and cost-effective solution for real-world industrial human–robot collaboration.
(This article belongs to the Special Issue Machine Learning for Actuation and Control in Robotic Joint Systems)
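The two-stage detect-then-classify structure is independent of the learned models that fill it. The sketch below is a deliberately crude rule-based stand-in for the paper's stacked GRUs: a threshold detector on a residual signal followed by a peak-to-mean heuristic distinguishing sharp "hard" transients from sustained "soft" ones. Threshold, window length, and ratio are invented for illustration.

```python
def detect_collision(residuals, threshold=5.0):
    """Stage 1: return the index of the first residual sample whose
    magnitude exceeds a threshold, or -1 if none does. A learned
    detector (e.g. a GRU over residual windows) would replace this rule."""
    for i, r in enumerate(residuals):
        if abs(r) > threshold:
            return i
    return -1

def classify_collision(window):
    """Stage 2 placeholder: call a short, sharp transient 'hard' and a
    slow, sustained one 'soft', via the peak-to-mean ratio of |residual|."""
    peak = max(abs(r) for r in window)
    mean = sum(abs(r) for r in window) / len(window)
    return "hard" if peak / (mean + 1e-9) > 2.0 else "soft"

def pipeline(residuals, threshold=5.0, w=8):
    """Run detection, then classify the post-detection window."""
    i = detect_collision(residuals, threshold)
    if i < 0:
        return None
    return classify_collision(residuals[i:i + w])
```

Separating the stages, as the paper does, lets the detector stay fast and conservative while the classifier sees a longer post-impact window before committing to a reaction strategy.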

24 pages, 5827 KB  
Article
Collision Avoidance with the Novel Advanced Shared Smooth Control in Teleoperated Mobile Robot Vehicles
by Teressa Talluri, Eugene Kim, Myeong-Hwan Hwang, Amarnathvarma Angani and Hyun-Rok Cha
Electronics 2026, 15(7), 1510; https://doi.org/10.3390/electronics15071510 - 3 Apr 2026
Abstract
To address collision risks in teleoperated mobile robotic vehicles, this study proposes a Human–Machine Interaction-based Advanced Smooth Shared Control (ASSC) system aimed at enhancing obstacle avoidance and achieving smooth shared control between human operators and the automation system. The ASSC system integrates a novel approach using predictive vectors to represent the vehicle’s heading position, automatically adjusting the steering position upon obstacle detection to ensure smooth collision avoidance without changing the driver’s perception. Feedback forces applied to the steering wheel are calculated through an artificial potential field algorithm. Twenty participants were invited to operate the vehicle, providing feedback on the ASSC system’s performance relative to conventional obstacle avoidance methods. Performance metrics are analyzed, including Time to Complete the Task (TTC), the Number of Obstacle Collisions (NOC), overall ASSC effectiveness, and the impact of communication delays on the ASSC system. The results demonstrate that the ASSC system significantly outperforms traditional obstacle avoidance methods, providing more precise control in teleoperation. Statistical analysis indicates that the ASSC system improves safety, comfort, and operational performance by 12.8%. This research highlights the ASSC system as a promising solution for enhancing automation, safety, and human–machine interaction in teleoperated mobile robotic vehicles.
(This article belongs to the Special Issue Teleoperation of Semi-Autonomous Systems)
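The artificial potential field term used for the steering feedback has a classic form: a repulsive magnitude that is zero beyond an influence distance and grows steeply as the obstacle gets close. The gain and influence distance below are illustrative, and mapping this force onto steering-wheel torque is the system-specific part the paper addresses.

```python
def repulsive_force(d, d0=2.0, eta=1.0):
    """Classic (Khatib-style) repulsive-field magnitude for an obstacle at
    distance d: zero at or beyond the influence distance d0, growing
    steeply as d shrinks. eta is an illustrative gain."""
    if d >= d0 or d <= 0.0:
        return 0.0
    return eta * (1.0 / d - 1.0 / d0) / (d * d)
```

Because the magnitude is continuous at d = d0 and increases monotonically as the obstacle closes in, the resulting steering feedback ramps in smoothly rather than jerking the wheel, which matches the "smooth shared control" goal stated above.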

14 pages, 3018 KB  
Article
Optimized Haptic Feedback and Natural Prehension System for Robotics and Virtual Reality Applications
by Eve Hirel, Odin Le Morvan, Marwan Mahdouf, Prune Picot, Matteo Quinquis and Christophe Delebarre
Sensors 2026, 26(7), 2222; https://doi.org/10.3390/s26072222 - 3 Apr 2026
Abstract
As robotics prehension systems and virtual reality applications are in constant evolution, the need for high-fidelity haptic interaction increases, helping to ensure and enhance user immersion and handling precision. While commercial haptic interfaces offer high performance, their prohibitive cost limits their widespread adoption in general-purpose robotics. Furthermore, many low-cost solutions suffer from limited transparency, where the operator constantly fights the friction of the actuator even during free motion. This article presents the design and development of an innovative, cost-effective master–slave robotic system aimed at democratizing efficient haptic feedback devices. The solution is intended for remote manipulation of objects with a maximum mass of 1 kg, while limiting the gripping force to 50 N, thus ensuring the integrity of objects being manipulated. The device includes a master haptic module in the form of a clamp that reproduces the thumb–index–middle finger gripping motion performed by the user. The system relies on a custom haptic interface measuring the angular position of the master gripper, which is transmitted in real time to the slave gripper so as to adjust the position of the clamp accordingly, thus optimizing the grasping control loop. As soon as an object is detected, using a force sensor integrated into the slave gripper, the master motor renders a resistive force, preventing the user from closing the haptic module. The other part of the system is the slave mechanical gripper with three fingers, each with three phalanges based on human anatomy, allowing the clamp to mechanically conform to irregular object geometries with a single actuator. A final innovative aspect lies in the implementation of current sensing to provide the haptic feedback: the force applied by the user is reproduced by the slave gripper using current sensors, eliminating the need for expensive force–torque sensors while maintaining a responsive feedback loop.
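Estimating grip force from motor current, as this design does instead of using a force-torque sensor, usually means subtracting the free-running current and converting the rest to torque through the motor constant and drivetrain. Every constant below (torque constant, gear ratio, lever arm, efficiency, free current) is an illustrative placeholder, not a value from the paper; only the 50 N ceiling comes from the abstract.

```python
def grip_force_from_current(i_motor, k_t=0.05, gear=30.0, r_lever=0.03,
                            efficiency=0.7, i_free=0.2):
    """Estimate fingertip force (N) from motor current (A): subtract the
    free-running current, convert to motor torque via k_t [N*m/A], then
    scale through the gearbox, drivetrain efficiency, and lever arm.
    All constants are illustrative placeholders."""
    i_load = max(i_motor - i_free, 0.0)
    torque = k_t * i_load * gear * efficiency
    return torque / r_lever

def clamp_to_limit(force, f_max=50.0):
    """Enforce the 50 N gripping-force ceiling stated in the abstract."""
    return min(force, f_max)
```

The free-current subtraction is what recovers transparency on a low-cost actuator: current drawn merely to overcome the gearbox's own friction is not fed back to the operator as a phantom contact force.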
