Search Results (886)

Search Parameters:
Keywords = robotic arts

16 pages, 6945 KiB  
Article
Lightweight Underwater Target Detection Algorithm Based on YOLOv8n
by Dengke Song and Hua Huo
Electronics 2025, 14(9), 1810; https://doi.org/10.3390/electronics14091810 - 28 Apr 2025
Viewed by 146
Abstract
To address the challenges in underwater target detection, such as complex environments, image blurring, and high model parameter counts and computational complexity, an improved lightweight detection algorithm, RDL-YOLO, is proposed. This algorithm incorporates multiple optimizations based on the YOLOv8n model. The introduction of the RFAConv module optimizes the backbone network, enhancing feature extraction capabilities under complex backgrounds. The DySample dynamic upsampling module is used to effectively improve the model’s ability to capture edge information. A lightweight detection head based on shared convolutions is designed to achieve model lightweighting. The combination of the normalized Wasserstein distance (NWD) loss function and CIoU loss improves the detection accuracy for small targets. Experimental results on the UPRC (Underwater Robot Prototype Competition) and RUOD (Real-World Underwater Object Detection) datasets show that the improved algorithm achieves a mean average precision (mAP) increase of 1.4% and 1.0%, respectively, while reducing parameter count and computational complexity by 19.3% and 14.8%. Compared to other state-of-the-art underwater target detection algorithms, the proposed RDL-YOLO not only improves detection accuracy but also achieves model lightweighting, demonstrating superior applicability in resource-constrained underwater environments. Full article
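
The NWD term mentioned in this abstract has a closed form when boxes are modeled as 2D Gaussians; below is a minimal Python sketch of that form and one plausible way to blend it with a CIoU loss value. The constant C, the mixing weight alpha, and the helper names are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def nwd(box_p, box_g, C=12.8):
    """Normalized Wasserstein distance between two boxes (cx, cy, w, h),
    each modeled as a 2D Gaussian. C is a dataset-dependent constant;
    12.8 is only a placeholder value."""
    cx1, cy1, w1, h1 = box_p
    cx2, cy2, w2, h2 = box_g
    # Squared 2-Wasserstein distance between the two Gaussians
    w2_sq = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2 \
            + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2
    return np.exp(-np.sqrt(w2_sq) / C)

def box_regression_loss(box_p, box_g, ciou_loss, alpha=0.5):
    """One plausible blend of an NWD term with a CIoU loss term.
    `ciou_loss` is assumed to be computed elsewhere; `alpha` is a
    hypothetical mixing weight, not the paper's value."""
    return alpha * (1.0 - nwd(box_p, box_g)) + (1.0 - alpha) * ciou_loss
```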

23 pages, 2040 KiB  
Review
Trajectory Planning for Robotic Manipulators in Automated Palletizing: A Comprehensive Review
by Samuel Romero, Jorge Valero, Andrea Valentina García, Carlos F. Rodríguez, Ana Maria Montes, Cesar Marín, Ruben Bolaños and David Álvarez-Martínez
Robotics 2025, 14(5), 55; https://doi.org/10.3390/robotics14050055 - 26 Apr 2025
Viewed by 158
Abstract
Recent industrial production paradigms have seen the promotion of the outsourcing of low-value-added operations to robotic cells as a service, particularly end-of-line packaging. As a result, various types of research have emerged, offering different approaches to the trajectory design optimization of robotic manipulators and their applications. Over time, numerous improvements and updates have been made to the proposed methodologies, addressing the limitations and restrictions of earlier work. This survey-type article compiles research articles published in recent years that focus on the main algorithms proposed for addressing placement and minimum-time path planning for a manipulator responsible for performing pick-and-place tasks. Specifically, the research examines the construction of an automated robotic cell for the palletizing of regular heterogeneous boxes on a collision-free mixed pallet. By reviewing and synthesizing the most recent research, this article sheds light on the state-of-the-art manipulator planning algorithms for pick-and-place tasks in palletizing applications. Full article
(This article belongs to the Section Industrial Robots and Automation)

17 pages, 3239 KiB  
Article
MSF-SLAM: Enhancing Dynamic Visual SLAM with Multi-Scale Feature Integration and Dynamic Object Filtering
by Yongjia Duan, Jing Luo and Xiong Zhou
Appl. Sci. 2025, 15(9), 4735; https://doi.org/10.3390/app15094735 - 24 Apr 2025
Viewed by 218
Abstract
Conventional visual SLAM systems often struggle with degraded pose estimation accuracy in dynamic environments due to the interference of moving objects and unstable feature tracking. To address this critical challenge, we present a groundbreaking enhancement to visual SLAM by introducing an innovative architecture that integrates advanced feature extraction and dynamic object filtering mechanisms. At the core of our approach lies a novel Multi-Scale Feature Consolidation (MSFConv) module, which we have developed to significantly boost the feature extraction capabilities of the YOLOv8 network. This module enables superior multi-scale feature representation, leading to significant improvements in object detection accuracy and robustness. Furthermore, we have developed a Dynamic Object Filtering Framework (DOFF) that seamlessly integrates with the ORB-SLAM3 architecture. By leveraging the Lucas-Kanade (LK) optical flow method, DOFF effectively distinguishes and removes dynamic feature points while preserving the integrity of static features. This ensures high-precision pose estimation in highly dynamic environments. Comprehensive experiments on the TUM RGB-D dataset validate the exceptional performance of our proposed method, demonstrating 93.34% and 94.43% improvements in pose estimation accuracy over the baseline ORB-SLAM3 in challenging dynamic sequences. These substantial improvements are achieved through the synergistic combination of enhanced feature extraction and precise dynamic object filtering. Our work represents a significant leap forward in visual SLAM technology, offering a robust solution to the long-standing problem of dynamic environment handling. The proposed innovations not only advance the state-of-the-art in SLAM research but also pave the way for more reliable real-world applications in robotics and autonomous systems. Full article
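
The dynamic-point rejection step can be pictured with a short, hedged OpenCV sketch: track features with Lucas-Kanade optical flow and drop points whose motion deviates strongly from the median (ego-motion-induced) flow. The paper's DOFF couples this idea with YOLOv8 detections inside ORB-SLAM3; the threshold and the median heuristic below are illustrative assumptions only.

```python
import cv2
import numpy as np

def filter_dynamic_points(prev_gray, curr_gray, prev_pts, thresh=2.0):
    """Toy dynamic-point filter based on Lucas-Kanade optical flow.
    Points whose flow deviates strongly from the median motion are flagged
    as dynamic and dropped; `thresh` (pixels) is an illustrative value."""
    pts = np.asarray(prev_pts, dtype=np.float32).reshape(-1, 1, 2)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.reshape(-1) == 1
    flow = (next_pts - pts).reshape(-1, 2)
    median_flow = np.median(flow[ok], axis=0)        # dominant camera-induced motion
    residual = np.linalg.norm(flow - median_flow, axis=1)
    static_mask = ok & (residual < thresh)           # keep quasi-static points only
    return pts.reshape(-1, 2)[static_mask], next_pts.reshape(-1, 2)[static_mask]
```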

27 pages, 32676 KiB  
Article
Action Recognition via Multi-View Perception Feature Tracking for Human–Robot Interaction
by Chaitanya Bandi and Ulrike Thomas
Robotics 2025, 14(4), 53; https://doi.org/10.3390/robotics14040053 - 19 Apr 2025
Viewed by 234
Abstract
Human–Robot Interaction (HRI) depends on robust perception systems that enable intuitive and seamless interaction between humans and robots. This work introduces a multi-view perception framework designed for HRI, incorporating object detection and tracking, human body and hand pose estimation, unified hand–object pose estimation, and action recognition. We use the state-of-the-art object detection architecture to understand the scene for object detection and segmentation, ensuring high accuracy and real-time performance. In interaction environments, 3D whole-body pose estimation is necessary, and we integrate an existing work with high inference speed. We propose a novel architecture for 3D unified hand–object pose estimation and tracking, capturing real-time spatial relationships between hands and objects. Furthermore, we incorporate action recognition by leveraging whole-body pose, unified hand–object pose estimation, and object tracking to determine the handover interaction state. The proposed architecture is evaluated on large-scale, open-source datasets, demonstrating competitive accuracy and faster inference times, making it well-suited for real-time HRI applications. Full article
(This article belongs to the Special Issue Human–AI–Robot Teaming (HART))

22 pages, 31401 KiB  
Article
BEV-CAM3D: A Unified Bird’s-Eye View Architecture for Autonomous Driving with Monocular Cameras and 3D Point Clouds
by Daniel Ayo Oladele, Elisha Didam Markus and Adnan M. Abu-Mahfouz
AI 2025, 6(4), 82; https://doi.org/10.3390/ai6040082 - 18 Apr 2025
Viewed by 372
Abstract
Three-dimensional (3D) visual perception is pivotal for understanding surrounding environments in applications such as autonomous driving and mobile robotics. While LiDAR-based models dominate due to accurate depth sensing, their cost and sparse outputs have driven interest in camera-based systems. However, challenges like cross-domain degradation and depth estimation inaccuracies persist. This paper introduces BEVCAM3D, a unified bird’s-eye view (BEV) architecture that fuses monocular cameras and LiDAR point clouds to overcome single-sensor limitations. BEVCAM3D integrates a deformable cross-modality attention module for feature alignment and a fast ground segmentation algorithm to reduce computational overhead by 40%. Evaluated on the nuScenes dataset, BEVCAM3D achieves state-of-the-art performance, with a 73.9% mAP and a 76.2% NDS, outperforming existing LiDAR-camera fusion methods like SparseFusion (72.0% mAP) and IS-Fusion (73.0% mAP). Notably, it excels in detecting pedestrians (91.0% AP) and traffic cones (89.9% AP), addressing the class imbalance in autonomous driving scenarios. The framework supports real-time inference at 11.2 FPS with an EfficientDet-B3 backbone and demonstrates robustness under low-light conditions (62.3% nighttime mAP). Full article
(This article belongs to the Section AI in Autonomous Systems)

25 pages, 2639 KiB  
Article
Advances in Aircraft Skin Defect Detection Using Computer Vision: A Survey and Comparison of YOLOv9 and RT-DETR Performance
by Nutchanon Suvittawat, Christian Kurniawan, Jetanat Datephanyawat, Jordan Tay, Zhihao Liu, De Wen Soh and Nuno Antunes Ribeiro
Aerospace 2025, 12(4), 356; https://doi.org/10.3390/aerospace12040356 - 17 Apr 2025
Viewed by 288
Abstract
Aircraft skin surface defect detection is critical for aviation safety but is currently mostly reliant on manual or visual inspections. Recent advancements in computer vision offer opportunities for automation. This paper reviews the current state of computer vision algorithms and their application in aircraft defect detection, synthesizing insights from academic research (21 publications) and industry projects (18 initiatives). Beyond a detailed review, we experimentally evaluate the accuracy and feasibility of existing low-cost, easily deployable hardware (drone) and software solutions (computer vision algorithms). Specifically, real-world data were collected from an abandoned aircraft with visible defects using a drone to capture video footage, which was then processed with state-of-the-art computer vision models—YOLOv9 and RT-DETR. Both models achieved mAP50 scores of 0.70–0.75, with YOLOv9 demonstrating slightly better accuracy and inference speed, while RT-DETR exhibited faster training convergence. Additionally, a comparison between YOLOv5 and YOLOv9 revealed a 10% improvement in mAP50, highlighting the rapid advancements in computer vision in recent years. Lastly, we identify and discuss various alternative hardware solutions for data collection—in addition to drones, these include robotic platforms, climbing robots, and smart hangars—and discuss key challenges for their deployment, such as regulatory constraints, human–robot integration, and weather resilience. The fundamental contribution of this paper is to underscore the potential of computer vision for aircraft skin defect detection while emphasizing that further research is still required to address existing limitations. Full article
(This article belongs to the Section Aeronautics)

22 pages, 9287 KiB  
Article
On the Feasibility of Adapting the LiVec Tactile Sensing Principle to Non-Planar Surfaces: A Thin, Flexible Tactile Sensor
by Olivia Leslie, David Córdova Bulens and Stephen J. Redmond
Sensors 2025, 25(8), 2544; https://doi.org/10.3390/s25082544 - 17 Apr 2025
Viewed by 192
Abstract
Tactile sensation across the whole hand, including the fingers and palm, is essential for manipulation and, therefore, is expected to be similarly useful for enabling dexterous robot manipulation. Tactile sensation would ideally be distributed (over large surface areas), have a high precision, and provide measurements in multiple axes, allowing for effective manipulation and interaction with objects of varying shapes, textures, friction, and compliance. Given the complex geometries and articulation of state-of-the-art robotic grippers and hands, they would benefit greatly from their surface being instrumented with a thin, curved, and/or flexible tactile sensor technology. However, the majority of current sensor technologies measure tactile information across a planar sensing surface or instrument curved skin using relatively bulky camera-based approaches; proportionally in the literature, thin and flexible tactile sensor arrays are an under-explored topic. This paper presents a thin, flexible, non-camera-based optical tactile sensor design as an investigation into the feasibility of adapting our novel LiVec sensing principle to curved and flexible surfaces. To implement the flexible sensor, flexible PCB technology is utilized in combination with other soft components. This proof-of-concept design eliminates rigid circuit boards, creating a sensor capable of providing localized 3D force and 3D displacement measurements across an array of sensing units in a small-thickness, non-camera-based optical tactile sensor skin covering a curved surface. The sensor consists of 16 sensing units arranged in a uniform 4 × 4 grid with an overall size of 30 mm × 30 mm × 7.2 mm in length, width, and depth, respectively. The sensor successfully estimated local XYZ forces and displacements in a curved configuration across all sixteen sensing units: the average force bias values (μ̄) were −1.04 mN, −0.32 mN, and −1.31 mN, with average precision (SD) of 54.49 mN, 55.16 mN, and 97.15 mN for the X, Y, and Z axes, respectively; the average displacement bias values (μ̄) were 1.58 μm, 0.29 μm, and −1.99 μm, with average precision (SD) of 221.61 μm, 247.74 μm, and 44.93 μm for the X, Y, and Z axes, respectively. This work provides crucial insights into the design and calibration of future curved LiVec sensors for robotic fingers and palms, making it highly suitable for enhancing dexterous robotic manipulation in complex, real-world environments. Full article
(This article belongs to the Section Optical Sensors)
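
For readers unfamiliar with the bias/precision figures quoted in this abstract, the following minimal sketch shows how per-axis bias (mean error) and precision (standard deviation of error) could be computed from predicted versus reference forces or displacements; it is an assumed evaluation recipe, not the paper's calibration code.

```python
import numpy as np

def bias_and_precision(pred, truth):
    """Per-axis bias (mean error) and precision (SD of error) for stacked
    predictions and ground truth of shape (n_samples, 3): the X, Y, Z force
    or displacement components."""
    err = np.asarray(pred, dtype=float) - np.asarray(truth, dtype=float)
    bias = err.mean(axis=0)        # mean error per axis
    precision = err.std(axis=0)    # spread of the error per axis
    return bias, precision
```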

22 pages, 6991 KiB  
Article
Robotic Rehabilitation Through Multilateral Shared Control Architecture
by Srikar Annamraju, Harris Nisar, Anne Christine Horowitz and Dušan Stipanović
Robotics 2025, 14(4), 50; https://doi.org/10.3390/robotics14040050 - 16 Apr 2025
Viewed by 220
Abstract
The shortage of therapists required for the rehabilitation of stroke patients, together with the patients’ lack of motivation in regular therapy, creates the need for a robotic rehabilitation platform. While studies on shared control architectures are present in the literature as a means of training, state-of-the-art training systems involve a complex architecture and, moreover, have notable performance limitations. In this paper, a simplified training architecture is proposed that is particularly targeted for rehabilitation and also adds missing features, such as complete force feedback, enhanced learning rate, and dynamic monitoring of the patient’s performance. In addition to the novel architecture, the design of controllers to ensure system stability is presented. These controllers are analytically shown to meet the performance objectives and maintain the system’s passivity. An experimental setup is built to test the architecture and the controllers. A comparison with the state-of-the-art methods is also performed to demonstrate the superiority of the proposed method. It is further demonstrated that the proposed architecture facilitates correcting the inaccurate frequencies at which the patient might operate. This was achieved by defining attribute-wise individual recovery factors for the patient. Full article
(This article belongs to the Special Issue Development of Biomedical Robotics)

38 pages, 20801 KiB  
Article
A Hybrid Method to Solve the Multi-UAV Dynamic Task Assignment Problem
by Shahad Alqefari and Mohamed El Bachir Menai
Sensors 2025, 25(8), 2502; https://doi.org/10.3390/s25082502 - 16 Apr 2025
Viewed by 272
Abstract
In the rapidly evolving field of aerial robotics, the coordinated management of multiple unmanned aerial vehicle (multi-UAV) systems to address complex and dynamic environments is increasingly critical. Multi-UAV systems promise enhanced efficiency and effectiveness in various applications, from disaster response to infrastructure inspection, by leveraging the collective capabilities of UAV fleets. However, the dynamic nature of such environments presents significant challenges in task allocation and real-time adaptability. This paper introduces a novel hybrid algorithm designed to optimize multi-UAV task assignments in dynamic environments. State-of-the-art solutions in this domain have exhibited limitations, particularly in rapidly responding to dynamic changes and effectively scaling to large-scale environments. The proposed solution bridges these gaps by combining clustering to group and assign tasks in an initial offline phase with a dynamic partial reassignment process that locally updates assignments in response to real-time changes, all within a centralized–distributed communication topology. The simulation results validate the superiority of the proposed solution and demonstrate its improvements in efficiency and responsiveness over existing solutions. Additionally, the results highlight the scalability of the solution in handling large-scale problems and demonstrate its ability to efficiently manage a growing number of UAVs and tasks. The solution also demonstrated robust adaptability and enhanced mission effectiveness across a wide range of dynamic events and scenario scales. Full article
(This article belongs to the Section Sensors and Robotics)
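
A minimal sketch of the offline clustering-and-assignment idea described in this abstract, assuming tasks and UAVs are given as 2D coordinates; the hypothetical helper uses k-means plus Hungarian matching and omits the dynamic partial-reassignment phase.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def initial_assignment(task_xy, uav_xy, seed=0):
    """Toy offline phase: cluster task locations into one group per UAV,
    then match clusters to UAVs by minimizing total UAV-to-centroid distance.
    The online partial-reassignment step is not reproduced here."""
    task_xy = np.asarray(task_xy, dtype=float)
    uav_xy = np.asarray(uav_xy, dtype=float)
    km = KMeans(n_clusters=len(uav_xy), n_init=10, random_state=seed).fit(task_xy)
    # cost[i, j] = distance from UAV i to the centroid of cluster j
    cost = np.linalg.norm(uav_xy[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    uav_idx, cluster_idx = linear_sum_assignment(cost)
    return {int(u): np.where(km.labels_ == c)[0].tolist()
            for u, c in zip(uav_idx, cluster_idx)}
```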

11 pages, 4414 KiB  
Review
High-Speed 3D Vision Based on Structured Light Methods
by Leo Miyashita, Satoshi Tabata and Masatoshi Ishikawa
Metrology 2025, 5(2), 24; https://doi.org/10.3390/metrology5020024 - 15 Apr 2025
Viewed by 224
Abstract
Three-dimensional measurement technologies based on computer vision have been developed with the aim of achieving perceptual speeds equivalent to humans (30 fps). However, in a highly mechanized society, there is no need for computers and robots to work slowly to match the speed of human perception. Against this background, high-speed 3D vision with speeds far beyond that of humans, such as 1000 fps, has emerged. High-speed 3D measurement has great applicability not only for accurately recognizing a moving and deforming target but also for enabling real-time feedback, such as manipulation of the dynamic targets based on the measurement. In order to accelerate 3D vision and control the dynamic targets in real time, high-speed vision devices and high-speed image processing algorithms are essential. In this review, we revisit triangulation as the basic measurement principle suited to high-speed 3D vision, and introduce state-of-the-art 3D measurement methods based on high-speed vision devices and high-speed image processing utilizing structured light patterns. In addition, we introduce recent applications using high-speed 3D measurement and show that high-speed 3D measurement is one of the key technologies for real-time feedback in various fields such as robotics, mobility, security, interfaces, and XR. Full article
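
Since the review builds on triangulation, a one-line reminder of the underlying relation may help: for a rectified camera/projector (or stereo) pair, depth follows from disparity as Z = f·b/d. A hedged Python sketch of that textbook relation, not a specific method from the article:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic triangulation: Z = f * b / d, with focal length f in pixels,
    baseline b in meters and disparity d in pixels."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / np.maximum(d, 1e-6)  # guard against d = 0
```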

26 pages, 7455 KiB  
Article
Accuracy Optimization of Robotic Machining Using Grey-Box Modeling and Simulation Planning Assistance
by Minh Trinh, Michael Königs, Lukas Gründel, Marcel Beier, Oliver Petrovic and Christian Brecher
J. Manuf. Mater. Process. 2025, 9(4), 126; https://doi.org/10.3390/jmmp9040126 - 11 Apr 2025
Viewed by 306
Abstract
The aim of this paper is to develop an approach to increase the accuracy of industrial robots for machining processes. During machining tasks, process forces displace the end effector of the robot. A simulation of the various process influences is therefore necessary during production planning so that the process parameters can be optimized for stable machining. Realistic simulations require precise dynamics and stiffness models of the robot. Regarding the dynamics, the frictional component is highly complex and difficult to model. In the first part, this paper therefore follows a grey-box approach to combine the advantages of the state-of-the-art Lund–Grenoble model (white-box) with those of a data-driven one (black-box). The resulting grey-box LuGre model proves to be superior to the white- and black-box models. In the second part, a model-based simulation planning assistance tool is developed, which makes use of the grey-box LuGre model. The simulation assistance provides the manufacturing planner with process knowledge using the identified robot and cutting force models. Furthermore, it provides optimization methods such as a switching point analysis. Finally, the assistance tool gives predictions about the machining result and a process evaluation. The third part of the paper evaluates the simulation assistance on a real machining process and workpiece, showing an increase in accuracy when the tool is used. Full article
(This article belongs to the Special Issue Recent Progress in Robotic Machining)
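
The LuGre (Lund–Grenoble) friction model mentioned above has a compact standard form; the sketch below integrates it with forward Euler and shows where a hypothetical data-driven residual term could be added in a grey-box fashion. All parameter values are placeholders, not identified values from the paper.

```python
import numpy as np

def lugre_force(v, dt, sigma0=1e5, sigma1=300.0, sigma2=0.4,
                Fc=10.0, Fs=15.0, vs=0.01, residual_model=None):
    """White-box LuGre friction along a velocity trajectory v (m/s),
    integrated with forward Euler. `residual_model`, if given, is a
    hypothetical learned correction (the black-box part of a grey-box
    scheme) mapping velocity to a force residual."""
    z = 0.0
    F = np.zeros(len(v), dtype=float)
    for k, vk in enumerate(v):
        g = (Fc + (Fs - Fc) * np.exp(-(vk / vs) ** 2)) / sigma0  # Stribeck curve
        zdot = vk - abs(vk) / g * z          # internal bristle-state dynamics
        z += zdot * dt
        F[k] = sigma0 * z + sigma1 * zdot + sigma2 * vk
        if residual_model is not None:
            F[k] += residual_model(vk)       # learned grey-box correction
    return F
```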

30 pages, 24057 KiB  
Article
Enhancing Autonomous Orchard Navigation: A Real-Time Convolutional Neural Network-Based Obstacle Classification System for Distinguishing ‘Real’ and ‘Fake’ Obstacles in Agricultural Robotics
by Tabinda Naz Syed, Jun Zhou, Imran Ali Lakhiar, Francesco Marinello, Tamiru Tesfaye Gemechu, Luke Toroitich Rottok and Zhizhen Jiang
Agriculture 2025, 15(8), 827; https://doi.org/10.3390/agriculture15080827 - 10 Apr 2025
Viewed by 449
Abstract
Autonomous navigation in agricultural environments requires precise obstacle classification to ensure collision-free movement. This study proposes a convolutional neural network (CNN)-based model designed to enhance obstacle classification for agricultural robots, particularly in orchards. Building upon a previously developed YOLOv8n-based real-time detection system, the model incorporates Ghost Modules and Squeeze-and-Excitation (SE) blocks to enhance feature extraction while maintaining computational efficiency. Obstacles are categorized as “Real”—those that physically impact navigation, such as tree trunks and persons—and “Fake”—those that do not, such as tall weeds and tree branches—allowing for precise navigation decisions. The model was trained on separate orchard and campus datasets and fine-tuned using Hyperband optimization and evaluated on an external test set to assess generalization to unseen obstacles. The model’s robustness was tested under varied lighting conditions, including low-light scenarios, to ensure real-world applicability. Computational efficiency was analyzed based on inference speed, memory consumption, and hardware requirements. Comparative analysis against state-of-the-art classification models (VGG16, ResNet50, MobileNetV3, DenseNet121, EfficientNetB0, and InceptionV3) confirmed the proposed model’s superior precision (p), recall (r), and F1-score, particularly in complex orchard scenarios. The model maintained strong generalization across diverse environmental conditions, including varying illumination and previously unseen obstacles. Furthermore, computational analysis revealed that the orchard-combined model achieved the highest inference speed at 2.31 FPS while maintaining a strong balance between accuracy and efficiency. When deployed in real-time, the model achieved 95.0% classification accuracy in orchards and 92.0% in campus environments. The real-time system demonstrated a false positive rate of 8.0% in the campus environment and 2.0% in the orchard, with a consistent false negative rate of 8.0% across both environments. These results validate the model’s effectiveness for real-time obstacle differentiation in agricultural settings. Its strong generalization, robustness to unseen obstacles, and computational efficiency make it well-suited for deployment in precision agriculture. Future work will focus on enhancing inference speed, improving performance under occlusion, and expanding dataset diversity to further strengthen real-world applicability. Full article
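
The Squeeze-and-Excitation (SE) blocks referred to in this abstract follow a well-known pattern: global average pooling, a two-layer bottleneck, and a sigmoid gate that rescales channels. A generic PyTorch sketch of that standard formulation; the reduction ratio and placement are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic Squeeze-and-Excitation block: squeeze via global average
    pooling, excite via a two-layer bottleneck with a sigmoid gate, then
    rescale each channel of the input feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise reweighting
```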

25 pages, 13761 KiB  
Article
Mobile Robot Navigation with Enhanced 2D Mapping and Multi-Sensor Fusion
by Basheer Al-Tawil, Adem Candemir, Magnus Jung and Ayoub Al-Hamadi
Sensors 2025, 25(8), 2408; https://doi.org/10.3390/s25082408 - 10 Apr 2025
Viewed by 357
Abstract
This paper presents an enhanced Simultaneous Localization and Mapping (SLAM) framework for mobile robot navigation. It integrates RGB-D cameras and 2D LiDAR sensors to improve both mapping accuracy and localization efficiency. We propose a data fusion strategy where RGB-D point clouds are projected into 2D and denoised alongside LiDAR data. Late fusion is applied to combine the processed data, making it ready for use in the SLAM system. Additionally, we propose the enhanced Gmapping (EGM) algorithm by adding adaptive resampling and degeneracy handling to address particle depletion issues, thereby improving the robustness of the localization process. The system is evaluated through simulations and a small-scale real-world implementation using a Tiago robot. In simulations, the system was tested in environments of varying complexity and compared against state-of-the-art methods such as RTAB-Map SLAM and our EGM. Results show general improvements in navigation compared to state-of-the-art approaches: in simulation, an 8% reduction in traveled distance, a 13% reduction in processing time, and a 15% improvement in goal completion. In small-scale real-world tests, the EGM showed slight improvements over the classical GM method: a 3% reduction in traveled distance and a 9% decrease in execution time. Full article
(This article belongs to the Topic Multi-Sensor Integrated Navigation Systems)
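
Adaptive resampling in Gmapping-style particle filters is usually driven by the effective sample size; the hedged sketch below resamples only when N_eff = 1/Σw² drops below a fraction of the particle count, which limits particle depletion. The EGM's exact criterion and degeneracy handling are not claimed here.

```python
import numpy as np

def maybe_resample(particles, weights, ratio=0.5, rng=None):
    """Selective (adaptive) resampling: resample only when the effective
    sample size N_eff = 1 / sum(w^2) falls below `ratio` * N. The threshold
    is illustrative."""
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n_eff = 1.0 / np.sum(w ** 2)
    if n_eff < ratio * len(w):
        idx = rng.choice(len(w), size=len(w), p=w)   # simple multinomial resampling
        return np.asarray(particles)[idx], np.full(len(w), 1.0 / len(w))
    return np.asarray(particles), w
```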

24 pages, 1540 KiB  
Review
Myoelectric Control in Rehabilitative and Assistive Soft Exoskeletons: A Comprehensive Review of Trends, Challenges, and Integration with Soft Robotic Devices
by Alejandro Toro-Ossaba, Juan C. Tejada and Daniel Sanin-Villa
Biomimetics 2025, 10(4), 214; https://doi.org/10.3390/biomimetics10040214 - 1 Apr 2025
Viewed by 558
Abstract
Soft robotic exoskeletons have emerged as a transformative solution for rehabilitation and assistance, offering greater adaptability and comfort than rigid designs. Myoelectric control, based on electromyography (EMG) signals, plays a key role in enabling intuitive and adaptive interaction between the user and the exoskeleton. This review analyzes recent advancements in myoelectric control strategies, emphasizing their integration into soft robotic exoskeletons. Unlike previous studies, this work highlights the unique challenges posed by the deformability and compliance of soft structures, requiring novel approaches to motion intention estimation and control. Key contributions include critically evaluating machine learning-based motion prediction, model-free adaptive control methods, and real-time validation strategies to enhance rehabilitation outcomes. Additionally, we identify persistent challenges such as EMG signal variability, computational complexity, and the real-time adaptability of control algorithms, which limit clinical implementation. By interpreting recent trends, this review highlights the need for improved EMG acquisition techniques, robust adaptive control frameworks, and enhanced real-time learning to optimize human-exoskeleton interaction. Beyond summarizing the state of the art, this work provides an in-depth discussion of how myoelectric control can advance rehabilitation by ensuring more responsive and personalized exoskeleton assistance. Future research should focus on refining control schemes tailored to soft robotic architectures, ensuring seamless integration into rehabilitation protocols. This review is a foundation for developing intelligent soft exoskeletons that effectively support motor recovery and assistive applications. Full article

33 pages, 7877 KiB  
Article
GDCPlace: Geographic Distance Consistent Loss for Visual Place Recognition
by Shihao Shao and Qinghua Cui
Electronics 2025, 14(7), 1418; https://doi.org/10.3390/electronics14071418 - 31 Mar 2025
Viewed by 239
Abstract
Visual place recognition (VPR) is essential for robots and autonomous vehicles to understand their environment and navigate effectively. Inspired by face recognition, a recent trend for training a VPR model is to leverage a classification objective, where the embeddings of images are trained to be similar to corresponding class centers. Ideally, the predicted similarities should be negatively correlated with the geographic distances. However, previous studies typically used loss functions from face recognition due to the similarity between the two tasks, which cannot guarantee the rank consistency above as face recognition is unrelated to geographic distance. Current methods for distance-similarity or ordinal constraints are either designed for sample-to-sample training, only partially meet the constraint, or are unsuited to the VPR task. To this end, we provide a mathematical definition, termed geographic distance consistent, of the consistency that the loss function for VPR should adhere to. Based on it, we derive an upper bound of the cross-entropy softmax loss under the desired constraint to minimize, and propose a novel loss function for VPR that is geographic distance consistent, called GDCPlace. To the best of our knowledge, GDCPlace is the first classification loss function designed for VPR. To evaluate our loss, we collected 11 benchmarks with high domain variability to test on. As our contribution is on the loss function and previous classification-based VPR methods mostly adopt face recognition loss functions, we collect several additional loss functions for comparison, e.g., losses for face recognition, image retrieval, ordinal classification, and general-purpose use. The results show that GDCPlace performs the best among the compared losses and former state-of-the-art (SOTA) methods for VPR. It is also evaluated on ordinal classification tasks to show the generalizability of GDCPlace. Full article
(This article belongs to the Special Issue Machine Vision for Robotics and Autonomous Systems)
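
The consistency property stated in this abstract (similarities to class centers should fall as geographic distance grows) can be checked with a rank correlation; the sketch below is only such a diagnostic under that stated definition, not the GDCPlace loss itself.

```python
from scipy.stats import spearmanr

def distance_consistency(similarities, geo_distances_m):
    """Spearman rank correlation between predicted similarities and
    geographic distances; a value near -1 indicates good rank consistency."""
    rho, _ = spearmanr(similarities, geo_distances_m)
    return rho
```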
