Search Results (138)

Search Parameters:
Keywords = graph SLAM

22 pages, 3921 KB  
Article
Tightly Coupled LiDAR-Inertial Odometry for Autonomous Driving via Self-Adaptive Filtering and Factor Graph Optimization
by Weiwei Lyu, Haoting Li, Shuanggen Jin, Haocai Huang, Xiaojuan Tian, Yunlong Zhang, Zheyuan Du and Jinling Wang
Machines 2025, 13(11), 977; https://doi.org/10.3390/machines13110977 - 23 Oct 2025
Abstract
Simultaneous Localization and Mapping (SLAM) has become a critical tool for fully autonomous driving. However, current methods suffer from inefficient data utilization and degraded navigation performance in complex and unknown environments. In this paper, an accurate and tightly coupled method of LiDAR-inertial odometry is proposed. First, a self-adaptive voxel grid filter is developed to dynamically downsample the original point clouds based on environmental feature richness, aiming to balance navigation accuracy and real-time performance. Second, keyframe factors are selected based on thresholds of translation distance, rotation angle, and time interval and then introduced into the factor graph to improve global consistency. Additionally, high-quality Global Navigation Satellite System (GNSS) factors are selected and incorporated into the factor graph through linear interpolation, thereby improving the navigation accuracy in complex and unknown environments. The proposed method is evaluated on the KITTI dataset over various scales and environments. Results show that the proposed method outperforms other methods such as ALOAM, LIO-SAM, and SC-LeGO-LOAM. In urban scenes in particular, the trajectory accuracy of the proposed method improves by 33.13%, 57.56%, and 58.4% over these three methods, respectively, illustrating excellent navigation and positioning capabilities. Full article
(This article belongs to the Section Vehicle Engineering)
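The self-adaptive downsampling idea above can be sketched in a few lines: a voxel grid whose leaf size shrinks as scene feature richness grows, so feature-rich scenes retain more points. This is a minimal NumPy sketch; the function name, the scaling rule, and the scalar `richness` score are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def adaptive_voxel_filter(points, base_leaf=0.4, richness=0.5):
    # `richness` in [0, 1] stands in for the paper's environmental
    # feature-richness measure: richer scenes get a smaller leaf
    # (keep more points), sparse scenes a larger one.
    leaf = base_leaf * (1.5 - richness)  # hypothetical scaling rule
    keys = np.floor(points / leaf).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    # replace each occupied voxel's points by their centroid
    counts = np.bincount(inv).astype(float)
    return np.stack([np.bincount(inv, weights=points[:, d]) / counts
                     for d in range(3)], axis=1)

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(1000, 3))
dense = adaptive_voxel_filter(pts, richness=0.9)   # fine grid, more points kept
sparse = adaptive_voxel_filter(pts, richness=0.1)  # coarse grid, fewer points
assert len(sparse) <= len(dense) <= len(pts)
```

A real system would recompute `richness` per scan (e.g. from extracted edge/plane feature counts) rather than take it as a constant.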

24 pages, 3721 KB  
Article
Interactive Environment-Aware Planning System and Dialogue for Social Robots in Early Childhood Education
by Jiyoun Moon and Seung Min Song
Appl. Sci. 2025, 15(20), 11107; https://doi.org/10.3390/app152011107 - 16 Oct 2025
Abstract
In this study, we propose an interactive environment-aware dialogue and planning system for social robots in early childhood education, aimed at supporting the learning and social interaction of young children. The proposed architecture consists of three core modules. First, semantic simultaneous localization and mapping (SLAM) accurately perceives the environment by constructing a semantic scene representation that includes attributes such as position, size, color, purpose, and material of objects, as well as their positional relationships. Second, the automated planning system enables stable task execution even in changing environments through planning domain definition language (PDDL)-based planning and replanning capabilities. Third, the visual question answering module leverages scene graphs and SPARQL conversion of natural language queries to answer children’s questions and engage in context-based conversations. The experiment conducted in a real kindergarten classroom with children aged 6 to 7 years validated the accuracy of object recognition and attribute extraction for semantic SLAM, the task success rate of the automated planning system, and the natural language question answering performance of the visual question answering (VQA) module. The experimental results confirmed the proposed system’s potential to support natural social interaction with children and its applicability as an educational tool. Full article
(This article belongs to the Special Issue Robotics and Intelligent Systems: Technologies and Applications)
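The VQA module's scene-graph querying can be illustrated with a toy example. The real system stores the semantic scene as a graph and answers SPARQL translated from natural language; this sketch uses a plain Python structure and a hypothetical `query` helper, with invented object names and attributes, only to show the shape of such a lookup.

```python
# Toy semantic scene of the kind semantic SLAM produces: each entry keeps
# an object's attributes and a positional relationship. All values invented.
scene = [
    {"obj": "ball",  "color": "red",  "on": "table"},
    {"obj": "block", "color": "blue", "on": "table"},
    {"obj": "cup",   "color": "red",  "on": "shelf"},
]

def query(scene, **conditions):
    # Return objects matching every attribute condition, e.g. the
    # child's question "what red things are on the table?"
    return [o["obj"] for o in scene
            if all(o.get(k) == v for k, v in conditions.items())]

assert query(scene, color="red", on="table") == ["ball"]
assert query(scene, color="red") == ["ball", "cup"]
```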

18 pages, 3895 KB  
Article
SFGS-SLAM: Lightweight Image Matching Combined with Gaussian Splatting for a Tracking and Mapping System
by Runmin Wang and Zhongliang Deng
Appl. Sci. 2025, 15(20), 10876; https://doi.org/10.3390/app152010876 - 10 Oct 2025
Abstract
The integration of SLAM with Gaussian splatting presents a significant challenge: achieving compatibility between real-time performance and high-quality rendering. This paper introduces a novel SLAM system named SFGS-SLAM (SuperFeats Gaussian Splatting SLAM), restructured from tracking to mapping, to address this issue. A new keypoint detection network, SuperFeats, is designed with fewer parameters than existing networks, resulting in faster processing speeds. This keypoint detection network is augmented with a global factor graph incorporating GICP (Generalized Iterative Closest Point) odometry, reprojection-error factors, and loop-closure constraints to minimize drift. It is integrated with Gaussian splatting as the mapping component. By leveraging the reprojection error, the proposed system further reduces odometry error and improves rendering quality without compromising speed. It is worth noting that SFGS-SLAM is primarily designed for static indoor environments and does not explicitly model or suppress dynamic disturbances. Comprehensive experiments were conducted on various datasets to evaluate the performance of our system. Extensive experiments on indoor and synthetic datasets show that SFGS-SLAM achieves accuracy comparable to state-of-the-art SLAM while running in real time. SuperFeats reduces matching latency by over 50%, and joint optimization significantly improves global consistency. Our results demonstrate the practicality of combining lightweight feature matching with dense Gaussian mapping, highlighting trade-offs between speed and accuracy. Full article

19 pages, 5861 KB  
Article
Topological Signal Processing from Stereo Visual SLAM
by Eleonora Di Salvo, Tommaso Latino, Maria Sanzone, Alessia Trozzo and Stefania Colonnese
Sensors 2025, 25(19), 6103; https://doi.org/10.3390/s25196103 - 3 Oct 2025
Abstract
Topological signal processing is emerging alongside Graph Signal Processing (GSP) in various applications, incorporating higher-order connectivity structures—such as faces—in addition to nodes and edges, for enriched connectivity modeling. Rich point clouds acquired by multi-camera systems in Visual Simultaneous Localization and Mapping (V-SLAM) are typically processed using graph-based methods. In this work, we introduce a topological signal processing (TSP) framework that integrates texture information extracted from V-SLAM; we refer to this framework as TSP-SLAM. We show how TSP-SLAM enables the extension of graph-based point cloud processing to more advanced topological signal processing techniques. We demonstrate, on real stereo data, that TSP-SLAM enables a richer point cloud representation by associating signals not only with vertices but also with edges and faces of the mesh computed from the point cloud. Numerical results show that TSP-SLAM supports the design of topological filtering algorithms by exploiting the mapping between the 3D mesh faces, edges and vertices and their 2D image projections. These findings confirm the potential of TSP-SLAM for topological signal processing of point cloud data acquired in challenging V-SLAM environments. Full article
(This article belongs to the Special Issue Stereo Vision Sensing and Image Processing)
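The topological machinery the abstract refers to rests on incidence matrices between vertices, edges, and faces. A small NumPy sketch (mesh, orientations, and names chosen purely for illustration) builds them for two triangles sharing an edge and forms the Hodge 1-Laplacian, the operator that lets filters act on signals attached to edges rather than only to vertices:

```python
import numpy as np

# Tiny mesh: two triangles sharing edge (1, 2), vertices 0-3.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
faces = [(0, 1, 2), (1, 2, 3)]

# Vertex-to-edge incidence B1: -1 at an edge's tail, +1 at its head.
B1 = np.zeros((4, len(edges)))
for j, (u, v) in enumerate(edges):
    B1[u, j], B1[v, j] = -1.0, 1.0

# Edge-to-face incidence B2: sign encodes each edge's orientation in the face.
idx = {e: i for i, e in enumerate(edges)}
B2 = np.zeros((len(edges), len(faces)))
for k, (a, b, c) in enumerate(faces):
    for (u, v) in [(a, b), (b, c), (c, a)]:
        if (u, v) in idx:
            B2[idx[(u, v)], k] = 1.0
        else:
            B2[idx[(v, u)], k] = -1.0

# Hodge 1-Laplacian: the building block of topological filters on edge signals.
L1 = B1.T @ B1 + B2 @ B2.T
assert np.allclose(B1 @ B2, 0.0)  # boundary of a boundary vanishes
```

In TSP-SLAM the mesh would come from the V-SLAM point cloud and the edge/face signals from projected texture, but the incidence structure is the same.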

36 pages, 1495 KB  
Review
Decision-Making for Path Planning of Mobile Robots Under Uncertainty: A Review of Belief-Space Planning Simplifications
by Vineetha Malathi, Pramod Sreedharan, Rthuraj P R, Vyshnavi Anil Kumar, Anil Lal Sadasivan, Ganesha Udupa, Liam Pastorelli and Andrea Troppina
Robotics 2025, 14(9), 127; https://doi.org/10.3390/robotics14090127 - 15 Sep 2025
Abstract
Uncertainty remains a central challenge in robotic navigation, exploration, and coordination. This paper examines how Partially Observable Markov Decision Processes (POMDPs) and their decentralized variants (Dec-POMDPs) provide a rigorous foundation for decision-making under partial observability across tasks such as Active Simultaneous Localization and Mapping (A-SLAM), adaptive informative path planning, and multi-robot coordination. We review recent advances that integrate deep reinforcement learning (DRL) with POMDP formulations, highlighting improvements in scalability and adaptability as well as unresolved challenges of robustness, interpretability, and sim-to-real transfer. To complement learning-driven methods, we discuss emerging strategies that embed probabilistic reasoning directly into navigation, including belief-space planning, distributionally robust control formulations, and probabilistic graph models such as enhanced probabilistic roadmaps (PRMs) and Canadian Traveler Problem-based roadmaps. These approaches collectively demonstrate that uncertainty can be managed more effectively by coupling structured inference with data-driven adaptation. The survey concludes by outlining future research directions, emphasizing hybrid learning–planning architectures, neuro-symbolic reasoning, and socially aware navigation frameworks as critical steps toward resilient, transparent, and human-centered autonomy. Full article
(This article belongs to the Section Sensors and Control in Robotics)
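The POMDP formulations surveyed above all rest on the exact Bayesian belief update. A minimal sketch, with a toy two-state "listen" example; the matrix names and numbers are illustrative, not drawn from any of the surveyed papers:

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    # Exact POMDP belief update: b'(s') ∝ Z[a][s', o] * Σ_s T[a][s, s'] * b(s)
    # T[a] is the |S|x|S| transition matrix for action a,
    # Z[a] the |S|x|O| observation matrix.
    pred = b @ T[a]              # prediction step
    post = Z[a][:, o] * pred     # correction step
    return post / post.sum()     # normalize

# Two-state toy: action 0 = "listen" leaves the state unchanged and
# yields the correct cue 85% of the time.
T = {0: np.eye(2)}
Z = {0: np.array([[0.85, 0.15],
                  [0.15, 0.85]])}
b = np.array([0.5, 0.5])
b = belief_update(b, 0, 0, T, Z)
assert abs(b[0] - 0.85) < 1e-9
```

Belief-space planners reason over trajectories of such `b` vectors; the scalability issues the review discusses come from this space being continuous and high-dimensional.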

27 pages, 12819 KB  
Article
A CPS-Based Architecture for Mobile Robotics: Design, Integration, and Localisation Experiments
by Dominika Líšková, Anna Jadlovská and Filip Pazdič
Sensors 2025, 25(18), 5715; https://doi.org/10.3390/s25185715 - 12 Sep 2025
Abstract
This paper presents the design and implementation of a mobile robotic platform modelled as a layered Cyber–Physical System (CPS). Inspired by architectures commonly used in industrial Distributed Control Systems (DCSs) and large-scale scientific infrastructures, the proposed system incorporates modular hardware, distributed embedded control, and multi-level coordination. The robotic platform, named MapBot, is structured according to a five-layer CPS model encompassing component, control, coordination, supervisory, and management layers. This structure facilitates modular development, system scalability, and integration of advanced features such as a digital twin. The platform is implemented using embedded computing elements, diverse sensors, and communication protocols including Ethernet and I2C. The system operates within the ROS2 framework, supporting flexible task distribution across processing nodes. As a use case, two localization techniques—Adaptive Monte Carlo Localization (AMCL) and pose graph SLAM—are deployed and evaluated, highlighting the performance trade-offs in map quality, update frequency, and computational load. The results demonstrate that CPS-based design principles offer clear advantages for robotic platforms in terms of modularity, maintainability, and real-time integration. The proposed approach can be generalised for other robotic or mechatronic systems requiring structured, layered control and embedded intelligence. Full article

19 pages, 2819 KB  
Article
DPCR-SLAM: A Dual-Point-Cloud-Registration SLAM Based on Line Features for Mapping an Indoor Mobile Robot
by Yibo Cao, Junheng Ni and Yonghao Huang
Sensors 2025, 25(17), 5561; https://doi.org/10.3390/s25175561 - 5 Sep 2025
Abstract
Simultaneous Localization and Mapping (SLAM) systems require accurate and globally consistent mapping to ensure the long-term stable operation of robots or vehicles. However, for the commercial applications of indoor sweeping robots, the system needs to maintain accuracy while keeping computational and storage requirements low to ensure cost controllability. This paper proposes a dual-point-cloud-registration SLAM based on line features for the mapping of a mobile robot, named DPCR-SLAM. The front-end employs an improved Point-to-Line Iterative Closest Point (PLICP) algorithm for point cloud registration. It first aligns the point cloud and updates the submap. Subsequently, the submap is aligned with the regional map, which is then updated accordingly. The back-end uses the association between regional maps to perform graph optimization and update the global map. The experimental results show that, in the application scenario of indoor sweeping robots, the proposed method reduces the map storage space by 76.3%, the point cloud processing time by 55.8%, the graph optimization time by 77.7%, and the average localization error by 10.9% compared to Cartographer, which is commonly used in industry. Full article
(This article belongs to the Section Sensors and Robotics)
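The point-to-line residual at the heart of PLICP-style registration can be written in a few lines. This sketch shows only the residual for one point against one line feature, not the paper's full registration or map-update pipeline:

```python
import numpy as np

def point_to_line_residual(p, a, b):
    # Perpendicular distance from scan point p to the infinite line
    # through map points a and b — the error PLICP-style registration
    # drives to zero over all matched pairs.
    d = (b - a) / np.linalg.norm(b - a)  # unit direction of the line
    v = p - a
    return np.linalg.norm(v - (v @ d) * d)  # remove the along-line component

# A scan point 1 m off a horizontal wall segment in the submap.
r = point_to_line_residual(np.array([0.5, 1.0]),
                           np.array([0.0, 0.0]),
                           np.array([1.0, 0.0]))
assert abs(r - 1.0) < 1e-12
```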

27 pages, 5515 KB  
Article
Optimizing Multi-Camera Mobile Mapping Systems with Pose Graph and Feature-Based Approaches
by Ahmad El-Alailyi, Luca Morelli, Paweł Trybała, Francesco Fassi and Fabio Remondino
Remote Sens. 2025, 17(16), 2810; https://doi.org/10.3390/rs17162810 - 13 Aug 2025
Abstract
Multi-camera Visual Simultaneous Localization and Mapping (V-SLAM) increases spatial coverage through multi-view image streams, improving localization accuracy and reducing data acquisition time. Despite its speed and general robustness, V-SLAM often struggles to achieve precise camera poses necessary for accurate 3D reconstruction, especially in complex environments. This study introduces two novel multi-camera optimization methods to enhance pose accuracy, reduce drift, and ensure loop closures. These methods refine multi-camera V-SLAM outputs within existing frameworks and are evaluated in two configurations: (1) multiple independent stereo V-SLAM instances operating on separate camera pairs; and (2) multi-view odometry processing all camera streams simultaneously. The proposed optimizations include (1) a multi-view feature-based optimization that integrates V-SLAM poses with rigid inter-camera constraints and bundle adjustment; and (2) a multi-camera pose graph optimization that fuses multiple trajectories using relative pose constraints and robust noise models. Validation is conducted through two complex 3D surveys using the ATOM-ANT3D multi-camera fisheye mobile mapping system. Results demonstrate survey-grade accuracy comparable to traditional photogrammetry, with reduced computational time, advancing toward near real-time 3D mapping of challenging environments. Full article
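The relative-pose constraints fused by the pose graph optimization take a standard form. This sketch shows the residual of one such constraint in SE(2) — simplified to 2D for clarity, whereas the mapping system works with full 6-DoF trajectories — and is illustrative, not the authors' implementation:

```python
import numpy as np

def se2_residual(xi, xj, zij):
    # Residual of a relative-pose constraint between SE(2) poses
    # xi, xj = (x, y, theta); zij is the measured pose of j in i's frame.
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    pred = np.array([c * dx + s * dy,      # j's position expressed in i's frame
                     -s * dx + c * dy,
                     xj[2] - xi[2]])
    e = pred - zij
    e[2] = (e[2] + np.pi) % (2 * np.pi) - np.pi  # wrap the angle error
    return e

# A perfect measurement yields a zero residual.
xi = np.array([1.0, 2.0, np.pi / 2])
xj = np.array([1.0, 3.0, np.pi / 2])
zij = np.array([1.0, 0.0, 0.0])
assert np.allclose(se2_residual(xi, xj, zij), 0.0)
```

A pose graph optimizer stacks these residuals over all trajectory and loop-closure edges, weights them by the robust noise models the abstract mentions, and minimizes the total.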

24 pages, 988 KB  
Article
Consistency-Oriented SLAM Approach: Theoretical Proof and Numerical Validation
by Zhan Wang, Alain Lambert, Yuwei Meng, Rongdong Yu, Jin Wang and Wei Wang
Electronics 2025, 14(15), 2966; https://doi.org/10.3390/electronics14152966 - 24 Jul 2025
Abstract
Simultaneous Localization and Mapping (SLAM) has long been a fundamental and challenging task in robotics, where safety and reliability are critical issues for successful autonomous applications of robots. Classically, the SLAM problem is tackled via probabilistic or optimization methods (such as EKF-SLAM, Fast-SLAM, and Graph-SLAM). Despite their strong performance in real-world scenarios, these methods may exhibit inconsistency, which is caused by the inherent characteristics of model linearization or the Gaussian noise assumption. In this paper, we propose an alternative monocular SLAM algorithm that relies on interval analysis (iMonoSLAM) to pursue guaranteed rather than probabilistically defined solutions. We consistently modeled and initialized the SLAM problem with a bounded-error parametric model. The state estimation process is then cast into an Interval Constraint Satisfaction Problem (ICSP) and resolved through interval constraint propagation techniques without any linearization or Gaussian noise assumption. Furthermore, we theoretically prove the obtained consistency and propose a versatile method for numerical validation. To the best of our knowledge, this is the first time such a proof has been proposed. A plethora of numerical experiments are carried out to validate the consistency, and a preliminary comparison with classical EKF-SLAM in different noisy situations is also presented. Our proposed iMonoSLAM shows outstanding performance in obtaining reliable solutions, highlighting its application prospects in safety-critical scenarios for mobile robots. Full article
(This article belongs to the Special Issue Simultaneous Localization and Mapping (SLAM) of Mobile Robots)
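The interval constraint propagation the paper builds on can be illustrated with its simplest contractor: forward-backward contraction of the constraint x + y = z. This is a toy sketch of the primitive, not the iMonoSLAM solver; intervals are plain (lo, hi) tuples and the example numbers are invented:

```python
def contract_add(x, y, z):
    # Contract intervals under x + y = z without ever linearizing:
    # each domain is intersected with what the other two imply.
    def meet(a, b):
        return (max(a[0], b[0]), min(a[1], b[1]))
    z = meet(z, (x[0] + y[0], x[1] + y[1]))  # z ⊆ x + y  (forward)
    x = meet(x, (z[0] - y[1], z[1] - y[0]))  # x ⊆ z - y  (backward)
    y = meet(y, (z[0] - x[1], z[1] - x[0]))  # y ⊆ z - x  (backward)
    return x, y, z

# Measurement z ∈ [4, 5] with priors x ∈ [0, 10], y ∈ [1, 2]:
# x is contracted to [2, 4] while the true value is guaranteed to remain inside.
x, y, z = contract_add((0, 10), (1, 2), (4, 5))
assert x == (2, 4)
```

An ICSP solver iterates such contractors over all constraints until a fixed point, which is what yields the guaranteed (set-valued) state estimates the abstract contrasts with probabilistic ones.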

33 pages, 4382 KB  
Article
A Distributed Multi-Robot Collaborative SLAM Method Based on Air–Ground Cross-Domain Cooperation
by Peng Liu, Yuxuan Bi, Caixia Wang and Xiaojiao Jiang
Drones 2025, 9(7), 504; https://doi.org/10.3390/drones9070504 - 18 Jul 2025
Abstract
To overcome the limitations in the perception performance of individual robots and homogeneous robot teams, this paper presents a distributed multi-robot collaborative SLAM method based on air–ground cross-domain cooperation. By integrating environmental perception data from UAV and UGV teams across air and ground domains, this method enables more efficient, robust, and globally consistent autonomous positioning and mapping. First, to address the challenge of significant differences in the field of view between UAVs and UGVs, which complicates achieving a unified environmental understanding, this paper proposes an iterative registration method based on semantic and geometric features assistance. This method calculates the correspondence probability of the air–ground loop closure keyframes using these features and iteratively computes the rotation angle and translation vector to determine the coordinate transformation matrix. The resulting matrix provides strong initialization for back-end optimization, which helps to significantly reduce global pose estimation errors. Next, to overcome the convergence difficulties and high computational complexity of large-scale distributed back-end nonlinear pose graph optimization, this paper introduces a multi-level partitioning majorization–minimization distributed pose graph optimization (DPGO) method incorporating loss kernel optimization. This method constructs a multi-level, balanced pose subgraph based on the coupling degree of robot nodes. Then, it uses the minimization substitution function of non-trivial loss kernel optimization to gradually converge the distributed pose graph optimization problem to a first-order critical point, thereby significantly improving global pose estimation accuracy. Finally, experimental results on benchmark SLAM datasets and the GRACO dataset demonstrate that the proposed method effectively integrates environmental feature information from air–ground cross-domain UAV and UGV teams, achieving high-precision global pose estimation and map construction. Full article

19 pages, 1635 KB  
Article
Integrating AI-Driven Wearable Metaverse Technologies into Ubiquitous Blended Learning: A Framework Based on Embodied Interaction and Multi-Agent Collaboration
by Jiaqi Xu, Xuesong Zhai, Nian-Shing Chen, Usman Ghani, Andreja Istenic and Junyi Xin
Educ. Sci. 2025, 15(7), 900; https://doi.org/10.3390/educsci15070900 - 15 Jul 2025
Abstract
Ubiquitous blended learning, leveraging mobile devices, has democratized education by enabling autonomous and readily accessible knowledge acquisition. However, its reliance on traditional interfaces often limits learner immersion and meaningful interaction. The emergence of the wearable metaverse offers a compelling solution, promising enhanced multisensory experiences and adaptable learning environments that transcend the constraints of conventional ubiquitous learning. This research proposes a novel framework for ubiquitous blended learning in the wearable metaverse, aiming to address critical challenges, such as multi-source data fusion, effective human–computer collaboration, and efficient rendering on resource-constrained wearable devices, through the integration of embodied interaction and multi-agent collaboration. This framework leverages a real-time multi-modal data analysis architecture, powered by the MobileNetV4 and xLSTM neural networks, to facilitate the dynamic understanding of the learner’s context and environment. Furthermore, we introduce a multi-agent interaction model, utilizing CrewAI and spatio-temporal graph neural networks, to orchestrate collaborative learning experiences and provide personalized guidance. Finally, we incorporate lightweight SLAM algorithms, augmented using visual perception techniques, to enable accurate spatial awareness and seamless navigation within the metaverse environment. This innovative framework aims to create immersive, scalable, and cost-effective learning spaces within the wearable metaverse. Full article

20 pages, 3710 KB  
Article
An Accurate LiDAR-Inertial SLAM Based on Multi-Category Feature Extraction and Matching
by Nuo Li, Yiqing Yao, Xiaosu Xu, Shuai Zhou and Taihong Yang
Remote Sens. 2025, 17(14), 2425; https://doi.org/10.3390/rs17142425 - 12 Jul 2025
Abstract
Light Detection and Ranging (LiDAR)-inertial simultaneous localization and mapping (SLAM) is a critical component in multi-sensor autonomous navigation systems, providing both accurate pose estimation and detailed environmental understanding. Despite its importance, existing optimization-based LiDAR-inertial SLAM methods often face key limitations: unreliable feature extraction, sensitivity to noise and sparsity, and the inclusion of redundant or low-quality feature correspondences. These weaknesses hinder their performance in complex or dynamic environments and fail to meet the reliability requirements of autonomous systems. To overcome these challenges, we propose a novel and accurate LiDAR-inertial SLAM framework with three major contributions. First, we employ a robust multi-category feature extraction method based on principal component analysis (PCA), which effectively filters out noisy and weakly structured points, ensuring stable feature representation. Second, to suppress outlier correspondences and enhance pose estimation reliability, we introduce a coarse-to-fine two-stage feature correspondence selection strategy that evaluates geometric consistency and structural contribution. Third, we develop an adaptive weighted pose estimation scheme that considers both distance and directional consistency, improving the robustness of feature matching under varying scene conditions. These components are jointly optimized within a sliding-window-based factor graph, integrating LiDAR feature factors, IMU pre-integration, and loop closure constraints. Extensive experiments on public datasets (KITTI, M2DGR) and a custom-collected dataset validate the proposed method’s effectiveness. Results show that our system consistently outperforms state-of-the-art approaches in accuracy and robustness, particularly in scenes with sparse structure, motion distortion, and dynamic interference, demonstrating its suitability for reliable real-world deployment. Full article
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)
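PCA-based multi-category extraction rests on the eigenvalue spectrum of a local neighborhood's covariance: one dominant eigenvalue suggests an edge-like structure, a vanishing smallest eigenvalue a plane. A sketch with illustrative thresholds and category names (not the paper's values or its full filtering logic):

```python
import numpy as np

def classify_neighborhood(pts, planar_thresh=0.05, edge_thresh=0.7):
    # Classify a local LiDAR neighborhood from normalized PCA eigenvalues.
    cov = np.cov(pts.T)
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]  # λ1 ≥ λ2 ≥ λ3
    w = w / w.sum()
    if w[0] > edge_thresh:       # variance dominated by one axis → edge
        return "edge"
    if w[2] < planar_thresh:     # almost no spread along the third axis → plane
        return "planar"
    return "unstructured"        # noisy / weakly structured → filtered out

rng = np.random.default_rng(0)
plane = rng.normal(size=(100, 3)) * np.array([1.0, 1.0, 0.01])   # thin slab
line = rng.normal(size=(100, 3)) * np.array([1.0, 0.02, 0.02])   # thin rod
assert classify_neighborhood(plane) == "planar"
assert classify_neighborhood(line) == "edge"
```

The "unstructured" branch is where a pipeline like the paper's would discard points before building correspondences.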

18 pages, 16696 KB  
Technical Note
LIO-GC: LiDAR Inertial Odometry with Adaptive Ground Constraints
by Wenwen Tian, Juefei Wang, Puwei Yang, Wen Xiao and Sisi Zlatanova
Remote Sens. 2025, 17(14), 2376; https://doi.org/10.3390/rs17142376 - 10 Jul 2025
Cited by 1
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) techniques are commonly applied in high-precision mapping and positioning for mobile platforms. However, the vertical resolution limitations of multi-beam spinning LiDAR sensors can significantly impair vertical estimation accuracy. This challenge is accentuated in scenarios involving fewer-line or cost-effective spinning LiDARs, where vertical features are sparse. To address this issue, we introduce LIO-GC, which effectively extracts ground features and integrates them into a factor graph to rectify vertical accuracy. Unlike conventional methods relying on geometric features for ground plane segmentation, our approach leverages a self-adaptive strategy that considers the uneven point cloud distribution and inconsistency due to ground fluctuations. By optimizing laser range factors, ground feature constraints, and loop closure factors using graph optimization frameworks, our method surpasses current approaches, demonstrating superior performance through evaluation on open-source and newly collected datasets. Full article
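The ground constraints LIO-GC adds to the factor graph begin with a ground-plane estimate. This least-squares sketch is a minimal stand-in for the paper's self-adaptive segmentation, which additionally handles uneven point density and ground fluctuation; all names and numbers here are illustrative:

```python
import numpy as np

def fit_ground_plane(pts):
    # Least-squares fit of z = a*x + b*y + c through candidate ground points;
    # returns the unit plane normal and height offset.
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    n /= np.linalg.norm(n)
    return n, c

# Flat ground at z = 0.2 m with mild noise → normal close to +Z.
rng = np.random.default_rng(1)
pts = rng.uniform(-5, 5, size=(200, 3))
pts[:, 2] = 0.2 + rng.normal(scale=0.01, size=200)
n, c = fit_ground_plane(pts)
assert n[2] > 0.99 and abs(c - 0.2) < 0.05
```

A ground-constraint factor would then penalize drift of the estimated pose away from this plane, which is what rectifies the vertical accuracy the abstract discusses.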

34 pages, 5774 KB  
Article
Approach to Semantic Visual SLAM for Bionic Robots Based on Loop Closure Detection with Combinatorial Graph Entropy in Complex Dynamic Scenes
by Dazheng Wang and Jingwen Luo
Biomimetics 2025, 10(7), 446; https://doi.org/10.3390/biomimetics10070446 - 6 Jul 2025
Abstract
In complex dynamic environments, the performance of SLAM systems on bionic robots is susceptible to interference from dynamic objects or structural changes in the environment. To address this problem, we propose a semantic visual SLAM (vSLAM) algorithm based on loop closure detection with combinatorial graph entropy. First, based on the dynamic feature detection results of YOLOv8-seg, feature points at the edges of dynamic objects are finely judged by calculating the mean absolute deviation (MAD) of the depths of the pixel points. Then, a high-quality keyframe selection strategy is constructed by combining the semantic information, the average coordinates of the semantic objects, and the degree of variation in the dense regions of feature points. Subsequently, the unweighted and weighted graphs of keyframes are constructed according to the distribution of feature points, characterization points, and semantic information, and then a high-performance loop closure detection method based on combinatorial graph entropy is developed. The experimental results show that our loop closure detection approach exhibits higher precision and recall in real scenes compared to the bag-of-words (BoW) model. Compared with ORB-SLAM2, the absolute trajectory accuracy in high-dynamic sequences improved by an average of 97.01%, while the number of extracted keyframes decreased by an average of 61.20%. Full article
(This article belongs to the Special Issue Artificial Intelligence for Autonomous Robots: 3rd Edition)
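The MAD-based depth test at dynamic-object edges can be sketched directly from its description: a feature point whose depth deviates from its local neighborhood by more than a multiple of the mean absolute deviation is treated as belonging to the dynamic object. The threshold `k` and the example depths are illustrative, not the paper's values:

```python
import numpy as np

def is_dynamic_edge_point(depths, candidate, k=3.0):
    # Flag a feature point whose depth deviates from its neighborhood by
    # more than k times the mean absolute deviation (MAD).
    mu = np.mean(depths)
    mad = np.mean(np.abs(depths - mu))
    return abs(candidate - mu) > k * mad

# Background wall at ~4 m; a point at 1.2 m likely sits on the moving
# object in front of it, so it is excluded from tracking.
neighborhood = np.array([4.0, 4.1, 3.9, 4.05, 3.95])
assert is_dynamic_edge_point(neighborhood, 1.2)
assert not is_dynamic_edge_point(neighborhood, 4.02)
```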

27 pages, 3462 KB  
Article
Visual-Based Position Estimation for Underwater Vehicles Using Tightly Coupled Hybrid Constrained Approach
by Tiedong Zhang, Shuoshuo Ding, Xun Yan, Yanze Lu, Dapeng Jiang, Xinjie Qiu and Yu Lu
J. Mar. Sci. Eng. 2025, 13(7), 1216; https://doi.org/10.3390/jmse13071216 - 24 Jun 2025
Abstract
A tightly coupled hybrid monocular visual SLAM system for unmanned underwater vehicles (UUVs) is introduced in this paper. Specifically, we propose a robust three-step hybrid tracking strategy. The feature-based method initially provides a rough pose estimate, then the direct method refines it, and finally, the refined results are used to reproject map points to improve the number of features tracked and the tracking stability. Furthermore, a tightly coupled visual hybrid optimization method is presented to address the inaccuracy of the back-end pose optimization. The selection of features for stable tracking is achieved through the integration of two distinct residuals: geometric reprojection error and photometric error. The efficacy of the proposed system is demonstrated through quantitative and qualitative analyses in both artificial and natural underwater environments, showing stable tracking and accurate localization results. Full article
(This article belongs to the Section Ocean Engineering)
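The tightly coupled objective combines the two residual types named above into one cost. A schematic with made-up weights; the real system would robustify, normalize, and jointly optimize these terms over poses and map points rather than merely summing them:

```python
import numpy as np

def hybrid_cost(reproj_err, photo_err, w_geo=1.0, w_photo=0.1):
    # Weighted sum of geometric reprojection residuals (pixels) and
    # photometric residuals (intensity differences). Weights are
    # illustrative placeholders for the system's balancing of the terms.
    return w_geo * np.sum(reproj_err ** 2) + w_photo * np.sum(photo_err ** 2)

reproj = np.array([0.5, -0.3])      # pixel reprojection errors
photo = np.array([4.0, -2.0, 1.0])  # intensity differences along patches
cost = hybrid_cost(reproj, photo)
assert abs(cost - 2.44) < 1e-9
```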
