Search Results (1,374)

Search Parameters:
Keywords = slamming

16 pages, 25639 KB  
Article
Comparative Analysis of LiDAR-SLAM Systems: A Study of a Motorized Optomechanical LiDAR and an MEMS Scanner LiDAR
by Simone Fortuna, Sebastiano Chiodini, Andrea Valmorbida and Marco Pertile
Sensors 2025, 25(17), 5352; https://doi.org/10.3390/s25175352 - 29 Aug 2025
Viewed by 241
Abstract
Simultaneous Localization and Mapping (SLAM) is crucial for the safe navigation of autonomous systems. Its accuracy depends not solely on the robustness of the algorithm employed or the metrological performance of the sensor, but on a combination of both factors. In this work, we present a comprehensive comparison framework for evaluating LiDAR-SLAM systems, focusing on key performance indicators including absolute trajectory error, uncertainty, number of tracked features, and computational time. Our case study compares two LiDAR-inertial SLAM configurations: one based on a motorized optomechanical scanner (the Ouster OS1) with a 360° field of view and the other based on MEMS scanners (the Livox Horizon) with a limited field of view and a non-repetitive scanning pattern. The sensors were mounted on a UGV for the experiments, where data were collected by driving the UGV along a predefined path at different speeds and angles. Despite substantial differences in field of view, detection range, and noise, both systems demonstrated comparable trajectory estimation performance, with average absolute trajectory errors of 0.25 m for the Livox-based system and 0.24 m for the Ouster-based system. These findings underscore the importance of sensor–algorithm co-design and demonstrate that even cost-effective, lower-field-of-view solutions can deliver competitive SLAM performance in real-world conditions.
(This article belongs to the Special Issue Intelligent Control Systems for Autonomous Vehicles)
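As context for the headline metric in this entry: absolute trajectory error (ATE) is conventionally the RMSE of position residuals after rigidly aligning the estimated trajectory to ground truth. Below is a minimal sketch of that computation, assuming synchronized (N, 3) position arrays; the function name and alignment choice (Umeyama-style rigid alignment, no scale) are illustrative, not taken from the paper.

```python
import numpy as np

def absolute_trajectory_error(est, gt):
    """RMSE of position residuals after a least-squares rigid alignment.

    est, gt: (N, 3) arrays of synchronized estimated and ground-truth
    positions. Alignment follows the standard Umeyama/Horn approach
    (rotation + translation, no scale).
    """
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    # Cross-covariance between the centered trajectories.
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

Averaged over repeated runs, this is the quantity behind the 0.25 m and 0.24 m figures quoted above.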

24 pages, 10964 KB  
Article
Enhancing LiDAR–IMU SLAM for Infrastructure Monitoring via Dynamic Coplanarity Constraints and Joint Observation
by Zhaosheng Feng, Jun Chen, Yaofeng Liang, Wenli Liu and Yongfeng Peng
Sensors 2025, 25(17), 5330; https://doi.org/10.3390/s25175330 - 27 Aug 2025
Viewed by 351
Abstract
Real-time acquisition of high-precision 3D spatial information is critical for intelligent maintenance of urban infrastructure. While SLAM technology based on LiDAR–IMU sensor fusion has become a core approach for infrastructure monitoring, its accuracy remains limited by vertical pose estimation drift. To address this challenge, this paper proposes a LiDAR–IMU fusion SLAM algorithm incorporating a dynamic coplanarity constraint and a joint observation model within an improved error-state Kalman filter framework. A threshold-driven ground segmentation method is developed to robustly extract planar features in structured environments, enabling dynamic activation of ground constraints to suppress vertical drift. Extensive experiments on a self-collected long-corridor dataset and the public M2DGR dataset demonstrate that the proposed method significantly improves pose estimation accuracy. In structured environments, the method reduces z-axis endpoint errors by 85.8% compared with FAST-LIO2, achieving an average z-axis RMSE of 0.0104 m. On the M2DGR Hall04 sequence, the algorithm attains a z-axis RMSE of 0.007 m, outperforming four mainstream LiDAR-based SLAM methods. These results validate the proposed approach as an effective solution for high-precision 3D mapping in infrastructure monitoring applications.
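The abstract describes the threshold-driven ground segmentation only at a high level. As a generic illustration (not the authors' method), ground extraction in structured scenes is often a RANSAC plane fit with a point-to-plane distance threshold and a near-horizontal normal check; all thresholds below are assumptions.

```python
import numpy as np

def segment_ground(points, dist_thresh=0.05, iters=200, seed=0):
    """Crude RANSAC ground-plane extraction from an (N, 3) point cloud.

    Returns (plane, inlier_mask) where plane = (n, d) with n a unit
    normal and n.p + d = 0. dist_thresh is the point-to-plane distance
    (in meters) below which a point counts as ground.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_plane = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / norm
        if abs(n[2]) < 0.9:      # reject planes far from horizontal
            continue
        d = -n @ p0
        mask = np.abs(points @ n + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane, best_mask
```

A plane recovered this way can then supply the coplanarity residuals that the filter activates to suppress z-axis drift.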

23 pages, 4627 KB  
Article
Dynamic SLAM Dense Point Cloud Map by Fusion of Semantic Information and Bayesian Moving Probability
by Qing An, Shao Li, Yanglu Wan, Wei Xuan, Chao Chen, Bufan Zhao and Xijiang Chen
Sensors 2025, 25(17), 5304; https://doi.org/10.3390/s25175304 - 26 Aug 2025
Viewed by 481
Abstract
Most existing Simultaneous Localization and Mapping (SLAM) systems rely on the assumption of static environments to achieve reliable and efficient mapping. However, such methods often suffer from degraded localization accuracy and mapping consistency in dynamic settings, as they lack explicit mechanisms to distinguish between static and dynamic elements. To overcome this limitation, we present BMP-SLAM, a vision-based SLAM approach that integrates semantic segmentation and Bayesian motion estimation to robustly handle dynamic indoor scenes. To enable real-time dynamic object detection, we integrate YOLOv5, a semantic segmentation network that identifies and localizes dynamic regions within the environment, into a dedicated dynamic target detection thread. Simultaneously, the data-association Bayesian moving probability proposed in this paper effectively eliminates dynamic feature points and reduces the impact of dynamic targets on the SLAM system. To enhance complex indoor robotic navigation, the proposed system integrates semantic keyframe information with dynamic object detection outputs to reconstruct high-fidelity 3D point cloud maps of indoor environments. The evaluation conducted on the TUM RGB-D dataset indicates that BMP-SLAM outperforms ORB-SLAM3, improving trajectory tracking accuracy by 96.35%. Comparative evaluations demonstrate that the proposed system achieves superior performance in dynamic environments, exhibiting both lower trajectory drift and higher positioning precision relative to state-of-the-art dynamic SLAM methods.
(This article belongs to the Special Issue Indoor Localization Technologies and Applications)
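The abstract does not spell out the Bayesian moving-probability update, so the following is a generic recursive Bayes step for a per-feature probability of being dynamic, driven by whether the feature falls inside a detected dynamic mask; both likelihood values are illustrative assumptions.

```python
def update_moving_prob(p_prior, detected,
                       p_det_given_moving=0.9, p_det_given_static=0.1):
    """One recursive Bayes step for the probability that a feature is
    on a moving object, given whether it fell inside a dynamic mask
    reported by the segmentation network.

    The two likelihoods (detection rate on moving points and false-alarm
    rate on static points) are illustrative values, not from the paper.
    """
    if detected:
        num = p_det_given_moving * p_prior
        den = num + p_det_given_static * (1.0 - p_prior)
    else:
        num = (1.0 - p_det_given_moving) * p_prior
        den = num + (1.0 - p_det_given_static) * (1.0 - p_prior)
    return num / den

# A feature seen inside dynamic masks in three consecutive frames:
p = 0.5
for hit in (True, True, True):
    p = update_moving_prob(p, hit)
print(round(p, 3))  # ~0.998; features above a cull threshold are dropped
```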

29 pages, 12889 KB  
Article
Development of a Multi-Robot System for Autonomous Inspection of Nuclear Waste Tank Pits
by Pengcheng Cao, Edward Kaleb Houck, Anthony D'Andrea, Robert Kinoshita, Kristan B. Egan, Porter J. Zohner and Yidong Xia
Appl. Sci. 2025, 15(17), 9307; https://doi.org/10.3390/app15179307 - 24 Aug 2025
Viewed by 725
Abstract
This paper introduces the overall design plan, development timeline, and preliminary progress of the Autonomous Pit Exploration System project. This project aims to develop an advanced multi-robot system for the efficient inspection of nuclear waste-storage tank pits. The project is structured into three phases: Phase 1 involves data collection and interface definition in collaboration with Hanford Site experts and university partners, focusing on tank riser geometry and hardware solutions. Phase 2 includes the selection of sensors and robot components, detailed mechanical design, and prototyping. Phase 3 integrates all components into a cohesive system managed by a master control package, which also incorporates digital twin and surrogate models, and culminates in comprehensive testing and validation at a simulated tank pit at the Idaho National Laboratory. Additionally, the system’s communication design ensures coordinated operation through shared data, power, and control signals. For transportation and deployment, an electric vehicle (EV) is chosen to support the system for a full 10 h shift with better regulatory compliance for field deployment. A telescopic arm design is selected for its simple configuration and superior reach capability and controllability. Preliminary testing utilizes an educational robot to demonstrate the feasibility of splitting computational tasks between edge and cloud computers. Successful simultaneous localization and mapping (SLAM) tasks validate our distributed computing approach. Further design considerations are also discussed, including radiation hardness assurance, SLAM performance, software transferability, and digital twinning strategies.
(This article belongs to the Special Issue Mechatronic Systems Design and Optimization)

27 pages, 7285 KB  
Article
Towards Biologically-Inspired Visual SLAM in Dynamic Environments: IPL-SLAM with Instance Segmentation and Point-Line Feature Fusion
by Jian Liu, Donghao Yao, Na Liu and Ye Yuan
Biomimetics 2025, 10(9), 558; https://doi.org/10.3390/biomimetics10090558 - 22 Aug 2025
Viewed by 461
Abstract
Simultaneous Localization and Mapping (SLAM) is a fundamental technique in mobile robotics, enabling autonomous navigation and environmental reconstruction. However, dynamic elements in real-world scenes—such as walking pedestrians, moving vehicles, and swinging doors—often degrade SLAM performance by introducing unreliable features that cause localization errors. In this paper, we define dynamic regions as areas in the scene containing moving objects, and dynamic features as the visual features extracted from these regions that may adversely affect localization accuracy. Inspired by biological perception strategies that integrate semantic awareness and geometric cues, we propose Instance-level Point-Line SLAM (IPL-SLAM), a robust visual SLAM framework for dynamic environments. The system employs YOLOv8-based instance segmentation to detect potential dynamic regions and construct semantic priors, while simultaneously extracting point and line features using Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features), collectively known as ORB, and Line Segment Detector (LSD) algorithms. Motion consistency checks and angular deviation analysis are applied to filter dynamic features, and pose optimization is conducted using an adaptive-weight error function. A static semantic point cloud map is further constructed to enhance scene understanding. Experimental results on the TUM RGB-D dataset demonstrate that IPL-SLAM significantly outperforms existing dynamic SLAM systems—including DS-SLAM and ORB-SLAM2—in terms of trajectory accuracy and robustness in complex indoor environments.
(This article belongs to the Section Biomimetic Design, Constructions and Devices)
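Motion consistency checks of the kind mentioned above are commonly implemented with epipolar geometry: a fundamental matrix fit by RANSAC is dominated by the static background, so matches that stray far from their epipolar lines are likely dynamic. A minimal OpenCV sketch under that assumption (the 1 px threshold is illustrative, not from the paper):

```python
import cv2
import numpy as np

def dynamic_feature_mask(pts1, pts2, thresh_px=1.0):
    """Flag matches that violate epipolar geometry as dynamic.

    pts1, pts2: (N, 2) matched pixel coordinates in two frames.
    A fundamental matrix fit with RANSAC is dominated by the static
    background; points whose point-to-epipolar-line distance exceeds
    thresh_px are treated as moving.
    """
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])   # homogeneous coordinates, image 1
    x2 = np.hstack([pts2, ones])   # homogeneous coordinates, image 2
    lines = x1 @ F.T               # epipolar lines in image 2
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.linalg.norm(lines[:, :2], axis=1)
    return (num / den) > thresh_px   # True = likely dynamic
```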

18 pages, 6560 KB  
Article
Global Phylogenetic Analysis of the CDV Hemagglutinin Gene Reveals Positive Selection on Key Receptor-Binding Sites
by Tuba Çiğdem Oğuzoğlu and B. Taylan Koç
Viruses 2025, 17(9), 1149; https://doi.org/10.3390/v17091149 - 22 Aug 2025
Viewed by 454
Abstract
Canine distemper virus (CDV) is a multi-host morbillivirus whose evolution and host-switching capacity are largely determined by its hemagglutinin (H) gene. To reconsider the molecular evolution of this critical gene, we performed comprehensive phylogenetic, selection, and structural analyses on a curated dataset of 68 representative global H gene sequences. Our phylogenetic reconstruction confirmed the segregation of sequences into distinct, geographically associated lineages. To provide stronger evidence for viral adaptation, we performed a site-specific selection analysis, which identified 15 amino acid sites in the H protein undergoing significant episodic positive selection. Crucially, the majority of the known SLAM and Nectin-4 receptor-binding residues were found to be among these positively selected sites. We further contextualized these findings by mapping the sites onto a 3D homology model of the H protein, which confirmed their location on the exposed surfaces of the receptor-binding domain. This compilation provides quantitative evidence that the key functional regions of the H protein are direct targets for adaptive evolution, which has significant implications for understanding host tropism and the ongoing challenge of vaccine mismatch.
(This article belongs to the Special Issue Canine Distemper Virus)

17 pages, 2708 KB  
Article
Simulation and Implementation of the Modeling of Forklift with Tricycle in Warehouse Systems for ROS
by Kuo-Yang Tu, Che-Ping Hung, Hong-Yu Lin and Kaun-Yu Lin
Sensors 2025, 25(16), 5206; https://doi.org/10.3390/s25165206 - 21 Aug 2025
Viewed by 461
Abstract
Amid ongoing labor shortages, increasing the throughput of warehouses has become a pressing concern, and over the past two decades automated warehouses designed to reduce human labor have become a very active research topic. Tricycle forklifts, which can carry heavy goods, can play an important role in such warehouses. Meanwhile, the Robot Operating System (ROS) is a well-known and popular platform for developing robotics software, and its powerful communication functions make the exchange of warehouse information easy, so ROS is widely installed as the communication backbone of a warehouse. However, ROS does not provide software modules for tricycle forklifts. In this research, a model of a tricycle forklift is therefore constructed for ROS-based warehouse systems. Beyond the model itself, the existing software modules must be modified for compatible integration so that the tricycle forklift can be navigated and controlled by the constructed ROS system. The Simultaneous Localization and Mapping (SLAM) function and self-guided navigation control of the constructed system are verified in Gazebo simulation. In addition, experiments with a real tricycle forklift demonstrate that the developed ROS system achieves sufficient accuracy for warehouse applications.
(This article belongs to the Special Issue New Challenges and Sensor Techniques in Robot Positioning)
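The abstract does not give the model equations. For reference, the standard tricycle kinematic model (a single steered drive wheel at distance L from the rear axle) is sketched below; the front-drive, front-steer layout is an assumption, since the paper does not specify it.

```python
import numpy as np

def tricycle_step(x, y, theta, v, alpha, L, dt):
    """One Euler-integration step of standard tricycle kinematics.

    (x, y, theta): pose of the rear-axle midpoint; v: drive-wheel speed;
    alpha: steering angle of the single front wheel; L: wheelbase.
    Assumes a front-drive, front-steer layout (not stated in the paper).
    """
    x += v * np.cos(alpha) * np.cos(theta) * dt
    y += v * np.cos(alpha) * np.sin(theta) * dt
    theta += v * np.sin(alpha) / L * dt
    return x, y, theta
```

In a ROS/Gazebo setup, a model like this is what would stand in for the stock differential-drive motion model when wiring the forklift into navigation and SLAM.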

19 pages, 5092 KB  
Article
Estimating Position, Diameter at Breast Height, and Total Height of Eucalyptus Trees Using Portable Laser Scanning
by Milena Duarte Machado, Gilson Fernandes da Silva, André Quintão de Almeida, Adriano Ribeiro de Mendonça, Rorai Pereira Martins-Neto and Marcos Benedito Schimalski
Remote Sens. 2025, 17(16), 2904; https://doi.org/10.3390/rs17162904 - 20 Aug 2025
Viewed by 581
Abstract
Forest management planning depends on accurate information about available resources, gathered through forest inventories. However, given the extent of planted areas worldwide, collecting this information with traditional methods has become challenging. Terrestrial light detection and ranging (LiDAR) has emerged as a promising tool to enhance forest inventory. However, selecting the optimal 3D point cloud density for accurately estimating tree attributes remains an open question. The objective of this study was to evaluate the accuracy of different point densities (points per square meter) in point clouds obtained through portable laser scanning combined with simultaneous localization and mapping (PLS-SLAM). The study aimed to identify tree positions and estimate the diameter at breast height (DBH) and total height (H) of 71 trees in a eucalyptus plantation in Brazil. We also tested a semi-automatic method for estimating total height. Point clouds with densities greater than 100 points/m2 enabled the detection of over 88.7% of individual trees. The root mean square error (RMSE) of the best DBH measurement was 1.6 cm (RMSE = 5.9%), and the best H measurement (semi-automatic method) was 1.2 m (RMSE = 4.2%) for the point cloud with 36,000 points/m2. When measuring the total heights of the largest trees (H > 31.4 m) using LiDAR, the values were always underestimated relative to the reference, and the measurements were significantly different (p-value < 0.05 by the t-test). For point clouds with a density of 36,000 points/m2, the automated DBH and total tree height estimations yielded RMSEs of 5.9% and 14.4%, with biases of 4.8% and −1.4%, respectively. When using point clouds of 10 points/m2, RMSE values increased to 18.8% for DBH and 28.4% for total tree height, while the bias was 6.2% and 18.4%, respectively. Additionally, total tree height estimations obtained via a semi-automatic method resulted in a lower RMSE of 4.2% and a bias of 1.5%. These findings indicate that point clouds acquired through PLS-SLAM with densities exceeding 100 points/m2 are suitable for automated DBH estimation in the studied plantation. Despite the increased processing time required, the semi-automatic method is recommended for total tree height estimation due to its superior accuracy.
(This article belongs to the Section Forest Remote Sensing)
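The RMSE and bias percentages quoted above follow the usual forest-inventory definitions, with both statistics expressed relative to the mean of the field-measured reference values. A small sketch of that arithmetic (function name illustrative):

```python
import numpy as np

def rmse_bias_percent(estimated, reference):
    """Relative RMSE and bias of LiDAR estimates vs. field reference,
    both expressed as percentages of the mean reference value."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    err = est - ref
    rmse = np.sqrt(np.mean(err ** 2))
    bias = np.mean(err)
    return 100.0 * rmse / ref.mean(), 100.0 * bias / ref.mean()

# e.g. rmse_bias_percent(dbh_lidar_cm, dbh_caliper_cm) for the DBH comparison
```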

14 pages, 831 KB  
Article
Migratory Bird-Inspired Adaptive Kalman Filtering for Robust Navigation of Autonomous Agricultural Planters in Unstructured Terrains
by Zijie Zhou, Yitao Huang and Jiyu Sun
Biomimetics 2025, 10(8), 543; https://doi.org/10.3390/biomimetics10080543 - 19 Aug 2025
Viewed by 305
Abstract
This paper presents a bionic extended Kalman filter (EKF) state estimation algorithm for agricultural planters, inspired by the way migratory birds navigate complex environments: migratory birds achieve precise localization by fusing multi-sensory information (e.g., the geomagnetic field, visual landmarks, and somatosensory balance). The algorithm mimics this ability to integrate multimodal information by fusing laser SLAM, inertial measurement unit (IMU), and GPS data to estimate the position, velocity, and attitude of the planter in real time. Through its nonlinear processing, the EKF effectively handles the nonlinear dynamic characteristics of complex terrain, similar to the adaptive response of a biological nervous system to environmental perturbations. The algorithm demonstrates bio-inspired robustness through the derivation of its nonlinear dynamic and measurement models and is able to provide high-precision state estimation in complex environments such as mountainous or hilly terrain. Simulation results show that the algorithm significantly improves the navigation accuracy of the planter in unstructured environments, providing a new method for bio-inspired adaptive state estimation.
(This article belongs to the Special Issue Computer-Aided Biomimetics: 3rd Edition)

23 pages, 13423 KB  
Article
A Lightweight LiDAR–Visual Odometry Based on Centroid Distance in a Similar Indoor Environment
by Zongkun Zhou, Weiping Jiang, Chi Guo, Yibo Liu and Xingyu Zhou
Remote Sens. 2025, 17(16), 2850; https://doi.org/10.3390/rs17162850 - 16 Aug 2025
Viewed by 643
Abstract
Simultaneous Localization and Mapping (SLAM) is a critical technology for robot intelligence. Compared to cameras, Light Detection and Ranging (LiDAR) sensors achieve higher accuracy and stability in indoor environments. However, LiDAR can only capture the geometric structure of the environment, and LiDAR-based SLAM often fails in scenarios with insufficient geometric features or highly similar structures. Furthermore, low-cost mechanical LiDARs, constrained by sparse point cloud density, are particularly prone to odometry drift along the Z-axis, especially in environments such as tunnels or long corridors. To address the localization issues in such scenarios, we propose a forward-enhanced SLAM algorithm. Utilizing a 16-line LiDAR and a monocular camera, we construct a dense colored point cloud input and apply an efficient multi-modal feature extraction algorithm based on centroid distance to extract a set of feature points with significant geometric and color features. These points are then optimized in the back end based on constraints from points, lines, and planes. We compare our method with several classic SLAM algorithms in terms of feature extraction, localization, and elevation constraint. Experimental results demonstrate that our method achieves high-precision real-time operation and exhibits excellent adaptability to indoor environments with similar structures.

22 pages, 4524 KB  
Article
RAEM-SLAM: A Robust Adaptive End-to-End Monocular SLAM Framework for AUVs in Underwater Environments
by Yekai Wu, Yongjie Li, Wenda Luo and Xin Ding
Drones 2025, 9(8), 579; https://doi.org/10.3390/drones9080579 - 15 Aug 2025
Viewed by 561
Abstract
Autonomous Underwater Vehicles (AUVs) play a critical role in ocean exploration. However, due to the inherent limitations of most sensors in underwater environments, achieving accurate navigation and localization in complex underwater scenarios remains a significant challenge. While vision-based Simultaneous Localization and Mapping (SLAM) provides a cost-effective alternative for AUV navigation, existing methods are primarily designed for terrestrial applications and struggle to address underwater-specific issues, such as poor illumination, dynamic interference, and sparse features. To tackle these challenges, we propose RAEM-SLAM, a robust adaptive end-to-end monocular SLAM framework for AUVs in underwater environments. Specifically, we propose a Physics-guided Underwater Adaptive Augmentation (PUAA) method that dynamically converts terrestrial scene datasets into physically realistic pseudo-underwater images for the augmentation training of RAEM-SLAM, improving the system’s generalization and adaptability in complex underwater scenes. We also introduce a Residual Semantic–Spatial Attention Module (RSSA), which utilizes a dual-branch attention mechanism to effectively fuse semantic and spatial information. This design enables adaptive enhancement of key feature regions and suppression of noise interference, resulting in more discriminative feature representations. Furthermore, we incorporate a Local–Global Perception Block (LGP), which integrates multi-scale local details with global contextual dependencies to significantly improve AUV pose estimation accuracy in dynamic underwater scenes. Experimental results on real-world underwater datasets demonstrate that RAEM-SLAM outperforms state-of-the-art SLAM approaches in enabling precise and robust navigation for AUVs.

27 pages, 5515 KB  
Article
Optimizing Multi-Camera Mobile Mapping Systems with Pose Graph and Feature-Based Approaches
by Ahmad El-Alailyi, Luca Morelli, Paweł Trybała, Francesco Fassi and Fabio Remondino
Remote Sens. 2025, 17(16), 2810; https://doi.org/10.3390/rs17162810 - 13 Aug 2025
Viewed by 520
Abstract
Multi-camera Visual Simultaneous Localization and Mapping (V-SLAM) increases spatial coverage through multi-view image streams, improving localization accuracy and reducing data acquisition time. Despite its speed and general robustness, V-SLAM often struggles to achieve the precise camera poses necessary for accurate 3D reconstruction, especially in complex environments. This study introduces two novel multi-camera optimization methods to enhance pose accuracy, reduce drift, and ensure loop closures. These methods refine multi-camera V-SLAM outputs within existing frameworks and are evaluated in two configurations: (1) multiple independent stereo V-SLAM instances operating on separate camera pairs; and (2) multi-view odometry processing all camera streams simultaneously. The proposed optimizations include (1) a multi-view feature-based optimization that integrates V-SLAM poses with rigid inter-camera constraints and bundle adjustment; and (2) a multi-camera pose graph optimization that fuses multiple trajectories using relative pose constraints and robust noise models. Validation is conducted through two complex 3D surveys using the ATOM-ANT3D multi-camera fisheye mobile mapping system. Results demonstrate survey-grade accuracy comparable to traditional photogrammetry, with reduced computational time, advancing toward near real-time 3D mapping of challenging environments.
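For the second method, pose graph optimization with relative pose constraints minimizes, over all edges, residuals of the form sketched below. This is a minimal SE(2) illustration for readability; the paper works in 3D and adds robust noise models.

```python
import numpy as np

def between_residual(xi, xj, zij):
    """Residual of one relative-pose ("between") constraint in SE(2).

    xi, xj: absolute poses (x, y, theta); zij: measured relative pose of
    j in i's frame, e.g. from stereo V-SLAM odometry or a loop closure.
    A 3D version replaces the 2x2 rotation with an SO(3)/quaternion term.
    """
    ci, si = np.cos(xi[2]), np.sin(xi[2])
    Ri_T = np.array([[ci, si], [-si, ci]])                 # R(theta_i)^T
    dp = Ri_T @ (np.asarray(xj[:2]) - np.asarray(xi[:2]))  # predicted rel. translation
    dth = xj[2] - xi[2]                                    # predicted rel. rotation
    ang = np.arctan2(np.sin(dth - zij[2]), np.cos(dth - zij[2]))  # wrapped angle error
    return np.array([dp[0] - zij[0], dp[1] - zij[1], ang])

# Stacking these residuals over all edges and minimizing (Gauss-Newton or
# Levenberg-Marquardt, optionally with a robust loss) yields the fused graph.
```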

17 pages, 2380 KB  
Article
Robust Visual-Inertial Odometry with Learning-Based Line Features in an Illumination-Changing Environment
by Xinkai Li, Cong Liu and Xu Yan
Sensors 2025, 25(16), 5029; https://doi.org/10.3390/s25165029 - 13 Aug 2025
Viewed by 454
Abstract
Visual-Inertial Odometry (VIO) systems often suffer from degraded performance in environments with low texture. Although some previous works have combined line features with point features to mitigate this problem, the line features still degrade under more challenging conditions, such as varying illumination. To tackle this, we propose DeepLine-VIO, a robust VIO framework that integrates learned line features extracted via an attraction-field-based deep network. These features are geometrically consistent and illumination-invariant, offering improved visual robustness in challenging conditions. Our system tightly couples these learned line features with point observations and inertial data within a sliding-window optimization framework. We further introduce a geometry-aware filtering and parameterization strategy to ensure the reliability of extracted line segments. Extensive experiments on the EuRoC dataset under synthetic illumination perturbations show that DeepLine-VIO consistently outperforms existing point- and line-based methods. On the most challenging sequences under illumination-changing conditions, our approach reduces Absolute Trajectory Error (ATE) by up to 15.87% and improves Relative Pose Error (RPE) in translation by up to 58.45% compared to PL-VINS. These results highlight the robustness and accuracy of DeepLine-VIO in visually degraded environments.
(This article belongs to the Section Sensors and Robotics)

22 pages, 3506 KB  
Article
UAV Navigation Using EKF-MonoSLAM Aided by Range-to-Base Measurements
by Rodrigo Munguia, Juan-Carlos Trujillo and Antoni Grau
Drones 2025, 9(8), 570; https://doi.org/10.3390/drones9080570 - 12 Aug 2025
Viewed by 259
Abstract
This study introduces an innovative refinement to EKF-based monocular SLAM by incorporating attitude, altitude, and range-to-base data to enhance system observability and minimize drift. In particular, by utilizing a single range measurement relative to a fixed reference point, the method enables unmanned aerial vehicles (UAVs) to mitigate error accumulation, preserve map consistency, and operate reliably in environments without GPS. This integration facilitates sustained autonomous navigation with estimation error remaining bounded over extended trajectories. Theoretical validation is provided through a nonlinear observability analysis, highlighting the general benefits of integrating range data into the SLAM framework. The system’s performance is evaluated through both virtual experiments and real-world flight data. The real-data experiments confirm the practical relevance of the approach and its ability to improve estimation accuracy in realistic scenarios.
(This article belongs to the Section Drone Design and Development)
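The single range-to-base measurement enters an EKF as a scalar update whose Jacobian is the unit vector from the base toward the estimated position. A minimal sketch, assuming (as an illustration, not from the paper) that the UAV position occupies the first three entries of the state vector:

```python
import numpy as np

def range_update(x, P, r_meas, p_base, sigma_r):
    """Scalar EKF update for a range-to-base measurement.

    x, P: state mean/covariance with the UAV position in x[0:3];
    r_meas: measured distance to the fixed base at p_base;
    sigma_r: range noise std. The state layout is an assumption here.
    """
    d = x[:3] - p_base
    r_pred = np.linalg.norm(d)
    H = np.zeros((1, x.size))
    H[0, :3] = d / r_pred           # d(range)/d(position) = unit vector
    S = H @ P @ H.T + sigma_r ** 2  # innovation covariance (1x1)
    K = (P @ H.T) / S               # Kalman gain
    x = x + (K * (r_meas - r_pred)).ravel()
    P = (np.eye(x.size) - K @ H) @ P
    return x, P
```

Because the measurement is scalar, the update is cheap, yet (per the observability analysis summarized above) it is enough to keep the estimation error bounded over long GPS-denied trajectories.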

21 pages, 812 KB  
Review
A Frontier Review of Semantic SLAM Technologies Applied to the Open World
by Le Miao, Wen Liu and Zhongliang Deng
Sensors 2025, 25(16), 4994; https://doi.org/10.3390/s25164994 - 12 Aug 2025
Viewed by 687
Abstract
With the growing demand for autonomous robotic operations in complex and unstructured environments, traditional semantic SLAM systems—which rely on closed-set semantic vocabularies—are increasingly limited in their ability to robustly perceive and understand diverse and dynamic scenes. This paper focuses on the paradigm shift toward open-world semantic scene understanding in SLAM and provides a comprehensive review of the technological evolution from closed-world assumptions to open-world frameworks. We survey the current state of research in open-world semantic SLAM, highlighting key challenges and frontiers. In particular, we conduct an in-depth analysis of three critical areas: zero-shot open-vocabulary understanding, dynamic semantic expansion, and multimodal semantic fusion. These capabilities are examined for their crucial roles in unknown class identification, incremental semantic updates, and multisensor perceptual integration. Our main contribution is presenting the first systematic algorithmic benchmarking and performance comparison of representative open-world semantic SLAM systems, revealing the potential of these core techniques to enhance semantic understanding in complex environments. Finally, we propose several promising directions for future research, including lightweight model deployment, real-time performance optimization, and collaborative multimodal perception, offering a systematic reference and methodological guidance for continued advancement in this emerging field.
(This article belongs to the Section Sensors and Robotics)
