Search Results (158)

Search Parameters:
Keywords = visual-inertial odometry

17 pages, 2309 KB  
Article
Robust Visual–Inertial Odometry via Multi-Scale Deep Feature Extraction and Flow-Consistency Filtering
by Hae Min Cho
Appl. Sci. 2025, 15(20), 10935; https://doi.org/10.3390/app152010935 - 11 Oct 2025
Viewed by 368
Abstract
We present a visual–inertial odometry (VIO) system that integrates a deep feature extraction and filtering strategy with optical flow to improve tracking robustness. While many traditional VIO methods rely on hand-crafted features, they often struggle to remain robust under challenging visual conditions, such as low texture, motion blur, or lighting variation. These methods tend to exhibit large performance variance across different environments, primarily due to the limited repeatability and adaptability of hand-crafted keypoints. In contrast, learning-based features offer richer representations and can generalize across diverse domains thanks to data-driven training. However, they often suffer from uneven spatial distribution and temporal instability, which can degrade tracking performance. To address these issues, we propose a hybrid front-end that combines a lightweight deep feature extractor with an image pyramid and grid-based keypoint sampling to enhance spatial diversity. Additionally, a forward–backward optical-flow-consistency check is applied to filter unstable keypoints. The system improves feature tracking stability by enforcing spatial and temporal consistency while maintaining real-time efficiency. Finally, the effectiveness of the proposed VIO system is validated on the EuRoC MAV benchmark, showing a 19.35% reduction in trajectory RMSE and improved consistency across multiple sequences compared with previous methods.
(This article belongs to the Special Issue Advances in Autonomous Driving: Detection and Tracking)
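
The forward–backward flow-consistency check is the filtering step that can be made concrete. A minimal sketch using OpenCV's pyramidal Lucas–Kanade tracker; the paper's deep feature extractor is not reproduced here, and the 1-pixel round-trip threshold is an assumed value:

```python
import cv2
import numpy as np

def filter_keypoints_fb(prev_img, next_img, pts, fb_thresh=1.0):
    """Keep only keypoints whose forward-backward flow error is small.

    pts: (N, 1, 2) float32 array of keypoint coordinates in prev_img.
    fb_thresh: max allowed round-trip error in pixels (assumed value).
    """
    lk = dict(winSize=(21, 21), maxLevel=3)  # pyramidal LK over an image pyramid
    # Forward flow: prev -> next
    fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, pts, None, **lk)
    # Backward flow: next -> prev, starting from the forward predictions
    bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(next_img, prev_img, fwd, None, **lk)
    # Round-trip error: a stable keypoint should return to where it started
    err = np.linalg.norm(pts - bwd, axis=2).ravel()
    keep = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (err < fb_thresh)
    return pts[keep], fwd[keep]
```

A keypoint survives only if tracking it forward and then backward returns within fb_thresh pixels of its start, which is the temporal-consistency filter the abstract describes.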

19 pages, 4672 KB  
Article
Monocular Visual/IMU/GNSS Integration System Using Deep Learning-Based Optical Flow for Intelligent Vehicle Localization
by Jeongmin Kang
Sensors 2025, 25(19), 6050; https://doi.org/10.3390/s25196050 - 1 Oct 2025
Viewed by 599
Abstract
Accurate and reliable vehicle localization is essential for autonomous driving in complex outdoor environments. Traditional feature-based visual–inertial odometry (VIO) suffers from sparse features and sensitivity to illumination, limiting robustness in outdoor scenes. Deep learning-based optical flow offers dense and illumination-robust motion cues. However, existing methods rely on simple bidirectional consistency checks that yield unreliable flow in low-texture or ambiguous regions. Global navigation satellite system (GNSS) measurements can complement VIO, but often degrade in urban areas due to multipath interference. This paper proposes a multi-sensor fusion system that integrates monocular VIO with GNSS measurements to achieve robust and drift-free localization. The proposed approach employs a hybrid VIO framework that utilizes a deep learning-based optical flow network, with an enhanced consistency constraint that incorporates local structure and motion coherence to extract robust flow measurements. The extracted optical flow serves as visual measurements, which are then fused with inertial measurements to improve localization accuracy. GNSS updates further enhance global localization stability by mitigating long-term drift. The proposed method is evaluated on the publicly available KITTI dataset. Extensive experiments demonstrate its superior localization performance compared to previous similar methods. The results show that the filter-based multi-sensor fusion framework with optical flow refined by the enhanced consistency constraint ensures accurate and reliable localization in large-scale outdoor environments.
(This article belongs to the Special Issue AI-Driving for Autonomous Vehicles)
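
The enhanced consistency constraint is described only at a high level ("local structure and motion coherence"); one plausible reading is to reject flow vectors that disagree with the motion of their neighborhood, on top of the usual bidirectional check. A hypothetical sketch, with the neighborhood radius and deviation threshold as assumptions:

```python
import numpy as np

def motion_coherence_mask(pts, flows, radius=20.0, thresh=2.0):
    """Reject flow vectors that deviate from the median flow of nearby points.

    pts:   (N, 2) keypoint positions; flows: (N, 2) flow vectors.
    radius/thresh: neighborhood size and allowed deviation in pixels (assumed).
    """
    keep = np.ones(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        near = np.linalg.norm(pts - p, axis=1) < radius
        if near.sum() < 3:      # too few neighbors to judge coherence
            continue
        local_median = np.median(flows[near], axis=0)
        if np.linalg.norm(flows[i] - local_median) > thresh:
            keep[i] = False     # incoherent with local motion -> unreliable
    return keep
```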

19 pages, 6114 KB  
Article
RWKV-VIO: An Efficient and Low-Drift Visual–Inertial Odometry Using an End-to-End Deep Network
by Jiaxi Yang, Xiaoming Xu, Zeyuan Xu, Zhigang Wu and Weimeng Chu
Sensors 2025, 25(18), 5737; https://doi.org/10.3390/s25185737 - 15 Sep 2025
Viewed by 2190
Abstract
Visual–Inertial Odometry (VIO) is a foundational technology for autonomous navigation and robotics. However, existing deep learning-based methods face key challenges in temporal modeling and computational efficiency. Conventional approaches, such as Long Short-Term Memory (LSTM) networks and Transformer-based methods, often struggle to handle dependencies across different temporal scales while incurring high computational costs. To address these issues, this work introduces Receptance Weighted Key Value (RWKV)-VIO, a novel framework based on the RWKV architecture. The proposed framework is designed with a lightweight structure and linear computational complexity, which effectively reduces the computational burden of temporal modeling. Furthermore, a newly developed Inertial Measurement Unit (IMU) encoder improves the effectiveness of feature extraction using residual connections and channel alignment, allowing efficient use of historical inertial data. A parallel encoding strategy uses two independently initialized encoders to extract features along different dimensions, strengthening the model’s ability to detect complex patterns. Experimental results on public datasets show that RWKV-VIO significantly reduces model size and inference time compared with existing advanced methods while achieving top-ranked positioning accuracy among the evaluated approaches.
(This article belongs to the Section Sensors and Robotics)
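
RWKV achieves linear complexity by replacing pairwise attention with a recurrence that carries an exponentially decayed summary of past tokens. A heavily simplified per-channel sketch of that idea; real RWKV adds learned per-channel decay, receptance gating, time-shift mixing, and a bonus term for the current token, so treat this as illustrative only:

```python
import numpy as np

def wkv_recurrence(k, v, w=0.9):
    """Simplified WKV: an exponentially decayed weighted average of values.

    k, v: (T, C) key and value sequences; w: per-step decay in (0, 1).
    Cost is O(T*C): one state update per step instead of attending
    to all previous tokens as a Transformer would.
    """
    T, C = k.shape
    num = np.zeros(C)           # running sum of exp(k) * v
    den = np.zeros(C)           # running sum of exp(k)
    out = np.empty((T, C))
    for t in range(T):
        e = np.exp(k[t])
        num = w * num + e * v[t]
        den = w * den + e
        out[t] = num / (den + 1e-8)
    return out
```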

16 pages, 5892 KB  
Article
RGB-Based Visual–Inertial Odometry via Knowledge Distillation from Self-Supervised Depth Estimation with Foundation Models
by Jimin Song and Sang Jun Lee
Sensors 2025, 25(17), 5366; https://doi.org/10.3390/s25175366 - 30 Aug 2025
Viewed by 831
Abstract
Autonomous driving represents a transformative advancement with the potential to significantly impact daily mobility, including enabling independent vehicle operation for individuals with visual disabilities. The commercialization of autonomous driving requires guaranteed safety and accuracy, underscoring the need for robust localization and environmental perception algorithms. In cost-sensitive platforms such as delivery robots and electric vehicles, cameras are increasingly favored for their ability to provide rich visual information at low cost. Despite recent progress, existing visual–inertial odometry systems still suffer from degraded accuracy in challenging conditions, which limits their reliability in real-world autonomous navigation scenarios. Estimating 3D positional changes using only 2D image sequences remains a fundamental challenge primarily due to inherent scale ambiguity and the presence of dynamic scene elements. In this paper, we present a visual–inertial odometry framework incorporating a depth estimation model trained without ground-truth depth supervision. Our approach leverages a self-supervised learning pipeline enhanced with knowledge distillation via foundation models, including both self-distillation and geometry-aware distillation. The proposed method improves depth estimation performance and consequently enhances odometry estimation without modifying the network architecture or increasing the number of parameters. The effectiveness of the proposed method is demonstrated through comparative evaluations on both the public KITTI dataset and a custom campus driving dataset, showing performance improvements over existing approaches.
(This article belongs to the Special Issue Sensors for Intelligent Vehicles and Autonomous Driving)
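
Distilling a foundation model's depth into a student usually means regressing toward the teacher's prediction up to an affine ambiguity, because monocular teachers output relative depth. A hypothetical PyTorch sketch of such a loss; the paper's specific self-distillation and geometry-aware terms are not reproduced:

```python
import torch

def affine_invariant_distill_loss(student_depth, teacher_depth):
    """L1 loss after aligning teacher depth to the student with scale/shift.

    Both inputs: (B, 1, H, W). The per-image least-squares alignment removes
    the scale/shift ambiguity of relative-depth teachers (assumed protocol).
    """
    B = student_depth.shape[0]
    s = student_depth.reshape(B, -1)
    t = teacher_depth.reshape(B, -1)
    # Solve min_{a,b} ||a*t + b - s||^2 per image (closed form)
    t_mean, s_mean = t.mean(1, keepdim=True), s.mean(1, keepdim=True)
    cov = ((t - t_mean) * (s - s_mean)).mean(1, keepdim=True)
    var = (t - t_mean).pow(2).mean(1, keepdim=True)
    a = cov / (var + 1e-8)
    b = s_mean - a * t_mean
    return (a * t + b - s).abs().mean()
```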

24 pages, 1735 KB  
Article
A Multi-Sensor Fusion-Based Localization Method for a Magnetic Adhesion Wall-Climbing Robot
by Xiaowei Han, Hao Li, Nanmu Hui, Jiaying Zhang and Gaofeng Yue
Sensors 2025, 25(16), 5051; https://doi.org/10.3390/s25165051 - 14 Aug 2025
Cited by 1 | Viewed by 871
Abstract
To address the decline in the localization accuracy of magnetic adhesion wall-climbing robots operating on large steel structures, caused by visual occlusion, sensor drift, and environmental interference, this study proposes a simulation-based multi-sensor fusion localization method that integrates an Inertial Measurement Unit (IMU), Wheel Odometry (Odom), and Ultra-Wideband (UWB). An Extended Kalman Filter (EKF) is employed to integrate IMU and Odom measurements through a complementary filtering model, while a geometric residual-based weighting mechanism is introduced to optimize raw UWB ranging data. This enhances the accuracy and robustness of both the prediction and observation stages. All evaluations were conducted in a simulated environment, including scenarios on flat plates and spherical tank-shaped steel surfaces. The proposed method maintained a maximum localization error within 5 cm in both linear and closed-loop trajectories and achieved over 30% improvement in horizontal accuracy compared to baseline EKF-based approaches. The system exhibited consistent localization performance across varying surface geometries, providing technical support for robotic operations on large steel infrastructures.
(This article belongs to the Section Navigation and Positioning)
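
A geometric residual-based weighting of UWB ranges can be illustrated as iteratively reweighted least squares: ranges that disagree with the current position fit are down-weighted before entering the filter. A sketch under assumed 2D anchor geometry and an assumed Cauchy-style kernel:

```python
import numpy as np

def weighted_uwb_position(anchors, ranges, iters=5):
    """Estimate 2D position from UWB ranges, down-weighting outlier ranges.

    anchors: (N, 2) anchor positions; ranges: (N,) measured distances.
    Returns the position estimate and the final per-range weights.
    """
    x = anchors.mean(axis=0)            # crude initial guess
    w = np.ones(len(ranges))
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        res = ranges - d                # geometric residual per range
        # Gauss-Newton step on the weighted range residuals
        J = (x - anchors) / d[:, None]  # Jacobian d(d_i)/dx
        W = np.diag(w)
        dx = np.linalg.lstsq(W @ J, W @ res, rcond=None)[0]
        x = x + dx
        # Robust reweighting: large residuals get small weights (assumed kernel)
        sigma = 1.4826 * np.median(np.abs(res)) + 1e-6
        w = 1.0 / (1.0 + (res / (3.0 * sigma)) ** 2)
    return x, w
```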

17 pages, 2380 KB  
Article
Robust Visual-Inertial Odometry with Learning-Based Line Features in an Illumination-Changing Environment
by Xinkai Li, Cong Liu and Xu Yan
Sensors 2025, 25(16), 5029; https://doi.org/10.3390/s25165029 - 13 Aug 2025
Viewed by 1240
Abstract
Visual-Inertial Odometry (VIO) systems often suffer from degraded performance in environments with low texture. Although some previous works have combined line features with point features to mitigate this problem, the line features still degrade under more challenging conditions, such as varying illumination. To tackle this, we propose DeepLine-VIO, a robust VIO framework that integrates learned line features extracted via an attraction-field-based deep network. These features are geometrically consistent and illumination-invariant, offering improved visual robustness in challenging conditions. Our system tightly couples these learned line features with point observations and inertial data within a sliding-window optimization framework. We further introduce a geometry-aware filtering and parameterization strategy to ensure the reliability of extracted line segments. Extensive experiments on the EuRoC dataset under synthetic illumination perturbations show that DeepLine-VIO consistently outperforms existing point- and line-based methods. On the most challenging sequences under illumination-changing conditions, our approach reduces Absolute Trajectory Error (ATE) by up to 15.87% and improves Relative Pose Error (RPE) in translation by up to 58.45% compared to PL-VINS. These results highlight the robustness and accuracy of DeepLine-VIO in visually degraded environments.
(This article belongs to the Section Sensors and Robotics)

20 pages, 27328 KB  
Article
GDVI-Fusion: Enhancing Accuracy with Optimal Geometry Matching and Deep Nearest Neighbor Optimization
by Jincheng Peng, Xiaoli Zhang, Kefei Yuan, Xiafu Peng and Gongliu Yang
Appl. Sci. 2025, 15(16), 8875; https://doi.org/10.3390/app15168875 - 12 Aug 2025
Cited by 1 | Viewed by 2717
Abstract
Visual–inertial odometry (VIO) systems are not sufficiently robust over long-term operation. In particular, coupled visual–inertial/Global Navigation Satellite System (GNSS) systems are prone to divergence of the position estimate when visual or GNSS information fails. To address these problems, this paper proposes a tightly coupled, nonlinearly optimized localization system fusing RGBD vision, an inertial measurement unit (IMU), and global position (GDVI-Fusion) to overcome the insufficient robustness of carrier position estimation and inaccurate localization in environments where visual or GNSS information fails. Preprocessing of depth information during initialization is proposed to mitigate the sensitivity of the RGBD camera to lighting and physical structure and to improve the accuracy of the depth of image feature points, thereby improving the robustness of the localization system. Feature matches are processed with the K-Nearest-Neighbors (KNN) algorithm: the matched points construct optimal geometric constraints, and matches whose connecting lines have anomalous length or slope are eliminated, which improves the speed and accuracy of feature-point matching and, in turn, the system’s localization accuracy. The lightweight monocular GDVI-Fusion system proposed in this paper achieves a 54.2% improvement in operational efficiency and a 37.1% improvement in positioning accuracy compared with the GVINS system. We verified the system’s operational efficiency and positioning accuracy on a public dataset and on a prototype.
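
The length-and-slope screening admits a simple reading: under coherent inter-frame motion, the line segments connecting matched points should have similar lengths and slopes, so matches far from the robust medians are discarded. A hypothetical sketch with an assumed MAD-based threshold:

```python
import numpy as np

def filter_matches_by_line(pts1, pts2, n_sigma=2.5):
    """Reject matches whose connecting line has anomalous length or slope.

    pts1, pts2: (N, 2) matched point coordinates in two frames.
    Returns a boolean mask of inlier matches.
    """
    d = pts2 - pts1
    length = np.linalg.norm(d, axis=1)
    angle = np.arctan2(d[:, 1], d[:, 0])   # slope as an angle, avoids infinities

    def inliers(v):
        med = np.median(v)
        mad = 1.4826 * np.median(np.abs(v - med)) + 1e-6  # robust spread
        return np.abs(v - med) < n_sigma * mad

    return inliers(length) & inliers(angle)
```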

13 pages, 4728 KB  
Article
Stereo Direct Sparse Visual–Inertial Odometry with Efficient Second-Order Minimization
by Chenhui Fu and Jiangang Lu
Sensors 2025, 25(15), 4852; https://doi.org/10.3390/s25154852 - 7 Aug 2025
Viewed by 1279
Abstract
Visual–inertial odometry (VIO) is the primary supporting technology for autonomous systems, but it faces three major challenges: initialization sensitivity, dynamic illumination, and multi-sensor fusion. To overcome these challenges, this paper proposes stereo direct sparse visual–inertial odometry with efficient second-order minimization. It is entirely implemented using the direct method and comprises a depth initialization module based on visual–inertial alignment, a stereo image tracking module, and a marginalization module. Inertial measurement unit (IMU) data is first aligned with the stereo images to initialize the system effectively. Then, based on the efficient second-order minimization (ESM) algorithm, the photometric error and the inertial error are minimized to jointly optimize camera poses and sparse scene geometry. IMU information is accumulated between several frames using measurement preintegration and is inserted into the optimization as an additional constraint between keyframes. A marginalization module reduces the computational complexity of the optimization while retaining information about previous states. The proposed system is evaluated on the KITTI visual odometry benchmark and the EuRoC dataset. The experimental results demonstrate that the proposed system achieves state-of-the-art performance in terms of accuracy and robustness.
(This article belongs to the Section Vehicular Sensing)
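
Measurement preintegration compresses the IMU samples between two keyframes into a single relative rotation, velocity, and position term, so the optimizer never re-integrates raw data when keyframe states change. A simplified Euler-integration sketch; bias estimation and noise propagation are omitted:

```python
import numpy as np

def so3_exp(w):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def preintegrate(gyro, accel, dt):
    """Accumulate relative rotation/velocity/position between two keyframes.

    gyro, accel: (N, 3) bias-corrected IMU samples; dt: sample period.
    Returns (dR, dv, dp) in the first keyframe's body frame; gravity is
    compensated later, when the terms are applied to world-frame states.
    """
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp
```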

25 pages, 8468 KB  
Article
An Autonomous Localization Vest System Based on Advanced Adaptive PDR with Binocular Vision Assistance
by Tianqi Tian, Yanzhu Hu, Xinghao Zhao, Hui Zhao, Yingjian Wang and Zhen Liang
Micromachines 2025, 16(8), 890; https://doi.org/10.3390/mi16080890 - 30 Jul 2025
Viewed by 519
Abstract
Despite significant advancements in indoor navigation technology over recent decades, it still faces challenges due to excessive dependency on external infrastructure and unreliable positioning in complex environments. This paper proposes an autonomous localization system that integrates advanced adaptive pedestrian dead reckoning (APDR) and binocular vision, designed to provide a low-cost, high-reliability, and high-precision solution for rescuers. By analyzing the characteristics of measurement data from various body parts, the chest is identified as the optimal placement for sensors. A chest-mounted advanced APDR method based on dynamic step segmentation detection and adaptive step length estimation has been developed. Furthermore, step length features are innovatively integrated into the visual tracking algorithm to constrain errors. Visual data is fused with dead reckoning data through an extended Kalman filter (EKF), which notably enhances the reliability and accuracy of the positioning system. A wearable autonomous localization vest system was designed and tested in indoor corridors, underground parking lots, and tunnel environments. Results show that the system decreases the average positioning error by 45.14% and endpoint error by 38.6% when compared to visual–inertial odometry (VIO). This low-cost, wearable solution effectively meets the autonomous positioning needs of rescuers in disaster scenarios.
(This article belongs to the Special Issue Artificial Intelligence for Micro Inertial Sensors)
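
Adaptive step-length estimation in PDR is often built on the Weinberg model, which scales the fourth root of the per-step vertical acceleration range; whether this system uses exactly that model is not stated, so the sketch below is an illustrative stand-in with an assumed gain:

```python
import numpy as np

def weinberg_step_length(acc_vertical, k=0.48):
    """Weinberg model: step length ~ k * (a_max - a_min)^(1/4).

    acc_vertical: vertical acceleration samples within one detected step.
    k: user-specific gain, typically calibrated per person (value assumed).
    """
    a_range = np.max(acc_vertical) - np.min(acc_vertical)
    return k * a_range ** 0.25

def pdr_update(pos, heading_rad, step_len):
    """Advance the 2D position estimate by one step along the heading."""
    return pos + step_len * np.array([np.cos(heading_rad), np.sin(heading_rad)])
```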

20 pages, 5843 KB  
Article
Accurate and Robust Train Localization: Fusing Degeneracy-Aware LiDAR-Inertial Odometry and Visual Landmark Correction
by Lin Yue, Peng Wang, Jinchao Mu, Chen Cai, Dingyi Wang and Hao Ren
Sensors 2025, 25(15), 4637; https://doi.org/10.3390/s25154637 - 26 Jul 2025
Viewed by 996
Abstract
To overcome the limitations of current train positioning systems, including low positioning accuracy and heavy reliance on track transponders or GNSS signals, this paper proposes a novel LiDAR-inertial and visual landmark fusion framework. Firstly, an IMU preintegration factor accounting for the Earth’s rotation and a LiDAR-inertial odometry factor accounting for degenerate states are constructed to adapt to railway operating environments. Subsequently, a lightweight network based on an improved YOLO is used to recognize reflective kilometer posts, while PaddleOCR extracts their numerical codes. High-precision vertex coordinates of the kilometer posts are obtained by jointly using the LiDAR point cloud and the image detection box. Next, a kilometer post factor is constructed, and the multi-source information is optimized within a factor graph framework. Finally, onboard experiments conducted on real railway vehicles demonstrate high-precision landmark detection at 35 FPS with 94.8% average precision. The proposed method delivers robust positioning within 5 m RMSE for high-speed, long-distance train travel, establishing a novel framework for intelligent railway development.
(This article belongs to the Section Navigation and Positioning)
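
Within a factor graph, a kilometer post acts as a sparse global anchor among odometry factors. A hypothetical GTSAM sketch that stands in for the paper's custom kilometer-post factor with a simple pose prior at the surveyed post location; all keys, noise values, and poses are illustrative:

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X   # pose keys

graph = gtsam.NonlinearFactorGraph()

odom_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.01, 0.01, 0.01, 0.1, 0.1, 0.1]))   # rot (rad), trans (m)
post_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.05, 0.05, 0.05, 0.5, 0.5, 0.5]))

# LiDAR-inertial odometry as between-factors along the trajectory
delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(10.0, 0.0, 0.0))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), post_noise))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), delta, odom_noise))
graph.add(gtsam.BetweenFactorPose3(X(1), X(2), delta, odom_noise))

# Kilometer-post observation: the surveyed post pins pose X(2) globally,
# standing in for the paper's dedicated kilometer-post factor.
post_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(20.3, 0.1, 0.0))
graph.add(gtsam.PriorFactorPose3(X(2), post_pose, post_noise))

initial = gtsam.Values()
for i in range(3):
    initial.insert(X(i), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(10.0 * i, 0, 0)))
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(2)).translation())
```

Optimizing this graph pulls the drifting odometry chain onto the surveyed position, which is the role the kilometer-post factor plays at full scale.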

18 pages, 3315 KB  
Article
Real-Time Geo-Localization for Land Vehicles Using LIV-SLAM and Referenced Satellite Imagery
by Yating Yao, Jing Dong, Songlai Han, Haiqiao Liu, Quanfu Hu and Zhikang Chen
Appl. Sci. 2025, 15(15), 8257; https://doi.org/10.3390/app15158257 - 24 Jul 2025
Viewed by 611
Abstract
Existing Simultaneous Localization and Mapping (SLAM) algorithms provide precise local pose estimation and real-time scene reconstruction and are widely applied in autonomous navigation for land vehicles. However, SLAM odometry exhibits localization drift and error divergence over long-distance operation due to the lack of inherent global constraints. In this paper, we propose a real-time geo-localization method for land vehicles that relies only on LiDAR-inertial-visual SLAM (LIV-SLAM) and a referenced image. The proposed method enables long-distance navigation without requiring GPS or loop closure, while eliminating accumulated localization errors. To achieve this, the local map constructed by SLAM is projected in real time onto a downward-view image, and a highly efficient cross-modal matching algorithm estimates the global position by aligning the projected local image to a geo-referenced satellite image. The cross-modal algorithm leverages dense texture-orientation features, ensuring robustness against cross-modal distortion and local scene changes, and supports efficient correlation in the frequency domain for real-time performance. We also propose a novel adaptive Kalman filter (AKF) to integrate the global position provided by cross-modal matching with the pose estimated by LIV-SLAM. The proposed AKF effectively handles observation delays and asynchronous updates while rejecting the impact of erroneous matches through an Observation-Aware Gain Scaling (OAGS) mechanism. We verify the proposed algorithm on the R3LIVE and NCLT datasets, demonstrating superior computational efficiency, reliability, and accuracy compared with existing methods.
(This article belongs to the Special Issue Navigation and Positioning Based on Multi-Sensor Fusion Technology)
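
Efficient correlation in the frequency domain typically means phase correlation: the normalized cross-power spectrum of two images has an inverse FFT that peaks at their relative translation. A minimal sketch operating on raw intensities; the paper's dense texture-orientation features are not reproduced:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (dy, dx) translation of b relative to a via FFT.

    a, b: 2D float arrays of equal shape (e.g., projected local image and
    a satellite tile). One multiplication in the frequency domain replaces
    an exhaustive spatial search.
    """
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above the midpoint to negative shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts), corr.max()        # peak height ~ match confidence
```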

19 pages, 3176 KB  
Article
Deploying an Educational Mobile Robot
by Dorina Plókai, Borsa Détár, Tamás Haidegger and Enikő Nagy
Machines 2025, 13(7), 591; https://doi.org/10.3390/machines13070591 - 8 Jul 2025
Cited by 1 | Viewed by 3175
Abstract
This study presents the development of a software solution for processing, analyzing, and visualizing sensor data collected by an educational mobile robot. The focus is on statistical analysis and identifying correlations between diverse datasets. The research utilized the PlatypOUs mobile robot platform, equipped with odometry and inertial measurement units (IMUs), to gather comprehensive motion data. To enhance the reliability and interpretability of the data, advanced data processing techniques—such as moving averages, correlation analysis, and exponential smoothing—were employed. Python-based tools, including Matplotlib and Visual Studio Code, were used for data visualization and analysis. The analysis provided key insights into the robot’s motion dynamics; specifically, its stability during linear movements and variability during turns. By applying moving average filtering and exponential smoothing, noise in the sensor data was significantly reduced, enabling clearer identification of motion patterns. Correlation analysis revealed meaningful relationships between velocity and acceleration during various motion states. These findings underscore the value of advanced data processing techniques in improving the performance and reliability of educational mobile robots. The insights gained in this pilot project contribute to the optimization of navigation algorithms and motion control systems, enhancing the robot’s future potential in STEM education applications.
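
The two smoothing techniques named in the abstract are short to state; a sketch of both as they might be applied to the robot's velocity samples (window size and smoothing factor are assumed):

```python
import numpy as np

def moving_average(x, window=5):
    """Uniform moving average; smooths sensor noise at the cost of lag."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def exponential_smoothing(x, alpha=0.3):
    """s_t = alpha * x_t + (1 - alpha) * s_{t-1}; smaller alpha -> smoother."""
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

# Example on synthetic noisy velocity samples
v = 1.0 + 0.1 * np.random.randn(100)
print(moving_average(v).std(), exponential_smoothing(v).std())
```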

19 pages, 4219 KB  
Article
Schur Complement Optimized Iterative EKF for Visual–Inertial Odometry in Autonomous Vehicles
by Guo Ma, Cong Li, Hui Jing, Bing Kuang, Ming Li, Xiang Wang and Guangyu Jia
Machines 2025, 13(7), 582; https://doi.org/10.3390/machines13070582 - 4 Jul 2025
Cited by 1 | Viewed by 891
Abstract
Accuracy and nonlinear processing capabilities are critical to the positioning and navigation of autonomous vehicles in visual–inertial odometry (VIO). Existing filtering-based VIO methods struggle to deal with strongly nonlinear systems and often exhibit low precision. To this end, this paper proposes a VIO method based on the Schur complement and Iterated Extended Kalman Filtering (IEKF). The algorithm first enhances ORB (Oriented FAST and Rotated BRIEF) features using Multi-Layer Perceptron (MLP) and Transformer architectures to improve feature robustness. It then integrates visual information and Inertial Measurement Unit (IMU) data through IEKF, constructing a complete residual model. The Schur complement is applied during covariance updates to compress the state dimension, improving computational efficiency and significantly enhancing the system’s ability to handle nonlinearities while maintaining real-time performance. Compared to traditional Extended Kalman Filtering (EKF), the proposed method demonstrates stronger stability and accuracy in high-dynamic scenarios. The experimental results show that the algorithm achieves superior state estimation performance on several typical visual–inertial datasets, demonstrating excellent accuracy and robustness.
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)
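
The Schur complement compresses the state dimension by eliminating one block of the normal equations while preserving the solution for the remaining block. A numpy sketch of the algebra, with a sanity check that the reduced system matches the full solve:

```python
import numpy as np

def schur_marginalize(H, g, na):
    """Eliminate the trailing block of the normal equations H x = g.

    H: (n, n) information matrix, g: (n,) information vector, with the
    first na entries the kept states and the rest the marginalized ones.
    Returns the reduced (na, na) system with an identical solution for x_a.
    """
    Haa, Hab = H[:na, :na], H[:na, na:]
    Hba, Hbb = H[na:, :na], H[na:, na:]
    ga, gb = g[:na], g[na:]
    Hbb_inv = np.linalg.inv(Hbb)            # small block, cheap to invert
    H_red = Haa - Hab @ Hbb_inv @ Hba       # Schur complement of Hbb
    g_red = ga - Hab @ Hbb_inv @ gb
    return H_red, g_red

# Sanity check: solving the reduced system matches the full solve
n, na = 6, 3
A = np.random.randn(n, n)
H = A @ A.T + n * np.eye(n)                 # symmetric positive definite
g = np.random.randn(n)
x_full = np.linalg.solve(H, g)[:na]
Hr, gr = schur_marginalize(H, g, na)
assert np.allclose(np.linalg.solve(Hr, gr), x_full)
```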

29 pages, 4413 KB  
Article
Advancing Road Infrastructure Safety with the Remotely Piloted Safety Cone
by Francisco Javier García-Corbeira, David Alvarez-Moyano, Pedro Arias Sánchez and Joaquin Martinez-Sanchez
Infrastructures 2025, 10(7), 160; https://doi.org/10.3390/infrastructures10070160 - 27 Jun 2025
Viewed by 906
Abstract
This article presents the design, implementation, and validation of a Remotely Piloted Safety Cone (RPSC), an autonomous robotic system developed to enhance safety and operational efficiency in road maintenance. The RPSC addresses challenges associated with road works, including workers’ exposure to traffic hazards and inefficiencies of traditional traffic cones, such as manual placement and retrieval, limited visibility in low-light conditions, and inability to adapt to dynamic changes in work zones. In contrast, the RPSC offers autonomous mobility, advanced visual signalling, and real-time communication capabilities, significantly improving safety and operational flexibility during maintenance tasks. The RPSC integrates sensor fusion, combining Global Navigation Satellite System (GNSS) with Real-Time Kinematic (RTK) for precise positioning, Inertial Measurement Unit (IMU) and encoders for accurate odometry, and obstacle detection sensors within an optimised navigation framework using Robot Operating System (ROS2) and Micro Air Vehicle Link (MAVLink) protocols. Complying with European regulations, the RPSC ensures structural integrity, visibility, stability, and regulatory compliance. Safety features include emergency stop capabilities, visual alarms, autonomous safety routines, and edge computing for rapid responsiveness. Field tests validated positioning accuracy below 30 cm, route deviations under 15 cm, and obstacle detection up to 4 m, significantly improved by Kalman filtering, aligning with digitalisation, sustainability, and occupational risk prevention objectives.
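
Encoder-based odometry on a differential-drive platform is typically the classic dead-reckoning update below; the wheel radius, track width, and encoder resolution are assumptions for illustration, not RPSC specifications:

```python
import numpy as np

def diff_drive_odometry(pose, ticks_l, ticks_r,
                        ticks_per_rev=2048, wheel_radius=0.08, track=0.35):
    """Advance (x, y, theta) from incremental left/right encoder ticks.

    Geometry values are assumed for illustration.
    """
    per_tick = 2 * np.pi * wheel_radius / ticks_per_rev
    dl, dr = ticks_l * per_tick, ticks_r * per_tick   # wheel arc lengths
    ds = (dl + dr) / 2.0                              # forward distance
    dth = (dr - dl) / track                           # heading change
    x, y, th = pose
    # Midpoint integration reduces error versus a straight Euler step
    x += ds * np.cos(th + dth / 2.0)
    y += ds * np.sin(th + dth / 2.0)
    return np.array([x, y, th + dth])
```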

19 pages, 2531 KB  
Article
Fusion-Based Localization System Integrating UWB, IMU, and Vision
by Zhongliang Deng, Haiming Luo, Xiangchuan Gao and Peijia Liu
Appl. Sci. 2025, 15(12), 6501; https://doi.org/10.3390/app15126501 - 9 Jun 2025
Viewed by 2666
Abstract
Accurate indoor positioning services have become increasingly important in modern applications. Various new indoor positioning methods have been developed. Among them, visual–inertial odometry (VIO)-based techniques are notably limited by lighting conditions, while ultrawideband (UWB)-based algorithms are highly susceptible to environmental interference. To address these limitations, this study proposes a hybrid indoor positioning algorithm that combines UWB and VIO. The method first utilizes a tightly coupled UWB/inertial measurement unit (IMU) fusion algorithm based on a sliding-window factor graph to obtain initial position estimates. These estimates are then combined with VIO outputs to formulate the system’s motion and observation models. Finally, an extended Kalman filter (EKF) is applied for data fusion to achieve optimal state estimation. The proposed hybrid positioning algorithm is validated on a self-developed mobile platform in an indoor environment. Experimental results show that, in indoor environments, the proposed method reduces the root mean square error (RMSE) by 67.6% and the maximum error by approximately 67.9% compared with the standalone UWB method. Compared with the stereo VIO model, the RMSE and maximum error are reduced by 55.4% and 60.4%, respectively. Furthermore, compared with the UWB/IMU fusion model, the proposed method achieves a 50.0% reduction in RMSE and a 59.1% reduction in maximum error.
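
The final EKF stage can be pictured as a constant-velocity prediction corrected by the position fix from the UWB/IMU stage. A compact 2D sketch of the predict/update cycle; the state layout and noise levels are assumptions:

```python
import numpy as np

# State: [x, y, vx, vy]; measurement z: position from the UWB/IMU stage
def ekf_step(x, P, z, dt, q=0.1, r=0.05):
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                 # constant-velocity prediction
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                # observe position only
    S = H @ P @ H.T + r * np.eye(2)        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # correct with the position fix
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = ekf_step(x, P, z=np.array([0.12, -0.03]), dt=0.05)
```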
