Simultaneous Localization and Mapping (SLAM) for Mobile Robot Navigation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 49832

Special Issue Editors


Guest Editor
School of Computing Science, University of Glasgow, Glasgow G12 8RZ, UK
Interests: cyber-physical security; localization/navigation with wireless communication systems; Internet of Things (IoT) using Machine Learning (ML) or Artificial Intelligence (AI) methodologies

Guest Editor
Weston Robot Pte Ltd, 81 Science Park Dr, #01-01, The Chadwick, Singapore 118257, Singapore
Interests: robotics; UWB localization; computer vision

Guest Editor
Samsung Israel R&D, Tel Aviv University, P.O. Box 39040, Tel Aviv 6997801, Israel
Interests: AI; machine learning; behavioral analysis; computer vision; optimization

Guest Editor
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Interests: wireless and IMU localization and navigation

Special Issue Information

Dear Colleagues,

Recent years have seen a surge of mobile robot technologies entering our daily lives. This trend accelerated during the COVID-19 pandemic, which amplified the need for automated mobile solutions, e.g., for delivery, surveillance, inspection, or mapping applications. However, for mobile robots to be deployed in a meaningful fashion, they need to be able to navigate safely in dynamic, possibly even unknown, environments and interact naturally with humans. Simultaneous Localization and Mapping (SLAM) is seen as one of the key enablers for the successful deployment of mobile robots.

Despite the popularity of SLAM, it remains challenging for SLAM algorithms to work robustly in dynamic, poorly lit, featureless, or unknown environments. In fully autonomous operation, data from computer vision, inertial, LiDAR, and other time-of-flight sensors are typically coupled with the latest Artificial Intelligence (AI) and Machine Learning (ML) techniques, such as Gaussian Process Regression and Graph Signal Processing, to overcome these technical hurdles.

The aim of this Special Issue is to present the current state of the art and novel techniques in SLAM that enable future applications of intelligent mobile robots in realistic environments. We look forward to the latest research proposing new algorithms and/or novel applications of SLAM for mobile robot navigation. We invite contributions on the following topics (among others):

  • Applications of SLAM for mobile robot navigation
  • AI and machine learning for mobile robot navigation
  • Map-based or landmark-based navigation
  • Vision-based mobile robot navigation
  • Data fusion for SLAM-based navigation using vision, inertial, LiDAR, UWB, or other time-of-flight sensors
  • Co-operative SLAM
  • 3D SLAM for indoor mapping
  • Fast SLAM for edge deployment

The use of various sensors for SLAM and navigation, such as LiDAR, stereo and mono-vision cameras, and other time-of-flight sensors, fits nicely within the scope of the Sensors journal and provides opportunities to equip edge devices with increased perception of their environment.

Dr. Henrik Hesse
Dr. Chee Kiat Seow
Dr. Yanliang Zhang
Dr. Torr Polakow
Dr. Kai Wen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (16 papers)

Research

17 pages, 3915 KiB  
Article
OTE-SLAM: An Object Tracking Enhanced Visual SLAM System for Dynamic Environments
by Yimeng Chang, Jun Hu and Shiyou Xu
Sensors 2023, 23(18), 7921; https://doi.org/10.3390/s23187921 - 15 Sep 2023
Cited by 4 | Viewed by 2198
Abstract
With the rapid development of autonomous driving and robotics applications in recent years, visual Simultaneous Localization and Mapping (SLAM) has become a hot research topic. The majority of visual SLAM systems rely on the assumption of scene rigidity, which may not always hold true in real applications. In dynamic environments, SLAM systems that do not account for dynamic objects will easily fail to estimate the camera pose. Some existing methods attempt to address this issue by simply excluding the dynamic features lying on moving objects, but this may lead to a shortage of features for tracking. To tackle this problem, we propose OTE-SLAM, an object tracking enhanced visual SLAM system, which not only tracks the camera motion but also tracks the movement of dynamic objects. Furthermore, we perform joint optimization of both the camera pose and object 3D position, enabling a mutual benefit between visual SLAM and object tracking. The experimental results demonstrate that the proposed approach improves the accuracy of the SLAM system in challenging dynamic environments, with maximum reductions in absolute trajectory error and relative trajectory error of 22% and 33%, respectively. Full article
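
To make the contrast the abstract draws concrete, here is a minimal Python sketch (synthetic data; not the authors' code): the camera pose is estimated from static features only, while a dynamic object's 3D centre is tracked with a simple constant-velocity model instead of being discarded. OTE-SLAM itself optimizes both quantities jointly; this toy version only alternates the two steps.

```python
import numpy as np
import cv2

# Synthetic static landmarks observed by a camera with known intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
static_3d = rng.uniform(-2.0, 2.0, (30, 3)) + np.array([0.0, 0.0, 8.0])
rvec_true = np.array([0.05, -0.02, 0.01])
tvec_true = np.array([0.30, 0.10, 0.00])
obs, _ = cv2.projectPoints(static_3d, rvec_true, tvec_true, K, None)

# Step 1: camera pose from static features only (PnP).
ok, rvec, tvec = cv2.solvePnP(static_3d, obs, K, None)

# Step 2: keep tracking the dynamic object instead of discarding it --
# constant-velocity prediction blended with a (toy) triangulated centre.
obj_pos = np.array([1.0, 0.0, 10.0])
obj_vel = np.array([0.2, 0.0, 0.0])
dt, alpha = 0.1, 0.5
pred = obj_pos + obj_vel * dt
meas = np.array([1.03, 0.0, 10.0])          # hypothetical measurement
obj_new = (1 - alpha) * pred + alpha * meas
obj_vel = (obj_new - obj_pos) / dt          # refreshed velocity estimate
print("PnP converged:", ok, "| object centre:", obj_new)
```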

25 pages, 3740 KiB  
Article
FirebotSLAM: Thermal SLAM to Increase Situational Awareness in Smoke-Filled Environments
by Benjamin Ronald van Manen, Victor Sluiter and Abeje Yenehun Mersha
Sensors 2023, 23(17), 7611; https://doi.org/10.3390/s23177611 - 2 Sep 2023
Viewed by 2017
Abstract
Operating in extreme environments is often challenging due to the lack of perceptual knowledge. During fire incidents in large buildings, the extreme levels of smoke can seriously impede a firefighter’s vision, potentially leading to severe material damage and loss of life. To increase the safety of firefighters, research is conducted in collaboration with Dutch fire departments into the usability of Unmanned Ground Vehicles to increase situational awareness in hazardous environments. This paper proposes FirebotSLAM, the first algorithm capable of coherently computing a robot’s odometry while creating a comprehensible 3D map solely using the information extracted from thermal images. The literature showed that the most challenging aspect of thermal Simultaneous Localization and Mapping (SLAM) is the extraction of robust features in thermal images. Therefore, a practical benchmark of feature extraction and description methods was performed on datasets recorded during a fire incident. The best-performing combination of extractor and descriptor is then implemented into a state-of-the-art visual SLAM algorithm. As a result, FirebotSLAM is the first thermal odometry algorithm able to perform global trajectory optimization by detecting loop closures. Finally, FirebotSLAM is the first thermal SLAM algorithm to be tested in a fiery environment to validate its applicability in an operational scenario. Full article
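
The benchmarking step described here, comparing feature extractor/descriptor combinations on thermal frames, can be prototyped in a few lines of OpenCV. The sketch below (hypothetical file names; the paper's actual benchmark covers more methods and fire-incident datasets) counts keypoints and ratio-test matches for three binary descriptors:

```python
import cv2

# Hypothetical thermal frames from a recorded fire-incident sequence.
img1 = cv2.imread("thermal_0001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("thermal_0002.png", cv2.IMREAD_GRAYSCALE)
assert img1 is not None and img2 is not None, "replace with real file paths"

for name, det in [("ORB", cv2.ORB_create(1000)),
                  ("AKAZE", cv2.AKAZE_create()),
                  ("BRISK", cv2.BRISK_create())]:
    kp1, des1 = det.detectAndCompute(img1, None)
    kp2, des2 = det.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)      # 2-NN per feature
    good = [m for m, n in matches if m.distance < 0.8 * n.distance]
    print(f"{name}: {len(kp1)}/{len(kp2)} keypoints, {len(good)} ratio-test matches")
```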

23 pages, 15809 KiB  
Article
SLAMICP Library: Accelerating Obstacle Detection in Mobile Robot Navigation via Outlier Monitoring following ICP Localization
by Eduard Clotet and Jordi Palacín
Sensors 2023, 23(15), 6841; https://doi.org/10.3390/s23156841 - 1 Aug 2023
Cited by 10 | Viewed by 5559
Abstract
The Iterative Closest Point (ICP) is a matching technique used to determine the transformation matrix that best minimizes the distance between two point clouds. Although mostly used for 2D and 3D surface reconstruction, this technique is also widely used for mobile robot self-localization by means of matching partial information provided by an onboard LIDAR scanner with a known map of the facility. Once the estimated position of the robot is obtained, the scans gathered by the LIDAR can be analyzed to locate possible obstacles obstructing the planned trajectory of the mobile robot. This work proposes to speed up the obstacle detection process by directly monitoring outliers (discrepant points between the LIDAR scans and the full map) spotted after ICP matching instead of spending time performing an isolated task to re-analyze the LIDAR scans to detect those discrepancies. In this work, a computationally optimized ICP implementation has been adapted to return the list of outliers along with other matching metrics, computed in an optimal way by taking advantage of the parameters already calculated in order to perform the ICP matching. The evaluation of this adapted ICP implementation in a real mobile robot application has shown that the time required to perform self-localization and obstacle detection has been reduced by 36.7% when obstacle detection is performed simultaneously with the ICP matching instead of implementing a redundant procedure for obstacle detection. The adapted ICP implementation is provided in the SLAMICP library. Full article
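
The key point, reusing the correspondences already computed during matching so that obstacle detection comes almost for free, can be illustrated with a small numpy/scipy sketch. This illustrates the idea only and is not the SLAMICP library's interface:

```python
import numpy as np
from scipy.spatial import cKDTree

def outliers_after_icp(scan_xy, map_xy, pose, dist_thresh=0.15):
    """Transform a 2D scan by the ICP pose estimate (x, y, theta) and return
    the scan points with no map correspondence within dist_thresh metres:
    the obstacle candidates that fall out of the matching step."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    world = scan_xy @ R.T + np.array([x, y])   # scan in map coordinates
    d, _ = cKDTree(map_xy).query(world)        # nearest map point distances
    return world[d > dist_thresh]              # outliers = possible obstacles
```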

19 pages, 5236 KiB  
Article
Tightly Coupled LiDAR-Inertial Odometry and Mapping for Underground Environments
by Jianhong Chen, Hongwei Wang and Shan Yang
Sensors 2023, 23(15), 6834; https://doi.org/10.3390/s23156834 - 31 Jul 2023
Cited by 2 | Viewed by 2349
Abstract
The demand for autonomous exploration and mapping of underground environments has significantly increased in recent years. However, accurately localizing and mapping robots in subterranean settings presents notable challenges. This paper presents a tightly coupled LiDAR-Inertial odometry system that combines the NanoGICP point cloud registration method with IMU pre-integration using incremental smoothing and mapping. Specifically, a point cloud affected by dust particles is first filtered out and separated into ground and non-ground point clouds (for ground vehicles). To maintain accuracy in environments with spatial variations, an adaptive voxel filter is employed, which reduces computation time while preserving accuracy. The estimated motion derived from IMU pre-integration is utilized to correct point cloud distortion and provide an initial estimation for LiDAR odometry. Subsequently, a scan-to-map point cloud registration is executed using NanoGICP to obtain a more refined pose estimation. The resulting LiDAR odometry is then employed to estimate the bias of the IMU. We comprehensively evaluated our system on established subterranean datasets. These datasets were collected by two separate teams using different platforms during the DARPA Subterranean (SubT) Challenge. The experimental results demonstrate that our system achieved performance enhancements as high as 50–60% in terms of root mean square error (RMSE). Full article
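
As a rough sketch of the IMU pre-integration step used to undistort scans and seed the LiDAR odometry, the following Python accumulates the relative rotation, velocity, and position deltas between two keyframes. Bias and noise-covariance propagation, which a real system such as the one described must track, are omitted:

```python
import numpy as np

def exp_so3(phi):
    """Rodrigues formula: map a rotation vector to a rotation matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3)
    k = phi / angle
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Accumulate relative rotation/velocity/position deltas between two
    LiDAR keyframes from raw IMU samples. Gravity is accounted for later,
    when the deltas are composed with the world-frame state."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ np.asarray(a)) * dt**2
        dv = dv + (dR @ np.asarray(a)) * dt
        dR = dR @ exp_so3(np.asarray(w) * dt)
    return dR, dv, dp
```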

25 pages, 85034 KiB  
Article
360° Map Establishment and Real-Time Simultaneous Localization and Mapping Based on Equirectangular Projection for Autonomous Driving Vehicles
by Bo-Hong Lin, Vinay M. Shivanna, Jiun-Shiung Chen and Jiun-In Guo
Sensors 2023, 23(12), 5560; https://doi.org/10.3390/s23125560 - 14 Jun 2023
Viewed by 1698
Abstract
This paper proposes the design of a 360° map establishment and real-time simultaneous localization and mapping (SLAM) algorithm based on equirectangular projection. All equirectangular projection images with an aspect ratio of 2:1 are supported as input to the proposed system, allowing an unlimited number and arrangement of cameras. Firstly, the proposed system uses dual back-to-back fisheye cameras to capture 360° images, followed by the adoption of a perspective transformation with any given yaw angle to shrink the feature extraction area in order to reduce the computational time while retaining the 360° field of view. Secondly, the oriented FAST and rotated BRIEF (ORB) feature points extracted from perspective images with GPU acceleration are used for tracking, mapping, and camera pose estimation in the system. The 360° binary map supports saving, loading, and online updating to enhance the flexibility, convenience, and stability of the 360° system. The proposed system is also implemented on an NVIDIA Jetson TX2 embedded platform with a 1% accumulated RMS error over 250 m. The average performance of the proposed system achieves 20 frames per second (FPS) in the case of a single fisheye camera at a resolution of 1024 × 768, while the system simultaneously performs panoramic stitching and blending at a resolution of 1416 × 708 from a dual-fisheye camera. Full article
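
The perspective transformation at the heart of this pipeline, sampling a pinhole view at a chosen yaw out of a 2:1 equirectangular frame, can be sketched in plain numpy as below (nearest-neighbour sampling, simplified axis conventions; not the authors' implementation):

```python
import numpy as np

def equirect_to_perspective(eq_img, yaw_deg, fov_deg=90.0, out_w=640, out_h=480):
    """Sample a pinhole view with the given yaw from a 2:1 equirectangular
    image (nearest-neighbour, simplified conventions)."""
    H, W = eq_img.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    rays = np.stack([(u - out_w / 2) / f,
                     (v - out_h / 2) / f,
                     np.ones_like(u, dtype=float)], axis=-1)
    yaw = np.radians(yaw_deg)
    R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                  [0, 1, 0],
                  [-np.sin(yaw), 0, np.cos(yaw)]])
    d = rays @ R.T                                 # rotated viewing rays
    lon = np.arctan2(d[..., 0], d[..., 2])         # longitude in [-pi, pi]
    lat = np.arcsin(d[..., 1] / np.linalg.norm(d, axis=-1))
    px = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    py = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return eq_img[py, px]
```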

22 pages, 18884 KiB  
Article
LiDAR Inertial Odometry Based on Indexed Point and Delayed Removal Strategy in Highly Dynamic Environments
by Weizhuang Wu and Wanliang Wang
Sensors 2023, 23(11), 5188; https://doi.org/10.3390/s23115188 - 30 May 2023
Cited by 2 | Viewed by 2046
Abstract
Simultaneous localization and mapping (SLAM) is considered a challenge in environments with many moving objects. This paper proposes a novel LiDAR inertial odometry framework for dynamic scenes, LiDAR inertial odometry based on indexed point and delayed removal strategy (ID-LIO), which builds on LiDAR inertial odometry via smoothing and mapping (LIO-SAM). To detect the point clouds on moving objects, a dynamic point detection method is integrated, based on pseudo-occupancy along the spatial dimension. Then, we present a dynamic point propagation and removal algorithm based on indexed points to remove more dynamic points on the local map along the temporal dimension and to update the status of the point features in keyframes. In the LiDAR odometry module, a delayed removal strategy is proposed for historical keyframes, and the sliding-window-based optimization includes the LiDAR measurements with dynamic weights to reduce the error from dynamic points in keyframes. We perform experiments on both public low-dynamic and high-dynamic datasets. The results show that the proposed method greatly increases localization accuracy in high-dynamic environments, and the absolute trajectory error (ATE) and average root mean square error (RMSE) of our ID-LIO are improved by 67% and 85% on the UrbanLoco-CAMarketStreet and UrbanNav-HK-Medium-Urban-1 datasets, respectively, when compared with LIO-SAM. Full article
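
A crude stand-in for the pseudo-occupancy test along the spatial dimension might look like the following: voxels of the current scan that were rarely occupied across recent scans are flagged as dynamic. The real ID-LIO pipeline adds indexed-point propagation and delayed removal of keyframe points; this sketch shows only the first gate:

```python
import numpy as np
from collections import defaultdict

def split_dynamic(curr_pts, recent_scans, voxel=0.2, min_hits=2):
    """Flag current points (Nx3) whose voxel was occupied in fewer than
    min_hits of the recent scans; returns (dynamic_pts, static_pts)."""
    hits = defaultdict(int)
    for scan in recent_scans:
        for key in {tuple(v) for v in np.floor(scan / voxel).astype(int)}:
            hits[key] += 1
    keys = np.floor(curr_pts / voxel).astype(int)
    dyn = np.array([hits[tuple(k)] < min_hits for k in keys])
    return curr_pts[dyn], curr_pts[~dyn]
```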

13 pages, 13033 KiB  
Article
Uncontrolled Two-Step Iterative Calibration Algorithm for Lidar–IMU System
by Shilun Yin, Donghai Xie, Yibo Fu, Zhibo Wang and Ruofei Zhong
Sensors 2023, 23(6), 3119; https://doi.org/10.3390/s23063119 - 14 Mar 2023
Viewed by 2200
Abstract
Calibration of sensors is critical for the precise functioning of lidar–IMU systems. However, the accuracy of the system can be compromised if motion distortion is not considered. This study proposes a novel uncontrolled two-step iterative calibration algorithm that eliminates motion distortion and improves the accuracy of lidar–IMU systems. Initially, the algorithm corrects the distortion of rotational motion by matching the original inter-frame point cloud. Then, the point cloud is further matched with IMU after the prediction of attitude. The algorithm performs iterative motion distortion correction and rotation matrix calculation to obtain high-precision calibration results. In comparison with existing algorithms, the proposed algorithm boasts high accuracy, robustness, and efficiency. This high-precision calibration result can benefit a wide range of acquisition platforms, including handheld, unmanned ground vehicle (UGV), and backpack lidar–IMU systems. Full article
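
The rotation-matrix calculation inside such a calibration loop is often solved in closed form. Below is a sketch using the Kabsch/SVD method on paired angular-velocity vectors from the two sensors; the paper's algorithm wraps a step like this in an iterative loop with motion-distortion correction:

```python
import numpy as np

def calibrate_rotation(omega_lidar, omega_imu):
    """Closed-form (Kabsch/SVD) estimate of the fixed rotation R such that
    omega_lidar[i] ~= R @ omega_imu[i], from Nx3 arrays of paired angular
    velocities."""
    H = omega_imu.T @ omega_lidar                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```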

16 pages, 10120 KiB  
Article
HFNet-SLAM: An Accurate and Real-Time Monocular SLAM System with Deep Features
by Liming Liu and Jonathan M. Aitken
Sensors 2023, 23(4), 2113; https://doi.org/10.3390/s23042113 - 13 Feb 2023
Cited by 7 | Viewed by 5128
Abstract
Image tracking and retrieval strategies are of vital importance in visual Simultaneous Localization and Mapping (SLAM) systems. For most state-of-the-art systems, hand-crafted features and bag-of-words (BoW) algorithms are the common solutions. Recent research reports the vulnerability of these traditional algorithms in complex environments. To replace these methods, this work proposes HFNet-SLAM, an accurate and real-time monocular SLAM system built on the ORB-SLAM3 framework and incorporating deep convolutional neural networks (CNNs). This work provides a pipeline of feature extraction, keypoint matching, and loop detection fully based on features from CNNs. The performance of this system has been validated on public datasets against other state-of-the-art algorithms. The results reveal that HFNet-SLAM achieves the lowest errors among systems available in the literature. Notably, HFNet-SLAM obtains an average accuracy of 2.8 cm on the EuRoC dataset in a pure visual configuration. Moreover, it doubles the accuracy in medium and large environments of the TUM-VI dataset compared with ORB-SLAM3. Furthermore, with the optimisation of TensorRT technology, the entire system can run in real time at 50 FPS. Full article
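
Replacing BoW retrieval with CNN descriptors usually comes down to nearest-neighbour search in descriptor space. A minimal numpy sketch of mutual nearest-neighbour matching on L2-normalised deep descriptors (illustrative only, not the HFNet-SLAM code):

```python
import numpy as np

def mutual_nn_match(desc1, desc2):
    """Mutual nearest-neighbour matching on L2-normalised descriptor arrays
    (N1xD and N2xD); returns index pairs that agree in both directions."""
    sim = desc1 @ desc2.T            # cosine similarity matrix
    nn12 = sim.argmax(axis=1)        # best match in desc2 for each desc1
    nn21 = sim.argmax(axis=0)        # best match in desc1 for each desc2
    idx1 = np.arange(len(desc1))
    mutual = nn21[nn12] == idx1
    return np.stack([idx1[mutual], nn12[mutual]], axis=1)
```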

21 pages, 658 KiB  
Article
DPO: Direct Planar Odometry with Stereo Camera
by Filipe C. A. Lins, Nicolas S. Rosa, Valdir Grassi, Jr., Adelardo A. D. Medeiros and Pablo J. Alsina
Sensors 2023, 23(3), 1393; https://doi.org/10.3390/s23031393 - 26 Jan 2023
Cited by 1 | Viewed by 2456
Abstract
Nowadays, state-of-the-art direct visual odometry (VO) methods essentially rely on points to estimate the pose of the camera and reconstruct the environment. Direct Sparse Odometry (DSO) became the standard technique and many approaches have been developed from it. However, only recently, two monocular plane-based DSOs have been presented. The first one uses a learning-based plane estimator to generate coarse planes as input for optimization. When these coarse estimates are too far from the minimum, the optimization may fail. Thus, the entire system result is dependent on the quality of the plane predictions and restricted to the training data domain. The second one only detects planes in vertical and horizontal orientation as being more adequate to structured environments. To the best of our knowledge, we propose the first Stereo Plane-based VO inspired by the DSO framework. Differing from the above-mentioned methods, our approach purely uses planes as features in the sliding window optimization and uses a dual quaternion as pose parameterization. The conducted experiments showed that our method presents a similar performance to Stereo DSO, a point-based approach. Full article
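
The dual-quaternion pose parameterization the paper adopts can be summarised compactly: a rigid motion is q_r + ε q_d with q_d = ½ t ⊗ q_r, and motions compose by dual-quaternion multiplication. A small Python sketch (toy helper names, [w, x, y, z] convention assumed):

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dq_from_rt(q, t):
    """Unit dual quaternion (real, dual) for rotation q and translation t,
    with dual part 0.5 * (0, t) * q."""
    return q, 0.5 * q_mul(np.array([0.0, *t]), q)

def dq_mul(a, b):
    """Compose two rigid-body motions given as (real, dual) pairs."""
    ra, da = a
    rb, db = b
    return q_mul(ra, rb), q_mul(ra, db) + q_mul(da, rb)
```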

19 pages, 47788 KiB  
Article
Analysis of Lidar Actuator System Influence on the Quality of Dense 3D Point Cloud Obtained with SLAM
by Paweł Trybała, Jarosław Szrek, Błażej Dębogórski, Bartłomiej Ziętek, Jan Blachowski, Jacek Wodecki and Radosław Zimroz
Sensors 2023, 23(2), 721; https://doi.org/10.3390/s23020721 - 8 Jan 2023
Cited by 5 | Viewed by 3341
Abstract
Mobile mapping technologies, based on techniques such as simultaneous localization and mapping (SLAM) and structure-from-motion (SfM), are being vigorously developed both in the scientific community and in industry. They are crucial concepts for automated 3D surveying and autonomous vehicles. For various applications, rotating multiline scanners, manufactured, for example, by Velodyne and Ouster, are utilized as the main sensor of the mapping hardware system. However, their principle of operation has a substantial drawback, as their scanning pattern creates natural gaps between the scanning lines. In some models, the vertical lidar field of view can also be severely limited. To overcome these issues, more sensors could be employed, which would significantly increase the cost of the mapping system. Instead, some investigators have added a tilting or rotating motor to the lidar. Although the effectiveness of such a solution is usually clearly visible, its impact on the quality of the acquired 3D data has not yet been investigated. This paper presents an adjustable mapping system, which allows for switching between a stable, tilting or fully rotating lidar position. A simple experiment in a building corridor was performed, simulating the conditions of a mobile robot passing through a narrow tunnel: a common setting for applications such as mining surveying or industrial facility inspection. A SLAM algorithm is utilized to create a coherent 3D point cloud of the mapped corridor for three settings of the sensor movement. The extent of improvement in the 3D data quality when using the tilting and rotating lidar, compared to keeping a stable position, is quantified. Different metrics are proposed to account for different aspects of the 3D data quality, such as completeness, density and geometry coherence. The ability of SLAM algorithms to faithfully represent selected objects appearing in the mapped scene is also examined. The results show that the fully rotating solution is optimal in terms of most of the metrics analyzed. However, the improvement observed from a horizontally mounted sensor to a tilting sensor was the most significant. Full article
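
Two of the simpler quality metrics mentioned, density and completeness, can be computed directly from the point cloud with a k-d tree. The sketch below assumes a hypothetical ref_grid sampling of the ground-truth surface and does not reproduce the paper's exact metric definitions:

```python
import numpy as np
from scipy.spatial import cKDTree

def density_and_completeness(cloud, ref_grid, radius=0.05, cell=0.10):
    """Local density: mean neighbour count within `radius` around each point.
    Completeness: fraction of reference-surface samples (ref_grid, Nx3) that
    have a cloud point within `cell` metres."""
    tree = cKDTree(cloud)
    density = np.mean([len(n) for n in tree.query_ball_point(cloud, radius)])
    dists, _ = tree.query(ref_grid, distance_upper_bound=cell)
    return density, np.mean(dists < cell)      # misses come back as inf
```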

18 pages, 2894 KiB  
Article
OMC-SLIO: Online Multiple Calibrations Spinning LiDAR Inertial Odometry
by Shuang Wang, Hua Zhang and Guijin Wang
Sensors 2023, 23(1), 248; https://doi.org/10.3390/s23010248 - 26 Dec 2022
Cited by 4 | Viewed by 2767
Abstract
Light detection and ranging (LiDAR) is often combined with an inertial measurement unit (IMU) to obtain LiDAR inertial odometry (LIO) for robot localization and mapping. In order to apply LIO efficiently and non-specialistically, self-calibrating LIO is a hot research topic in the related community. Spinning LiDAR (SLiDAR), which uses an additional rotating mechanism to spin a common LiDAR and scan the surrounding environment, achieves a large field of view (FoV) at low cost. Unlike common LiDAR, in addition to the calibration between the IMU and the LiDAR, a self-calibrating odometer for SLiDAR must also consider the mechanism calibration between the rotating mechanism and the LiDAR. However, existing self-calibration LIO methods require the LiDAR to be rigidly attached to the IMU and do not take the mechanism calibration into account, so they cannot be applied to SLiDAR. In this paper, we propose a novel self-calibrating odometry scheme for SLiDAR, named the online multiple calibration spinning LiDAR inertial odometry (OMC-SLIO) method, which allows online estimation of multiple extrinsic parameters among the LiDAR, rotating mechanism, and IMU, as well as the odometer state. Specifically, considering that the rotating and static parts of the motor encoder inside the SLiDAR are rigidly connected to the LiDAR and IMU, respectively, we formulate the calibration within the SLiDAR as two separate sets of calibrations: the mechanism calibration between the LiDAR and the rotating part of the motor encoder, and the sensor calibration between the static part of the motor encoder and the IMU. Based on such a SLiDAR calibration formulation, we can construct a well-defined kinematic model from the LiDAR to the IMU using the angular information from the motor encoder. Based on the kinematic model, a two-stage motion compensation method is presented to eliminate the point cloud distortion resulting from LiDAR spinning and platform motion. Furthermore, the mechanism and sensor calibrations, as well as the odometer state, are wrapped in a measurement model and estimated via an error-state iterative extended Kalman filter (ESIEKF). Experimental results show that our OMC-SLIO is effective and attains excellent performance. Full article
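
The kinematic model from the LiDAR to the IMU reduces to composing three transforms: the mechanism extrinsic, the encoder rotation about the spin axis, and the sensor extrinsic. A minimal sketch with 4 × 4 homogeneous matrices (the argument names and the z spin axis are assumptions):

```python
import numpy as np

def lidar_to_imu(T_lidar_rot, encoder_angle, T_stat_imu):
    """Chain a spinning-LiDAR point into the IMU frame:
    LiDAR -> rotating encoder part (mechanism calibration), encoder spin
    about the z axis, static encoder part -> IMU (sensor calibration).
    All arguments are 4x4 homogeneous transforms except the angle."""
    c, s = np.cos(encoder_angle), np.sin(encoder_angle)
    T_enc = np.eye(4)
    T_enc[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return T_stat_imu @ T_enc @ T_lidar_rot
```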

15 pages, 760 KiB  
Article
xRatSLAM: An Extensible RatSLAM Computational Framework
by Mauro Enrique de Souza Muñoz, Matheus Chaves Menezes, Edison Pignaton de Freitas, Sen Cheng, Paulo Rogério de Almeida Ribeiro, Areolino de Almeida Neto and Alexandre César Muniz de Oliveira
Sensors 2022, 22(21), 8305; https://doi.org/10.3390/s22218305 - 29 Oct 2022
Cited by 1 | Viewed by 2348
Abstract
Simultaneous localization and mapping (SLAM) refers to techniques for autonomously constructing a map of an unknown environment while, at the same time, locating the robot in this map. RatSLAM, a prevalent method, is based on the navigation system found in rodent brains. It has served as a base algorithm for other bioinspired approaches, and its implementation has been extended to incorporate new features. This work proposes xRatSLAM: an extensible, parallel, open-source framework for developing and testing new RatSLAM variations. Tests were carried out to evaluate and validate the proposed framework, comparing xRatSLAM with OpenRatSLAM and assessing the impact of replacing framework components. The results provide evidence that the maps produced by xRatSLAM are similar to those produced by OpenRatSLAM when fed with the same input parameters, and that the implemented modules can be changed easily without impacting other parts of the framework. Full article
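
The "replaceable component" idea can be pictured as modules behind a common interface, e.g., a swappable visual-template stage as sketched below. The interface and class names are illustrative, not xRatSLAM's actual API:

```python
from abc import ABC, abstractmethod
import numpy as np

class LocalViewModule(ABC):
    """Swappable visual-template stage: implementations can be exchanged
    without touching the rest of the pipeline."""
    @abstractmethod
    def match(self, frame: np.ndarray) -> int:
        """Return the id of the matched (or newly created) view template."""

class SumOfAbsDiffs(LocalViewModule):
    """RatSLAM-style matching on downsampled column-intensity profiles."""
    def __init__(self, threshold=0.1):
        self.templates, self.threshold = [], threshold

    def match(self, frame):
        profile = frame.mean(axis=0) / 255.0      # 1D intensity profile
        for i, t in enumerate(self.templates):
            if np.abs(profile - t).mean() < self.threshold:
                return i                          # familiar view
        self.templates.append(profile)
        return len(self.templates) - 1            # new template
```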

21 pages, 1774 KiB  
Article
Kinematic/Dynamic SLAM for Autonomous Vehicles Using the Linear Parameter Varying Approach
by Pau Vial and Vicenç Puig
Sensors 2022, 22(21), 8211; https://doi.org/10.3390/s22218211 - 26 Oct 2022
Cited by 2 | Viewed by 2244
Abstract
Most existing algorithms in mobile robotics consider a kinematic robot model for the Simultaneous Localization and Mapping (SLAM) problem. However, in the case of autonomous vehicles, because of the increase in mass and velocities, a kinematic model is not enough to characterize some physical effects, such as the slip angle. For this reason, when applying SLAM to autonomous vehicles, the model used should be augmented to consider both kinematic and dynamic behaviours. The inclusion of dynamic behaviour implies that the nonlinearities of the vehicle model become more important. For this reason, classical observation techniques based on the linearization of the system model around the operating point, such as the well-known Extended Kalman Filter (EKF), should be improved. Consequently, new advanced control techniques must be introduced to treat the nonlinearities of the involved models more efficiently. The Linear Parameter Varying (LPV) technique allows working with nonlinear models by making a pseudolinear representation and establishing systematic methodologies to design state estimation schemes subject to several specifications. In recent years, this advanced technique has proven very useful in practice and has already been implemented in a wide variety of application fields. In this article, we present a SLAM-based localization system for an autonomous vehicle that considers the dynamic behaviour using LPV techniques. Comparison results are provided to show how our proposal outperforms classical observation techniques based on model linearization. Full article
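
In the polytopic LPV setting, the model matrices and the observer gain are convex combinations of vertex designs, weighted online by the scheduling parameters. A one-step sketch in generic notation (not the authors' observer design):

```python
import numpy as np

def lpv_observer_step(x_hat, u, y, rho, A_vertices, B, C, L_vertices):
    """One predict/correct step of a polytopic LPV observer: model matrix A
    and gain L are convex combinations of vertex matrices, weighted by the
    scheduling parameters rho (non-negative, summing to 1)."""
    A = sum(r * Ai for r, Ai in zip(rho, A_vertices))
    L = sum(r * Li for r, Li in zip(rho, L_vertices))
    x_pred = A @ x_hat + B @ u              # scheduled model prediction
    return x_pred + L @ (y - C @ x_pred)    # output-injection correction
```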

33 pages, 12814 KiB  
Article
Development of an Online Adaptive Parameter Tuning vSLAM Algorithm for UAVs in GPS-Denied Environments
by Chieh-Li Chen, Rong He and Chao-Chung Peng
Sensors 2022, 22(20), 8067; https://doi.org/10.3390/s22208067 - 21 Oct 2022
Cited by 5 | Viewed by 2364
Abstract
In recent years, unmanned aerial vehicles (UAVs) have been applied in many fields owing to their mature flight control technology and easy-to-operate characteristics. These UAV-related applications rely heavily on location information provided by the positioning system. Most UAVs nowadays use a global navigation satellite system (GNSS) to obtain location information. However, this outside-in, third-party positioning system is particularly susceptible to environmental interference and cannot be used in indoor environments, which limits the application diversity of UAVs. To deal with this problem, in this paper, a stereo-based visual simultaneous localization and mapping (vSLAM) technology is applied. The presented vSLAM algorithm fuses onboard inertial measurement unit (IMU) information to solve the navigation problem in an unknown environment without the use of a GNSS signal and provides reliable localization information. The overall visual positioning system is based on the stereo parallel tracking and mapping architecture (S-PTAM). However, experiments found that the feature-matching threshold has a significant impact on positioning accuracy. Selection of the threshold is based on the Hamming distance, which has no physical meaning and makes the threshold quite difficult to set manually. Therefore, this work develops an online adaptive matching threshold based on the keyframe poses. Experiments show that the developed adaptive matching threshold improves positioning accuracy. Since the attitude calculation of the IMU is carried out with a Mahony complementary filter, the difference between the measured acceleration and gravity is used as the metric to tune the gain value online, which can improve the accuracy of attitude estimation under aggressive motions. Moreover, a static-state detection algorithm based on the moving-window method and measured acceleration is proposed to accurately calculate the conversion mechanism between the vSLAM system and the IMU information; this initialization mechanism helps the IMU provide a better initial guess for the bundle adjustment (BA) algorithm in the tracking thread. Finally, a performance evaluation of the proposed algorithm is conducted on the popular EuRoC dataset. All the experimental results show that the developed online adaptive parameter tuning algorithm can effectively improve vSLAM accuracy and robustness. Full article
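
The static-state detection idea, declaring the platform stationary when a moving window of accelerometer samples stays close to gravity, can be sketched in a few lines (the threshold tol is an assumed tuning parameter):

```python
import numpy as np

def is_static(accel_window, g=9.81, tol=0.05):
    """Moving-window static test on an Nx3 block of accelerometer samples:
    every sample magnitude stays within tol*g of gravity and the window
    variance is small."""
    norms = np.linalg.norm(accel_window, axis=1)
    return bool(np.all(np.abs(norms - g) < tol * g)
                and norms.var() < (tol * g) ** 2)
```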

18 pages, 335944 KiB  
Article
Persistent Mapping of Sensor Data for Medium-Term Autonomy
by Kevin Nickels, Jason Gassaway, Matthew Bries, David Anthony and Graham W. Fiorani
Sensors 2022, 22(14), 5427; https://doi.org/10.3390/s22145427 - 20 Jul 2022
Cited by 1 | Viewed by 1687
Abstract
For vehicles to operate in unmapped areas with some degree of autonomy, it would be useful to aggregate and store processed sensor data so that they can be used later. In this paper, a tool that records and optimizes the placement of costmap data on a persistent map is presented. The optimization takes several factors into account, including local vehicle odometry, GPS signals when available, local map consistency, deformation of map regions, and proprioceptive GPS offset error. Results illustrating the creation of maps of previously unseen regions (a 100 m × 880 m test track and a 1.2 km dirt trail) are presented, with and without GPS signals available during the creation of the maps. Finally, two examples of the use of these maps are given. First, a path is planned along roads that were seen exactly once during the mapping phase. Second, the map is used for vehicle localization in the absence of GPS signals. Full article

Review

17 pages, 3930 KiB  
Review
A Review on Visual-SLAM: Advancements from Geometric Modelling to Learning-Based Semantic Scene Understanding Using Multi-Modal Sensor Fusion
by Tin Lai
Sensors 2022, 22(19), 7265; https://doi.org/10.3390/s22197265 - 25 Sep 2022
Cited by 10 | Viewed by 5301
Abstract
Simultaneous Localisation and Mapping (SLAM) is one of the fundamental problems in autonomous mobile robots, where a robot needs to reconstruct a previously unseen environment while simultaneously localising itself with respect to the map. In particular, Visual-SLAM uses various sensors on the mobile robot to collect and build a representation of the map. Traditionally, geometric model-based techniques were used to tackle the SLAM problem; these tend to be error-prone in challenging environments. Recent advancements in computer vision, such as deep learning techniques, have provided a data-driven approach to tackling the Visual-SLAM problem. This review summarises recent advancements in the Visual-SLAM domain using various learning-based methods. We begin by providing a concise overview of the geometric model-based approaches, followed by technical reviews of the current paradigms in SLAM. Then, we present the various learning-based approaches to collecting sensory inputs from mobile robots and performing scene understanding. The current paradigms in deep-learning-based semantic understanding are discussed and placed in the context of Visual-SLAM. Finally, we discuss challenges and further opportunities in the direction of learning-based approaches in Visual-SLAM. Full article