Review

A Review of 2D Lidar SLAM Research

College of Mechanical and Electrical Engineering, Hohai University, Changzhou 213200, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(7), 1214; https://doi.org/10.3390/rs17071214
Submission received: 17 December 2024 / Revised: 11 March 2025 / Accepted: 20 March 2025 / Published: 28 March 2025

Abstract

Two-dimensional (2D) simultaneous localization and mapping (SLAM) is a key technology for intelligent indoor robots. By using a map generated via SLAM, the robot can navigate and perform specific tasks. This paper reviews the progress of 2D Lidar SLAM algorithms based on four principles: filter-based SLAM, matching-based SLAM, graph optimization-based SLAM, and deep learning-based SLAM, highlighting their advantages, disadvantages, and applicability. Additionally, two key research topics in 2D Lidar SLAM are presented: solutions for dynamic objects during mapping and the fusion of 2D Lidar and vision data. Finally, the development trends of 2D SLAM are discussed.

1. Introduction

Indoor robots have found extensive applications in modern production processes and household tasks, including automated guided vehicles (AGVs), logistics distribution, and house cleaning. One of the key technologies of indoor robots is autonomous navigation [1,2]. To enable robots to navigate in different scenes, SLAM is used to build an environment map in advance [3,4]. In an uncharted environment, a mobile robot can estimate its position and orientation by perceiving environmental information through sensors while simultaneously constructing a progressively updated map based on its pose. Based on the sensor type, SLAM can be categorized into vision-based and laser-based SLAM [5,6,7]. At present, visual SLAM often uses RGB-D cameras, monocular cameras, and binocular cameras. Mainstream open-source visual SLAM systems include MonoSLAM [8], ORB-SLAM2 [9], and VINS-Mono [10]. Visual SLAM offers rich texture information and provides a variety of feature information, but it is greatly affected by ambient light and cannot work in dark or texture-less areas. Since Lidar offers a wide field of view, high precision, and immunity to light intensity variations, laser-based SLAM is widely used in robotic applications. In particular, 2D Lidar SLAM is more cost-effective than 3D Lidar SLAM and is primarily used for indoor robots [11].
Two-dimensional Lidar SLAM is typically divided into four parts: front-end odometry, back-end optimization, mapping, and loop detection, as shown in Figure 1. First, the 2D Lidar collects environmental information and outputs point cloud data; then, the front-end odometry module receives and preprocesses the data, estimates the pose through matching, and generates local sub-maps. Next, using pose information and motion constraints, methods such as filtering and graph optimization are applied in the back-end optimization module; this eliminates cumulative errors in the local map, further refines the robot's pose and sub-maps, and corrects the global map. Finally, loop detection integrates the robot's pose and inter-frame information to determine whether the robot has passed through a similar location, eliminating map distortion caused by cumulative errors so as to achieve a globally consistent map.
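To make this division of labor concrete, the following minimal Python sketch shows how front-end matching, constraint building, loop detection, and back-end optimization typically interact in one processing loop; the types and the injected function signatures are illustrative and are not taken from any cited implementation.

```python
# Minimal sketch of the four-module 2D Lidar SLAM loop described above.
# The scan matcher, loop detector, and graph optimizer are injected as
# callables; their names and signatures are illustrative only.
import math
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Pose = Tuple[float, float, float]        # (x, y, theta) in the map frame
Scan = List[Tuple[float, float]]         # 2D points in the sensor frame
Edge = Tuple[int, int, Pose]             # (node i, node j, pose of j in frame of i)

def relative(a: Pose, b: Pose) -> Pose:
    """Pose of b expressed in the frame of a (used as an edge constraint)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    c, s = math.cos(-a[2]), math.sin(-a[2])
    return (c * dx - s * dy, s * dx + c * dy, b[2] - a[2])

@dataclass
class Slam2D:
    match: Callable[[Scan, List[Scan], Pose], Pose]          # front-end scan matching
    detect_loop: Callable[[List[Pose], Pose], int]           # index of revisited node, or -1
    optimize: Callable[[List[Pose], List[Edge]], List[Pose]] # back-end optimization
    poses: List[Pose] = field(default_factory=list)
    scans: List[Scan] = field(default_factory=list)
    edges: List[Edge] = field(default_factory=list)

    def process(self, scan: Scan) -> Pose:
        prior = self.poses[-1] if self.poses else (0.0, 0.0, 0.0)
        pose = self.match(scan, self.scans, prior)   # front-end odometry estimate
        if self.poses:                               # odometry edge to the previous node
            self.edges.append((len(self.poses) - 1, len(self.poses), relative(prior, pose)))
        self.poses.append(pose)
        self.scans.append(scan)
        j = self.detect_loop(self.poses, pose)       # loop detection
        if j >= 0:                                   # in practice this edge comes from re-matching scans
            self.edges.append((j, len(self.poses) - 1, relative(self.poses[j], pose)))
            self.poses = self.optimize(self.poses, self.edges)  # back-end graph optimization
        return pose
```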
The remainder of the article is organized as follows: Section 2 reviews 2D Lidar SLAM according to its underlying principles, including filtering, scan matching, graph optimization, and deep learning. Section 3 presents current hotspots in SLAM research, including Lidar SLAM in dynamic environments and vision–laser fusion SLAM. Section 4 discusses and experimentally compares representative methods, and Section 5 provides an outlook on the future development of 2D Lidar SLAM.

2. Research on 2D Lidar SLAM

Two-dimensional Lidar is a non-contact detection sensor; it extends single-point detection to planar detection by rotating the laser beam, obtaining both the distance and angle of the target. Depending on the measurement principle, 2D Lidar is usually divided into two categories: triangulation and time of flight (TOF). Triangulation calculates the distance, D, from the laser emitter to the object through geometric relationships. As illustrated by the optical path in Figure 2, the laser is emitted onto the target and reflected onto the CCD through a lens, and the distance, D, can be calculated using the following geometric relationship:
$$D = \frac{f \times (L + d)}{d} \quad (1)$$
where L is the distance between the laser emitter and the principal optical axis, f is the focal length, and d is the distance between the CCD spot and the principal optical axis.
On the other hand, TOF calculates the distance by measuring the flight time of the light. Its principle is illustrated in Figure 3, where a pulse laser is emitted onto the object and reflected back to the receiver. By subtracting the emission time from the reception time, the flight time, t, of the light is obtained, allowing for the calculation of the distance, D:
$$D = \frac{c \times t}{2} \quad (2)$$
where c is the speed of light in air.
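As a quick worked example, the snippet below evaluates Formulas (1) and (2) (with the triangulation expression in the reconstructed form given above); the numerical values are illustrative only.

```python
# Worked example of Formulas (1) and (2); all numbers are illustrative.
C_LIGHT = 299_792_458.0  # speed of light, m/s (air approximated by the vacuum value)

def triangulation_distance(f_mm: float, L_mm: float, d_mm: float) -> float:
    """Formula (1): D = f * (L + d) / d, with all lengths in millimetres."""
    return f_mm * (L_mm + d_mm) / d_mm

def tof_distance(t_s: float) -> float:
    """Formula (2): D = c * t / 2, with the round-trip flight time t in seconds."""
    return C_LIGHT * t_s / 2.0

print(triangulation_distance(f_mm=4.0, L_mm=20.0, d_mm=0.5))  # 164.0 mm
print(tof_distance(t_s=66.7e-9))                              # ~10.0 m
```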
The measurement range of triangulation depends on the resolution of the CCD: for a target beyond a certain distance, the position of the light spot on the CCD becomes indistinguishable. As a result, many manufacturers have opted for Lidars based on the TOF principle. Table 1 lists the mainstream brands and models of 2D Lidars.
Researchers have been striving to apply 2D Lidar SLAM to larger scenes, more complex environments, and more sophisticated tasks. Over time, several different 2D Lidar SLAM architectures have emerged, based on filtering, scan matching, graph optimization, and deep learning, as shown in Figure 4. Through the continuous evolution of the front end, back end, and loop detection, localization accuracy and mapping quality have improved, resulting in broader applications in robotics. This paper reviews 2D Lidar SLAM according to these principles.

2.1. Filter-Based SLAM

Bayesian filtering predicts the current state from the previous state and the current control, and it corrects the prediction using the current measurement. Formula (3) is the state transition probability, and Formula (4) is the measurement probability. Here, $x_t$, $u_t$, and $z_t$ are the state, control, and measurement at time t, respectively. The current state, $x_t$, depends on the previous state, $x_{t-1}$, and the current control, $u_t$, while the current measurement, $z_t$, depends on the current state, $x_t$.
$$p(x_t \mid x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t) \quad (3)$$
$$p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t) \quad (4)$$
The state transition probability and measurement probability jointly describe the dynamic stochastic system composed of the robot and its environment [12]. The dynamic Bayesian network shown in Figure 5 illustrates the evolution of the states and measurements defined by these probabilities.
Based on the Bayesian filter, the Kalman filter and particle filter algorithms were developed, both of which realize localization and mapping by estimating the current pose of the mobile robot. Kalman filter-based SLAM relies on probability theory to address state estimation and prediction, enabling more accurate pose and map estimation. Particle filter-based SLAM, on the other hand, uses the Monte Carlo method, employing a large number of particles to estimate the robot's pose and perform mapping; these particles can follow various forms of probability distributions and are not constrained by Gaussian assumptions.
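A minimal sketch of the predict–update cycle of such a particle filter, in the spirit of Formulas (3) and (4), is shown below; the additive Gaussian motion and measurement models and all parameters are simplified stand-ins rather than those of any specific SLAM system.

```python
# Minimal particle-filter pose update: propagate particles with a motion model,
# weight them with a measurement model, and resample. Models are simplified
# Gaussian stand-ins used only to illustrate the sampling idea.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles: np.ndarray, u: np.ndarray, motion_noise: float) -> np.ndarray:
    """Sample from p(x_t | x_{t-1}, u_t): add the control increment u = (dx, dy, dtheta)
    plus Gaussian noise (a simplified additive motion model)."""
    return particles + u + rng.normal(0.0, motion_noise, particles.shape)

def update(particles: np.ndarray, z: np.ndarray, expected, meas_noise: float) -> np.ndarray:
    """Weight particles by p(z_t | x_t) and apply simple multinomial resampling."""
    errors = np.linalg.norm(expected(particles) - z, axis=1)
    weights = np.exp(-0.5 * (errors / meas_noise) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Usage: 500 particles of (x, y, theta); the toy "sensor" directly observes (x, y).
particles = rng.normal(0.0, 0.1, (500, 3))
particles = predict(particles, u=np.array([0.5, 0.0, 0.02]), motion_noise=0.05)
particles = update(particles, z=np.array([0.52, 0.01]),
                   expected=lambda p: p[:, :2], meas_noise=0.1)
print(particles.mean(axis=0))  # approximate posterior pose estimate
```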
In the early years, filter-based SLAM algorithms accomplished indoor localization through features and the extended Kalman filter (EKF) [13]. Corner and straight-line features were the primary features extracted from 2D Lidar data. Man extracted corner points from Lidar data and combined corner features with the EKF to achieve SLAM mapping [14,15]. Subsequently, the slope difference between adjacent Lidar points was used for the initial segmentation of the point set, and segmentation accuracy was improved by merging and splitting point sets based on the line slope [16]. Yu et al. measured the robot's pose error with a motion capture system and found that the actual error did not follow a Gaussian distribution; considering these error characteristics, they proposed Ellipsoid SLAM to realize large-scale localization and mapping [17]. However, as the number of features increased, the memory requirements grew, posing difficulties for mapping in large-scale scenes.
In addition to feature-based filtering algorithms, Michael Montemerlo et al. proposed the FastSLAM 1.0 algorithm based on the Rao-Blackwellized Particle Filter (RBPF). This algorithm recursively estimates the pose of the mobile robot and the locations of landmarks [18]. FastSLAM 2.0 [19] further enhanced accuracy and robustness by refining the weight and pose predictions. To boost processing speed, CoreSLAM adopts a particle filter framework and performs scan matching using a Monte Carlo algorithm [20]. In 2007, the Robot Operating System (ROS) officially introduced the Gmapping algorithm [21], which uses a grid representation for the map and employs a particle filter for localization and mapping, addressing the high memory consumption of FastSLAM. Gmapping offers high precision and real-time performance in small scenes, as shown in Figure 6. However, since it relies on wheel odometry and lacks loop detection, cumulative errors limit its application in large scenes. To mitigate particle degradation, Optimal RBPF improves the odometry model in Gmapping by decomposing each particle into N particles and propagating only the optimal one [22]. To address the lack of loop closure, Chengjun Tian et al. proposed employing a deep learning module for closed-loop detection in Gmapping, achieving an accuracy of 84.4% on the CityCenter data set [23]. In summary, filter-based SLAM improves accuracy through sampling, but its computational cost is high, which has led researchers to explore alternative methods, such as scan matching.

2.2. Scan Matching-Based SLAM

The fundamental concept of registration-based Lidar SLAM is to calculate the rotation matrix R and the translation vector t by matching two or more frames of point clouds, so as to estimate the robot's pose at each moment and build a map. It relies on front-end registration and back-end optimization. The front end matches consecutive point cloud frames to estimate the robot's motion trajectory and create a local map. The back end performs global optimization and improves the accuracy of the trajectory and map obtained from the front end. For scan matching, Edwin B. Olson introduced the real-time correlative scan matching (CSM) algorithm, in which real-time frame matching is simplified into an optimal pose search within a restricted range. It divides the pose candidates into multiple groups based on resolution, computes the matching score for each group, and selects the pose with the highest score. Compared with the traditional Iterative Closest Point (ICP) and Iterative Closest Label Point (ICL) algorithms, this method is faster and more accurate [24]. In 2011, Stefan Kohlbrecher et al. proposed the Hector SLAM algorithm [25], which uses bilinear interpolation to map the probability values of scan points onto the grid. It employs the Gauss–Newton iterative algorithm to match consecutive frames and uses multi-resolution maps to reduce the matching complexity. However, a high-frame-rate Lidar is required to tackle the drift issue. To address this challenge, inertial measurement unit (IMU) data were incorporated into Hector SLAM, allowing for accurate attitude estimation and improved matching accuracy [26]. In practical applications, computational efficiency is crucial for robots. Kirill Krinkin and Anton Filatov used correlation filtering to improve the computational speed of SLAM matching [27], while Minjie Bao et al. accelerated the CSM algorithm using a Field Programmable Gate Array (FPGA), achieving a 30-fold speedup [28].
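The core idea of correlative scan matching can be sketched as an exhaustive score-and-select search over a discretized pose window, as below. This single-resolution toy (with illustrative parameters) omits the multi-resolution pruning that makes the method of [24] real-time.

```python
# Single-resolution correlative scan matching: score every candidate pose in a
# small search window against an occupancy grid and keep the best one.
import numpy as np

def score(grid: np.ndarray, res: float, points: np.ndarray, pose: np.ndarray) -> float:
    """Sum of occupancy values hit by the scan transformed into the map frame."""
    c, s = np.cos(pose[2]), np.sin(pose[2])
    world = points @ np.array([[c, s], [-s, c]]) + pose[:2]   # rotate, then translate
    ij = np.floor(world / res).astype(int)
    ok = (ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[0]) & \
         (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[1])
    return float(grid[ij[ok, 0], ij[ok, 1]].sum())

def correlative_match(grid, res, points, prior,
                      window=0.2, step=0.05, ang_window=0.1, ang_step=0.02):
    """Exhaustive search around the prior pose; returns the best-scoring pose."""
    best, best_pose = -np.inf, np.asarray(prior, dtype=float)
    for dth in np.arange(-ang_window, ang_window + 1e-9, ang_step):
        for dx in np.arange(-window, window + 1e-9, step):
            for dy in np.arange(-window, window + 1e-9, step):
                cand = np.asarray(prior, dtype=float) + [dx, dy, dth]
                s = score(grid, res, points, cand)
                if s > best:
                    best, best_pose = s, cand
    return best_pose
```

In practice, the rotated scan is computed once per candidate angle and reused across all translations (the sketch above recomputes it for clarity), and a coarse-to-fine resolution scheme prunes most of the search window.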
Significant progress has also been made in the use of geometric feature matching for SLAM, with features including straight lines, the four endpoints of broken lines, and polygon contours [29,30,31]. When a straight-line feature is present, the pose is obtained by solving for the transformation parameters between the current frame's line and the mapped reference line; otherwise, ICP matching is adopted, and the cumulative error is effectively reduced by alternating between the two matching methods [29]. The four endpoints of broken lines are also employed as geometric features: the corresponding matching method first compensates the position predicted via the kinematics, then uses point-to-line ICP (PL-ICP) to refine the pose, and finally applies the EKF to further correct the pose and update the map [30]. As for other features, Natalia Prieto-Fernández et al. used weighted conformal Lidar mapping (WCLM) to extract polygon contours and propagate uncertainties from the original measurement data, effectively improving computational efficiency [31]. Matching can also be performed in the frequency domain: by converting the point cloud into images, real-time inter-frame matching is realized through the combination of the fast Fourier transform (FFT) and ICP [32], which reduces computational costs and achieves robustness for long-distance matching. Furthermore, matching can be optimized using intelligent optimization algorithms; Sara Bouraine et al. proposed a SLAM algorithm based on Normal Distributions Transform–Particle Swarm Optimization (NDT-PSO) matching, which outperforms Hector SLAM [33,34]. The above matching algorithms face several challenges: the ICP and NDT algorithms accumulate errors due to limitations in laser performance, and the lack of closed-loop detection restricts their applicability in large-scale scenes.
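For reference, a compact 2D point-to-point ICP iteration is sketched below; it is a generic textbook formulation (nearest-neighbour association followed by a closed-form SVD alignment), not the specific PL-ICP or WCLM variants discussed above.

```python
# Compact 2D point-to-point ICP: alternately associate nearest neighbours and
# solve for the rigid transform (R, t) in closed form via SVD. PL-ICP replaces
# the point-to-point residual with a point-to-line one.
import numpy as np
from scipy.spatial import cKDTree

def icp(source: np.ndarray, target: np.ndarray, iters: int = 20):
    """Align source (N,2) onto target (M,2); returns rotation R (2x2) and translation t (2,)."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(target)
    for _ in range(iters):
        moved = source @ R.T + t
        _, idx = tree.query(moved)                # nearest-neighbour data association
        matched = target[idx]
        mu_s, mu_t = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_t)   # 2x2 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:                 # guard against a reflection solution
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_t - dR @ mu_s
        R, t = dR @ R, dR @ t + dt                # compose the incremental update
    return R, t
```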

2.3. Graph Optimization-Based SLAM

In recent years, graph optimization has become a mainstream research direction in Lidar SLAM. This method models the robot's motion and observations in the environment as a graph, where the nodes $x_t$ represent poses and the edges represent constraints between poses, as shown in Figure 7. By optimizing the nodes and edges of the graph to minimize the total error, the robot's pose and the environment map are refined. The front end of the algorithm handles data association and closed-loop detection, and the back end realizes the optimization by adjusting the node network to satisfy the spatial constraints.
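The "minimize the constraint errors" idea can be illustrated on a toy pose graph whose nodes are 2D positions and whose edges are relative displacements, solved below as a linear least-squares problem; real 2D SLAM back ends solve the same kind of problem over full (x, y, θ) poses with iterative nonlinear solvers such as Gauss–Newton.

```python
# Toy pose-graph optimization: nodes are 2D positions, edges are measured
# relative displacements, and the least-squares solution balances odometry
# edges against a loop-closure edge.
import numpy as np

def optimize_graph(n_nodes: int, edges: list) -> np.ndarray:
    """edges: list of (i, j, measured displacement from node i to node j)."""
    A = np.zeros((2 * len(edges) + 2, 2 * n_nodes))
    b = np.zeros(2 * len(edges) + 2)
    A[0, 0] = A[1, 1] = 1.0                       # anchor node 0 at the origin
    for k, (i, j, d) in enumerate(edges):
        r = 2 * (k + 1)
        A[r:r + 2, 2 * j:2 * j + 2] = np.eye(2)   # +x_j
        A[r:r + 2, 2 * i:2 * i + 2] = -np.eye(2)  # -x_i
        b[r:r + 2] = d                            # residual: (x_j - x_i) - d
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_nodes, 2)

# Square trajectory: a slightly inconsistent odometry chain plus a loop-closure edge.
edges = [(0, 1, np.array([1.0, 0.0])), (1, 2, np.array([0.0, 1.05])),
         (2, 3, np.array([-1.0, 0.0])), (3, 0, np.array([0.0, -1.0]))]
print(optimize_graph(4, edges))   # corrected node positions
```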
Graph-based SLAM was developed before 2000 [35]; it establishes relative constraints between robot poses from the pairwise geometric relationships between consecutive laser frames, followed by graph optimization through iterative linearization. However, optimization using the Gauss–Newton method is computationally expensive and not suitable for large-scale scenarios. Howard et al. constructed a relaxation network to estimate the robot's pose [36] and generate a map with single or multiple robots; however, this approach is not applicable to natural beacons, and the size of the network increases over time. Edwin Olson et al. used the stochastic gradient descent method to optimize the pose graph [37], generating a map close to the true state even in the presence of open-loop errors, with faster convergence. Subsequently, KartoSLAM, a milestone of graph-optimized SLAM, emerged [38]; it divides SLAM into a front end and a back end. The front end uses a matching algorithm to obtain the initial pose, and the back end uses the Sparse Pose Adjustment (SPA) method for optimization, exploiting the nonlinear constraint relationships in the pose graph to solve the pose more accurately. However, as the number of edge constraints increases, more computational and memory resources are required.
Another graph optimization framework is the general framework for graph optimization (G2O) [40]. LagoSLAM optimizes the pose graph through a linear approximation [39], minimizing the nonlinear, non-convex cost function without requiring an initial guess. In 2016, Google proposed the Cartographer algorithm [41], which uses trilinear interpolation to calculate the probability value of a point projected onto the grid. Cartographer combines consecutive multi-frame data into a sub-map, uses the branch-and-bound method to enable real-time closed-loop detection, and employs the Ceres solver to optimize pose and map accuracy. As shown in Figure 8, Cartographer shows its advantages in large scenes. However, the optimization process involves optimizing historical nodes and new sub-maps, along with every node in the current sub-map; therefore, when the map is large, the complexity of queue operations increases significantly. To accelerate the computation, Huang proposed a closed-loop detection method based on the simulated annealing algorithm; on the ACES Building data set, the runtime was reduced from 39.95 s to 10.89 s while maintaining the same accuracy as the KartoSLAM algorithm [42]. A novel data structure, the multi-density point cloud search tree, has also been proposed [43]; it uses sparse point clouds for matching, thereby reducing complexity, and its computational efficiency on benchmark data sets was 2–5 times that of Cartographer. To further enhance accuracy, various types of sensors have been integrated into Lidar SLAM, such as the fusion of an IMU and 2D Lidar, which improves the accuracy of angular matching [44]. By using odometry and Ultra-Wideband (UWB), initial poses are constructed and incorporated as nodes in the graph, thereby adding optimization constraints [45]. The GP-SLAM algorithm employs a regionalized Gaussian process (GP) map reconstruction algorithm and combines it with graph optimization to further enhance mapping accuracy [46,47].
Under the framework of graph optimization, some scholars have introduced feature matching to improve front-end matching, including constraints between different levels of features [48], rectangular corridor feature matching [49], and PCA-based corridor feature matching [50]. In addition to explicit features such as corridors, Jiaheng Zhao et al. utilized implicit functions to represent features and obtain a more accurate pose; on this basis, they used Fourier series to fit closed shape feature contours and constructed optimized sub-map coordinates [51,52]. In addition, Kalman filters, such as the EKF and the Unscented Kalman Filter (UKF), which tightly couple IMU and Lidar data, have been combined with graph optimization to improve mapping accuracy [53,54]. Hanzhi Zhou et al. adopted line features and combined the EKF with a Cartographer-style graph optimization framework to achieve mapping with sparse Lidar [55]. In summary, existing methods are gradually expanding from pure graph optimization to high-precision SLAM algorithms based on the fusion of feature-based EKF and graph optimization.

2.4. Deep Learning-Based SLAM

Advancements in deep learning have led researchers to explore replacing traditional point cloud matching and loop closure detection [56] with convolutional neural networks (CNNs). For instance, Jiaxin Li et al. converted two consecutive frames of data into polar coordinates as input and proposed a multi-layer CNN, ConvNet, which outputs both the matching pose and the loop closure detection result [57]. By combining ConvNet with Hector SLAM, mapping and localization accuracy were improved compared to Cartographer and Hector SLAM; Figure 9 shows the network structure, and Figure 10a–c shows the comparative effect on UAV-captured data. When a similar network is used, the mean absolute errors (MAEs) of translation and rotation reached approximately 1 cm and 1°, respectively [58].
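A minimal sketch of this kind of network is shown below: two consecutive range scans are stacked as the input channels of a small 1D CNN with separate heads for the relative pose and a loop-closure score. The architecture and dimensions are illustrative (written here with PyTorch) and are not those of the ConvNet in [57].

```python
# Illustrative CNN scan-matching network: a pair of range scans in, a relative
# pose (dx, dy, dtheta) and a loop-closure logit out. Training code is omitted.
import torch
import torch.nn as nn

class ScanMatchNet(nn.Module):
    def __init__(self, n_beams: int = 360):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
        )
        flat = 64 * (n_beams // 4)           # feature length after two 2x poolings
        self.pose_head = nn.Linear(flat, 3)  # (dx, dy, dtheta)
        self.loop_head = nn.Linear(flat, 1)  # loop-closure logit

    def forward(self, scan_pair: torch.Tensor):
        h = self.features(scan_pair)         # scan_pair: (batch, 2, n_beams)
        return self.pose_head(h), self.loop_head(h)

net = ScanMatchNet()
pose, loop_logit = net(torch.randn(4, 2, 360))
print(pose.shape, loop_logit.shape)          # torch.Size([4, 3]) torch.Size([4, 1])
```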
In addition to pure CNN architectures, replacing the fully connected layers of the CNN with a long short-term memory (LSTM) layer resulted in matching errors as low as 0.0293 m and 0.0218 m for consecutive frames on the Kitti05 and Kitti07 data sets, respectively, with a computation time of only 15 ms [59,60]. To further integrate IMU and Lidar data, a deep learning algorithm combining region-based convolutional neural networks (RCNNs) and LSTM was used to solve for the translation and rotation, achieving overall accuracy superior to that of Hector SLAM [61]. Inspired by image recognition, the Laser2Vec deep learning algorithm was used to identify images transformed from point clouds: convolutional autoencoders extract and compress features into multi-dimensional feature vectors, and for loop closure detection, a Siamese neural network approximating a distance function is used to judge scene similarity [62]. Despite the effectiveness of deep learning-based matching, it may not perform well in all scenarios, particularly in dynamic environments. To address this limitation, a CNN was trained on images converted from ICP matching results, achieving a recognition accuracy of over 99% [63]. However, this requires a large amount of data and high computational power.
Table 2 lists the characteristics of the 2D Lidar SLAM algorithms. In summary, filter-based SLAM methods, owing to their high computational cost and lack of closed-loop detection, are limited to small-scale environments. To address this issue, high-precision scan-matching methods were developed, represented by Hector SLAM, and feature matching and point cloud matching techniques have been integrated to further enhance accuracy. Since 2010, graph optimization-based SLAM methods have been developed to improve localization and mapping accuracy, represented by Google's open-source Cartographer, which is widely applied in commercial robots. Recently, graph optimization-based SLAM methods have been extended through combination with filters such as the EKF. Finally, deep learning-based SLAM methods are limited by scene scale and operational complexity, making them more suitable for closed-loop detection.

3. Research Hotspot of 2D Lidar SLAM

Lidar SLAM has been continuously explored, with technological innovations in matching, optimization, and loop detection that improve localization accuracy and mapping quality, enhance computational efficiency, reduce algorithm complexity, and expand the application scenarios of robots. Robots have gradually evolved from early systems designed for specific scenes, fixed routes, and single tasks into intelligent robots capable of adapting to diverse scenes, independently planning and adjusting paths in real time, avoiding obstacles, and performing complex tasks. However, given the nature of 2D Lidar, the development of 2D SLAM is constrained by the data dimension and relies on continuous algorithm optimization. Robots are susceptible to interference in dynamic environments, such as the appearance or disappearance of objects, changes in object pose, and the presence of moving objects; these interferences may degrade localization and mapping accuracy. How robots can achieve high-quality performance in dynamic environments has therefore always been a research focus. In addition, the use of other sensor data to increase the data dimension has also been a research hotspot, with camera and Lidar fusion emerging as one of the most popular approaches to enhancing SLAM performance by improving the robot's environmental perception capability.

3.1. 2D Lidar SLAM in Dynamic Environments

Pedestrians are a significant source of interference when a robot is operating. A small amount of pedestrian interference with point-cloud matching can cause positioning or navigation errors, and a larger number of pedestrians may cause the robot to lose its pose estimate, resulting in a "kidnapped robot" state. During mapping, a static pedestrian produces an inaccurate map, while a walking pedestrian leaves a shadow that requires map revision. To avoid interference, it is ideal for the surrounding environment to remain unoccupied during SLAM mapping. However, this condition cannot be met in certain scenarios, such as 24-hour airports, railway stations, and other public spaces. Many researchers have therefore studied how to enable robots to automatically detect and identify pedestrians and optimize the map accordingly.
In the environment, dynamic objects are divided into highly dynamic and semi-static ones. A highly dynamic object, such as a pedestrian or a pet, changes pose frequently, while a semi-static object, such as furniture or decorations, moves rarely. Clearly, the pose changes of highly dynamic objects directly affect the accuracy of SLAM and need to be eliminated in real time. Although the pose of a semi-static object does not change over a short period, it still needs to be updated in the map to improve the stability of robot localization in long-term (lifelong) SLAM [64]. To detect highly dynamic objects, delay-based and tracking-based methods [65] are adopted: each dynamic obstacle detected via the delay contrast method is assigned a filter for tracking, and data association is performed between frames to realize dynamic obstacle detection. Under the assumption that highly dynamic objects have been filtered out of the point cloud, the remaining points contain only static objects (objects that match the current map) and semi-static objects (objects that do not match the current map).
To address the presence of pedestrians, María T. Lázaro et al. proposed a method to remove the moving pedestrian point cloud. In two aligned point clouds, the laser point is added, merged, or deleted by the distance relationship between the map laser point and the current frame laser point. The point cloud merging process is shown in Figure 11. After removing moving objects and updating the map, the point cloud map becomes more accurate, as shown in Figure 12a,b [66].
In an indoor environment, pedestrians are the most common type of highly dynamic object [67,68]. Angus Leigh et al. extended Kai O. Arras's work on human leg features, extracting 15-dimensional features from the segmented point cloud [69,70]; the feature list is shown in Table 3. They then used a parallel random forest algorithm to identify legs [71] and combined it with a Kalman filter [72] to track the associated legs. After the leg point cloud was removed, the Cartographer framework was used to realize SLAM, effectively addressing the map distortion issue. To improve mapping accuracy in dark scenes, YOLOv3 was used to identify pedestrians in thermal images, and the pedestrian pixels were projected into the Lidar coordinate system to filter out the associated point cloud; its combination with Hector SLAM effectively reduced the residual shadows of pedestrians [73].
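A simplified version of such a detection pipeline, using a small stand-in feature set rather than the 15 features of Table 3, might look as follows; it assumes scikit-learn and a labelled training set, and the thresholds are illustrative.

```python
# Illustrative leg-detection pipeline: split a scan into segments at range
# discontinuities, compute simple geometric features per segment, and classify
# the segments with a random forest trained on labelled data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment_scan(points: np.ndarray, gap: float = 0.2) -> list:
    """Split an ordered (N,2) scan wherever consecutive points are more than gap apart."""
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    return np.split(points, np.where(d > gap)[0] + 1)

def segment_features(seg: np.ndarray) -> np.ndarray:
    """A stand-in feature vector: point count, chord width, and radial spread."""
    width = np.linalg.norm(seg[-1] - seg[0])
    spread = np.ptp(np.linalg.norm(seg - seg.mean(axis=0), axis=1))
    return np.array([len(seg), width, spread])

clf = RandomForestClassifier(n_estimators=100)
# Training (labels: 1 = leg, 0 = background) and filtering would look like:
# clf.fit(np.vstack([segment_features(s) for s in train_segments]), train_labels)
# legs = [s for s in segment_scan(scan) if clf.predict([segment_features(s)])[0] == 1]
```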
Gaoxian Robotics added a real-time map-update strategy under the Cartographer framework and used a Chow–Liu maximum mutual information spanning tree to dynamically update the nodes and edges through graph optimization, thereby realizing long-term map updates in changing scenes [74]. As shown in Figure 13, real-time updates of obstacles in supermarkets, as well as of vehicle poses in underground parking lots, help the cleaning robot perform its tasks. Based on the fact that people are often close to obstacles, map changes can be assessed by comparing the Euclidean distance between the current point cloud and the map point cloud; this enables robust real-time 2D map updates, even in the presence of pose and measurement errors [75,76]. By building a 2D semantic map, map updates can also be determined from the area change rate of the 2D room in the semantic scene [77,78]; compared with the real-time update mode of the Gaoxian robot, this approach requires fewer computing resources and is easier to implement and maintain. Gerhard Kurz et al. proposed a scale-invariant density method to determine whether the map needs to be updated; it does not rely on direct data association but instead focuses on the distribution of the vertices, and it realizes long-term map updates through the pruning and updating of nodes with graph optimization, similar to the Gaoxian method [79]. With the development of deep learning, image-based deep learning has also been applied to long-term 2D Lidar SLAM: semantic information from object recognition with the Cube RCNN [80] deep learning algorithm is integrated to improve the long-term mapping accuracy of 2D SLAM [81].
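The distance-based update idea can be sketched as follows: scan points that fall far from any mapped obstacle signal a change, and the affected grid cells are re-estimated with a log-odds update so that semi-static objects appear or fade over repeated visits. The thresholds, increments, and decay rule are illustrative assumptions, not the scheme of any of the cited systems.

```python
# Sketch of a distance-triggered lifelong map update on a log-odds grid.
# scan_world and map_points are (N,2) obstacle points in the map frame and are
# assumed to lie inside the grid bounds.
import numpy as np
from scipy.spatial import cKDTree

def update_map(log_odds: np.ndarray, res: float, map_points: np.ndarray,
               scan_world: np.ndarray, change_thresh: float = 0.3,
               hit: float = 0.85, miss: float = -0.4) -> np.ndarray:
    """Reinforce cells hit by the current scan; decay mapped cells that the scan
    no longer supports (no current point within change_thresh)."""
    dist, _ = cKDTree(scan_world).query(map_points)
    stale = map_points[dist > change_thresh]        # mapped obstacles not re-observed
    for pts, delta in ((scan_world, hit), (stale, miss)):
        if len(pts):
            ij = np.floor(pts / res).astype(int)
            np.add.at(log_odds, (ij[:, 0], ij[:, 1]), delta)
    return np.clip(log_odds, -5.0, 5.0)             # keep cells revisable later
```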

3.2. Research on Laser and Vision Fusion SLAM

The fusion of 2D Lidar with other sensors, such as wheel odometry, IMUs, millimeter-wave radar, and visual sensors, enhances the overall sensing capability. The fusion of wheel odometry, IMU, and Lidar using an EKF is widely applied as a preprocessing step before laser matching and has matured in various types of SLAM and navigation positioning. UWB sensors are commonly used as beacons [45] and are integrated with laser SLAM in indoor scenarios. Millimeter-wave radar is not affected by lighting and is commonly used in outdoor vehicle scenarios, where it is integrated with vision for measurement; in such fusion schemes, it is mainly combined with 3D Lidar. Therefore, this article mainly analyzes the fusion of 2D Lidar and visual sensors.
Compared to visual SLAM, Lidar SLAM offers superior accuracy in ranging and mapping. Therefore, integrating Lidar SLAM into visual SLAM can significantly enhance modern SLAM applications, particularly in dynamic object removal, map fusion, and loop closure detection [82]. In the early years, vision-based methods used SIFT feature points and data association to detect and remove dynamic objects [83]; modern vision-based methods use YOLOv3 and RCNNs to perform this task [73,80,81]. In terms of map fusion, the most straightforward approach is to convert the 3D point cloud from the depth camera into a 2D point cloud and fuse it with the original laser point cloud, subsequently building the map using Lidar SLAM [84,85,86]. Such a method enables the acquisition of information that cannot be scanned via 2D Lidar, such as tables and chairs. However, the point cloud generated via a depth camera is susceptible to lighting conditions, making it difficult to stably match the map. Thus, Feng performed refined processing of the depth camera's 3D point cloud: initially, the ground point cloud was filtered out; subsequently, the static features of the depth camera's 3D point cloud were retained using the 2D laser point cloud; finally, the fused data were processed via Cartographer to obtain a more information-rich map [87]. In addition to the dimensionality reduction of 3D point clouds, Meng utilized the fusion of visual semantic information. First, objects such as tables and chairs were identified using YOLOv3 [88]. Then, by performing 3D modeling of the tables and chairs, their obstacle grid information on the Cartographer map was obtained, resulting in more 3D semantic information about obstacles, which facilitates autonomous navigation. Other researchers constructed a laser–visual feature fusion front end: the front-end pose was first obtained by matching laser and visual features separately, and the two poses were then fused using an EKF to reduce matching failures [89]. Recently, scholars have constructed a tightly coupled unified error equation for visual and laser features through coefficient weighting [90,91,92]. In reference [90], the constructed error function is as follows:
$$e_u = \alpha \sum_{i=1}^{m} \left\| P_i - \frac{1}{Z_{t-1}^{C}} \left( R P_i' + t \right) \right\| + (1 - \alpha)\,\beta \sum_{j=1}^{n} \left\| P_j - \left( R P_j' + t \right) \right\| \quad (5)$$
where β is the unit conversion parameter between Lidar data and visual data, α is the weight that balances the two errors, $P_i$ and $P_i'$ are the spatial points of a matched feature pair extracted from the images, m is the number of matching feature point pairs, $P_j$ and $P_j'$ are a matched pair of points scanned by the Lidar, and n is the number of matching point pairs in the scan data.
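Evaluating such a joint error for a candidate transform (R, t) amounts to the weighted sum sketched below; the visual term is simplified here to a point-to-point residual, whereas [90] projects the points through the camera model.

```python
# Sketch of a weighted joint visual-laser matching error in the spirit of
# Formula (5). vis_* and lidar_* hold matched point pairs as (N,2) or (N,3) arrays.
import numpy as np

def joint_error(R: np.ndarray, t: np.ndarray,
                vis_cur: np.ndarray, vis_prev: np.ndarray,
                lidar_cur: np.ndarray, lidar_prev: np.ndarray,
                alpha: float = 0.6, beta: float = 1.0) -> float:
    """alpha balances the two error sums; beta converts Lidar residuals to the
    same units as the visual residuals (a simplification of Formula (5))."""
    e_vis = np.linalg.norm(vis_cur - (vis_prev @ R.T + t), axis=1).sum()
    e_lidar = np.linalg.norm(lidar_cur - (lidar_prev @ R.T + t), axis=1).sum()
    return alpha * e_vis + (1.0 - alpha) * beta * e_lidar
```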
For closed-loop detection, 2D Lidar provides limited information and cannot extract features at multiple heights as 3D Lidar does. Therefore, scan-to-map and map-to-map methods are commonly used, with the detection distance taken as a constraint in the detection process [93,94]. It is easier to detect whether the robot has returned to a previous pose by using a visual bag-of-words model or deep features as constraints. Xiao Liang et al. first applied ORB features to the closed-loop detection of Lidar SLAM, which improved closed-loop detection accuracy and mapping quality [95]. Jianming Zhang et al. converted the 3D point cloud of the depth map into a 2D point cloud and fused it with the 2D Lidar point cloud to improve sensing capability; 3D and 2D maps were built using ORB-SLAM2 and Gmapping, respectively, and a fused map was then created using the Bayes criterion [96]. Compared with straightforward Oriented FAST and Rotated BRIEF (ORB) features, the deep learning hierarchical feature network (HF-Net) [97] performs better at extracting visual feature points. Zongkun Zhou et al. established a closed-loop detection constraint function that combines laser and visual features and applied it to hierarchical feature-fusion SLAM (HF-FSLAM), which increases the number of closed-loop detections compared to the ORB bag-of-words model [98]. The structure of the fusion SLAM (FSLAM) system is shown in Figure 14; its closed-loop detection accuracy in libraries and exhibition halls exceeded 99%, and the mapping results in both simulated and real environments are shown in Figure 15.
Unlike visual-only CNNs, the approach in [99] converts Lidar data into images and combines them with camera images to generate CNN-based descriptors. A sliding window is used to accurately locate the scan's matching pose within the corresponding sub-map, and image matching is then employed to reduce errors and perform optimization, which improves the accuracy of closed-loop detection. Figure 16 shows the model for generating fused descriptors from Lidar images and RGB images: features are extracted from both via convolution and pooling operations in a shallow convolutional neural network that employs Tanh activation functions and adaptive max pooling. However, this method requires high-precision calibration to align the Lidar and camera frames; if the alignment error between the image frame and the Lidar frame is large, fusion accuracy cannot be ensured. Lu constructed an image depth estimation network that estimates image depth through upsampling and downsampling, as shown in Figure 17. Subsequently, low-cost 2D Lidar information is integrated into the network to refine the depth estimation. Finally, the motion trajectory is obtained by combining consecutive images, and a 3D model is reconstructed [100].
In summary, for 2D Lidar and camera fusion SLAM algorithms, simply fusing the laser points with dimensionality-reduced depth-camera point clouds requires little computational power; however, because of the limited quality of the depth camera point cloud, the accuracy improvement is limited, and in some cases directly matching the point clouds from both sensors may even reduce accuracy. Therefore, matching visual feature points with Lidar feature points at the front end is an emerging trend for future development. Fusion approaches that rely on deep learning, such as front-end removal of dynamic objects, closed-loop detection, and direct end-to-end SLAM networks, are difficult to implement in practical engineering applications. Currently, commercial indoor and outdoor robots are typically equipped with industrial PCs or low-cost embedded systems that do not support deep learning. If AI-enabled embedded systems are adopted in the future, both the development cost and the optimization of framework algorithms will pose significant challenges.

4. Discussion

In order to analyze the accuracy of 2D Lidar SLAM, different 2D Lidar SLAM methods have been applied to the Intel data set, and the results are shown in Table 4. The absolute translational error and rotational error are calculated. The typical Gmapping algorithm has a large absolute angle error, reaching up to 1.3 degrees, due to the lack of graph optimization. In contrast, the Cartographer algorithm, which employs graph optimization, has an absolute angle error of less than 0.5 degrees and a distance accuracy of less than 5 cm; this represents a significant improvement over Gmapping, which is also demonstrated in the literature [41]. In recent years, Li proposed a fast ICP-SLAM algorithm with higher precision, achieving a translational error of 1.81 cm [43]. However, the translational error in [55] exceeds 5 cm, surpassing the resolution of a typical grid, while its angle error is the largest among the methods compared; this indicates that graph optimization on sparse point clouds significantly impacts precision [55]. The incorporation of loop-closure detection greatly improves the absolute angle error, but the positioning accuracy remains relatively low [42]. In summary, Cartographer is still the most stable and highest-performing algorithm. However, it suffers from an overly complex framework; our experience in debugging the code suggests that the preprocessing stages, such as point cloud filtering, and the back-end optimization are overly complicated, making it difficult to fundamentally modify the code.
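For reference, the two metrics reported in Table 4 can be computed as follows once the estimated trajectory has been time-associated and aligned with the ground truth (that preprocessing is assumed to have been done).

```python
# Mean absolute translational and rotational errors between an aligned
# estimated trajectory and the ground truth.
import numpy as np

def absolute_errors(est: np.ndarray, gt: np.ndarray):
    """est, gt: (N, 3) arrays of (x, y, theta). Returns (metres, degrees)."""
    trans_err = np.linalg.norm(est[:, :2] - gt[:, :2], axis=1).mean()
    dtheta = est[:, 2] - gt[:, 2]
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
    rot_err = np.degrees(np.abs(dtheta)).mean()
    return trans_err, rot_err
```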
In order to analyze the effect of dynamic objects on 2D Lidar SLAM, the leg tracker of [69] is used to recognize humans, and the recognized human point cloud is removed as a preprocessing step before ICP matching. We designed a self-developed indoor mobile robot experimental platform, shown in Figure 18; it is equipped with a 2D Lidar (RPLIDAR A2), six ultrasonic sensors, and an IMU, and is capable of mapping and autonomous navigation. This paper uses buildings No. 34 and No. 35 of the Changzhou New Campus, Hohai University, as the experimental scene. Buildings No. 34 and No. 35 are connected twin buildings, as can be seen in Figure 19; both have similar structures, with a lobby connecting their front and a corridor connecting their back.
In the first-floor scenario test, the lobby is a complex environment with irregular structures and multiple decorations, resulting in irregular scanned point clouds. During the experiment, a number of pedestrians were consistently present, and their point clouds had to be identified and removed from the map. The result of leg recognition in the corridor is shown in Figure 20, where the laser-scanned points are marked in red and the points identified as human legs are marked in white. In Figure 20a, two pedestrians were identified, consistent with the real-world scenario. In Figure 20b, three pedestrians were identified, but the right leg of one was missed due to insufficient laser points, and a decoration beside the wall was mistakenly identified. Overall, the algorithm displays a high recognition rate.
The third floor features a zigzag structure, and the robot’s path mainly consists of corridors, one of which is made of glass. The reflectivity of the laser in the glass medium is low, which interferes with the mapping result. The Cartographer algorithm is used to assess the impact of removing human legs on the map quality. Figure 21a shows the map obtained from the original point cloud using the Cartographer algorithm, where the gray points on the corridor are generated via pedestrian movement. Due to the presence of numerous pedestrians in the collected data, a large number of gray points are observed. Figure 21b shows the map generated via the point cloud after pedestrians were removed using the Cartographer algorithm, with minimal gray points left. It is evident that filtering out pedestrian point clouds before mapping can significantly improve the quality of the map. Figure 22a,c are local maps of Figure 21a, while Figure 22b,d are local maps of Figure 21b. The comparison of local maps clearly shows that the algorithm can automatically remove the residual shadows of moving objects, optimize the mapping effect, and minimize the need for the offline manual removal of shadows.
We compared the preprocessing-based algorithm with the Hector SLAM and Gmapping algorithms; the mapping results of the different algorithms are shown in Figure 23. The cluttered lines in the red rectangle are attributed to a glass fence in that area: because of the glass's transparency, a significant portion of the laser is not reflected back, leading to inaccurate positioning of the glass and interfering with the mapping result. Figure 23a shows the map generated via Hector SLAM, with a few residual gray shadows; however, there is a certain drift at the loop, and the whole map cannot be displayed as a regular square. Figure 23b shows the map generated via Gmapping; although there are few residual shadows, the map clearly failed, changing the original shape of the building and failing to form a closed loop. Figure 23c shows the map generated via the algorithm with preprocessing, which better restores the actual layout, improves the map quality by removing pedestrians, and reduces the need for manual map editing.

5. Conclusions

The mainstream 2D Lidar SLAM framework relies on graph optimization for global mapping and filtering theory for local updates. These 2D Lidar SLAM methods have been widely applied in various indoor scenes and have enabled numerous robots, such as cleaning robots, distribution robots, disinfection robots, and guiding robots. However, they are challenged by larger scenes and more complex environments. Establishing large-scale maps and ensuring stable operation with limited computing power have become the focus of future SLAM development. Reducing the number of graph optimization nodes, optimizing the map, and removing irrelevant objects are promising research directions. The fusion of visual information and 2D Lidar is also promising: by increasing the information dimension, it can improve recognition accuracy, enhance obstacle avoidance, and enrich the geometric information of the map. Furthermore, the incorporation of semantic information, such as shape, feature, and category, is another research trend. Semantics can serve as a foundation for vision-language models to accomplish more challenging tasks; it not only helps improve the accuracy and robustness of localization but also provides object information to build semantic maps, allowing robots to perform more advanced tasks.
The Lifelong SLAM algorithm has been applied in supermarkets and underground garages, as well as for indoor storage and cleaning tasks. Pedestrians, who are highly dynamic, are detected and removed from the map during map updates. Semi-static objects, such as vehicles and tables, are only updated when their movement is detected. In this architecture, the accurate determination of scene changes and localization updates are key research challenges.
Based on the above summary, a detailed universal algorithm framework for different scenarios can be provided, as shown in Figure 24. In practical scenarios such as factories and supermarkets, the front-end method is selected based on the cost. When the budget is limited, dynamic objects are removed using the laser data alone, and feature extraction combined with machine learning is employed for target detection. When a higher budget is acceptable, systems equipped with GPU computing power can directly use end-to-end laser–visual fusion methods to detect pedestrians and other indoor dynamic objects. After dynamic objects are removed, features are extracted to construct a unified error equation for matching, and each pose is treated as a node, forming the basis for subsequent graph optimization. In the loop closure detection phase, laser ICP matching combined with a visual bag-of-words model is used for detection. During back-end optimization, the Ceres Solver is employed to perform global optimization on multiple nodes. However, during actual navigation and localization, two factors need to be considered. The first is highly dynamic objects, which are always present in factories, cleaning scenarios, and other environments; therefore, machine learning and deep learning should still be considered for detecting pedestrians and AGVs to minimize the impact of dynamic objects on localization accuracy. The second is lifelong map management: when the robot navigates to a relevant node, the local map information corresponding to that node is updated in real time, and based on the distance between the current scan points and the map's obstacle grid cells, the grid cells are updated for long-term map maintenance.

Author Contributions

Conceptualization, X.X. and Y.R.; methodology, Y.R.; supervision, X.X.; validation, Y.R. and M.L.; visualization, Y.R. and Z.T.; writing—original draft preparation, Y.R.; writing—review and editing, X.X., M.L. and Z.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Jiangsu Province (Grant No. BK20231187).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, B.; Xiao, X.; Stone, P. A Lifelong Learning Approach to Mobile Robot Navigation. IEEE Robot. Autom. Lett. 2021, 6, 1090–1096. [Google Scholar] [CrossRef]
  2. De Heuvel, J.; Zeng, X.; Shi, W.; Sethuraman, T.; Bennewitz, M. Spatiotemporal Attention Enhances Lidar-Based Robot Navigation in Dynamic Environments. IEEE Robot. Autom. Lett. 2024, 9, 4202–4209. [Google Scholar] [CrossRef]
  3. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
  4. Xu, X.; Zhang, L.; Yang, J.; Cao, C.; Wang, W.; Ran, Y.; Tan, Z.; Luo, M. A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR. Remote Sens. 2022, 14, 2835. [Google Scholar] [CrossRef]
  5. Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. Robot. Sci. Syst. 2014, 2, 9. [Google Scholar]
  6. Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
  7. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24–30 October 2020; pp. 5135–5142. [Google Scholar]
  8. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef]
  9. Mur-Artal, R.; Tardos, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
  10. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
  11. Jiang, G.; Yin, L.; Jin, S.; Tian, C.; Ma, X.; Ou, Y. A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion. Appl. Sci. 2019, 9, 2105. [Google Scholar] [CrossRef]
  12. Thrun, S. Probabilistic Robotics. Commun. ACM 2002, 45, 52–57. [Google Scholar]
  13. Connette, C.P.; Meister, O.; Hagele, M.; Trommer, G.F. Decomposition of line segments into corner and statistical grown line features in an EKF-SLAM framework. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, CA, USA, 29 October–2 November 2007; pp. 3884–3891. [Google Scholar]
  14. Man, Z.; Ye, W.; Xiao, H.; Qian, X. Method for corner feature extraction from laser scan data. J. Nanjing Univ. Aeronaut. Astronaut. 2012, 44, 379–383. [Google Scholar]
  15. Man, Z. Research on Mapping and Localization of indoor AGV Based on LADAR. Ph.D. Thesis, Nanjing University of Aeronautics and Astronautics, Nanjing, China, 2016. [Google Scholar]
  16. Liu, P.; Ren, G.; He, Z. Method for Extracting Corner Feature from 2D Laser SLAM. J. Nanjing Univ. Aeronaut. Astronaut. 2021, 53, 366–372. [Google Scholar]
  17. Yu, W.; Zamora, E.; Soria, A. Ellipsoid SLAM: A novel set membership method for simultaneous localization and mapping. Auton. Robot. 2016, 40, 125–137. [Google Scholar] [CrossRef]
  18. Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM: A factored solution to the simultaneous localization and mapping problem. In Proceedings of the AAAI National Conference on Artificial Intelligence (AAAI), Edmonton, AB, Canada, 28 July–1 August 2002; pp. 593–598. [Google Scholar]
  19. Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. Proc. Eighth Int. Jt. Conf. Artif. Intell. 2003, 133, 1151–1156. [Google Scholar]
  20. Turnage, D.M. Simulation results for localization and mapping algorithms. In Proceedings of the IEEE 2016 Winter Simulation Conference (WSC), Washington, DC, USA, 11–14 December 2016; pp. 3040–3051. [Google Scholar]
  21. Grisetti, G.; Stachniss, C.; Burgard, W. Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters. IEEE Trans. Robot. 2007, 23, 34–46. [Google Scholar]
  22. Blanco, J.L.; Gonzalez, J.; Fernandez-Madrigal, J.A. Optimal Filtering for Non-parametric Observation Models: Applications to Localization and SLAM. Int. J. Robot. Res. 2010, 29, 1726–1742. [Google Scholar] [CrossRef]
  23. Tian, C.; Liu, H.; Liu, Z.; Li, H.; Wang, Y. Research on Multi-Sensor Fusion SLAM Algorithm Based on Improved Gmapping. IEEE Access 2023, 11, 13690–13703. [Google Scholar]
  24. Olson, E.B. Real-time correlative scan matching. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 4387–4393. [Google Scholar]
  25. Kohlbrecher, S.; Stryk, O.V.; Meyer, J.; Klingauf, U. A flexible and scalable SLAM system with full 3D motion estimation. In Proceedings of the 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Kyoto, Japan, 1–5 November 2011; pp. 155–160. [Google Scholar]
  26. Wei, X.; Yang, C.; Kong, L.; Sun, P. Improved Hector-SLAM Algorithm Based on Data Fusion of LiDAR and IMU for a Wheeled Robot Working in Machining Workshop. In Proceedings of the 2022 China Automation Congress (CAC), Xiamen, China, 25–27 November 2022; pp. 2126–2131. [Google Scholar]
  27. Krinkin, K.; Filatov, A. Correlation filter of 2D laser scans for indoor environment. Robot. Auton. Syst. 2021, 142, 103809. [Google Scholar] [CrossRef]
  28. Bao, M.; Wang, K.; Li, R.; Ma, B.; Fan, Z. RIA-CSM: A Real-Time Impact-Aware Correlative Scan Matching Using Heterogeneous Multi-Core SoC. IEEE Sens. J. 2022, 22, 5787–5796. [Google Scholar] [CrossRef]
  29. Mohamed, H.; Moussa, A.; Elhabiby, M.; El-Sheimy, N.; Sesay, A. A Novel Real-Time Reference Key Frame Scan Matching Method. Sensors 2017, 17, 1060. [Google Scholar] [CrossRef]
  30. Cho, H.; Kim, E.K.; Kim, S. Indoor SLAM application using geometric and ICP matching methods based on line features. Robot. Auton. Syst. 2018, 100, 206–224. [Google Scholar] [CrossRef]
  31. Prieto-Fernández, N.; Fernández-Blanco, S.; Fernández-Blanco, Á.; Benítez-Andrades, J.A.; Carro-De-Lorenzo, F.; Benavides, C. Weighted Conformal LiDAR-Mapping for Structured SLAM. IEEE Trans. Instrum. Meas. 2023, 72, 8504110. [Google Scholar] [CrossRef]
  32. Jiang, G.; Yin, L.; Liu, G.; Xi, W.; Ou, Y. FFT-Based Scan-Matching for SLAM Applications with Low-Cost Laser Range Finders. Appl. Sci. 2018, 9, 41. [Google Scholar] [CrossRef]
  33. Bouraine, S.; Bougouffa, A.; Azouaoui, O. Particle swarm optimization for solving a scan-matching problem based on the normal distributions transform. Evol. Intell. 2022, 15, 683–694. [Google Scholar] [CrossRef]
  34. Bouraine, S.; Bougouffa, A.; Azouaoui, O. NDT-PSO, a New NDT based SLAM Approach using Particle Swarm Optimization. In Proceedings of the 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV), Shenzhen, China, 13–14 December 2020; pp. 321–326. [Google Scholar]
  35. Lu, F.; Milios, E. Globally consistent range scan alignment for environment mapping. Auton. Robot. 1997, 4, 333–349. [Google Scholar]
  36. Howard, A.; Mataric, M.; Sukhatme, G. Relaxation on a mesh: A formalism for generalized localization. In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Maui, HI, USA, 29 October–3 November 2001; pp. 1055–1060. [Google Scholar]
  37. Olson, E.; Leonard, J.; Teller, S. Fast iterative alignment of pose graphs with poor initial estimates. In Proceedings of the International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2262–2269. [Google Scholar]
  38. Konolige, K.; Grisetti, G.; Kümmerle, R.; Burgard, W.; Limketkai, B.; Vincent, R. Efficient Sparse Pose Adjustment for 2D mapping. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 18–22 October 2010; pp. 22–29. [Google Scholar]
  39. Carlone, L. A Linear Approximation for Graph-Based Simultaneous Localization and Mapping. In Robotics: Science and Systems VII; The MIT Press: Cambridge, MA, USA, 2012; pp. 41–48. [Google Scholar]
  40. Kümmerle, R.; Grisetti, G.; Strasdat, H.; Konolige, K.; Burgard, W. G2o: A general framework for graph optimization. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 3607–3613. [Google Scholar]
  41. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-Time Loop Closure in 2D LIDAR SLAM. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278. [Google Scholar]
  42. Huang, H.; Zhu, W. A Loop Closure Matching Method Based on Simulated Annealing Algorithm in 2D Lidar SLAM. In Proceedings of the 2022 China Automation Congress (CAC), Xiamen, China, 25–27 November 2022; pp. 732–736. [Google Scholar]
  43. Li, X.; Zhong, X.; Peng, X.; Gong, Z.; Zhong, X. Fast ICP-SLAM Method Based on Multi-resolution Search and Multi-density Point Cloud Matching. Robot 2020, 42, 583–594. [Google Scholar]
  44. Geer, I.; Vallve, J.; Sola, J. IMU preintegration for 2D SLAM problems using Lie Groups. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 1367–1373. [Google Scholar]
  45. Liu, R.; He, Y.; Yuen, C.; Lau, B.P.L.; Ali, R.; Fu, W.; Cao, Z. Cost-effective Mapping of Mobile Robot Based on the Fusion of UWB and Short-range 2-D LiDAR. IEEE/ASME Trans. Mechatron. 2022, 27, 1321–1331. [Google Scholar]
  46. Li, B.; Wang, Y.; Zhang, Y.; Zhao, W.; Ruan, J.; Li, P. GP-SLAM: Laser-based SLAM approach based on regionalized Gaussian process map reconstruction. Auton. Robot. 2020, 44, 947–967. [Google Scholar]
  47. Li, B. GP-SLAM: Novel Laser-Based Simultaneous Localization and Mapping Method. Ph.D. Thesis, Zhejiang University, Hangzhou, China, 2021. [Google Scholar]
  48. De la Puente, P.; Rodriguez-Losada, D. Feature based graph-SLAM in structured environments. Auton. Robot. 2014, 37, 243–260. [Google Scholar] [CrossRef]
  49. De la Puente, P.; Rodriguez-Losada, D. Feature based graph SLAM with high level representation using rectangles. Robot. Auton. Syst. 2015, 63, 80–88. [Google Scholar] [CrossRef]
  50. Chen, L.; Peng, C. A Robust 2D-SLAM Technology with Environmental Variation Adaptability. IEEE Sens. J. 2019, 19, 11475–11491. [Google Scholar]
  51. Zhao, J.; Zhao, L.; Huang, S.; Wang, Y. 2D Laser SLAM with General Features Represented by Implicit Functions. IEEE Robot. Autom. Lett. 2020, 5, 4329–4336. [Google Scholar]
  52. Zhao, J.; Li, T.; Yang, T.; Zhao, L.; Huang, S. 2D Laser SLAM With Closed Shape Features: Fourier Series Parameterization and Submap Joining. IEEE Robot. Autom. Lett. 2021, 6, 1527–1534. [Google Scholar]
  53. Jia, S.; Liu, C.; Wu, H.; Zeng, D.; Ai, M. A cross-correction LiDAR SLAM method for high-accuracy 2D mapping of problematic scenario. ISPRS J. Photogramm. Remote Sens. 2021, 171, 367–384. [Google Scholar]
  54. Zhang, H.; Chen, N.; Dai, Z.; Fan, G. A SLAM Localization Algorithm Based on Multi-level Data Fusion. Robot 2021, 43, 641–652. [Google Scholar]
  55. Zhou, H.; Hu, Z.; Liu, S.; Khan, S. Efficient 2D Graph SLAM for Sparse Sensing. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 6404–6411. [Google Scholar]
  56. Granstrom, K.; Callmer, J.; Ramos, F.; Nieto, J. Learning to detect loop closure from range data. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 15–22. [Google Scholar]
  57. Li, J.; Zhan, H.; Chen, B.M.; Reid, I.; Lee, G.H. Deep learning for 2D scan matching and loop closure. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 763–768. [Google Scholar]
  58. Spampinato, G.; Bruna, A.; Guarneri, I.; Giacalone, D. Deep Learning Localization with 2D Range Scanner. In Proceedings of the 2021 7th International Conference on Automation, Robotics and Applications (ICARA), Prague, Czech Republic, 4–6 February 2021; pp. 206–210. [Google Scholar]
  59. Valente, M.; Joly, C.; De La Fortelle, A. An LSTM Network for Real-Time Odometry Estimation. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1434–1440. [Google Scholar]
  60. Valente, M.; Joly, C.; De La Fortelle, A. Deep Sensor Fusion for Real-Time Odometry Estimation. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macao, China, 4–8 November 2019; pp. 6679–6685. [Google Scholar]
  61. Li, C.; Wang, S.; Zhuang, Y.; Yan, F. Deep Sensor Fusion Between 2D Laser Scanner and IMU for Mobile Robot Localization. IEEE Sens. J. 2021, 21, 8501–8509. [Google Scholar]
  62. Nashed, S.B. Laser2Vec: Similarity-based Retrieval for Robotic Perception Data. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24–30 October 2020; pp. 10657–10662. [Google Scholar]
  63. Jeong, H.; Lee, H. CNN-Based Fault Detection of Scan Matching for Accurate SLAM in Dynamic Environments. Sensors 2023, 23, 2940. [Google Scholar] [CrossRef]
  64. Huang, S.; Huang, H.Z.; Zeng, Q. A Robot Localization Method in Indoor Dynamic Environment. J. Univ. Electron. Sci. Technol. China 2021, 50, 382–390. [Google Scholar]
  65. Boniardi, F.; Caselitz, T.; Kümmerle, R.; Burgard, W. A pose graph-based localization system for long-term navigation in CAD floor plans. Robot. Auton. Syst. 2019, 112, 84–97. [Google Scholar] [CrossRef]
  66. Lazaro, M.T.; Capobianco, R.; Grisetti, G. Efficient Long-term Mapping in Dynamic Environments. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 153–160. [Google Scholar]
  67. Ye, H.; Chen, G.; Chen, W.; He, L.; Guan, Y.; Zhang, H. Mapping While Following: 2D LiDAR SLAM in Indoor Dynamic Environments with a Person Tracker. In Proceedings of the 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, China, 26–31 December 2021; pp. 826–832. [Google Scholar]
  68. Leigh, A.; Pineau, J.; Olmedo, N.; Zhang, H. Person tracking and following with 2d laser scanners. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 726–733. [Google Scholar]
  69. Arras, K.O.; Mozos, O.M.; Burgard, W. Using Boosted Features for the detection of People in 2D Range Data. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Rome, Italy, 10–14 April 2007; p. 3402. [Google Scholar]
  70. Konstantinova, P.; Udvarev, A.; Semerdjiev, T. A study of a target tracking algorithm using global nearest neighbor approach. In Proceedings of the International Conference on Computer Systems and Technologies (CompSysTech03), Rousse, Bulgaria, 19–20 June 2003; pp. 290–295. [Google Scholar]
  71. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar]
  72. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar]
  73. Sadhasivam, S.; Prakash, G.; Ranjith Kumar, G.; Nazeeya Anjum, N. Dynamic Mapping in Confined Spaces: Robotic SLAM with Human Detection and Path Planning. In Proceedings of the 2023 2nd International Conference on Automation, Computing and Renewable Systems (ICACRS), Pudukkottai, India, 11–13 December 2023; pp. 228–234. [Google Scholar]
  74. Zhao, M.; Guo, X.; Song, L.; Qin, B.; Shi, X.; Lee, G.H.; Sun, G. A General Framework for Lifelong Localization and Mapping in Changing Environment. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 3305–3312. [Google Scholar]
  75. Stefanini, E.; Ciancolini, E.; Settimi, A.; Pallottino, L. Efficient 2D LIDAR-Based Map Updating for Long-Term Operations in Dynamic Environments. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 832–839. [Google Scholar]
  76. Stefanini, E.; Ciancolini, E.; Settimi, A.; Pallottino, L. Safe and Robust Map Updating for Long-Term Operations in Dynamic Environments. Sensors 2023, 23, 6066. [Google Scholar] [CrossRef]
  77. Zhao, S.; Zhao, J.; Zhang, M.; Yue, W. Evaluation Method of Change Degree of Dynamic Scene for Mobile Robots Based on Area Ratio and SLAM Algorithm. In Proceedings of the 2022 IEEE 4th International Conference on Civil Aviation Safety and Information Technology (ICCASIT), Dali, China, 12–14 October 2022; pp. 1372–1377. [Google Scholar]
  78. Narayana, M.; Kolling, A.; Nardelli, L.; Fong, P. Lifelong update of semantic maps in dynamic environments. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 6164–6171. [Google Scholar]
  79. Kurz, G.; Holoch, M.; Biber, P. Geometry-based Graph Pruning for Lifelong SLAM. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 3313–3320. [Google Scholar]
  80. Brazil, G.; Kumar, A.; Straub, J.; Ravi, N.; Johnson, J.; Gkioxari, G. Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 13154–13164. [Google Scholar]
  81. Zimmerman, N.; Sodano, M.; Marks, E.; Behley, J.; Stachniss, C. Constructing Metric-Semantic Maps Using Floor Plan Priors for Long-Term Indoor Localization. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 1366–1372. [Google Scholar]
  82. Debeunne, C.; Vivet, D. A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping. Sensors 2020, 20, 2068. [Google Scholar] [CrossRef] [PubMed]
  83. Luo, R.C.; Lai, C.C. Multisensor Fusion-Based Concurrent Environment Mapping and Moving Object Detection for Intelligent Service Robotics. IEEE Trans. Ind. Electron. 2014, 61, 4043–4051. [Google Scholar] [CrossRef]
  84. Zhang, M.; Tang, D.; Liu, C.; Xu, X.; Tan, Z. A LiDAR and camera fusion-based approach to mapping and navigation. In Proceedings of the 2021 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; pp. 4163–4168. [Google Scholar]
  85. Kamarudin, K.; Mamduh, S.; Yeon, A.; Visvanathan, R.; Shakaff, A.; Zakaria, A.; Kamarudin, L.; Rahim, N. Improving performance of 2D SLAM methods by complementing Kinect with laser scanner. In Proceedings of the 2015 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Langkawi, Malaysia, 18–20 October 2015; pp. 278–283. [Google Scholar]
  86. Li, Y.; Zhun, F.; Guijie, Z.; Wenji, L.; Chong, L.; Yupeng, W.; Honghui, X. A SLAM with simultaneous construction of 2D and 3D maps based on Rao-Blackwellized particle filter. In Proceedings of the 2018 Tenth International Conference on Advanced Computational Intelligence (ICACI), Xiamen, China, 29–31 March 2018; pp. 374–378. [Google Scholar]
  87. Feng, Z.; Kuang, L. A 2.5D SLAM Algorithm Based on the Fusion of 2D LiDAR and RGB-D Camera. In Proceedings of the 2024 3rd International Symposium on Sensor Technology and Control (ISSTC), Zhuhai, China, 25–27 October 2024; pp. 194–197. [Google Scholar] [CrossRef]
  88. Meng, X.; Gong, X.; Li, K.; Shi, Y.; Ma, Y. Research on 2D Laser SLAM Algorithm Fusion of Semantic Information. In Proceedings of the 2023 9th International Conference on Mechanical and Electronics Engineering (ICMEE), Xi’an, China, 17–19 November 2023; pp. 356–361. [Google Scholar]
  89. Xu, Y.; Ou, Y.; Xu, T. SLAM of Robot based on the Fusion of Vision and LIDAR. In Proceedings of the 2018 IEEE International Conference on Cyborg and Bionic Systems (CBS), Shenzhen, China, 25–27 October 2018; pp. 121–126. [Google Scholar] [CrossRef]
  90. Li, Y.; Li, S.; Tan, Y.; Hu, J. SLAM of Mobile Robot Based on the Joint Optimization of LiDAR and Camera. In Proceedings of the 2022 7th International Conference on Control and Robotics Engineering (ICCRE), Beijing, China, 15–17 April 2022; pp. 21–25. [Google Scholar]
  91. Oh, S.H.; Hwan, K.D.; Lim, H.T. 2.5D SLAM Algorithm with Novel Data Fusion Method Between 2D-Lidar and Camera. In Proceedings of the 2023 23rd International Conference on Control, Automation and Systems (ICCAS), Yeosu, Republic of Korea, 17–20 October 2023; pp. 1904–1907. [Google Scholar]
  92. Mu, L.; Yao, P.; Zheng, Y.; Chen, K.; Wang, F.; Qi, N. Research on SLAM Algorithm of Mobile Robot Based on the Fusion of 2D LiDAR and Depth Camera. IEEE Access 2020, 8, 157628–157642. [Google Scholar] [CrossRef]
  93. Chen, B.; Li, P.; Li, H. Fast Loop Closures Detection Method for Geomagnetic Signal and Lidar Fusion. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference, Antwerp, Belgium, 25–28 May 2020; pp. 1–6. [Google Scholar]
  94. Xu, X.; Lu, S.; Wu, J.; Lu, H.; Zhu, Q.; Liao, Y.; Xiong, R.; Wang, Y. RING++: Roto-Translation Invariant Gram for Global Localization on a Sparse Scan Map. IEEE Trans. Robot. 2023, 39, 4616–4635. [Google Scholar]
  95. Liang, X.; Chen, H.; Li, Y.; Liu, Y. Visual laser-SLAM in large-scale indoor environments. In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, 3–7 December 2016; pp. 19–24. [Google Scholar]
  96. Zhang, J.; Lei, J.; Zhao, Y.; Yang, K.; Liu, L.; Peng, X.; He, K. The Robot Fusion Mapping Method Applied in Indoor Poultry Farming Environment. In Proceedings of the 2023 2nd International Conference on Artificial Intelligence, Human-Computer Interaction and Robotics (AIHCIR), Tianjin, China, 8–10 December 2023; pp. 597–602. [Google Scholar]
  97. Sarlin, P.; Cadena, C.; Siegwart, R.; Dymczyk, M. From coarse to fine: Robust hierarchical localization at large scale. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12708–12717. [Google Scholar]
  98. Zhou, Z.; Guo, C.; Pan, Y.; Li, X.; Jiang, W. A 2-D LiDAR-SLAM Algorithm for Indoor Similar Environment with Deep Visual Loop Closure. IEEE Sens. J. 2023, 23, 14650–14661. [Google Scholar]
  99. Qin, H.; Huang, M.; Cao, J.; Zhang, X. Loop closure detection in SLAM by combining visual CNN features and submaps. In Proceedings of the 2018 4th International Conference on Control, Automation and Robotics (ICCAR), Auckland, New Zealand, 20–23 April 2018; pp. 426–430. [Google Scholar]
  100. Lu, G. SLAM Based on Camera-2D LiDAR Fusion. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; pp. 16818–16825. [Google Scholar]
Figure 1. Two-dimensional Lidar SLAM framework.
Figure 2. Triangular distance measurement principle.
Figure 3. TOF measurement principle.
Figure 4. Development of typical SLAM algorithm for 2D Lidar.
Figure 5. Dynamic Bayesian network.
Figure 6. Map constructed using the Gmapping algorithm.
Figure 7. Graph diagram.
Figure 8. Cartographer map of the second floor of the Deutsches Museum.
Figure 9. The deep network for scan matching and loop-closure detection.
Figure 10. Comparison of ConvNet with Hector SLAM, Cartographer, and Hector SLEM: (a) the loop-closure simulation; (b) system hardware; (c) the localization accuracy. Reproduced with permission from Ref. [57]. Copyright 2017, IEEE Publications.
Figure 11. Cloud merging: (a) point cloud merging (red: reference cloud, green: current cloud); (b) before cloud merging (red: oldest session, green: current session); (c) after cloud merging. Reproduced with permission from Ref. [66]. Copyright 2018, IEEE Publications.
Figure 12. Map comparison: (a) local map built with the original point clouds; (b) local map built with the optimized point clouds. Reproduced with permission from Ref. [66]. Copyright 2018, IEEE Publications.
Figure 13. Examples of environment change and the corresponding map-updating experiments. The results in (a–c) were collected from a market, and (d–f) are from a garage. Reproduced with permission from Ref. [74]. Copyright 2023, IEEE Publications.
Figure 14. Structure of FSLAM system.
Figure 15. Mapping results: (a) mapping results in simulation environments; (b) mapping results in real environments. Reproduced with permission from Ref. [98]. Copyright 2023, IEEE Publications.
Figure 16. The model for generating fused descriptors with Lidar images and RGB images.
Figure 17. The model for generating fused descriptors with Lidar images and RGB images. Reproduced with permission from Ref. [100]. Copyright 2018, IEEE Publications.
Figure 18. Experimental platform.
Figure 19. Experimental building exterior view.
Figure 20. Leg recognition in corridor on the first floor: (a) leg recognition 1; (b) leg recognition 2.
Figure 21. Comparison of building drawings on the third floor: (a) original point cloud map; (b) revised point cloud map.
Figure 22. Comparison of local images of removing pedestrians: (a) local original map 1; (b) local modified map 1; (c) local original map 2; (d) local modified map 2.
Figure 23. Algorithm comparison: (a) Hector map; (b) Gmapping map; (c) improved Cartographer map.
Figure 24. General algorithm framework.
Table 1. List of mainstream 2D Lidar.
Manufacturer | Product Model | Range | Range Accuracy | Sampling Frequency | Angular Resolution | Method
Hokuyo | UST-20LX 2D | 20 m | ±40 mm | — | 0.25° | TOF
SICK | PicoScan120 | 30 m | ±20 mm | 20 K | 0.10° | TOF
Pepperl+Fuchs | OMD30M-R2000-B23-V1V1D-HD | 30 m | ±25 mm | 84 K | 0.042° | TOF
SLAMTEC | RPLIDAR S3 | 40 m | ±30 mm | 32 K | 0.1125° | SL-TOF
SLAMTEC | RPLIDAR A3 | 25 m | Range × 1% (≤3 m); Range × 2% (3–5 m); Range × 2.5% (5–25 m) | 16 K | 0.225° | Triangulation ranging
LSLIDAR | M10P | 25 m | ±3 cm | 12 K | 0.22° | TOF
LSLIDAR | N301-60 | 60 m | ±1 cm | 5 K | 0.09° | TOF
OLEI | LR-1BS5H | 25 m | ±2 cm | 25 K | 0.225° | TOF
richbeam | LoraBeam 1L | 40 m | 2 cm | 10–30 K | 0.08–0.24° | DTOF
Table 2. Characteristics of 2D Lidar SLAM algorithms.
Name | Characteristic
Filter-based SLAM
EKF SLAM | Builds a sparse feature-based map, but with high computational cost and limited robustness.
FastSLAM 1.0/2.0 | The first particle-filter SLAM algorithms and the earliest to output a grid map in real time, but suffer from particle degeneracy and lack loop detection.
Gmapping | Particle-filter based; mitigates particle degeneracy but relies on odometry; lacks loop detection and is only suitable for small-scale mapping.
CoreSLAM | Particle-filter based; consumes fewer resources but incurs larger errors.
Scan matching-based SLAM
Hector SLAM | Based on Gauss–Newton optimization; estimates the pose in real time without an odometer. Sensitive to the initial value, lacks loop detection, accuracy is hard to guarantee, suited to aerial or flat-road environments, and drifts under fast rotation.
L-L ICP SLAM | Uses line-feature matching but is difficult to apply in indoor corridor environments.
Fourier transform-based matching SLAM | Innovatively uses the Fourier transform to achieve scan matching.
Graph optimization-based SLAM
KartoSLAM | The first open-source graph-optimization SLAM, with long optimization times and high complexity.
Cartographer | A classic graph-optimization SLAM applicable to 2D/3D Lidar; the front end adopts correlative scan matching (CSM) and gradient optimization, and the back end adopts graph optimization with accelerated loop-closure detection. Strong real-time performance, applicable to large scenes.
LagoSLAM | Linear approximation of graph optimization without initial guesses.
ICP-SLAM | Builds a multi-resolution search tree with multi-density point cloud matching, reducing complexity.
GP-SLAM | Applicable to small- and medium-scale map construction; the resulting map cannot be used directly for navigation.
Deep learning-based SLAM
CNN SLAM | Deep learning methods can adapt to various situations and improve accuracy and robustness, but they require large amounts of data, and scene changes reduce detection accuracy.
Table 3. List of features for human leg detection.
No. | Feature | No. | Feature | No. | Feature
1 | Number of points | 6 | Width | 11 | Average distance from median
2 | Standard deviation | 7 | Length | 12 | Occluded (Boolean)
3 | Boundary regularity | 8 | Circularity | 13 | Radius of best-fit circle
4 | Boundary length | 9 | Linearity | 14 | Inscribed angular variable
5 | Mean angular difference | 10 | Mean curvature | 15 | Distance from laser scanner
Table 4. Different 2D Lidar SLAM algorithms on the Intel dataset.
Performance Parameter | Gmapping [21] | Cartographer [41] | ICP-SLAM [43] | 2D Graph SLAM [55] | Reference [42]
Absolute translation error/m | 0.031 | 0.0229 | 0.0181 | 0.0848 | 0.1095
Absolute rotation error/° | 1.3 | 0.453 | 0.272 | 2.319 | 0.0281
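For clarity, the absolute translation and rotation errors reported in Table 4 compare each estimated pose against its ground-truth counterpart. The following minimal sketch (not the evaluation code of the cited works; it assumes the estimated and ground-truth 2D pose sequences are already time-aligned) illustrates how such errors can be computed:

```python
import numpy as np

def absolute_pose_errors(est, gt):
    """Mean absolute translation (m) and rotation (deg) errors between
    time-aligned 2D pose sequences, each given as (x, y, theta) rows."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)

    # Translation error: Euclidean distance between estimated and true positions.
    trans_err = np.linalg.norm(est[:, :2] - gt[:, :2], axis=1)

    # Rotation error: heading difference wrapped to [-pi, pi].
    dtheta = est[:, 2] - gt[:, 2]
    rot_err = np.abs(np.arctan2(np.sin(dtheta), np.cos(dtheta)))

    return trans_err.mean(), np.degrees(rot_err).mean()

# Example with synthetic poses (x, y, theta); values are illustrative only.
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1), (2.0, 0.5, 0.2)]
est = [(0.02, -0.01, 0.01), (1.03, 0.02, 0.12), (1.98, 0.52, 0.19)]
print(absolute_pose_errors(est, gt))
```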