Article

An Effective LiDAR-Inertial SLAM-Based Map Construction Method for Outdoor Environments

State Key Laboratory of Robotics and Systems (HIT), Department of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(16), 3099; https://doi.org/10.3390/rs16163099
Submission received: 2 July 2024 / Revised: 15 August 2024 / Accepted: 21 August 2024 / Published: 22 August 2024

Abstract

SLAM (simultaneous localization and mapping) is essential for accurate positioning and reasonable path planning of outdoor mobile robots. LiDAR SLAM is currently the dominant method for creating outdoor environment maps. However, mainstream LiDAR SLAM algorithms rely on a single, simplistic point cloud feature extraction process at the front end, and most back-end loop closure detection is based on the RNN (radius nearest neighbor) search. This results in low mapping accuracy and poor real-time performance. To solve this problem, we integrated point cloud segmentation and Scan Context loop closure detection into the advanced LiDAR-inertial SLAM algorithm (LIO-SAM). First, we employed range images to extract ground points from raw LiDAR data, then applied the BFS (breadth-first search) algorithm to cluster the non-ground points and downsampled the outliers. Next, we calculated the curvature to extract planar points from the ground points and corner points from the clustered, segmented non-ground points. Finally, we used the Scan Context method for loop closure detection to improve back-end mapping speed and reduce odometry drift. Experiments on the KITTI dataset verified the advantages of the proposed method, and further tests on the Walking, Park, and other datasets comprehensively confirmed its accuracy and real-time performance.

1. Introduction

Map construction has been a primary research focus in the area of autonomous navigation for outdoor mobile robots [1]. Map construction approaches are mainly categorized into visual SLAM [2] and LiDAR SLAM [3]. The latter has gradually become the mainstream method of outdoor environment map construction because it uses LiDAR sensors with the advantages of long detection distance, high data accuracy, and no interference from light changes [4,5]. The LiDAR SLAM method is essentially separated into two parts: front end and back end [6]. The front-end component is primarily responsible for LiDAR data processing and point cloud feature extraction, which includes preprocessing, filtering, feature point extraction, and so on. The front end’s purpose is to improve the LiDAR odometry’s accuracy, as well as the map stability. ICP (iterative closest point) [7] and NDT (normal distribution transform) [8] are two widely used front-end methods for aligning point clouds. The back-end component is primarily in charge of optimizing the positioning and maps, which includes position optimization, map update, and loop closure detection. The back end’s purpose is to increase both the consistency and the accuracy of positioning and maps. Back-end techniques commonly utilized include pose graph optimization algorithms such as GTSAM [9], Ceres Solver [10], and so on, while loop closure detection algorithms typically use radius nearest neighbor (RNN) [11].
Currently, even with the mainstream state-of-the-art LIO-SAM algorithm as the basis of research, both the front end and the back end still leave room for improvement: (i) the front end should extract stable and reliable feature points as far as possible, so as to ensure the alignment accuracy of the LiDAR odometry and the stability of the map, and (ii) the back end should detect point cloud map loop closures as quickly and accurately as possible to avoid problems such as LiDAR odometry drift in large-scale scenarios.
As shown in Figure 1a, when the feature point cloud is extracted directly by curvature calculation, the result is very rough and cluttered. Moreover, because outdoor scenes are large in scale, traditional loop closure detection methods based on odometry-measured distances (e.g., the RNN algorithm) are prone to cumulative drift and loop closure failures. As shown in Figure 1b, when the RNN algorithm is used for loop closure detection in a large-scale scene, the odometry drifts and loop closure fails. In complex outdoor environments, relying solely on curvature for point cloud feature extraction is clearly too simplistic. The point cloud data acquired by LiDAR are highly redundant, containing unnecessary maximum-range points, a large number of unreliable outlier points, and so on; extracting features from and aligning these points can easily degrade the odometry accuracy. Together, these factors lead to the low accuracy and poor reliability of LiDAR SLAM outdoor mapping results, which causes unnecessary trouble for subsequent robot re-localization and path planning. Therefore, there is an urgent need to optimize the front-end point cloud feature extraction module and the back-end loop closure detection module of existing LiDAR SLAM, to further improve the accuracy and reliability of the map construction results, and to provide a guarantee for the subsequent autonomous navigation of mobile robots.
The LIO-SAM algorithm determines the LiDAR pose by aligning two neighboring frames of the point cloud, a method known as scan-to-scan. It is important to note that scan-to-scan LiDAR odometry is less reliable because too little information is used, resulting in lower pose accuracy. Therefore, we refined the front-end feature extraction module of the LIO-SAM algorithm: we encoded each frame of the LiDAR point cloud as a range image, extracted ground points by combining the distance from each laser point to the LiDAR center with the horizontal angle, and clustered the non-ground point cloud. Points that failed to cluster (outlier points) could not all be discarded; otherwise, the constructed maps would be too sparse and the environment would be represented by too few features. Instead, we downsampled these outlier points by retaining just one column out of every five in the range image so that they could still take part in the subsequent feature extraction. Next, planar points were extracted from the ground points and corner points from the clustered, segmented non-ground points using the curvature. After clustering segmentation, the non-ground points consisted of both the downsampled outlier points and the successfully clustered point cloud. Because of LiDAR measurement noise and other factors, the measured value is never exactly equal to the true value, so a small error is introduced each time the LiDAR odometry is solved; these errors accumulate over time, a phenomenon known as cumulative drift. Cumulative drift can cause the localization of the LiDAR SLAM system to deviate from the true value, rendering the generated maps unusable. Therefore, loop closure detection is needed by the SLAM system to compensate for cumulative drift. To address this problem, we adopt the Scan Context scene recognition method for loop closure detection. This method enhances the real-time mapping of the LIO-SAM algorithm's back end and enables the algorithm to detect loop closures even when the odometry drift is large.
The overall framework of the algorithm proposed in this paper on the basis of the LIO-SAM algorithm is shown in Figure 2. The green module is the improved module proposed in this paper, including front-end point cloud clustering segmentation, feature extraction, and back-end loop closure detection by using the Scan Context algorithm. Point cloud clustering segmentation includes ground point extraction and non-ground point clustering, etc. The main innovations of this study are as follows:
(1)
We projected the original LiDAR point cloud as a range image and performed ground point extraction on the range image. This method effectively reduced the dimensionality of the point cloud data, improved the efficiency of point cloud processing, and could accurately identify the ground points.
(2)
We used the BFS algorithm to cluster and segment the non-ground points and found the point cloud with the same label by traversing layer by layer, which improved the clustering speed of the point cloud, avoided the larger computation brought by the global search, and established the foundation for the lightweight front end of the algorithm.
(3)
We introduced the Scan Context loop closure detection method into the back end of the LIO-SAM algorithm, which effectively improved the accuracy and robustness of the back-end loop closure detection.

2. Related Work

LiDAR point cloud feature extraction methods have advanced significantly in the past few years. H. Hoppe et al. (1992) represented the curvature and orientation information of a point cloud by calculating the normal vector of each LiDAR point [12]. A. Golovinskiy et al. (2009) proposed a shape-based point cloud recognition method that used shape descriptors to represent objects in a point cloud and machine learning algorithms to classify them [13]. B. Jian et al. (2011) used a Gaussian mixture model to represent the point set, which was aligned and filtered based on the distance between the Gaussian models [14]. These methods provided good research ideas for the point cloud feature extraction modules of subsequent LiDAR SLAM algorithms. Zhang J et al. (2014) published the LOAM algorithm, which uses LiDAR as the odometry source and put LiDAR point cloud feature extraction into practice within a LiDAR SLAM algorithm [15]. The LOAM algorithm extracts feature points, such as corner points and planar points, from LiDAR point cloud data using curvature and then aligns the LiDAR point clouds by creating constraints between the feature points to achieve map construction and localization. Furthermore, the LOAM algorithm compensates for the motion of each feature point by calculating the relative pose between the start and end moments of the LiDAR scan, which solves the problem of point cloud distortion caused by LiDAR motion. However, the LOAM algorithm lacks a loop closure detection module, so it drifts easily and accumulates increasing error in large-scale scenes. Zhang J et al. (2015) proposed the V-LOAM algorithm to improve the robustness of LiDAR SLAM systems in the absence of visual features and in challenging motion scenarios [16]. The V-LOAM algorithm uses high-frequency but low-accuracy visual odometry to estimate robot motion and then refines the motion estimate and point cloud alignment using LiDAR scan matching; it is highly robust under high-speed motion and illumination changes, but its effectiveness for outdoor mapping is only moderate. It is worth mentioning that LiDAR point cloud clustering segmentation methods based on deep learning have developed significantly in recent years. The PointNet series of methods captures the local structure of the point cloud with point-wise networks that learn deep point set features for point cloud clustering and segmentation [17,18]. The PointNet method can realize point cloud segmentation quickly and efficiently, but it cannot adequately capture the structural information in ground point cloud data and is sensitive to noise. The RandLA-Net method can directly segment the point cloud point by point for clustering, saving computational resources and memory through random point sampling [19]. RandLA-Net can perform point cloud segmentation efficiently, but during ground point cloud segmentation it tends to lose part of the ground features because of its local feature aggregation module, and it adapts poorly to sparse or dense point clouds.
Alongside the development of LiDAR point cloud feature extraction methods, SLAM back-end loop closure detection methods have likewise been widely studied and have progressed. G. Grisetti et al. (2010) proposed the GraphSLAM method, which uses a graph to represent the robot's trajectory and a map of the environment and estimates the position and pose of the loop closure by optimizing the structure and edges of the graph [20]. Latif Y et al. (2013) proposed a method using a probabilistic graphical model that allows loop closure detection across different time steps [21]. The method estimates the position and attitude of the loop closure by minimizing the error between the loop closure constraints and the other constraints, thus improving the accuracy and robustness of the SLAM system. Shan Tixiao et al. (2018) formally integrated a loop closure detection module into the LiDAR SLAM back end and proposed the LeGO-LOAM algorithm [22]. The LeGO-LOAM algorithm enhances the LOAM framework by adding a loop closure detection and optimization module based on the RNN algorithm.
Although the LiDAR SLAM algorithm framework has improved, using 3D LiDAR alone for mapping and localization inevitably leads to mapping drift, localization failure, and similar problems. To solve this problem, researchers introduced the LiDAR-inertial SLAM framework, which exploits the complementary characteristics of LiDAR and IMU sensing [23]. Ye Haoyang et al. (2019) proposed the LIO-mapping algorithm, which tightly couples LiDAR and IMU [24]. The algorithm minimizes the cost functions from the LiDAR and the IMU simultaneously to compensate for the insufficient mapping and localization of a single LiDAR, and it provides more accurate state estimation for the LiDAR odometry. Shibo Zhao et al. (2019) proposed the LIOM algorithm [25], which uses a CNN to remove dynamic targets from each LiDAR scan, a Kalman filter to fuse the LiDAR and IMU data, and a high-frequency output for LiDAR pose estimation. Shan Tixiao et al. (2020) proposed the LIO-SAM algorithm, which is more advanced and widely used today [26], building on research into the LeGO-LOAM algorithm. The LIO-SAM algorithm fuses multi-sensor data such as IMU and LiDAR; builds observation models such as absolute measurements, relative measurements, and loop closure detections into factors; and integrates these factors into a complete factor graph for SLAM back-end optimization, which greatly improves LiDAR SLAM mapping accuracy and robustness compared with other algorithms. However, the front-end point cloud feature extraction of the LIO-SAM algorithm is relatively simplistic and coarse, which costs the algorithm its lightweight character, and there is still room to improve the mapping accuracy. In addition, the LIO-SAM algorithm retains the loop closure detection module of the LeGO-LOAM algorithm, so its LiDAR odometry is prone to cumulative drift in large-scale scenarios.
Aiming at the problems of mainstream LiDAR SLAM algorithms, namely simplistic and unreliable point cloud feature extraction, time-consuming back-end loop closure detection, and easily drifting odometry, this paper proposes a LiDAR-inertial SLAM method using point cloud segmentation and Scan Context based on the state-of-the-art LIO-SAM algorithm. First, we projected and encoded each frame of the LiDAR point cloud to obtain a range image. Then, in order to extract reliable point cloud features more accurately, we extracted ground points based on the range images. It is worth mentioning that this method extracts ground points better than the RANSAC algorithm in outdoor scenes [27,28]: outdoor scenes are large in scale and the amount of point cloud data acquired by LiDAR is large, so when the RANSAC method is used for ground point cloud segmentation, the computation is inefficient and time-consuming, easily falls into local optima, and rarely yields ideal segmentation results. Next, we used the BFS method to cluster and label the non-ground points [29] and downsampled the points that failed to cluster (i.e., the outlier points). By calculating the curvature, the point cloud clusters with the largest and smallest smoothness were selected from the point clouds under different labels for subsequent point cloud map alignment, which reduced the number of point cloud matches and kept the algorithm lightweight. Finally, we improved the speed and capability of the algorithm's back-end loop closure detection through the Scan Context loop closure detection algorithm [30] and mitigated the odometry drift in large-scale scenarios. To validate the effectiveness of our proposed method, we tested it on several real datasets such as KITTI, Walking, and Park; the results showed that our method achieved better mapping accuracy and real-time performance than other state-of-the-art LiDAR SLAM algorithms.
The above research methods provide strong support and a strong reference for the improvement of the LIO-SAM algorithm, but the problems related to the front-end feature extraction and back-end loop closure of the algorithm are still not effectively solved. Therefore, based on the research of the above methods, this paper proposes a LiDAR-inertial SLAM method based on point cloud segmentation and Scan Context, so as to provide an effective solution for the construction of outdoor environment maps.

3. Point Cloud Clustering Segmentation and Feature Extraction

In this section, we propose a point cloud clustering segmentation method based on range image analysis, which uses the BFS algorithm to achieve ground point segmentation and non-ground point clustering. Figure 3 shows the method's overall framework. First, we converted the current LiDAR frame point cloud into a range image, assigning each LiDAR point a 2D image coordinate consisting of its row and column. Then, the ground points were segmented, and the non-ground points were clustered. Points for which clustering failed (outlier points) also reflected environmental features and made up a large proportion of the cloud, so they were downsampled rather than discarded. Finally, we extracted planar points from the ground points and corner points from the non-ground points. The non-ground points consisted mainly of the downsampled outlier points and the successfully clustered points.

3.1. Convert to Range Image

To significantly increase the efficiency of point cloud data processing, we employed a range image conversion method to transform the current frame of the LiDAR point cloud into a 2D image [31]. We illustrate this conversion with the Velodyne-16 LiDAR used in this work. The Velodyne-16 LiDAR (Velodyne, San Jose, CA, USA) has an angular resolution of 0.2° in the horizontal direction (θ_hd) and 2° in the vertical direction (θ_vd). Figure 4 shows the specific approach for transforming the current frame of the LiDAR point cloud into a single range image, from which the coordinates (row and column numbers) of each LiDAR point could be determined using Equations (1)–(4). After the conversion, the current frame of the LiDAR point cloud was turned into a range image with a resolution of 16 × 1800, with each pixel value r representing the target's measured distance from the LiDAR center. Equations (1)–(4) are specified as follows:
$\theta_h = \arctan \frac{y}{x}$ (1)

$\theta_v = \arctan \frac{z}{\sqrt{x^2 + y^2}}$ (2)

$\mathrm{row} = \frac{\theta_v - \theta_{v\min}}{\theta_{vd}}$ (3)

$\mathrm{column} = \frac{\theta_h + \pi}{\theta_{hd}}$ (4)
P(x, y, z) represents the LiDAR point P with coordinates (x, y, z) in the LiDAR coordinate system. θ_h is the angle between the laser beam's projection onto the xy plane and the yz plane, θ_v is the angle between the laser beam and the xy plane, and (row, column) are the row and column numbers of the range image pixel to which the LiDAR point is mapped. θ_vmin represents the minimum value of the laser beam's vertical field of view.
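For illustration, the following minimal Python sketch (not part of the paper's C++ implementation) maps one LiDAR point to its range-image cell according to Equations (1)–(4). The 0.2° horizontal and 2° vertical resolutions follow the text; the −15° lower bound of the vertical field of view is an assumption taken from the Velodyne-16 specification.

```python
import numpy as np

# Illustrative sketch only: project a point into the 16 x 1800 range image.
THETA_HD = np.deg2rad(0.2)       # horizontal angular resolution
THETA_VD = np.deg2rad(2.0)       # vertical angular resolution
THETA_V_MIN = np.deg2rad(-15.0)  # assumed lower bound of the vertical field of view
N_ROWS, N_COLS = 16, 1800

def point_to_pixel(x, y, z):
    """Return (row, column, range) of a LiDAR point in the range image."""
    theta_h = np.arctan2(y, x)                     # Equation (1)
    theta_v = np.arctan2(z, np.hypot(x, y))        # Equation (2)
    row = int((theta_v - THETA_V_MIN) / THETA_VD)  # Equation (3)
    col = int((theta_h + np.pi) / THETA_HD)        # Equation (4)
    r = np.sqrt(x * x + y * y + z * z)             # pixel value: range to the LiDAR center
    return row, min(col, N_COLS - 1), r

def build_range_image(points):
    """Fill a range image from an (N, 3) point cloud array."""
    img = np.zeros((N_ROWS, N_COLS))
    for x, y, z in points:
        row, col, r = point_to_pixel(x, y, z)
        if 0 <= row < N_ROWS:
            img[row, col] = r
    return img
```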

3.2. Ground Point Extraction

In this study, the 3D LiDAR sensor was mounted horizontally, and the 16 laser beams were numbered 0–15 sequentially from bottom to top, so the extraction of ground points was limited to the 8 laser beams numbered 0–7. Figure 5 shows the ground point extraction procedure, in which two adjacent laser beams intersect the ground at points A and B, with coordinates (x_a, y_a, z_a) and (x_b, y_b, z_b), respectively. The vertical height difference between ground points A and B is Δh, the horizontal plane distance is d_AB, and the horizontal angle is α_AB. We used Equations (5)–(7) to calculate the horizontal angle α_AB. If the horizontal angle α_AB < 10°, then points A and B could be considered to lie on the same horizontal plane, i.e., points A and B were ground points. Equations (5)–(7) are specified as follows:
$d_{AB} = \sqrt{\left( x_a - x_b \right)^2 + \left( y_a - y_b \right)^2}$ (5)

$\Delta h = z_a - z_b$ (6)

$\alpha_{AB} = \arctan \frac{\Delta h}{d_{AB}}$ (7)
Figure 6 shows that the above method could successfully extract a large number of ground points from the LiDAR point cloud.
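A minimal sketch of this ground-point test (Equations (5)–(7)) is given below; it is illustrative Python rather than the paper's implementation, the 10° threshold follows the text, and the convention that an all-zero pixel means no return is an assumption.

```python
import numpy as np

def is_ground_pair(a, b, angle_thresh_deg=10.0):
    """Equations (5)-(7): points A and B are ground points if line AB is nearly horizontal."""
    xa, ya, za = a
    xb, yb, zb = b
    d_ab = np.hypot(xa - xb, ya - yb)                  # Equation (5): horizontal distance
    dh = za - zb                                       # Equation (6): height difference
    alpha_ab = np.degrees(np.arctan2(abs(dh), d_ab))   # Equation (7)
    return alpha_ab < angle_thresh_deg

def extract_ground_mask(xyz_image):
    """Mark ground pixels by testing vertically adjacent pixels on beams 0-7.

    xyz_image: (rows, cols, 3) array of Cartesian coordinates per range-image pixel;
    an all-zero pixel is assumed to mean no return.
    """
    rows, cols, _ = xyz_image.shape
    ground = np.zeros((rows, cols), dtype=bool)
    for c in range(cols):
        for r in range(min(7, rows - 1)):
            a, b = xyz_image[r, c], xyz_image[r + 1, c]
            if np.any(a) and np.any(b) and is_ground_pair(a, b):
                ground[r, c] = ground[r + 1, c] = True
    return ground
```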

3.3. Point Cloud Clustering Segment

First, we generated a labeled image of the same size as the range image and assigned class labels to the relevant point clouds in the range image. Then, we traversed the non-ground points to find a center point, followed by a search and angle calculation for the four neighborhood points above, below, to the left, and to the right of the center point, as shown in Figure 7a. If the angle was greater than the threshold, the center point and the neighborhood point were close to each other and belonged to the same class of point cloud. We added labels of the same class to the neighborhood points that met the requirement, as well as to the current point, at the appropriate positions in the labeled image, until the traversal for that label was completed. Finally, a non-ground point in the range image that did not yet have a class label was selected as a new center point, and the preceding steps were repeated until all non-ground points in the range image had class labels in the labeled image.
The process of calculating the angle used for the clustering decision is shown in Figure 7b. First, we calculated the distances from the center point and the neighborhood point to the LiDAR center; the larger of the two distances was recorded as d_1 and the smaller as d_2. We denoted α as the angle between the center point and the neighboring point; if the center point and the neighboring point were in the same laser beam, α = 0.2°, i.e., the angular resolution of the LiDAR in the horizontal direction. If the center point and the neighboring point were in two vertically adjacent laser beams, α = 2°, i.e., the angular resolution of the LiDAR in the vertical direction. Finally, we denoted the angle to be computed as β and used Equation (8) for the calculation:
$\beta = \arctan \frac{d_2 \sin \alpha}{d_1 - d_2 \cos \alpha}$ (8)
We can define an angle threshold th_β. If β > th_β, the center point and the neighboring point are close to each other, belong to the same class, and are given the same class (number) label. After extracting the ground points, the non-ground points were clustered and segmented using the above method to obtain successfully clustered points and outlier points, as shown in Figure 8.
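The sketch below illustrates one possible BFS clustering over the range image using the criterion of Equation (8); it is a simplified illustration (with an assumed 10° threshold and the 0.2°/2° resolutions stated above), not the paper's implementation, and it assumes ground pixels have already been removed (range 0).

```python
import numpy as np
from collections import deque

def bfs_cluster(range_img, beta_thresh_deg=10.0):
    """Label connected clusters in a range image with a breadth-first search."""
    rows, cols = range_img.shape
    labels = np.zeros((rows, cols), dtype=int)       # 0 = unlabeled / no return
    next_label = 1
    alpha_v, alpha_h = np.deg2rad(2.0), np.deg2rad(0.2)

    def same_cluster(d_a, d_b, alpha):
        d1, d2 = max(d_a, d_b), min(d_a, d_b)        # d1: farther point, d2: nearer point
        beta = np.arctan2(d2 * np.sin(alpha), d1 - d2 * np.cos(alpha))  # Equation (8)
        return beta > np.deg2rad(beta_thresh_deg)

    for r0 in range(rows):
        for c0 in range(cols):
            if range_img[r0, c0] == 0 or labels[r0, c0] != 0:
                continue
            labels[r0, c0] = next_label
            queue = deque([(r0, c0)])
            while queue:                              # breadth-first traversal
                r, c = queue.popleft()
                for dr, dc, alpha in ((1, 0, alpha_v), (-1, 0, alpha_v),
                                      (0, 1, alpha_h), (0, -1, alpha_h)):
                    rn, cn = r + dr, (c + dc) % cols  # columns wrap around 360 degrees
                    if not (0 <= rn < rows) or labels[rn, cn] != 0:
                        continue
                    if range_img[rn, cn] > 0 and same_cluster(
                            range_img[r, c], range_img[rn, cn], alpha):
                        labels[rn, cn] = next_label
                        queue.append((rn, cn))
            next_label += 1
    return labels
```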

3.4. Corner and Planar Point Extraction

When the curvature exceeded a certain threshold, we selected the current point as an edge (corner) feature point. When the curvature fell below a certain threshold, the current point was selected as a planar feature point. The specific steps were as follows: we traversed from the fifth point of the first laser beam to the fifth-from-last point of the last laser beam and, for the current point p_i, took the five points adjacent to it on each side of the same laser beam to calculate the curvature c_i. Specifically, the calculation was performed using Equation (9):
$c_i = \frac{1}{\left| S \right| \cdot \left\| p_{k,i} \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( p_{k,i} - p_{k,j} \right) \right\|$ (9)
where c_i denotes the curvature of the current point p_i, and S is the set consisting of the five points on each side of the current point p_i in the same laser beam. p_{k,i} represents the coordinates of the i-th LiDAR point in frame k of the LiDAR point cloud, and p_{k,j} represents the coordinates of a neighboring point of p_i in the set S in frame k.
After calculating the curvature, in order to extract the feature points uniformly, we divided the point cloud of each laser beam over one full revolution into six equal parts and then sorted the points of each part in ascending order of curvature. We set the curvature threshold as c_th. We took the 20 points with curvature values greater than c_th as corner feature points and up to four points with curvature values less than c_th as planar feature points, selected in order of curvature. Following clustering and segmentation, the returned ground and non-ground points were used to extract feature points via the procedure stated above. The results are shown in Figure 9.
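As an illustration of Equation (9) and the selection described above, the following sketch computes the curvature along one beam and picks corner and planar points per sector; the threshold value c_th = 0.1 is an assumed example, and applying the 20-corner/4-planar limits per sector is our reading of the text.

```python
import numpy as np

def curvature(beam_points, i, half_window=5):
    """Equation (9): mean offset of point i from its five neighbours on each side."""
    p_i = beam_points[i]
    nbrs = np.vstack((beam_points[i - half_window:i],
                      beam_points[i + 1:i + 1 + half_window]))
    return np.linalg.norm(np.sum(p_i - nbrs, axis=0)) / (len(nbrs) * np.linalg.norm(p_i))

def select_features(beam_points, c_th=0.1, n_sectors=6):
    """Split one beam revolution into six sectors and pick corner/planar points."""
    n = len(beam_points)
    idx = np.arange(5, n - 5)                        # skip five points at both ends
    curv = np.array([curvature(beam_points, i) for i in idx])
    corners, planars = [], []
    for sector in np.array_split(np.arange(len(idx)), n_sectors):
        order = sector[np.argsort(curv[sector])]     # ascending curvature within the sector
        sharp = [j for j in order[::-1] if curv[j] > c_th][:20]   # corner candidates
        flat = [j for j in order if curv[j] < c_th][:4]           # planar candidates
        corners += [int(idx[j]) for j in sharp]
        planars += [int(idx[j]) for j in flat]
    return corners, planars
```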

4. Loop Closure Detection Based on Scan Context

Figure 10 shows the framework of the Scan Context algorithm. The procedure of the Scan Context algorithm was as follows: first, a single-frame LiDAR point cloud was encoded into a Scan Context. After that, a low-dimensional descriptor vector was encoded from this Scan Context and used to build a KD-Tree and retrieve the closest candidate vectors. Finally, the Scan Context of the current query was compared with the retrieved candidates, and the candidate that most closely matched the current query and met the threshold criterion was chosen as the loop closure.

4.1. The Concept of Scan Context

The reason for the name "Scan Context" is that, much like when reading an article, the context must be related in order to understand the content [32]. First, the LiDAR point cloud of the current frame was divided into N_r × N_s sub-areas according to the radius and the azimuth, as shown in Figure 11. N_r represents the number of rings divided along the radial direction, and N_s represents the number of sub-regions into which each ring is divided according to the azimuthal angle. The farthest LiDAR detection distance used in this paper was 80 m, with N_r = 20 and N_s = 60 (the default values in the algorithm), so each sub-area covered a radial extent of 4 m and an azimuth of 6°. These sub-areas were numbered sequentially, and the set of LiDAR points that fell within the j-th sub-area on the i-th ring was denoted as P_ij.
Given that the height can effectively reflect the shape information of the environmental structure in the vertical direction, and the maximum height can reflect which parts of the environmental structure are visible and unobstructed for the sensors, we assigned a value to each sub-area based on the maximum height of the LiDAR point cloud, as shown in Equations (10) and (11).
$\phi : P_{ij} \rightarrow \mathbb{R}$ (10)
where ϕ denotes the mapping relationship from the LiDAR point cloud to real numbers in each sub-area.
$\phi\left( P_{ij} \right) = \max_{p \in P_{ij}} z(p)$ (11)
where the function z returns the z-axis value of the laser point p; empty sub-areas are directly assigned the value 0.
After assigning values to each sub-area through Equations (10) and (11), the circle shown in Figure 11 can be expanded along the radius direction to obtain a rectangular matrix. The Scan Context matrix with information about the structure of the environment is shown in Figure 12.
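A compact sketch of this encoding (Equations (10) and (11)) is given below; N_r = 20, N_s = 60, and the 80 m maximum range follow the text, while the code itself is only an illustration of the construction, not the paper's implementation.

```python
import numpy as np

def scan_context(points, n_r=20, n_s=60, max_range=80.0):
    """Encode an (N, 3) LiDAR scan into an Nr x Ns Scan Context matrix."""
    sc = np.full((n_r, n_s), -np.inf)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.hypot(x, y)                              # radial distance
    phi = np.arctan2(y, x) + np.pi                    # azimuth in [0, 2*pi)
    ring = np.minimum((rho / max_range * n_r).astype(int), n_r - 1)
    sector = np.minimum((phi / (2 * np.pi) * n_s).astype(int), n_s - 1)
    keep = rho < max_range
    for i, j, h in zip(ring[keep], sector[keep], z[keep]):
        sc[i, j] = max(sc[i, j], h)                   # Equation (11): maximum height per cell
    sc[np.isinf(sc)] = 0.0                            # empty sub-areas are assigned 0
    return sc
```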

4.2. Scan Context Similarity Calculation

The Scan Context matrices obtained by converting the current frame point cloud and the history frame point cloud using the above method were denoted I^q and I^c, respectively, and we used a column-wise distance to calculate the similarity between I^q and I^c. Denoting the j-th column vectors of matrices I^q and I^c as c_j^q and c_j^c, respectively, the similarity of matrices I^q and I^c can be calculated using Equation (12):
$d\left(I^{q}, I^{c}\right) = \frac{1}{N_s} \sum_{j=1}^{N_s} \left( 1 - \frac{c_j^{q} \cdot c_j^{c}}{\left\| c_j^{q} \right\| \left\| c_j^{c} \right\|} \right)$ (12)
The comparison between the history frame point cloud and the current frame point cloud using Equation (12) does not take into account the case in which the viewing angle and position of the LiDAR observation have changed. When the LiDAR returns to the same place with a different viewing angle, only the order of the column vectors changes in the Scan Context matrix of the current frame, as shown in Figure 13.
In order to alleviate the problem of the Scan Context matrix column vectors being shifted by the change in the viewing angle of the LiDAR sensor, we translated the history frame I^c by columns to obtain N_s Scan Contexts and calculated the distance between each shifted Scan Context and the Scan Context of the current frame in turn, as shown in Equation (13); we then selected the frame with the smallest distance as the history frame for the loop closure match. After finding the matching history frame, the column translation between the current frame and the history frame was calculated as shown in Equation (14):
$D\left(I^{q}, I^{c}\right) = \min_{n \in [N_s]} d\left(I^{q}, I_{n}^{c}\right)$ (13)

$n^{*} = \mathop{\arg\min}_{n \in [N_s]} d\left(I^{q}, I_{n}^{c}\right)$ (14)
where I_n^c is the new Scan Context matrix obtained by moving the last n columns of the Scan Context matrix of the history frame I^c to the front, and n* is the optimal value of n.
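The column-wise distance of Equation (12) and the brute-force shift search of Equations (13) and (14) can be sketched as follows (illustrative Python only; the two-step strategy of Section 4.3 replaces this exhaustive search in practice).

```python
import numpy as np

def sc_distance(sc_q, sc_c):
    """Equation (12): mean cosine distance between corresponding columns."""
    dots = np.einsum('ij,ij->j', sc_q, sc_c)
    norms = np.linalg.norm(sc_q, axis=0) * np.linalg.norm(sc_c, axis=0)
    valid = norms > 0                          # skip columns that are empty in either frame
    if not np.any(valid):
        return 1.0
    return float(np.mean(1.0 - dots[valid] / norms[valid]))

def shifted_distance(sc_q, sc_c):
    """Equations (13)-(14): minimum distance over all column shifts of the history frame."""
    n_s = sc_c.shape[1]
    dists = [sc_distance(sc_q, np.roll(sc_c, n, axis=1)) for n in range(n_s)]
    return min(dists), int(np.argmin(dists))   # (D(I^q, I^c), n*)
```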

4.3. Two-Step Search Strategy

Searching for n directly according to the above method is computationally too expensive, so we used the following two-step search strategy. First, the Ring Key descriptor was defined: the function ψ encodes each ring r of one frame of the point cloud, from the innermost ring outward, i.e., it encodes the row vectors of the Scan Context matrix:
$\psi\left(r_i\right) = \frac{\left\| r_i \right\|_1}{N_s}, \quad i = 1, 2, \ldots, N_r$ (15)
where r_i denotes the i-th row vector of the Scan Context matrix. Equation (15) encodes a row vector as the average of all its elements. An N_r-dimensional column vector k can be obtained after encoding the N_r rings according to Equation (15):
$k = \left( \psi\left(r_1\right), \ldots, \psi\left(r_{N_r}\right) \right)^{T}, \quad \psi : r_i \rightarrow \mathbb{R}$ (16)
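As a brief illustration, the Ring Key of Equations (15) and (16) reduces each row of the Scan Context matrix to the mean of its elements; the sketch below also shows how such keys could be indexed and queried with a KD-Tree, as described in the next paragraph (SciPy's cKDTree is an illustrative choice here, not the paper's implementation).

```python
import numpy as np
from scipy.spatial import cKDTree

def ring_key(sc):
    """Equations (15)-(16): rotation-invariant Nr-dimensional descriptor (row-wise mean)."""
    return sc.mean(axis=1)

def nearest_candidates(history_scs, query_sc, m=10):
    """Index the Ring Keys of all history frames and return the m nearest candidates."""
    tree = cKDTree(np.array([ring_key(sc) for sc in history_scs]))
    _, cand_idx = tree.query(ring_key(query_sc), k=min(m, len(history_scs)))
    return np.atleast_1d(cand_idx)
```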
The KD-Tree was constructed from the vectors k. The Ring Key descriptor of the current frame was then used to query the KD-Tree for the m most similar LiDAR scans. Next, we found n to solve the problem, described above, of the Scan Context column vectors being shifted by the change in the LiDAR viewing angle. The similarity between the Scan Context matrix of the current frame and the Scan Context matrices of these m scans was compared one by one using Equation (12). In order to improve the computational efficiency of finding n, the Sector Key descriptor was defined by Equations (17) and (18), and Sector Key descriptors were constructed for the Scan Context matrices of both the current frame and the candidate frames:
$\xi\left(c_i\right) = \frac{\left\| c_i \right\|_1}{N_r}, \quad i = 1, 2, \ldots, N_s$ (17)

$l = \left( \xi\left(c_1\right), \ldots, \xi\left(c_{N_s}\right) \right), \quad \xi : c_i \rightarrow \mathbb{R}$ (18)
where c_i denotes the i-th column vector of the Scan Context matrix. Equation (17) encodes a column vector as the average of all its elements. An N_s-dimensional row vector l, i.e., the Sector Key descriptor, can be obtained after encoding the N_s sectors according to Equation (17).
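Analogously, the Sector Key of Equations (17) and (18) is a column-wise mean; the short sketch below also notes the property that makes it suitable for estimating the column shift.

```python
import numpy as np

def sector_key(sc):
    """Equations (17)-(18): Ns-dimensional row descriptor (column-wise mean)."""
    return sc.mean(axis=0)

# Shifting the Scan Context by n columns shifts its Sector Key by the same n:
#   np.roll(sector_key(sc), n) == sector_key(np.roll(sc, n, axis=1))
# which is what makes the shift search of Equations (19) and (20) possible.
```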
We solve for n* by traversing the number of column shifts of the candidate frame's Sector Key descriptor needed to obtain the row vector l_n^c most similar to the Sector Key descriptor l^q of the current frame. The difference between the two row vectors is measured with the 2-norm, as shown in Equation (19), and the computation of n* is shown in Equation (20):
$d\left(l^{q}, l_{n}^{c}\right) = \left\| l_{n}^{c} - l^{q} \right\|_{2}$ (19)

$n^{*} = \mathop{\arg\min}_{n \in [N_s]} d\left(l^{q}, l_{n}^{c}\right)$ (20)
It should be noted that Equations (13) and (14) are general expressions for solving the optimal column shift n*, whereas Equations (19) and (20) are the specific forms for solving the optimal column shift n* by means of the Sector Key descriptor.
After the optimal column shift of each candidate frame was obtained from the Sector Key descriptor, the column vectors of each candidate frame were adjusted accordingly. Using Equation (12), the similarity between the Scan Context matrix of the current frame and the Scan Context matrices of the above m scans was compared one by one, and the minimum distance between an adjusted candidate frame and the current frame was denoted d_i, i = 1, ..., m. If d_i was less than a set threshold, the column-shifted candidate frame was taken as the optimal candidate frame and its number of shifted columns n* was recorded; the angle by which the LiDAR of the current frame had rotated relative to the LiDAR at the candidate frame was then
$\theta = \frac{2 \pi n^{*}}{N_s}$ (21)
Using the Scan Context method, we could obtain the indices of the current frame and the loop closure frame, as well as a good initial value for the relative pose between the two frames; the ICP algorithm was then used to obtain the precise pose constraint between the two frames, so we omit the details here.
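Putting the two steps together, the following sketch scores one candidate frame retrieved by the Ring Key search: it estimates the optimal shift from the Sector Keys (Equations (19) and (20)), aligns the candidate, evaluates the Equation (12) distance, and recovers the relative yaw angle of Equation (21); the acceptance threshold of 0.2 is an assumed example value, not one reported in the paper.

```python
import numpy as np

def sector_key(sc):
    return sc.mean(axis=0)                            # column-wise mean, Equations (17)-(18)

def sc_distance(sc_q, sc_c):
    dots = np.einsum('ij,ij->j', sc_q, sc_c)          # Equation (12), as sketched above
    norms = np.linalg.norm(sc_q, axis=0) * np.linalg.norm(sc_c, axis=0)
    valid = norms > 0
    return float(np.mean(1.0 - dots[valid] / norms[valid])) if np.any(valid) else 1.0

def match_candidate(sc_q, sc_c, dist_thresh=0.2):
    """Return (accepted, distance, shift n*, yaw) for one candidate Scan Context."""
    n_s = sc_q.shape[1]
    l_q, l_c = sector_key(sc_q), sector_key(sc_c)
    # Equations (19)-(20): pick the shift whose Sector Key is closest in the 2-norm
    shifts = [np.linalg.norm(np.roll(l_c, n) - l_q) for n in range(n_s)]
    n_star = int(np.argmin(shifts))
    d = sc_distance(sc_q, np.roll(sc_c, n_star, axis=1))   # Equation (12) after alignment
    yaw = 2.0 * np.pi * n_star / n_s                        # Equation (21): relative rotation
    return d < dist_thresh, d, n_star, yaw
```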

5. Experimental Results

In this section, we present experimental validation of the map construction accuracy and the real-time performance of back-end mapping. We tested the accuracy of our algorithm on the KITTI dataset, for which ground truth is available, and compared it with other state-of-the-art algorithms. To ensure adequate validation, we also compared and evaluated our algorithm on recently collected real-world datasets such as Walking and Park. The experiments were run on a computer equipped with an Intel Core i7-10710U processor (Intel Corporation, Santa Clara, CA, USA) under Ubuntu 18.04 with the Melodic version of the Robot Operating System (ROS), and the algorithms involved in this experiment were implemented in C++.

5.1. Dataset

We fully validated the algorithm’s accuracy through extensive testing on the KITTI dataset [33]. The KITTI dataset was converted to a packet in the form of a . bag , which contained a 10   Hz LiDAR point cloud, 100   Hz IMU data, and 100   Hz GPS absolute measurements. Additionally, we fully validated the algorithm’s real-time performance based on the most recent actual collected datasets, such as Walking and Park, provided by the LIO-SAM algorithm.

5.2. Evaluation

To accurately measure the accuracy of the algorithms in this study, an experimental comparison of the absolute trajectory error (ATE), which focuses on the global consistency of the algorithm outcomes, was performed using the EVO evaluation tool. ATE compares the estimated positions output by the algorithm with the ground truth of the dataset frame by frame according to the timestamps and then calculates the position difference between the estimated value and the ground truth at each pair of corresponding timestamps. It is thus an important measure of the global consistency of the trajectory and is well suited to evaluating the performance of a SLAM system and the accuracy of its trajectory.
Assuming that the trajectory position estimated by the SLAM algorithm at moment t is P t and the position of the real trajectory at that moment is G t , the ATE at that moment is defined as follows:
$F_t = G_t^{-1} P_t$ (22)
In order to better reflect the overall error, with the total number of poses T and the time interval Δ known, the root mean squared error (RMSE) was used to aggregate the errors of all frames into a single value:
$\mathrm{RMSE}(F) = \left( \frac{1}{n} \sum_{t=1}^{n} \left\| \operatorname{trans}\left( F_t \right) \right\|^{2} \right)^{\frac{1}{2}}$ (23)
where n = T/Δ, and trans(·) denotes taking the translational part of the variable inside the parentheses.
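For reference, a minimal sketch of this ATE computation is shown below (illustrative Python; in practice the EVO tool performs the timestamp association and evaluation), assuming the trajectories are given as timestamp-associated 4 × 4 homogeneous pose matrices.

```python
import numpy as np

def ate_rmse(gt_poses, est_poses):
    """RMSE of the translational ATE over timestamp-associated pose pairs."""
    errors = []
    for G_t, P_t in zip(gt_poses, est_poses):
        F_t = np.linalg.inv(G_t) @ P_t               # F_t = G_t^{-1} P_t, Equation (22)
        errors.append(np.linalg.norm(F_t[:3, 3]))    # trans(F_t): translational part
    return float(np.sqrt(np.mean(np.square(errors))))
```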
In conclusion, we used the KITTI dataset to evaluate the absolute trajectory errors of the algorithms in this research.

5.3. Results

In order to evaluate the effectiveness of our method, we performed various tests of the proposed algorithm on the KITTI dataset, with a special focus on sequences 0016 (No. 04), 0027 (No. 07), 0034 (No. 02), and 0042 (No. 01). On these sequences, we carefully compared our method with several state-of-the-art and typical methods. The trajectory accuracy comparison results are shown in Figure 14 and Figure 15.
As seen in Figure 14 and Figure 15, compared with the A-LOAM, LeGO-LOAM, and LIO-SAM algorithms, the trajectories estimated by our method matched the ground truth of the dataset more closely overall, and the fitting results were better. The LeGO-LOAM results are not included in Figure 14a because the LeGO-LOAM algorithm drifted during the KITTI-0042 sequence mapping process. The trajectories estimated by the four algorithms all deviated somewhat from the ground truth, but as can be seen from the locally enlarged views of the trajectory comparisons for sequences 0034 and 0027, our method was closer to the ground truth and achieved better pose estimation results both on straight segments and at corners. Because the features extracted by our method at the front end were more accurate and reliable, the point cloud incurred less error during map matching, thus reducing the mapping error. In addition, the back end of our method had a robust loop closure detection capability that was not affected by the cumulative error of the LiDAR odometry and could further eliminate the cumulative error in the mapping process. As a result, our mapping results fit the ground truth more closely.
As can be seen in Figure 16, our algorithm obtained the smallest ATE.Max results on the KITTI-0042, KITTI-0034, and KITTI-0016 datasets, which were 36.54 m, 19.54 m, and 0.864 m, respectively. Comparing the color-coded results with the ground truth shows that the algorithm in this paper had a better overall fit with the ground truth and a lower error level than the A-LOAM, LeGO-LOAM, and LIO-SAM algorithms. This directly illustrates that the algorithm in this paper was much closer to the ground truth and that its maximum error was smaller. The ATE error data are shown in Table 1, Table 2, Table 3 and Table 4.
It can be seen from Table 1, Table 2, Table 3 and Table 4 that the A-LOAM and LIO-SAM algorithms achieved better results for a small number of indicators. However, compared with A-LOAM, LeGO-LOAM, and LIO-SAM, our method achieved better results for most of the ATE indicators. Our algorithm had obvious advantages in metrics such as ATE.Mean, ATE.RMSE, and ATE.SSE, which further validated that the trajectories predicted by our method were closer to the ground truth with less error. In some regions the local point cloud features were too sparse, resulting in a larger ATE.Min value for our method; however, the smaller ATE.Max and ATE.Mean values show that our method achieved more consistent mapping results and higher accuracy.
Next, in order to effectively verify the loop closure detection effect of our method, we conducted a loop closure detection comparison experiment with the RNN algorithm, and the experimental results are shown in Figure 17. For ease of observation, we rendered the two mapping results using different color schemes.
As can be seen in Figure 17a,b, the traditional RNN-based loop closure detection produced several loop closure failures, whereas the loop closure detection based on our method performed better and achieved the desired loop closures. The main reason for this difference is that the traditional RNN-based loop closure detection method is susceptible to the cumulative error of the LiDAR odometry, whereas the Scan Context-based loop closure detection method has rotational invariance, which avoids the influence of the cumulative error of the LiDAR odometry on loop closure detection. In addition, comparing Figure 17a and Figure 17b shows that the map constructed by our method was clearer, which indirectly verifies its map construction accuracy. Compared with the other algorithms, the method in this paper extracted reliable point cloud features at the front end and therefore constructed maps with high accuracy.
In order to fully validate the real-time performance of our method, in addition to the KITTI dataset, we also used the Walking and Park datasets collected in real scenarios provided by the LIO-SAM algorithm. The mapping results of our method based on the Walking and Park datasets are shown in Figure 18, with localized mapping zoom results appended to the mapping results.
As can be seen in Figure 18, the mapping results of our method did not show large or obvious drift, and the enlarged local mapping results clearly show the environmental structure and details. In particular, Figure 18a shows clear localized building structures when compared with the LOAM, LIOM, and LIO-SAM mapping results reported in the literature [23]. This also directly indicates that our method builds maps with high accuracy. Table 5 shows the average running time of the four algorithms to register a single LiDAR frame on the KITTI, Walking, and Park datasets.
As shown in Table 5, our method's runtime decreased compared with the A-LOAM, LeGO-LOAM, and LIO-SAM algorithms, making it more suitable for deployment on low-power platforms. On the above datasets, the minimum time for our method to process a single-frame point cloud was 19.8 ms, and the maximum was 92.1 ms. The mapping time was reduced by an average of 69.57% compared with the A-LOAM algorithm and by 11.38% compared with the state-of-the-art LIO-SAM algorithm.

6. Conclusions

Based on the research of the LIO-SAM algorithm, we proposed a map construction method suitable for an outdoor environment. First, we realized ground point extraction based on the range image transformation method, which can reduce the point cloud dimension and improve the reliability of ground point extraction. Second, we traversed and clustered the segmentation of non-ground point clouds by a BFS algorithm, which effectively avoided the increased computation of a global search and could significantly improve the clustering speed when the distribution of point clouds was not uniform. Finally, we introduced the Scan Context loop closure detection method in the back end of the algorithm, which improved the accuracy of the algorithm’s loop closure detection. We fully validated the overall mapping accuracy and real-time performance of the method in this paper with a variety of datasets such as KITTI, Walking, and Park.
In future work, we will draw on deep learning methods to adaptively adjust the angle threshold in the point cloud clustering segmentation process and will construct a new type of loop closure detection descriptor to further improve the robustness of the algorithm for mapping in outdoor environments.

Author Contributions

Project administration, Y.L.; writing—original draft preparation, C.W.; formal analysis, H.W.; supervision, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Key Special Projects of Heilongjiang Province’s Key R&D Program (NO. 2023ZX01A01) and Heilongjiang Province’s Key R&D Program, “Leading the Charge with Open Competition” (NO. 2023ZXJ01A02).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, C.; Pan, W.B.; Yuan, X.W.; Huang, W.Y.; Yuan, C.; Wang, Q.D.; Wang, F.Y. High-Precision Map Construction in Degraded Long Tunnel Environments of Urban Subways. Remote Sens. 2024, 16, 809. [Google Scholar] [CrossRef]
  2. Adkins, A.; Chen, T.J.; Biswas, J. ObVi-SLAM: Long-Term Object-Visual SLAM. IEEE Robot. Autom. Lett. 2024, 9, 2909–2916. [Google Scholar] [CrossRef]
  3. Shi, C.H.; Chen, X.Y.L.; Xiao, J.H.; Dai, B.; Lu, H.M. Fast and Accurate Deep Loop Closing and Relocalization for Reliable LiDAR SLAM. IEEE Trans. Robot. 2024, 40, 2620–2640. [Google Scholar] [CrossRef]
  4. Wang, J.K.; Xu, M.; Chen, Z.H. Range Map Interpolation-Based 3-D LiDAR Truncated Signed Distance Fields Mapping in Outdoor Environments. IEEE Trans. Instrum. Meas. 2024, 73, 1–8. [Google Scholar] [CrossRef]
  5. Wu, W.T.; Li, J.P.; Chen, C.; Yang, B.S.; Zou, X.H.; Yang, Y.D.; Xu, Y.H.; Zhong, R.F.; Chen, R.B. AFLI-Calib: Robust LiDAR-IMU extrinsic self-calibration based on adaptive frame length LiDAR odometry. Isprs J. Photogramm. Remote Sens. 2023, 199, 157–181. [Google Scholar] [CrossRef]
  6. Dong, Y.C.; Li, L.X.; Liu, Y.H.; Xu, S.X.; Zuo, Z.L. PVE-LIOM: Pseudo-Visual Enhanced LiDAR-Inertial Odometry and Mapping. IEEE Trans. Instrum. Meas. 2023, 72, 1–13. [Google Scholar] [CrossRef]
  7. Arun, K.S.; Huang, T.S.; Blostein, S.D. Least-Squares Fitting of Two 3-D Point Sets. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 9, 699–700. [Google Scholar] [CrossRef] [PubMed]
  8. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27 October–1 November 2003; pp. 2743–2748. [Google Scholar]
  9. Kaess, M.; Ranganathan, A.; Dellaert, F. iSAM: Incremental Smoothing and Mapping. IEEE Trans. Robot. 2008, 24, 1365–1378. [Google Scholar] [CrossRef]
  10. Strasdat, H.; Davison, A.J.; Montiel, J.M.M.; Konolige, K. Double Window Optimisation for Constant Time Visual SLAM. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2352–2359. [Google Scholar]
  11. Behley, J.; Steinhage, V.; Cremers, A.B. Efficient Radius Neighbor Search in Three-dimensional Point Clouds. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3625–3630. [Google Scholar]
  12. Hoppe, H.; DeRose, T.; Duchamp, T.; McDonald, J.; Stuetzle, W. Surface reconstruction from unorganized points. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, Chicago, IL, USA, 26–31 July 1992; pp. 71–78. [Google Scholar]
  13. Golovinskiy, A.; Kim, V.G.; Funkhouser, T. Shape-based Recognition of 3D Point Clouds in Urban Environments. In Proceedings of the 12th IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2154–2161. [Google Scholar]
  14. Jian, B.; Vemuri, B.C. Robust Point Set Registration Using Gaussian Mixture Models. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1633–1645. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. In Proceedings of the Robotics: Science and Systems, Delft, The Netherlands, 15–19 July 2014; pp. 1–9. [Google Scholar]
  16. Zhang, J.; Singh, S. Visual-lidar Odometry and Mapping: Low-drift, Robust, and Fast. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2174–2181. [Google Scholar]
  17. Qi, C.R.; Su, H.; Mo, K.C.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85. [Google Scholar]
  18. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  19. Hu, Q.Y.; Yang, B.; Xie, L.H.; Rosa, S.; Guo, Y.L.; Wang, Z.H.; Trigoni, N.; Markham, A. Learning Semantic Segmentation of Large-Scale Point Clouds With Random Sampling. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 8338–8354. [Google Scholar] [CrossRef] [PubMed]
  20. Latif, Y.; Cadena, C.; Neira, J. Robust loop closing over time for pose graph SLAM. Int. J. Robot. Res. 2013, 32, 1611–1626. [Google Scholar] [CrossRef]
  21. Grisetti, G.; Kümmerle, R.; Stachniss, C.; Burgard, W. A Tutorial on Graph-Based SLAM. IEEE Intell. Transp. Syst. Mag. 2010, 2, 31–43. [Google Scholar] [CrossRef]
  22. Shan, T.X.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
  23. Yan, L.; Dai, J.C.; Zhao, Y.H.; Chen, C.J. Real-Time 3D Mapping in Complex Environments Using a Spinning Actuated LiDAR System. Remote Sens. 2023, 15, 963. [Google Scholar] [CrossRef]
  24. Ye, H.; Chen, Y.; Liu, M. Tightly coupled 3d lidar inertial odometry and mapping. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150. [Google Scholar]
  25. Zhao, S.B.; Fang, Z.; Li, H.L.; Scherer, S. A Robust Laser-Inertial Odometry and Mapping Method for Large-Scale Highway Environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 1285–1292. [Google Scholar]
  26. Shan, T.X.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142. [Google Scholar]
  27. Asvadi, A.; Premebida, C.; Peixoto, P.; Nunes, U. 3D Lidar-based static and moving obstacle detection in driving environments: An approach based on voxels and multi-region ground planes. Robot. Auton. Syst. 2016, 83, 299–311. [Google Scholar] [CrossRef]
  28. Li, L.; Yang, F.; Zhu, H.H.; Li, D.L.; Li, Y.; Tang, L. An Improved RANSAC for 3D Point Cloud Plane Segmentation Based on Normal Distribution Transformation Cells. Remote Sens. 2017, 9, 433. [Google Scholar] [CrossRef]
  29. Zermas, D.; Izzat, I.; Papanikolopoulos, N. Fast segmentation of 3d point clouds: A paradigm on lidar data for autonomous vehicle applications. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, Singapore, 29 May–3 June 2017; pp. 5067–5073. [Google Scholar]
  30. Kim, G.; Kim, A. Scan Context: Egocentric Spatial Descriptor for Place Recognition within 3D Point Cloud Map. In Proceedings of the 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4802–4809. [Google Scholar]
  31. Himmelsbach, M.; von Hundelshausen, F.; Wuensche, H.J. Fast Segmentation of 3D Point Clouds for Ground Vehicles. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), San Diego, CA, USA, 21–24 June 2010; pp. 560–565. [Google Scholar]
  32. Kim, G.; Choi, S.; Kim, A. Scan Context++: Structural Place Recognition Robust to Rotation and Lateral Variations in Urban Environments. IEEE Trans. Robot. 2022, 38, 1856–1874. [Google Scholar] [CrossRef]
  33. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
Figure 1. Traditional point cloud feature extraction and loop closure detection. (a) Feature extraction of the point cloud directly based on curvature, with coarse and redundant results. (b) Odometry drift and loop closure failure when using the RNN algorithm for loop closure detection.
Figure 2. The overall framework of the algorithm proposed in this paper. The light blue and dark blue modules are the original basic modules of the LIO-SAM algorithm, including IMU pre-integration, odometry publication, map position optimization, and so on. The green module is the improvement module proposed in this paper, including point cloud clustering segmentation, feature extraction, and back-end loop closure detection based on the Scan Context algorithm.
Figure 3. Proposed general framework for point cloud clustering segmentation. In the output image, the green color is the corner point, and the purple color is the planar point.
Figure 4. The current frame point cloud transformed to a range image.
Figure 5. The ground point extraction process.
Figure 6. Schematic diagram of ground point extraction. (a) The original LiDAR point cloud. (b) The extracted ground points.
Figure 7. Center and neighborhood points in a range image. (a) The center point and neighboring points spatial relationship. (b) A schematic diagram of the calculated angles in the clustering process of the center and neighborhood points.
Figure 8. Non-ground point cluster segmentation. (a) The LiDAR point cloud after extracting the ground points, which consists of outlier points (blue) and successfully clustered points (green) after downsampling. (b) The successfully clustered point cloud.
Figure 9. Feature point extraction results. (a) The original LiDAR point cloud, where both yellow and orange-red are LiDAR raw point clouds, and these two colors are determined by the intensity of the point cloud. (b) The extracted edge features (corner points) and planar features (planar points), where green color is the planar points extracted from the ground points and purple color is the corner points extracted from the clustered segmented non-ground points.
Figure 10. The overall framework of the Scan Context algorithm.
Figure 11. Splitting the point cloud into sub-areas by radius and azimuth [30]. Reprinted/adapted with permission from Ref. [30]. 2018, Kim, G.
Figure 12. A single-frame point cloud converted to a Scan Context matrix [30]. Reprinted/adapted with permission from Ref. [30]. 2018, Kim, G. The blue areas indicate sub-areas that contain no point cloud data or that are not observable by the LiDAR due to occlusion.
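Figures 11 and 12 summarize how a single scan is encoded: the x–y plane is divided into N_ring radial bins and N_sector azimuthal bins, and each (ring, sector) cell stores the maximum point height falling into it, giving an N_ring × N_sector descriptor matrix. The sketch below uses 20 rings, 60 sectors, and an 80 m maximum radius, which are the values commonly used with Scan Context [30]; empty cells are left at zero.

```python
import numpy as np

def make_scan_context(points, n_ring=20, n_sector=60, max_radius=80.0):
    """Encode one scan as an n_ring x n_sector matrix whose cells hold the
    maximum z value of the points falling into each (ring, sector) bin.
    points: (N, 3) array of x, y, z in the sensor frame."""
    sc = np.zeros((n_ring, n_sector))
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    radius = np.hypot(x, y)
    azimuth = np.arctan2(y, x) + np.pi                         # [0, 2*pi)

    keep = (radius < max_radius) & (radius > 1e-3)
    ring = np.minimum((radius[keep] / max_radius * n_ring).astype(int), n_ring - 1)
    sector = np.minimum((azimuth[keep] / (2 * np.pi) * n_sector).astype(int), n_sector - 1)
    # The original implementation typically adds the sensor mounting height to z
    # so that heights are non-negative; omitted here for brevity.
    for r, s, h in zip(ring, sector, z[keep]):
        sc[r, s] = max(sc[r, s], h)
    return sc
```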
Figure 13. Changes in the Scan Context column vector sequence caused by changes in the LiDAR viewing angle [30]. Reprinted/adapted with permission from Ref. [30]. 2018, Kim, G. (a) A change in viewpoint when the LiDAR returns to the same place shifts the Scan Context column vectors. (b) Once this shift is compensated, the Scan Context matrix of the current frame and those of the historical frames contain similar shapes.
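The column shift in Figure 13 is what makes the descriptor robust to yaw changes: to compare two Scan Context matrices, one of them is circularly shifted column by column, and the smallest mean column-wise cosine distance over all shifts is taken as the final distance (the best shift also gives a coarse estimate of the relative yaw). The sketch below illustrates this distance; in the full method a ring-key search first narrows down the candidate frames before this shift comparison is performed.

```python
import numpy as np

def scan_context_distance(sc_query, sc_candidate):
    """Distance between two Scan Context matrices: the candidate is circularly
    shifted over all column offsets; for each shift, the mean cosine distance
    between corresponding columns is computed, and the minimum over all shifts
    is returned together with the best shift (a coarse relative-yaw estimate)."""
    n_sector = sc_query.shape[1]
    best_dist, best_shift = np.inf, 0
    for shift in range(n_sector):
        shifted = np.roll(sc_candidate, shift, axis=1)
        dists = []
        for j in range(n_sector):
            a, b = sc_query[:, j], shifted[:, j]
            na, nb = np.linalg.norm(a), np.linalg.norm(b)
            if na < 1e-9 or nb < 1e-9:
                continue                       # skip empty columns
            dists.append(1.0 - np.dot(a, b) / (na * nb))
        d = np.mean(dists) if dists else 1.0
        if d < best_dist:
            best_dist, best_shift = d, shift
    return best_dist, best_shift
```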
Figure 14. Trajectory and ground truth comparison results of the four algorithms on the corresponding sequences of KITTI. (a) The trajectory and ground truth comparison results of four algorithms based on KITTI sequence 0042. (b) The trajectory and ground truth comparison results of four algorithms based on KITTI sequence 0034. (c) The trajectory and ground truth comparison results of four algorithms based on KITTI sequence 0016. (d) The trajectory and ground truth comparison results of four algorithms based on KITTI sequence 0027.
Figure 15. Localized enlargement of sequence 0034 and 0027 comparison results. (a) A localized enlargement of the sequence 0034 trajectory comparison. (b) A localized enlargement of the sequence 0027 trajectory comparison.
Figure 16. ATE errors of the four algorithms under the corresponding sequences of the KITTI dataset. (a) The ATE errors of the four algorithms based on KITTI sequence 0042. (b) The ATE errors of the four algorithms based on KITTI sequence 0034. (c) The ATE errors of the four algorithms based on KITTI sequence 0016. (d) The ATE errors of the four algorithms based on KITTI sequence 0027. (a–d) show, from left to right, the ATE errors of A-LOAM, LeGO-LOAM, LIO-SAM, and our method on the corresponding sequences of the KITTI dataset.
Figure 17. Loop closure detection comparison experiment. (a) The loop closure detection results using the RNN algorithm, where the point cloud color is rendered based on the point cloud intensity. (b) The loop closure detection results of our algorithm, where the point cloud color is rendered based on the point cloud coordinate axes.
Figure 18. Results of our algorithm for mapping based on Walking and Park datasets. (a) The results of mapping based on the Walking dataset. (b) The results of mapping based on the Park dataset. In Figure 18, the point cloud color is rendered based on the point cloud intensity.
Table 1. ATE results of the four algorithms on the KITTI dataset 0042 sequence (m).
| Method | ATE.Max | ATE.Mean | ATE.Median | ATE.Min | ATE.RMSE | ATE.SSE | ATE.STD |
|---|---|---|---|---|---|---|---|
| A-LOAM | 47.832483 | 18.72789 | 19.434571 | 0.224138 | 20.374159 | 265753.488 | 9.77424 |
| LeGO-LOAM | fail | fail | fail | fail | fail | fail | fail |
| LIO-SAM | 38.701464 | 18.453796 | 19.317356 | 3.063197 | 20.258078 | 226124.735 | 8.35746 |
| Ours | 36.542054 | 17.765455 | 18.670118 | 3.468393 | 19.443158 | 208298.043 | 7.90095 |
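For reference, the seven statistics reported in Tables 1–4 (Max, Mean, Median, Min, RMSE, SSE, STD) can all be derived from the per-pose translational errors between the aligned estimated trajectory and the ground truth; the sketch below shows how they relate, assuming the per-pose errors have already been computed. With this (population) definition of the standard deviation, RMSE² = Mean² + STD² holds by construction.

```python
import numpy as np

def ate_statistics(errors):
    """errors: 1-D array of per-pose translational errors (m) between the
    aligned estimated trajectory and the ground truth."""
    errors = np.asarray(errors, dtype=float)
    return {
        "max": errors.max(),
        "mean": errors.mean(),
        "median": np.median(errors),
        "min": errors.min(),
        "rmse": np.sqrt(np.mean(errors ** 2)),
        "sse": np.sum(errors ** 2),
        "std": errors.std(),
    }
```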
Table 2. ATE results of the four algorithms on the KITTI dataset 0034 sequence (m).
| Method | ATE.Max | ATE.Mean | ATE.Median | ATE.Min | ATE.RMSE | ATE.SSE | ATE.STD |
|---|---|---|---|---|---|---|---|
| A-LOAM | 56.972621 | 27.81383 | 26.041961 | 1.247682 | 31.109423 | 166628.01 | 13.935097 |
| LeGO-LOAM | 183.97580 | 59.16251 | 52.638437 | 21.496686 | 65.232844 | 953196.65 | 27.479466 |
| LIO-SAM | 19.93554 | 8.443412 | 7.577774 | 0.583131 | 9.308295 | 201967.999 | 3.91831 |
| Ours | 19.540422 | 8.283499 | 7.911282 | 0.7737 | 9.013449 | 189375.728 | 3.553295 |
Table 3. ATE results of the four algorithms on the KITTI dataset 0016 sequence (m).
| Method | ATE.Max | ATE.Mean | ATE.Median | ATE.Min | ATE.RMSE | ATE.SSE | ATE.STD |
|---|---|---|---|---|---|---|---|
| A-LOAM | 1.137166 | 0.305874 | 0.274523 | 0.100661 | 0.359983 | 23.97372 | 0.189812 |
| LeGO-LOAM | 2.074986 | 0.553511 | 0.475358 | 0.161224 | 0.639678 | 27.4155552 | 0.320645 |
| LIO-SAM | 0.867199 | 0.256773 | 0.228004 | 0.112968 | 0.289565 | 11.403665 | 0.133858 |
| Ours | 0.864057 | 0.256678 | 0.226332 | 0.112585 | 0.288599 | 11.327375 | 0.131932 |
Table 4. ATE results of the four algorithms on the KITTI dataset 0027 sequence (m).
| Method | ATE.Max | ATE.Mean | ATE.Median | ATE.Min | ATE.RMSE | ATE.SSE | ATE.STD |
|---|---|---|---|---|---|---|---|
| A-LOAM | 1.04283 | 0.540099 | 0.551983 | 0.135299 | 0.768797 | 274.677392 | 0.17839 |
| LeGO-LOAM | 5.249547 | 3.218281 | 3.063042 | 0.641702 | 3.408125 | 3182.597489 | 1.1216 |
| LIO-SAM | 1.445858 | 0.677021 | 0.638876 | 0.13041 | 0.763134 | 320.887515 | 0.352158 |
| Ours | 1.158319 | 0.533615 | 0.507074 | 0.081021 | 0.584182 | 188.03915 | 0.237748 |
Table 5. Runtime of mapping for processing one scan (ms).
| Dataset | A-LOAM | LeGO-LOAM | LIO-SAM | Ours |
|---|---|---|---|---|
| KITTI-0042 | 226.2 | fail | 81.3 | 73.7 |
| KITTI-0034 | 195.5 | 45.6 | 66.9 | 58.7 |
| KITTI-0016 | 180.9 | 50.9 | 48 | 42.1 |
| KITTI-0027 | 104 | 37.1 | 22.5 | 19.8 |
| Walking | 253.6 | fail | 58.4 | 50.2 |
| Park | 266.4 | fail | 100.5 | 92.1 |