Article

Laser Radar Data Registration Algorithm Based on DBSCAN Clustering

1 School of Information Science and Engineering, Southeast University, Nanjing 210096, China
2 School of Automation, Nanjing Institute of Technology, Nanjing 211167, China
3 The Graduate School, Nanjing Institute of Technology, Nanjing 211167, China
4 Industrial Center, Nanjing Institute of Technology, Nanjing 211167, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(6), 1373; https://doi.org/10.3390/electronics12061373
Submission received: 16 February 2023 / Revised: 8 March 2023 / Accepted: 10 March 2023 / Published: 13 March 2023
(This article belongs to the Section Microwave and Wireless Communications)

Abstract

At present, the core of lidar data registration algorithms depends on searching for correspondences, which has become the main factor limiting the performance of this kind of algorithm. For point-based algorithms, the data coincidence rate is too low, and for line-based algorithms, the method of searching for correspondences is too complex and unstable. In this paper, a laser radar data registration algorithm based on DBSCAN (Density-Based Spatial Clustering of Applications with Noise) clustering is proposed, which avoids searching for and establishing correspondences. Firstly, a ring band filter is designed to process the noisy outliers in the point cloud. Then, an adaptive threshold is used to extract the line segment features in the laser radar point cloud. For the point cloud to be registered, the DBSCAN density clustering algorithm is used to obtain the key clusters of the rotation angle and the translation matrix. In order to evaluate the similarity of the two frames of the point cloud in the key clusters after data registration, a kernel density estimation method is proposed to describe the registered point cloud, and K-L divergence is used to find the optimal value in the key clusters. The experimental results show that the proposed algorithm avoids the direct search for correspondences between points or lines in complex scenes with many outliers in laser point clouds, which effectively improves the robustness of the algorithm and suppresses the influence of outliers. The relative error between the registration result and the actual value is within 10%, and the accuracy is better than that of the ICP algorithm.

1. Introduction

In the process of scanning the environment, lidar generates an environmental map [1] by measuring the distance and angle of an obstacle relative to the sensor, and can obtain the position of the sensor itself in the environment. Reference [2] designed a mathematically tractable analytical model for a vehicle with arbitrary shape. The effects of extended-vehicle shape were correctly captured by lidar, and the main dependencies between receiving energy, bandwidth and array size were revealed. In the framework of a probabilistic multi-hypothesis tracker (PMHT), reference [3] proposed a new algorithm for lasers to track multiple point targets and extended targets simultaneously in the presence of clutter and missed detections, which can adapt to spatio-temporally varying target sizes or extents of extended targets and temporally varying target cardinality. Frequency-modulated continuous wave (FMCW) lidar can be studied for remote vital-sign monitoring. Reference [4] proposed an impulse denoising operation and spectral estimation decision method, which provides high-quality, repeatable respiration and heart rates. Compared with cameras, sonars and ultrasonic sensors, lidar has the advantages of high measurement accuracy, good directivity and immunity to environmental illumination. It plays an important role in target tracking, three-dimensional reconstruction [5], target detection and other fields.
Simultaneous localization and mapping based on lidar [6] in an unknown environment [7] is essentially the registration of lidar data. Point cloud data registration [8], an extremely important part of two-dimensional point cloud data processing [9], converts data obtained from different perspectives into the same coordinate system through a series of calculations. By rotating and translating the scanning lidar data at the current moment so that it overlaps to a large degree with the reference scanning data, changes in the robot pose [10] can be obtained. The registration quality of the point cloud data directly affects the subsequent multi-sensor fusion positioning [11] and loop closure detection [12]. In practical applications, due to wheel slip of the mobile robot, the radar point cloud will be distorted [13]; that is, the discrete obstacle points obtained by scanning deviate from their real coordinates by a certain amount. Therefore, the effect of the lidar data registration algorithm is limited.
At present, the classical laser radar data registration method is based on correspondence. The process is to search and construct correspondence, calculate the transformation matrix according to the correspondence, use the transformation matrix to perform rigid body transformation on the point set, construct the square error function, calculate the current error and iterate to find the optimal solution. In point-based algorithms, the most widely used point cloud registration algorithm is the traditional iterative closest point (ICP) algorithm proposed by Besl [14]. The algorithm searches for the corresponding points of each point according to the minimum Euclidean distance standard and forms point pairs. The error function is constructed according to the square sum of the distances between the point pairs to minimize it, thereby correcting the pose transformation value of the robot. The ICP algorithm [15] provides a classical framework for the registration of lidar data, but it is easy to fall into a local optimum when the initial positions of the point cloud are quite different. Aiming at addressing the shortcomings of the ICP algorithm, Dong [16] proposed a Lie group parameterization method based on the ICP algorithm. This method can provide robust registration for a low-overlapping point cloud [17], but this method is time-consuming. Reference [18] applied a structure similar to a K-D tree [19] to search the nearest point, which improved the speed of searching the nearest point greatly, but the algorithm cannot solve the registration problem of a partially overlapping point cloud. An iterative dual matching (IDC) algorithm [20] takes a point in the reference scan data as the reference point and searches the matching point of that point in the current data. This algorithm does not require the robot to be in a structured environment. However, due to the movement of the mobile robot, the point cloud coincidence rate becomes lower and the corresponding point distance becomes larger, so it is easy to cause mismatch between points. In a line-based registration algorithm, the line has better stability than the point, and it has better robustness. Iterative closest line (ICL) [21] extracts straight lines from point cloud data, and then searches the corresponding line segments to register. The method of searching the corresponding relationship is too complicated and unstable. Holy [22] constructed an error function according to the area included between the extracted lines, and then used the gradient descent method [23] to solve the corresponding coordinate transformation parameters, but this method has a poor effect in complex indoor environments.
Table 1 shows the merits and demerits of the point cloud registration algorithms mentioned above. It can be seen that the core of the above laser radar data registration algorithms depends on searching for correspondences, but this correspondence search has also become a core factor limiting the performance of this type of algorithm, restricting its application to laser point clouds with outliers and to complex scenes. Therefore, this paper proposes a lidar data registration algorithm based on DBSCAN clustering. The algorithm avoids directly searching for correspondences between points or lines, effectively improving robustness and suppressing the influence of outliers.
The rest of this paper is organized as follows: Section 2 describes the lidar data registration algorithm based on DBSCAN clustering. Firstly, a ring band filter is designed to process the noisy outliers in the point cloud, and then an adaptive threshold is used to extract the line segment features in the laser radar point cloud. Using the DBSCAN density clustering algorithm, the key clusters of the rotation angle and translation matrix of the point cloud to be registered are obtained. In order to evaluate the similarity of the two frames of point cloud after data registration in the key clusters, a kernel density estimation method is proposed to describe the registered point cloud, and the optimal value in the key cluster is found using K-L divergence. Section 3 tests the feasibility of the algorithm on a data set and in a real environment, and compares the data registration results with the ICP algorithm. Section 4 summarizes the lidar data registration algorithm based on DBSCAN clustering.

2. Method of Solution

2.1. Radar Data Preprocessing

In the process of analyzing lidar data, it is found that, since lidar data acquisition is a real-time process, the sensor inevitably generates noise due to its operating environment and its own accuracy, resulting in isolated points in the data. These noise points negatively affect the robustness, accuracy and timeliness of the data registration algorithm. Before registering laser scanning data, it is therefore necessary to denoise the current and reference scanning data: that is, to preprocess the data. Laser sensor data are highly discretized, and when the target is far away from the laser sensor, the discretization error is also greater. Therefore, noise in long-range data must be processed adaptively to prevent it from interfering with the algorithm results. In order to facilitate the search for noise, the polar coordinate data collected in each lidar frame are converted to the Cartesian coordinate system. The measured distance is represented by $(x_i, y_i)$ in the Cartesian coordinate system of the sensor:
$\theta_i = (\Delta\theta \cdot i + \alpha) \cdot \dfrac{\pi}{180}$   (1)
$x_i = \rho_i \cos\theta_i, \qquad y_i = \rho_i \sin\theta_i$   (2)
In the formulae, $i$ is the index of the laser beam; $\theta_i$ is the angle of the $i$th laser beam in the rectangular coordinate system; $\Delta\theta$ is the angular resolution of the lidar; $\alpha$ is the starting angle of the lidar measurement; and $\rho_i$ is the distance from the reflection point to the emission point measured by the $i$th laser beam.
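As an illustration, the following Python sketch implements Equations (1) and (2) for one frame of scan data; the sign convention for the starting angle $\alpha$ (added to $\Delta\theta \cdot i$) is an assumption of this sketch.

```python
import numpy as np

def polar_to_cartesian(rho, delta_theta_deg, alpha_deg):
    """Convert one frame of lidar ranges to Cartesian points (Equations (1)-(2)).

    rho             : array of measured distances, one per beam
    delta_theta_deg : angular resolution of the lidar in degrees
    alpha_deg       : starting angle of the measurement in degrees
    """
    rho = np.asarray(rho, dtype=float)
    i = np.arange(len(rho))
    theta = (delta_theta_deg * i + alpha_deg) * np.pi / 180.0   # beam angles in radians, Eq. (1)
    return np.column_stack((rho * np.cos(theta), rho * np.sin(theta)))  # Eq. (2)

# Example: a 181-beam scan with 1 degree resolution starting at -50 degrees
# points = polar_to_cartesian(ranges, 1.0, -50.0)
```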
In order to reduce the influence of the above factors on feature extraction, it is necessary to preprocess radar data. The essence of radar data preprocessing is point cloud filtering [24]. Common point cloud filtering algorithms include median filtering and Gaussian filtering [25]. Median filtering has higher requirements for the selected points. If the selected coordinate points are not representative, the filtering accuracy will be affected. Gaussian filtering cannot preserve the corner features of the original point cloud well, and the edge point cloud will be smoothed greatly. In this paper, a ring band filter is designed. Firstly, abnormal values with noise in the laser point cloud are judged by a ring band filter. Then, the mean filter is used to reduce noise while retaining the corner features of the original point cloud.
As shown in Figure 1, some of the original data collected by the lidar are, in order, $P_1, P_2, P_3, P_4, P_5, P_6, P_7, P_8$. The ring band filter uses two bands. The inner ring used in this paper consists of the three laser points $P_4, P_5, P_6$, and the outer ring consists of the seven laser points $P_2, P_3, P_4, P_5, P_6, P_7, P_8$. In Figure 1a, the candidate laser point $P_5$ is a data point distorted by noise, which is distributed unevenly in the point cloud and should be replaced by the estimated laser point coordinate value. In Figure 1b, the candidate laser point $P_5$ is a corner point, i.e., the inflection point where two line segments intersect in the obstacle contour. It is a very important environmental feature for the segmentation of the point set and should be retained. The candidate laser point $P_5$ in Figure 1c is a noise point. However, due to the corner feature $P_6$ in its band, the estimated laser point coordinate value differs considerably from the actual value, which affects the angle of the line segment fitted to the point cloud, so the point should be filtered out.
In Figure 1, the slope $k_i$, obtained by drawing a straight line from point $P_1$ to each laser point in the ring, is:
$k_i = \dfrac{y_i - y_1}{x_i - x_1}$   (3)
In the formula, $(x_i, y_i)$ are the coordinates of laser point $i$. In the ring to be denoised, the laser points to be judged as outliers or not are defined as candidate laser points; each candidate laser point is placed at the center of the ring band filter shown in the figure. The algorithm can be expressed as:
$P_{center}^i = \begin{cases} M, & C_1 < T_1 \land C_2 < T_2 \\ P_{before}^i, & C_1 < T_1 \land C_2 > T_2 \\ \text{inf}, & C_1 > T_1 \\ P_{before}^i, & C_1 = 0 \end{cases}$   (4)
In the above formula, $P_{center}^i$ is the coordinate value of the center candidate laser point after inference; $P_{before}^i$ is the coordinate value of the point before inference; $C_1$ and $C_2$ are the numbers of slope outliers in the inner and outer layers of the ring band filter; $T_1$ is the threshold on the number of slope outliers in the inner layer; $T_2$ is the threshold on the number of slope outliers in the outer layer; and $M$ is the mean value of the laser point coordinates in the inner layer. An outlier here means that one or several values in the data differ considerably from the other values, i.e., the slope value of a laser point in the band deviates from the slope values of the other laser points.
Therefore, the coordinate value of the candidate laser point is set to the mean of the coordinates of the inner laser points only when the number of outliers in the inner ring is less than the $T_1$ threshold and the number of outliers in the outer ring is less than the $T_2$ threshold. When the number of outliers in the inner ring is less than the $T_1$ threshold and the number of outliers in the outer ring is greater than the $T_2$ threshold, the coordinate value of the candidate laser point is retained. When the number of outliers in the inner ring is greater than the $T_1$ threshold, the slopes of the laser points form multiple clusters; precisely because of the corner points in the ring, the estimation of the coordinate value of the candidate laser point would be interfered with by other laser points, so the laser point is removed, which preserves the line segment feature without affecting the fitted line segment angle. Figure 2 shows the effect of preprocessing the laser radar data. Region 1 and region 2 in Figure 2a are data points distorted by noise, which are unevenly distributed in the point cloud and should be replaced by estimated laser point coordinate values. In region 1 and region 2 of Figure 2b, the coordinate values are set to the mean of the inner laser point coordinates, making the point cloud data smoother. Region 3 in Figure 2a is the corner feature at the intersection of two line segments in the obstacle contour; the corner feature in region 3 of Figure 2b is retained without changing the coordinate value. Region 4 in Figure 2a is a data point distorted by noise, but because of the corner feature near this data point, and in order to prevent the estimated laser point coordinates from being pulled away from the true value by other points, region 4 in Figure 2b is removed.
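The following Python sketch illustrates one possible implementation of the ring band filter of Equations (3) and (4). The band geometry follows Figure 1 (a 3-point inner ring and a 7-point outer ring, with $P_1$ as the reference point); the concrete rule used to flag a slope as an outlier (deviation from the median slope of the ring by more than a tolerance) is an assumption, since the paper does not fix it.

```python
import numpy as np

def ring_band_filter(points, T1=2, T2=3, slope_tol=0.5):
    """Sketch of the ring band filter of Eqs. (3)-(4).

    points    : (N, 2) Cartesian laser points of one scan, in acquisition order
    T1, T2    : outlier-count thresholds for the inner (3-point) and outer (7-point) rings
    slope_tol : assumed outlier rule -- a slope k_i counts as an outlier when it
                deviates from the median slope of its ring by more than this value
    """
    points = np.asarray(points, dtype=float)
    filtered = [points[i] for i in range(4)]              # keep the leading points untouched
    for c in range(4, len(points) - 3):
        p1 = points[c - 4]                                # reference point P1 of the band
        outer = points[c - 3:c + 4]                       # P2..P8, candidate P5 at the centre
        inner = points[c - 1:c + 2]                       # P4, P5, P6

        def count_outliers(ring):
            dx = ring[:, 0] - p1[0]
            dy = ring[:, 1] - p1[1]
            k = dy / np.where(np.abs(dx) < 1e-9, 1e-9, dx)   # slopes k_i, Eq. (3)
            return int(np.sum(np.abs(k - np.median(k)) > slope_tol))

        C1, C2 = count_outliers(inner), count_outliers(outer)
        if C1 >= T1:
            continue                                      # noise next to a corner: drop the point
        if C1 == 0 or C2 >= T2:
            filtered.append(points[c])                    # clean point or corner: keep as is
        else:
            filtered.append(inner.mean(axis=0))           # isolated noise: replace by inner-ring mean
    filtered.extend(points[len(points) - 3:])             # keep the trailing points untouched
    return np.array(filtered)
```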

2.2. Laser Radar Data Registration

Before data registration of laser data, it is necessary to extract line segment features from the filtered point set so that it better matches the environmental characteristics. In this paper, our previous work, an adaptive-threshold line segment feature extraction algorithm for laser radar scanning environments [26], is used to extract the line segment features. The principle of the algorithm is to use the adaptive nearest-neighbor algorithm to detect breakpoints and to segment the point set according to an adaptive threshold.
The distance $d_i$ between two adjacent measurement points and the breakpoint detection threshold $D$ are compared. If $d_i < D$, the two adjacent measurement points are classified into the point set of the same obstacle; otherwise, breakpoint $i$ is extracted, that is, point $i$ is the end point of the previous line and point $i+1$ is the starting point of the next line, and the point set of the original data is preliminarily segmented. The breakpoint detection threshold $D$ should change adaptively according to the distance measured by the laser, as in Equation (5):
$D = k \cdot \Delta d_i = k \cdot \rho_i \cdot \Delta\theta \cdot \pi / 180$   (5)
$\Delta d_i$ is the distance between two adjacent points, and $\rho_i$ is the distance measured by laser beam $i$. $\Delta\theta$ is the angular resolution of the lidar, and $k$ is a fixed value that needs to be determined for the lidar currently in use. When the distance $\rho_i$ measured by the laser increases, the distance $\Delta d_i$ between the two adjacent points also increases; $k$ is the amplification coefficient of the breakpoint detection threshold.
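A minimal sketch of this adaptive breakpoint test, under the assumption of a fixed amplification coefficient $k$, is given below.

```python
import numpy as np

def detect_breakpoints(rho, delta_theta_deg=1.0, alpha_deg=0.0, k=2.0):
    """Adaptive breakpoint detection of Eq. (5): beam i ends a segment when the
    gap to beam i+1 exceeds D = k * rho_i * delta_theta * pi / 180.  The
    amplification coefficient k is lidar dependent; its value here is assumed."""
    rho = np.asarray(rho, dtype=float)
    idx = np.arange(len(rho))
    theta = (delta_theta_deg * idx + alpha_deg) * np.pi / 180.0
    pts = np.column_stack((rho * np.cos(theta), rho * np.sin(theta)))
    d = np.linalg.norm(np.diff(pts, axis=0), axis=1)       # d_i between adjacent points
    D = k * rho[:-1] * delta_theta_deg * np.pi / 180.0     # adaptive threshold per beam
    return np.flatnonzero(d >= D)                          # indices of the detected breakpoints
```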
After completing the initial segmentation of the original point set, it is necessary to further segment each part of the point set to find the corner features. The slope difference $\Delta k_i$ at two adjacent laser data points is calculated as follows:
$\Delta k_i = \dfrac{\rho_i - \rho_{i-1}}{\rho_{i-1}\cdot\Delta\theta} - \dfrac{\rho_{i+1} - \rho_i}{\rho_i\cdot\Delta\theta}$   (6)
In the formula, $\Delta\theta$ is the angular resolution of the laser radar. Point $P_i$ is the corner of two lines when $P_i$ is the intersection of line L1 and line L2. When $\Delta k_i > dk_{th}$, $\Delta k_i > \Delta k_{i-1}$ and $\Delta k_i > \Delta k_{i+1}$, point $i$ is a corner point: that is, the end point of the previous line and the starting point of the next line. $dk_{th}$ is the threshold for corner extraction. The algorithm evaluates the segmentation effect according to the fitting error of each segmented point set until the most appropriate threshold $dk_{th}$ is found, so that the average fitting error of all segments is minimal, and the segmented point set is output.
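The corner test of Equation (6) can be sketched as follows; the absolute value of the slope difference (so that corners turning in either direction are detected) and the fixed threshold $dk_{th}$, rather than the adaptively tuned one, are assumptions of this sketch.

```python
import numpy as np

def detect_corners(rho, delta_theta_rad, dk_th):
    """Corner test of Eq. (6): point i is a corner when its slope difference is
    above dk_th and is a local maximum.  The absolute value and the fixed
    threshold are assumptions of this sketch."""
    rho = np.asarray(rho, dtype=float)
    dk = np.abs((rho[1:-1] - rho[:-2]) / (rho[:-2] * delta_theta_rad)
                - (rho[2:] - rho[1:-1]) / (rho[1:-1] * delta_theta_rad))   # |Delta k_i|
    corners = []
    for j in range(1, len(dk) - 1):
        if dk[j] > dk_th and dk[j] > dk[j - 1] and dk[j] > dk[j + 1]:
            corners.append(j + 1)                          # dk[j] corresponds to beam j+1
    return corners
```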
The formula for calculating the sum of the squared errors (SSE) in line segment fitting is shown in (7):
$SSE = \sum_{i=1}^{n} W_i \left(y_i - \hat{y}_i\right)^2$   (7)
In the formula, SSE is the sum of squared errors between the predicted data and the original data at corresponding points, $W_i$ is the weight, $y_i$ is the original data and $\hat{y}_i$ is the predicted data. In order to reduce the amount of calculation, the point set to be fitted is divided into five sections. Then, the mean of the coordinates of each part of the point set is calculated as a new point. Finally, the five points are used to fit the line.
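A compact sketch of this five-section fitting and of the SSE of Equation (7) is given below; uniform weights $W_i$ are assumed when none are supplied.

```python
import numpy as np

def fit_segment(points, weights=None):
    """Line fitting used for segment evaluation (Eq. (7)): split the point set
    into five sections, replace each section by its mean point, fit a line
    through the five mean points and return the weighted SSE (uniform weights
    assumed when none are given)."""
    points = np.asarray(points, dtype=float)
    sections = np.array_split(points, 5)
    means = np.array([s.mean(axis=0) for s in sections])   # five representative points
    k, b = np.polyfit(means[:, 0], means[:, 1], 1)         # least-squares line y = kx + b
    y_pred = k * points[:, 0] + b
    if weights is None:
        weights = np.ones(len(points))
    sse = np.sum(weights * (points[:, 1] - y_pred) ** 2)   # Eq. (7)
    return (k, b), sse
```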
Figure 3 shows the simulation results of the line segment feature extraction algorithm on an Intel laboratory data set. The simulation results show that the adaptive nearest-neighbor algorithm in this paper has a good effect on breakpoint detection. All 22 breakpoints in this frame are extracted without wrong fitting, and the adaptive threshold segmentation algorithm accurately extracts the diagonal feature. Since the data were preprocessed before line segment segmentation, the data were smoother and no over-segmentation or under-segmentation occurred.
Lidar scanning data points can be registered directly, but it is difficult to search for correspondences between data points when similar environmental terrain is encountered, and the distance between corresponding data points increases with the laser scanning distance. After completing the linear feature extraction of the lidar data, a frame of lidar data can be represented by a set of straight-line segments, and the straight-line segments of two frames can be registered, which improves the accuracy of data registration.
In an unknown static environment, the association between data points will further deteriorate, and classical laser data registration algorithms will be greatly affected. A mobile robot collects two frames of point cloud data at two adjacent locations with its on-board lidar, recorded as sets $O_j = \{(\theta_j, \rho_j)\}_{j=1,\dots,N}$ and $R_i = \{(\theta_i, \rho_i)\}_{i=1,\dots,M}$, respectively. How to describe these two groups of sample points for correlation is the first problem to be considered. From data sets $O_j$ and $R_i$, the transformation relationship between the two adjacent positions of the mobile robot is calculated, denoted as the rotation angle $\varphi$ and the translation matrix $t$, respectively, where the translation matrix is $t = (t_x, t_y)^T$.
The registration problem of 2D lidar data can be expressed as finding an optimal translation matrix $t$ and rotation angle $\varphi$ such that data set $O_j$ aligns with data set $R_i$:
$x_i^R = \rho_i^R \cos\theta_i^R = \rho_j^O \cos(\theta_j^O + \varphi) + t_x, \qquad y_i^R = \rho_i^R \sin\theta_i^R = \rho_j^O \sin(\theta_j^O + \varphi) + t_y$   (8)
Data point $j$ in data set $O_j$ corresponds to data point $i$ in data set $R_i$. This section simulates two sets of 180° laser data with the open point cloud data of the Intel Lab. Point set $R_i$ is obtained from point set $O_j$ after the transformation (−1.5, 0.8, 0.3), as shown in Figure 4. The blue solid points are the data points in set $O_j$, and the red circle points are the data points in set $R_i$.

2.2.1. Estimation of Rotation Angle

Figure 5 shows the line feature extraction results for data sets $O_j$ and $R_i$, each represented by a group of line segments. The blue line segments are the fitted environmental features of data set $O_j$, and the red line segments are the fitted environmental features of data set $R_i$.
The difference between the angles of any two line segments from the two groups is computed:
$\Delta\theta_i = \theta_k^R - \theta_p^O$   (9)
In the formula, $\theta_k^R$ is the angle of straight line $k$ in data set $R_i$, and $\theta_p^O$ is the angle of straight line $p$ in data set $O_j$. All one-dimensional angle differences $\Delta\theta_i$ are converted into two-dimensional variables $(\Delta\theta_i, \Delta\theta_i)$, and the DBSCAN algorithm [27], based on density clustering, is adopted for clustering (a minimal code sketch follows the steps below). The specific steps of the DBSCAN algorithm are as follows:
(1)
Determine the distance threshold $\sigma$ of the sample neighborhood and the threshold $MinPts$ on the number of samples within distance $\sigma$. Check whether $|N_p(\Delta\theta_i)| \geq MinPts$ holds. If so, $\Delta\theta_i$ is a core object and is added to the core object collection $B$, where $N_p(\Delta\theta_i)$ is the set of points in the circular region with $(\Delta\theta_i, \Delta\theta_i)$ as the center and $\sigma$ as the radius.
(2)
Randomly select a core object $\Delta\theta_i$ from set $B$, find all points density-reachable from it, add them to a new set $C_1$ to form the first cluster, and remove the core objects contained in $C_1$ from $B$.
(3)
Repeat the above process until all the points in the dataset are processed and m clusters C m are obtained.
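The sketch below illustrates these steps using the DBSCAN implementation of scikit-learn; the exact mapping of the paper's $\sigma$ and $MinPts$ onto scikit-learn's eps and min_samples, and the plain (unweighted) averaging of the key cluster, are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_rotation(angles_O, angles_R, eps=0.08, min_samples=2):
    """Rotation-angle estimation of Section 2.2.1: cluster all pairwise angle
    differences between the line segments of the two scans and average the
    largest ("key") cluster.  eps/min_samples stand in for the paper's sigma
    and MinPts; the exact mapping and the unweighted average are assumptions."""
    diffs = np.array([a_r - a_o for a_r in angles_R for a_o in angles_O])  # Eq. (9)
    X = np.column_stack((diffs, diffs))               # duplicate into 2-D as in the paper
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    valid = labels[labels >= 0]
    if valid.size == 0:
        raise ValueError("no cluster found; relax eps or min_samples")
    key = np.bincount(valid).argmax()                 # cluster with the most members
    return float(diffs[labels == key].mean())         # average of the key cluster
```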
Figure 6 shows the distribution of each cluster after DBSCAN density clustering of all samples. It can be seen that the distribution of key clusters clustered in the black box is relatively concentrated, and the enlarged results are mainly concentrated around 0.3. The cluster with the largest number of sampled elements is taken as the key cluster. Finally, the weighted average of all samples of the key cluster can be obtained, and the rotation angle φ of data sets O j and R i is 0.3.
Because the clustered variable is two-dimensional, the parameters of the DBSCAN clustering algorithm are selected using the idea of the k-dist graph proposed in reference [28]: calculate the distance from each point to its $k$-th nearest point, and sort these distances from large to small for plotting. The distance at the inflection point is taken as the value of $\sigma$, and the value of $MinPts$ is $k + 1$. After testing, it is found that ideal results can be obtained when the parameters of the DBSCAN algorithm are set to $MinPts = 1$ and $\sigma = 0.08$ m. Figure 7 shows the distribution of all data points of data set $O_j$ after rotation by angle $\varphi$, together with all line segments of $R_i$.
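A short sketch of the k-dist curve used for this parameter selection is given below (the function and parameter names are this sketch's own).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_dist_curve(X, k=1):
    """Sorted k-th nearest-neighbour distances (the k-dist graph of [28]).
    The distance at the knee of this curve is taken as sigma and MinPts = k + 1."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: the nearest neighbour of a point is itself
    dist, _ = nn.kneighbors(X)
    return np.sort(dist[:, k])[::-1]                  # sorted from large to small for plotting
```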

2.2.2. Estimation of the Translation Matrix

The rotation angle $\varphi$ from data set $O_j$ to $R_i$ is obtained above, and all data points of data set $O_j$ are rotated by $\varphi$. The new data set $O'_j$ is:
$x_j^{O'} = \rho_j^{O'}\cos\theta_j^{O'} = \rho_j^{O}\cos(\theta_j^{O} + \varphi), \qquad y_j^{O'} = \rho_j^{O'}\sin\theta_j^{O'} = \rho_j^{O}\sin(\theta_j^{O} + \varphi)$   (10)
Then, data sets $O'_j$ and $R_i$ can each be represented by a group of straight-line segments, and the straight-line segments in the two data sets have approximately parallel parts:
$O'_j = \{O_p\}_{p=1,2,\dots}, \qquad R_i = \{R_k\}_{k=1,2,\dots}$   (11)
In the formulae, $p$ indexes the line segments in data set $O'_j$, and $k$ indexes the line segments in data set $R_i$. For a straight-line segment $O_p$ in data set $O'_j$ and a straight-line segment $R_k$ in data set $R_i$, if Equation (12) is satisfied, the two straight lines in the two data sets are approximately parallel. They are assigned as a straight-line segment pair $L$, and the average angle of the two approximately parallel lines is taken as the direction angle $\theta_{p,k}$ of the straight-line segment pair.
$\Delta\theta_{p,k} = \angle(O_p, R_k) < \varepsilon_1$   (12)
In the formula, $\Delta\theta_{p,k}$ is the angle difference between the two line segments, and $\varepsilon_1$ is the threshold for judging whether the two line segments are approximately parallel, which removes the angle outliers that appear when feature extraction fits the line segments. When the threshold $\varepsilon_1$ is too small, the matching degree of nearly parallel line segments of the two data sets is high, but the number of successfully matched line segment pairs $L$ is small, which is not conducive to subsequent clustering. If the threshold $\varepsilon_1$ is too large, the line segments in the two data sets will be matched incorrectly, resulting in errors in subsequent calculations. Therefore, the value of $\varepsilon_1$ should be adjusted so that the number of selected line segment pairs $L$ stays within the range of the total number of line segments in data set $O'_j$. In this case, the probability of mismatching is small, and a sufficient amount of data is obtained.
For the line segment pairs $L$, if the direction angles of any two line segment pairs satisfy Equation (13), they can be regarded as a line segment pair equation $L_{op}$:
$\left|\theta_{p,k}^{L_q} - \theta_{p,k}^{L_t}\right| > \varepsilon_2$   (13)
In the formula, $L_q \in L$, $L_t \in L$, and $\varepsilon_2$ is the threshold for judging that two line segment pairs are not parallel; all line segment pairs $L$ are screened in this way to form the line segment pair equations $L_{op}$.
The translation matrix has the form $t = (t_x, t_y)^T$. According to Equation (8), the two parallel lines in a line segment pair $L$ can be written as $y = kx + b$ and $y = k(x - t_x) + b + t_y$. Therefore, the relationship between the distance between the two parallel lines and the distance from a point on one line segment to the other line segment can be obtained, as shown in Equation (14):
$\dfrac{\left| -k t_x + t_y \right|}{\sqrt{k^2 + 1}} = \dfrac{\left| k x_0 - y_0 + b \right|}{\sqrt{k^2 + 1}}$   (14)
where $(x_0, y_0)$ is a point on the straight-line segment. Owing to the error in the accuracy of the line fit, this experiment selects the midpoint of the straight-line segment; a simplification can be obtained as follows:
$\left( -k t_x + t_y \right)^2 = \left( k x_0 - y_0 + b \right)^2$   (15)
Each line segment pair $L$ yields an equation of the form (15), in which the unknown quantity is $t = (t_x, t_y)^T$. Therefore, two non-parallel line segment pairs $L$ are required to solve for the unknowns: that is, a line segment pair equation $L_{op}$. Each line segment pair equation $L_{op}$ yields one translation matrix $t = (t_x, t_y)^T$.
After solving all the line segment pair equations $L_{op}$, this paper uses the DBSCAN algorithm to cluster all the resulting translation matrices as samples, and then takes the cluster with the largest number of samples as the key cluster. Because the translation matrix is a two-dimensional variable, testing shows that ideal results can be obtained when the parameters of the DBSCAN algorithm are set to $MinPts = 1$ and $\sigma = 0.05$ m; that is, two-dimensional variables with similar values in both dimensions are gathered together. Figure 8 shows the distribution of each cluster after DBSCAN density clustering of all translation matrix $t$ samples. The clusters are mainly concentrated in region 1 and region 2. This is because Equation (15) is a squared equation, so the translation matrix $t$ forms two opposite clusters, and it is not possible to determine from the clustering alone which key cluster to use. Moreover, the samples in the key clusters shown in the enlarged images of region 1 and region 2 are scattered: because of outlier interference in the extraction of line segment features, and because the estimated rotation angle is not exactly the true value, the samples of the translation matrix $t$ key clusters cannot simply be weight-averaged. In order to evaluate the similarity of the two frames of point cloud after data registration in the key clusters, this paper proposes a kernel density estimation method, which substitutes the samples in the key clusters into the probability density function. The optimal solution of the translation matrix $t$ is then found using the K-L divergence.
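As an illustration, the sketch below builds the candidate translations from Equation (15) and extracts the key cluster with DBSCAN. Because Equation (15) is squared, each pair equation is solved for both sign choices, which reproduces the two opposite clusters of Figure 8; the parameter mapping onto scikit-learn and the helper names are assumptions of this sketch.

```python
import itertools
import numpy as np
from sklearn.cluster import DBSCAN

def translation_candidates(pair_eqs):
    """Solve Eq. (15) for every combination of two line-segment pair equations.

    pair_eqs : list of (k, c) tuples, where k is the common slope of a matched
               line pair and c = k*x0 - y0 + b is the point-to-line term of Eq. (15)
    """
    candidates = []
    for (k1, c1), (k2, c2) in itertools.combinations(pair_eqs, 2):
        A = np.array([[-k1, 1.0], [-k2, 1.0]])            # -k*tx + ty = +/- c
        if abs(np.linalg.det(A)) < 1e-6:
            continue                                      # skip nearly parallel pair equations
        for s1, s2 in itertools.product((1.0, -1.0), repeat=2):
            candidates.append(np.linalg.solve(A, np.array([s1 * c1, s2 * c2])))
    return np.array(candidates)

def key_cluster(candidates, eps=0.05, min_samples=2):
    """Cluster all translation candidates with DBSCAN and return the members
    of the largest cluster (mapping of the paper's sigma/MinPts onto
    scikit-learn's eps/min_samples is assumed)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidates)
    valid = labels[labels >= 0]
    if valid.size == 0:
        return candidates                                 # fall back to all candidates
    key = np.bincount(valid).argmax()
    return candidates[labels == key]
```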

2.3. Kernel Density Estimation and K-L Divergence

There are many methods to analyze and describe sample points. The probability density function (PDF) [29] is used here as the means of describing sample points, and this paper proposes to use kernel density estimation (KDE) to obtain the probability density function of a laser point cloud. As one of the classical methods of nonparametric density estimation, kernel density estimation plays an extremely important role in big data processing: when the probability distribution of an event is unknown, the density function is estimated from the observed data. In addition, the distance between data points also matters; data points that are close together have a greater influence on each other, while data points that are far apart have a smaller influence.
The kernel density estimation method can estimate the density of arbitrarily distributed sample points without prior knowledge of the sample distribution, so it is suitable for analyzing the laser scanning data obtained by mobile robots in any environment. Let random variables $X_1, X_2, \dots, X_n$ be independent, identically distributed samples drawn from the population with density function $f(x)$; then the kernel density estimator $\hat{f}(x)$ is:
$\hat{f}(x) = \dfrac{1}{nh}\sum_{i=1}^{n} K\!\left(\dfrac{x - X_i}{h}\right)$   (16)
In the formula, $n$ is the sample size, $h$ is the bandwidth and $K(x)$ is the kernel function. As the core of the kernel density estimation method, the kernel function should meet the following conditions: (1) non-negativity: $K(x) \geq 0$; (2) symmetry: $K(-x) = K(x)$; (3) normalization: $\int_{\mathbb{R}} K(x)\,dx = 1$. For the kernel density estimator $\hat{f}(x)$, it can be observed that the smaller the absolute difference between random variable $X_i$ and $x$, i.e., the smaller the distance between them, the greater the influence of $X_i$ on the density value at point $x$. In addition, the kernel density estimator depends only on the sample data, the bandwidth and the kernel function, and does not require the sample data to follow a specific model or rule. Applying the samples in the key clusters of the rotation angle and the translation matrix to data set $O_j$ gives the registered data set; kernel density estimation is then carried out for the registered data set and for data set $R_i$:
$P_O(x) = \dfrac{1}{nh}\sum_{j=1}^{n} K\!\left(\dfrac{x - O_j}{h}\right), \qquad P_R(x) = \dfrac{1}{nh}\sum_{i=1}^{n} K\!\left(\dfrac{x - R_i}{h}\right)$   (17)
For the kernel density estimator, as long as its kernel function and bandwidth are properly selected, the kernel density estimation method can approach the real density function with arbitrary accuracy. The commonly used kernel functions that meet the above properties mainly include the Gaussian kernel function, the Epanechnikov kernel function and the Triangle kernel function. For the convenience of calculation, the Gaussian kernel function is selected, and its mathematical expression is:
$K(x) = \dfrac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$   (18)
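A direct implementation of the estimator of Equation (16) with the Gaussian kernel of Equation (18) can be sketched as follows; the bandwidth $h$ is left as a free parameter.

```python
import numpy as np

def gaussian_kde(samples, h):
    """Kernel density estimator of Eq. (16) with the Gaussian kernel of Eq. (18).
    Returns a function that evaluates the estimated density at given points."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)

    def f_hat(x):
        u = (np.atleast_1d(x).astype(float)[:, None] - samples[None, :]) / h  # (x - X_i) / h
        K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)                      # Gaussian kernel, Eq. (18)
        return K.sum(axis=1) / (n * h)                                        # Eq. (16)

    return f_hat

# Example: density of the x-coordinates of a scan evaluated on a grid
# p_O = gaussian_kde(x_coords_O, h=0.3)(np.linspace(-5.0, 5.0, 200))
```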
The Gaussian kernel function is substituted into the kernel density estimation of the data set to obtain the probability density function of the sample points. Figure 9 shows the xy-axis sample data distribution and probability density function of the data sets O j and R i . Since data set R i is obtained by O j through transformation (−1.5, 0.8, 0.3), the x-axis coordinate distribution of the two data sets in Figure 9a is quite different. The distance between the corresponding laser points in the first half of the data set is large, and the distance between the corresponding laser points in the second half is small, resulting in different probability density function waveforms. The x-axis coordinates of the data set R i are more concentrated in the 0–1 interval, and the probability exceeds 0.3, while data set O j is more concentrated in the 2–3 interval, and the probability does not exceed 0.3. The y-axis coordinates of the two data sets in Figure 9b are more compact, but the distance at the laser point in the first half is small, and the distance at the laser point in the second half is large. From the waveform of the probability density function, the y-axis coordinates of data set R i are more concentrated in the 3–4 interval, and the probability does not exceed 0.2, and data set O j is more concentrated in the 1–2 interval, and the probability exceeds 0.2.
Figure 10 shows the xy-axis sample data distribution and probability density function of data sets $O'_j$ and $R_i$. Data set $O'_j$ is obtained by rotating $O_j$ by the estimated angle. There is a certain distance between the x-axis coordinate distributions of the two data sets in Figure 10a, and the distances at corresponding laser points are equal. The probability density function waveforms of the two are the same, but the x-axis coordinates of data set $R_i$ are more concentrated in the 0–1 interval, while data set $O'_j$ is more concentrated in the 2–3 interval. The y-axis coordinates of the two data sets in Figure 10b are more compact, and the distances at corresponding laser points are equal. From the waveform of the probability density function, the y-axis coordinates of the two data sets are concentrated in the interval 2–4.
For the two key clusters obtained when using DBSCAN to cluster the translation matrix solutions, data set $O'_j$ is first transformed by the translation matrix $t$ of region 1 in Figure 8 to obtain data set $O''_j$. Figure 11 shows the xy-axis sample data distribution and probability density function of data sets $O''_j$ and $R_i$. The coordinate values of corresponding points of data sets $O''_j$ and $R_i$ in the figure differ greatly. Since Equation (15) is a squared equation, the clustered translation matrix $t$ forms two opposite clusters; therefore, the translation matrix $t$ of the key cluster in region 1 of Figure 8 is incorrect, its sign being opposite to that of the true translation matrix $t$.
Data set $O'_j$ is then transformed by the translation matrix $t$ of region 2 in Figure 8 to obtain data set $O''_j$. Figure 12 shows the xy-axis sample data distribution and probability density function of data sets $O''_j$ and $R_i$. It can be seen that the xy-axis distributions of data sets $O''_j$ and $R_i$ basically overlap, indicating that the translation matrix $t$ of the key cluster in region 2 is close to the true value, but the different values within the key cluster affect the estimation results. In Figure 12a, when the value of $t_x$ is −1.500567, the x-axis coordinates of the 60th laser point of data sets $O''_j$ and $R_i$ differ by 0.0006 m; when the value of $t_x$ is −1.506931, they differ by 0.0058 m. In Figure 12b, when the value of $t_y$ is 0.800261, the y-axis coordinates of the 60th laser point of data sets $O''_j$ and $R_i$ differ by 0.00025 m, and when the value of $t_y$ is 0.811266, they differ by 0.0112 m.
To address the problem that different values in the key cluster of translation matrix $t$ influence the estimation results, this paper uses K-L divergence to evaluate the estimation effect of translation matrix $t$. K-L divergence, also known as relative entropy, can be used to measure the degree of difference between two signals and is an index quantifying the similarity between two probability distributions. Let the two probability distributions be $p(x)$ and $q(x)$; then the K-L distance $\delta_{KL}(p,q)$ from $q$ to $p$ is defined as:
$\delta_{KL}(p, q) = \sum_{i=1}^{N} p(x_i)\cdot\log\dfrac{p(x_i)}{q(x_i)}$   (19)
Similarly, the K-L distance $\delta_{KL}(q,p)$ of $p$ relative to $q$ is:
$\delta_{KL}(q, p) = \sum_{i=1}^{N} q(x_i)\cdot\log\dfrac{q(x_i)}{p(x_i)}$   (20)
Finally, the K-L divergence value $D_{KL}(p,q)$ is obtained from the K-L distances $\delta_{KL}(p,q)$ and $\delta_{KL}(q,p)$ of the two probability distributions:
$D_{KL}(p, q) = \delta_{KL}(p, q) + \delta_{KL}(q, p)$   (21)
K-L divergence is used to measure the difference between two probability distributions. A large K-L divergence value indicates a large difference between the two probability distributions; the smaller the K-L divergence value, the smaller the difference between them, that is, the higher the similarity, and when the two probability distributions are exactly the same the value is 0. Therefore, the probability density function of each sample in the key cluster of region 2 in Figure 8 is calculated, and its K-L divergence with respect to the probability density function of data set $R_i$ is computed. The sample with the minimum K-L divergence value is taken as the optimal estimate of the translation matrix $t$; that is, the estimated transformation is (−1.50057, 0.80026, 0.3), while the true value is (−1.5, 0.8, 0.3). It can be seen that the lidar data registration algorithm based on DBSCAN clustering proposed in this paper estimates the rotation angle and translation matrix between the two frames of the laser point cloud with high accuracy, essentially reproducing the true values.
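A minimal sketch of the symmetrised K-L divergence of Equations (19)–(21), evaluated on two discretised probability density functions, is given below; the small constant added to avoid log(0) is an implementation assumption.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetrised K-L divergence of Eqs. (19)-(21) between two discretised
    probability distributions defined on the same grid."""
    p = np.asarray(p, dtype=float) + eps              # eps guards against log(0)
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()                   # normalise to sum to 1
    d_pq = np.sum(p * np.log(p / q))                  # Eq. (19)
    d_qp = np.sum(q * np.log(q / p))                  # Eq. (20)
    return d_pq + d_qp                                # Eq. (21)
```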

2.4. Description of the Algorithm

The lidar returns a set of ordered two-dimensional lidar data each time it scans the environment. The two collected frames of point cloud data are recorded as sets $O_j = \{(\theta_j, \rho_j)\}_{j=1,\dots,N}$ and $R_i = \{(\theta_i, \rho_i)\}_{i=1,\dots,M}$, respectively. A detailed description of the algorithm is shown in Figure 13.
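Purely as a rough illustration of the overall flow of Figure 13, the sketch below wires together the helper functions sketched in the preceding subsections (estimate_rotation, translation_candidates, key_cluster, gaussian_kde, symmetric_kl); the fitted segment angles and the pair-equation builder pair_eq_fn are supplied by the caller, and the K-L comparison is simplified to the x-coordinate densities. All names are this sketch's own, not the paper's.

```python
import numpy as np

def register_scans(points_O, points_R, angles_O, angles_R, pair_eq_fn,
                   h=0.3, grid=np.linspace(-10.0, 10.0, 400)):
    """Rough end-to-end sketch of the registration flow of Figure 13.

    points_O, points_R : (N, 2) / (M, 2) Cartesian points of the two scans
    angles_O, angles_R : fitted line-segment angles of each scan
    pair_eq_fn         : caller-supplied function returning the (k, c) terms
                         of Eq. (15) for the rotated scan and the reference scan
    """
    # 1. Rotation angle from the key cluster of segment-angle differences.
    phi = estimate_rotation(angles_O, angles_R)

    # 2. Rotate scan O by phi.
    c, s = np.cos(phi), np.sin(phi)
    O_rot = np.asarray(points_O, dtype=float) @ np.array([[c, -s], [s, c]]).T

    # 3. Key cluster of translation candidates from the pair equations.
    cluster = key_cluster(translation_candidates(pair_eq_fn(O_rot, points_R)))

    # 4. Keep the candidate whose registered cloud is closest to R in the K-L
    #    sense (compared here on the x-coordinate densities only, for brevity).
    p_R = gaussian_kde(np.asarray(points_R)[:, 0], h)(grid)
    best = min(cluster, key=lambda t: symmetric_kl(
        gaussian_kde(O_rot[:, 0] + t[0], h)(grid), p_R))
    return phi, np.asarray(best)
```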

3. Discussion and Results

3.1. Open-Source Dataset Simulation

In order to verify the performance of the lidar data registration algorithm based on DBSCAN clustering, the algorithm is used to register two-dimensional lidar data in the MATLAB simulation environment. The open point cloud data of the Intel Lab provided by Dirk Hahnel [30] are selected for the experiment. The laser radar scanning angle of this data set is 180° and the angular resolution is 1°. Two consecutive frames of laser data from the data set are selected, and the experimental results are shown in Figure 14. Figure 14a shows the line segment features of the two frames of laser point cloud $O_j$ and $R_i$. An environment with multiple breakpoints and multiple corners is selected, and the line segment features are extracted. Due to the change of the lidar position in the map, the positions of the corresponding line segments also change. In the figure, region 1 is the point cloud newly appearing after the position transformation of laser point cloud $O_j$, while region 2 is the point cloud missing from laser point cloud $O_j$. However, since the initial point cloud has been rotated and translated, the two frames of the point cloud can be approximately aligned by the data registration algorithm to obtain the lidar rotation angle and translation matrix. Figure 14b shows the point cloud line segment features after data registration with the proposed algorithm. Except for the line segments newly appearing or missing due to the position transformation, the remaining line segments coincide with each other after the estimated rotation angle and translation matrix transformation. It can be seen that the rotation angle and translation matrix estimated by the proposed lidar data registration algorithm based on DBSCAN clustering are basically consistent with the real odometer results, so the algorithm can be used effectively in complex environments with multiple breakpoints and multiple corners.
Figure 15 is a local enlarged view of region 3 in Figure 14. Figure 15a is the distribution of the line segments of the laser point cloud after rotation. It can be seen that the line segments of the two frames of the laser point cloud are approximately parallel. The experimental rotation angle obtained is 0.0737, and the actual rotation angle in the open-source data set is 0.0676. Figure 15b is the distribution of the line segments of the laser point cloud after rotation and translation. It can be seen that the line segments of the two frames of the laser point cloud are approximately coincident. The translation matrix obtained from the experimental results is (−0.00644, −0.003122), and the actual translation matrix in the open-source data set is (−0.001, −0.001).
Figure 16 selects two frames of point cloud with noise from the Intel Lab open point cloud data. Figure 16a shows the environmental laser point cloud $O_j$ scanned by the mobile robot at a certain time, and the laser point cloud $R_i$ obtained after the robot moves forward and rotates. The laser point cloud $O_j$ contains more environmental noise and the environmental angle scanned by the lidar is larger, while the laser points in point cloud $R_i$ are relatively dense and the environmental noise is less. Figure 16b shows the registration result of the lidar data obtained by the ICP algorithm. It can be seen that the two frames of the point cloud do not coincide, mainly because the data registration carried out by the ICP algorithm is not a globally optimal match but a locally optimal match. The experimental rotation angle is 0.4904 and the translation matrix is (0.5071, −0.0122), but the actual rotation angle is 0.08 and the actual translation matrix is (−0.9523, −1.1598). Figure 16c shows the line segment feature extraction results of the two frames of the laser point cloud; despite the environmental noise in the point cloud, the main line segments in the point cloud are fitted. Figure 16d shows the registration result of the lidar data using the algorithm in this paper, and the two frames of the laser point cloud almost coincide. The experimental rotation angle is 0.0876 and the translation matrix is (−0.9842, −1.1413), which is close to the actual value. The experiment shows that the ICP algorithm works best when there is a one-to-one correspondence between points; when there are only partial overlaps or outliers, the point sets cannot be matched well, resulting in matching errors. The algorithm in this paper, however, can extract the main line segments in the point cloud for lidar data registration, avoiding the mismatching caused by too many outliers or missing environmental information.

3.2. Real Environment Simulation

Simulation in a real environment adopts the LMS511 Lidar of SICK to collect environmental data. The specific parameters of the radar are shown in Table 2.
In this study, the lidar sets the starting angle of measurement as −50° and the ending angle as 140° through SOPAS software, and a total of 381 groups of data are collected. This study uses the MATLAB 2016b simulation environment for algorithm simulation, and the computer CPU is Intel Core i5-6300U.
Figure 17a shows a corridor environment with flat ground, in which the tracked robot equipped with lidar constantly scans the terrain. As the robot moves, lidar scanning data with time stamp information and tracked odometer data are obtained. The time stamps of two frames of lidar scanning data are used, and the tracked odometer readings whose time stamps are closest are combined by time-weighted linear interpolation to obtain time-stamp-synchronized lidar data and odometer data. Figure 17b shows the two frames of lidar scanning data converted from polar coordinates to Cartesian coordinates. The corner points in the figure correspond to the turning part of the wall in Figure 17a, and the two frames of the point cloud cross but do not coincide. Figure 17c shows the data registration results after extracting the point cloud line segment features in Figure 17b. The line segments of the two lidar frames approximately coincide, and point cloud $R_i$ in region 1 has more line segments than point cloud $O_j$, owing to the extra environmental features visible after the vehicle body rotates counterclockwise. The rotation angle obtained in the experiment is 0.0639, and the translation matrix is (0.0155, −0.0265); the actual rotation angle from the tracked odometer is 0.0583, and the translation matrix is (0.0124, −0.0285). The experiments show that the lidar data registration algorithm based on DBSCAN clustering can accurately estimate the rotation angle and translation matrix of two frames of the laser point cloud, meeting the accuracy requirements of a laser odometer.
In the experiment, four groups of laser point cloud frames were extracted from the Dirk Hahnel Intel laboratory data set and the real environment for data registration. Most point cloud registration algorithms use mean square error as a metric of the lidar data registration effect, and calculate the mean square error of the distance between the current measurement point and the real corresponding point. However, in the actual scene, due to occlusion, movement and low coincidence rate, not all measurement points have one-to-one corresponding real points. Therefore, the lower the data coincidence rate, the more unreliable the evaluation standard of mean square error. In the case of large changes in lidar pose, not all search methods can find accurate real corresponding points, and there may be a large number of mismatched point pairs.
In order to evaluate the effect of data registration more effectively, this paper introduces the absolute error $E_A(t_x, t_y, \theta)$ and the relative error $E_R(t_x, t_y, \theta)$ as the main measurement standards. The absolute error $E_A(t_x, t_y, \theta)$ directly reflects the size of the error, but it cannot accurately reflect the accuracy of the registration algorithm. The relative error $E_R(t_x, t_y, \theta)$ does not reflect the size of the error, but reflects the accuracy of the registration algorithm more precisely. Therefore, this paper uses both errors together, exploiting the advantages of each, to evaluate the registration effect of the data registration algorithm more directly and accurately. Table 3 compares the rotation angle and translation matrix estimated by the proposed algorithm with those of the traditional ICP algorithm, where $(t_x, t_y, \theta)$ is the actual value of the translation and rotation, $E_A(t_x, t_y, \theta)$ is the absolute error measured for the algorithm and $E_R(t_x, t_y, \theta)$ is the relative error measured for the algorithm. The results show that the proposed algorithm obtains accurate registration results compared with the ICP algorithm: the relative error between the registration results and the actual values is within 10%, and the accuracy is better than that of the ICP algorithm. The absolute and relative errors of the registration results obtained by the ICP algorithm are very large. This is because a new part of the environment is prone to appear in the point cloud and, coupled with the influence of outliers and noise, it is difficult for the ICP algorithm to find a matching relationship between points. The algorithm in this paper can still extract the main line segment features in the environment for point cloud data registration, avoiding the low coincidence rate of point cloud matching caused by the emergence of a new part of the environment.

4. Conclusions

The core of lidar data registration algorithms depends on searching correspondence. For a point-based algorithm, the data coincidence rate is too low, while for a line-based algorithm, the method of searching correspondence is too complex and unstable. Therefore, this paper proposes a lidar data registration algorithm based on DBSCAN clustering.
(1)
In the radar data preprocessing stage, a ring band filter is designed. According to the ring band filter, the outliers with noise in the laser point cloud are judged, retained or removed, or the coordinates of the laser points to be denoised are estimated using the surrounding laser point coordinates.
(2)
In the radar data registration stage, for the environmental line segment features required for registration, the adaptive nearest-neighbor algorithm is used to detect the breakpoints. The point set is segmented and fitted according to the adaptive threshold segmentation point set. Then, the fitted two frames of laser data sets are clustered by a DBSCAN algorithm based on density clustering. Thus, the key clusters of rotation angle and translation matrix of point cloud to be registered are obtained.
(3)
In the key cluster data selection stage, in order to evaluate the similarity of two frames of point cloud after data registration in the key cluster, a kernel density estimation method is proposed to describe the registered point cloud, and K-L divergence is used to find the optimal value in the key cluster.
The experimental results show that the laser radar data registration algorithm based on DBSCAN clustering proposed in this paper avoids the direct search of the correspondence between points or lines in the complex scene with many outliers of the laser point cloud, which can effectively improve the robustness of the algorithm and suppress the influence of outliers on the algorithm. The relative error between the registration result and the actual value is within 10%, and the accuracy is better than the ICP algorithm, which realizes the accurate self-localization of the mobile robot. On this basis, further improving the calculation speed of the algorithm and processing high-complexity 3D laser point clouds are the main topics to be studied in the future.

Author Contributions

Conceptualization, Y.L. (Yiting Liu), L.Z., P.L. and T.J.; methodology, Y.L. (Yiting Liu) and L.Z.; software, L.Z. and P.L.; validation, Y.L. (Yawen Liu), J.D. and R.L.; writing—original draft, Y.L. (Yiting Liu) and L.Z.; writing—review and editing, S.Y., J.T., H.Y. and J.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the 67th batch of top projects of the China Postdoctoral Science Foundation (Grant No. 2020M671292), the Jiangsu Postdoctoral Research Funding Program (Class B) (Grant No. 2019K186), the 2021 Provincial Key R&D Program (Industry Prospect and Common Key Technologies) (Grant No. BE2021016-5), the Nanjing Institute of Technology Research Fund for Introducing Talents (Grant No. YKJ202112, Grant No. YKJ202043), the Key Laboratory of Micro-Inertial Instrument and Advanced Navigation Technology (SEU-MIAN-202102) and the Jiangsu Innovation and Entrepreneurship Ph.D. Foundation. The funders are the Jiangsu Provincial Department of Science and Technology, the Jiangsu Provincial Department of Education, the China Postdoctoral Foundation and the Nanjing Institute of Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: [http://ais.informatik.uni-freiburg.de/slamevaluation/datasets.php].

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Xu, J.; Wang, D.; Liao, M. Research of cartographer graph optimization algorithm based on indoor mobile robot. J. Phys. Conf. Ser. 2020, 1651, 012120–012125.
2. Garcia, N.; Fascista, A.; Coluccia, A.; Wymeersch, H.; Aydogdu, C.; Mendrzik, R.; Seco-Granados, G. Cramér-Rao Bound Analysis of Radars for Extended Vehicular Targets With Known and Unknown Shape. IEEE Trans. Signal Process. 2022, 70, 3280–3295.
3. Tang, X.; Li, M.; Tharmarasa, R.; Kirubarajan, T. Seamless Tracking of Apparent Point and Extended Targets Using Gaussian Process PMHT. IEEE Trans. Signal Process. 2019, 67, 4825–4838.
4. Xiang, M.; Ren, W.; Li, W.; Xue, Z.; Jiang, X. High-Precision Vital Signs Monitoring Method Using a FMCW Millimeter-Wave Sensor. Sensors 2022, 22, 7543.
5. Liang, Y.; Woźniak, M. Virtual Reconstruction System of Building Spatial Structure Based on Laser 3D Scanning under Multivariate Big Data Fusion. Mob. Netw. Appl. 2021, 27, 607–616.
6. Cwian, K.; Nowicki, R.; Jan, W.; Piotr, S. Large-Scale LiDAR SLAM with Factor Graph Optimization on High-Level Geometric Features. Sensors 2021, 21, 3445.
7. Ge, G.Y.; Zhang, Y.; Jiang, Q.; Wang, W. Visual Features Assisted Robot Localization in Symmetrical Environment Using Laser SLAM. Sensors 2021, 21, 1772.
8. Cong, B.; Li, Q.Y.; Liu, R.F.; Wang, F.; Zhu, D.Y.; Yang, J.B. Research on a Point Cloud Registration Method of Mobile Laser Scanning and Terrestrial Laser Scanning. KSCE J. Civ. Eng. 2022, 26, 5275–5290.
9. Zhou, L.; Xu, F.; Liu, S. The Research of Point Cloud Data Processing Technology. Appl. Mech. Mater. 2014, 628, 426–431.
10. Ge, G.Y.; Qin, Z.; Fan, L.L. An Improved VSLAM for Mobile Robot Localization in Corridor Environment. Adv. Multimed. 2022, 2022, 3941995.
11. Xu, R.X. Path planning of mobile robot based on multisensor information fusion. EURASIP J. Wirel. Commun. Netw. 2019, 2019, 44.
12. Wang, Q.; Zeng, Y.; Zou, Y.K.; Li, Q.Z. A Closed Loop Detection Method for Lidar Simultaneous Localization and Mapping with Light Calibration Information. Sens. Mater. 2020, 32, 2289–2301.
13. Castillon, M.; Pi, R.; Palomeras, N.; Ridao, P. Extrinsic Visual-Inertial Calibration for Motion Distortion Correction of Underwater 3D Scans. IEEE Access 2021, 9, 93384–93398.
14. Besl, P.J.; Mckay, H.D. A method for registration of 3-d shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
15. Huang, J.G.; Tao, B.; Zeng, F. Point cloud registration algorithm based on ICP algorithm and 3D-NDT algorithm. Int. J. Wirel. Mob. Comput. 2022, 22, 125–130.
16. Dong, J.M.; Peng, Y.X.; Ying, S.H.; Hu, Z.Y. LieTrICP: An improvement of trimmed iterative closest point algorithm. Neurocomputing 2014, 140, 67–76.
17. Milos, P.; Salman, A.S.; Kyoung-Sook, K. Low Overlapping Point Cloud Registration Using Line Features Detection. Remote Sens. 2019, 12, 61.
18. Zhao, M.F.; Huang, Z.; Song, T.; Cao, L.B.; Huang, J.M.; Chen, B. Point cloud registration method combining sampling consistency and iterative closest point algorithm. Laser J. 2019, 40, 45–50.
19. Allysson, S.M.; Lacerda; Lucas, S. KDT-MOEA: A multiobjective optimization framework based on K-D trees. Inf. Sci. 2019, 503, 200–218.
20. Bengtsson, O.; Baerveldt, A.J. Robot localization based on scan-matching—Estimating the covariance matrix for the IDC algorithm. Robot. Auton. Syst. 2003, 44, 29–40.
21. Li, Q.D.; Griffiths, J.D. Iterative closest geometric objects registration. Comput. Math. Appl. 2000, 40, 1171–1188.
22. Holy, B. Registration of lines in 2d lidar scans via functions of angles. Eng. Appl. Artif. Intell. 2018, 67, 436–442.
23. Masood, H.; Zafar, A.; Ali, M.U.; Hussain, T.; Khan, M.A.; Tariq, U.; Damasevicius, R. Tracking of a Fixed-Shape Moving Object Based on the Gradient Descent Method. Sensors 2022, 22, 1098.
24. Hu, C.H.; Zhou, P.; Li, P.P. A 3D Point Cloud Filtering Method for Leaves Based on Manifold Distance and Normal Estimation. Remote Sens. 2019, 11, 198.
25. Gao, Z.H.; Gu, C.F.; Yang, J.H.; Gao, S.S.; Zhong, Y.M. Random Weighting-Based Nonlinear Gaussian Filtering. IEEE Access 2020, 8, 19590–19605.
26. Liu, Y.; Zhang, L.; Qian, K.; Sui, L.J.; Lu, Y.H.; Qian, F.F.; Yan, T.W.; Yu, H.Q.; Gao, F.Z. An Adaptive Threshold Line Segment Feature Extraction Algorithm for Laser Radar Scanning Environments. Electronics 2022, 11, 1759.
27. Cheng, F.; Niu, G.F.; Zhang, Z.Z.; Hou, C.J. Improved CNN-Based Indoor Localization by Using RGB Images and DBSCAN Algorithm. Sensors 2022, 22, 9531.
28. Yu, Y.; Zhou, A. An improved DBSCAN density algorithm. Comput. Technol. Dev. 2011, 21, 30–33.
29. Ha, C.N.; Thao, N.T.; Tran, N.B.; Trung, N.T.; Tai, V.V. A new approach for face detection using the maximum function of probability density functions. Ann. Oper. Res. 2020, 312, 99–119.
30. Slam Benchmarking. Available online: http://ais.informatik.uni-freiburg.de/slamevaluation/datasets.php (accessed on 9 March 2023).
Figure 1. Ring band filter: (a) the candidate laser points are noise points; (b) the candidate laser points are corner points; (c) the candidate laser points are noise points, and there are corners in the ring.
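Figure 1 describes the ring band filter used to reject noisy outliers before line extraction. As a rough illustration only, the sketch below keeps a laser point when enough other returns fall inside an annulus around it; it is not the paper's filter (the corner-handling cases of Figure 1b,c are not reproduced), and the radii r_in, r_out and the neighbor threshold are made-up example values.

```python
import numpy as np

def annulus_noise_filter(points, r_in=0.05, r_out=0.3, min_neighbors=2):
    """Keep a scan point only if enough other points fall inside the ring
    (annulus) [r_in, r_out] around it; isolated returns are treated as noise.
    points: (N, 2) array of Cartesian laser points."""
    points = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances between all scan points.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    # Count neighbors whose distance lies inside the ring band.
    in_ring = (dist >= r_in) & (dist <= r_out)
    neighbor_count = in_ring.sum(axis=1)
    return points[neighbor_count >= min_neighbors]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wall = np.column_stack([np.linspace(0, 5, 100), np.zeros(100)])   # dense wall returns
    outliers = rng.uniform(-3, 8, size=(5, 2)) + np.array([0.0, 4.0])  # sparse noise
    scan = np.vstack([wall, outliers])
    filtered = annulus_noise_filter(scan)
    print(f"{len(scan)} points in, {len(filtered)} points kept")
```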
Figure 2. Radar data preprocessing: (a) raw data; (b) processed data.
Figure 3. Simulation of line feature extraction algorithm.
Figure 4. Laser data points.
Figure 5. Effect diagram of line feature extraction.
Figure 6. Using DBSCAN to solve rotation angle clustering effect: (a) DBSCAN cluster result; (b) magnification of result.
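Figure 6 shows the rotation-angle candidates being grouped by DBSCAN to isolate the key cluster. The sketch below is a minimal illustration of that step with scikit-learn's DBSCAN: candidate angles are formed here as pairwise differences of line-segment directions from the two frames (an assumption made for the example), and the mean of the densest cluster is taken as the rotation estimate; the eps and min_samples values are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_rotation(dirs_ref, dirs_cur, eps=0.01, min_samples=3):
    """Cluster all pairwise direction differences (radians) between line
    segments of two scans and return the mean of the densest cluster."""
    # Every reference direction minus every current direction, wrapped to
    # [-pi/2, pi/2) because an undirected line direction is pi-periodic.
    cand = (np.subtract.outer(dirs_ref, dirs_cur).ravel() + np.pi / 2) % np.pi - np.pi / 2
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(cand.reshape(-1, 1))
    valid = labels[labels >= 0]
    if valid.size == 0:
        raise RuntimeError("no dense cluster of candidate angles found")
    largest = np.bincount(valid).argmax()          # key (densest) cluster
    return float(cand[labels == largest].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dirs_ref = rng.uniform(-np.pi / 2, np.pi / 2, size=8)   # reference line directions
    true_rot = 0.08                                          # radians
    dirs_cur = dirs_ref - true_rot + rng.normal(0, 0.002, size=8)
    print("estimated rotation:", estimate_rotation(dirs_ref, dirs_cur))
```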
Figure 7. Distribution of straight-line segments of the data set after rotation.
Figure 8. Using DBSCAN to solve the clustering effect of translation matrix.
Figure 9. Sample data distribution and probability density function of data sets O_j and R_i: (a) x-axis; (b) y-axis.
Figure 10. Sample data distribution and probability density function of data sets O_j and R_i: (a) x-axis; (b) y-axis.
Figure 11. Sample data distribution and probability density function of data sets O_j and R_i: (a) x-axis; (b) y-axis.
Figure 12. Sample data distribution and probability density function of data sets O_j and R_i: (a) x-axis; (b) y-axis.
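Figures 9–12 evaluate registration quality by comparing, along each axis, the sample distributions and kernel density estimates of the registered data sets O_j and R_i. The sketch below reproduces the general idea with SciPy: Gaussian KDEs are fitted to two 1-D coordinate samples and the K-L divergence between them is approximated on a grid. The default Scott-rule bandwidth and the grid size are assumptions of this example, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence_1d(samples_p, samples_q, n_grid=512, eps=1e-12):
    """Approximate D_KL(P || Q) between Gaussian KDEs of two 1-D samples
    by summing the discretized densities on a common evaluation grid."""
    p_kde = gaussian_kde(samples_p)      # Scott's rule bandwidth by default
    q_kde = gaussian_kde(samples_q)
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    x = np.linspace(lo, hi, n_grid)
    dx = x[1] - x[0]
    p = p_kde(x) + eps
    q = q_kde(x) + eps
    p /= p.sum() * dx                    # renormalize on the grid
    q /= q.sum() * dx
    return float(np.sum(p * np.log(p / q)) * dx)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ref = rng.normal(0.0, 1.0, 500)          # x-coordinates of the reference cloud
    good = ref + rng.normal(0.0, 0.05, 500)  # well-registered cloud: small divergence
    bad = ref + 0.8                          # mis-registered cloud: larger divergence
    print("KL (good registration):", kl_divergence_1d(ref, good))
    print("KL (poor registration):", kl_divergence_1d(ref, bad))
```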
Figure 13. Description of the algorithm.
Figure 14. Simulation results of open-source datasets in Intel Research Laboratory: (a) point cloud line segment characteristics of two laser frames; (b) point cloud line segment features after data registration.
Figure 15. Two frames of partial magnification of laser point cloud segment: (a) distribution of straight-line segment of point cloud after rotation; (b) distribution of straight-line segment of point cloud after rotation and translation.
Figure 16. ICP and lidar data registration of this algorithm: (a) two frames of laser point cloud; (b) registration result of ICP algorithm; (c) line segment features of two frames of laser point cloud; (d) registration result of the algorithm in this paper.
Figure 17. Simulation results of real-environment lidar data: (a) flat ground corridor environment; (b) two frames of laser point cloud; (c) point cloud line segment features after data registration.
Table 1. The merits and demerits of point cloud registration algorithms.
Point Cloud Registration Algorithm | Merits | Demerits
ICP algorithm | Performs well when the two point clouds have a high coincidence rate. | With a low coincidence rate it easily falls into a local optimum.
IDC algorithm | Does not require a structured environment. | Prone to point-to-point mismatches.
ICL algorithm | Suited to low-texture environments. | Searching for line correspondences is complex and unstable.
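To make the ICP row of Table 1 concrete, the following is a bare-bones 2-D point-to-point ICP sketch, not the implementation compared against in this paper: each iteration pairs every source point with its nearest neighbor in the reference cloud and solves the rigid transform in closed form with an SVD. It illustrates why the method hinges on correspondence search and a high coincidence rate; with poor overlap the pairings are wrong and the iteration settles in a local optimum.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, ref, n_iter=30):
    """Minimal point-to-point ICP; returns (R, t) such that R @ p + t
    maps the points of src onto the reference cloud ref."""
    R_total, t_total = np.eye(2), np.zeros(2)
    tree = cKDTree(ref)
    cur = src.copy()
    for _ in range(n_iter):
        # 1. Correspondence search: nearest reference point for each source point.
        _, idx = tree.query(cur)
        matched = ref[idx]
        # 2. Closed-form rigid transform (Kabsch/SVD) for the current pairing.
        mu_s, mu_r = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_r)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_r - R_step @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        cur = cur @ R_step.T + t_step
        R_total, t_total = R_step @ R_total, R_step @ t_total + t_step
    return R_total, t_total

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    src = rng.uniform(0.0, 5.0, size=(400, 2))
    theta, t_true = 0.05, np.array([0.1, -0.1])
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    ref = src @ R_true.T + t_true            # ref = R_true @ src + t_true
    R_est, t_est = icp_2d(src, ref)
    print("recovered angle:", np.arctan2(R_est[1, 0], R_est[0, 0]))
    print("recovered translation:", t_est)
```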
Table 2. SICK LMS511 parameters.
Parameter | Value
Measurement range | 0.7–80 m
Scanning angle | 190°
Angular resolution | 0.5°
Scanning frequency | 25 Hz
Systematic error | ±25 mm (1–10 m)
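For context, the sketch below converts one scan from polar measurements (range, bearing) to Cartesian points using the Table 2 settings (a 190° field of view at 0.5° resolution gives 381 beams per scan); the symmetric start angle of −95° and the dummy range values are assumptions of the example, not part of the paper.

```python
import numpy as np

# Scanner parameters taken from Table 2 (SICK LMS511).
FOV_DEG = 190.0                            # scanning angle
ANG_RES_DEG = 0.5                          # angular resolution
N_BEAMS = int(FOV_DEG / ANG_RES_DEG) + 1   # 381 beams per scan
RANGE_MIN, RANGE_MAX = 0.7, 80.0           # measurement range in metres

def scan_to_points(ranges, start_angle_deg=-FOV_DEG / 2):
    """Convert one scan of range readings to (N, 2) Cartesian points,
    discarding returns outside the valid measurement range."""
    ranges = np.asarray(ranges, dtype=float)
    angles = np.deg2rad(start_angle_deg + ANG_RES_DEG * np.arange(ranges.size))
    valid = (ranges >= RANGE_MIN) & (ranges <= RANGE_MAX)
    x = ranges[valid] * np.cos(angles[valid])
    y = ranges[valid] * np.sin(angles[valid])
    return np.column_stack([x, y])

if __name__ == "__main__":
    dummy_ranges = np.full(N_BEAMS, 5.0)   # fabricated flat 5 m readings
    pts = scan_to_points(dummy_ranges)
    print(pts.shape)                       # (381, 2)
```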
Table 3. Comparison of lidar data registration results.
Number | (t_x, t_y, θ) | E_A (ICP Algorithm) | E_A (Proposed Algorithm) | E_R (ICP Algorithm) | E_R (Proposed Algorithm)
1 | (−0.9523, −1.1598, 0.08) | (1.4594, 1.1476, 0.4104) | (0.0319, 0.0185, 0.0076) | (153.3%, 98.9%, 513%) | (3.3%, 1.6%, 9.5%)
2 | (0.0124, −0.0285, 0.0583) | (0.0262, 0.0464, 0.0171) | (0.0031, 0.002, 0.0056) | (211.3%, 162.8%, 29.3%) | (25.0%, 7.0%, 9.6%)
3 | (0.4389, −0.2463, 0.0475) | (0.3055, 0.1594, 0.0249) | (0.0232, 0.0065, 0.0027) | (69.6%, 64.7%, 52.4%) | (5.3%, 2.6%, 5.7%)
4 | (−0.068, 0.3961, 0.0579) | (0.0645, 0.2625, 0.1284) | (0.0011, 0.007, 0.0038) | (94.9%, 66.3%, 221.8%) | (1.6%, 1.8%, 6.6%)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
