1. Introduction
Localization technology plays an important role in the Autonomous Internet of Vehicles (AIoV). In general, precise vehicle localization can provide vehicles with centimeter-level lane departure warnings, location-based services, and more [1]. Traditional localization technologies such as global navigation satellite system (GNSS) localization and real-time kinematic (RTK) localization are relatively mature and able to achieve highly accurate localization. However, their performance degrades when satellite signals are blocked by tunnels, houses, and forests [2,3]. At the same time, the strict driving safety requirements of the AIoV impose higher requirements on localization accuracy, real-time computation, and environmental adaptability. Furthermore, since the energy system of the AIoV is mainly powered by batteries, vehicle localization should avoid adding extra sensors, so as to reduce the energy consumption of in-vehicle equipment [4].
To overcome the above challenges, many researchers have collected data on traffic infrastructure, which is generally observable in real driving environments, for vehicle localization. High-precision map localization is achieved by matching maps with data from sensors already installed in the vehicle. The sensor mainly used for high-precision map localization is 3D-LiDAR: the measurements obtained from the 3D-LiDAR are matched with a high-precision map to localize the vehicle. Therefore, high-precision map localization algorithms based on 3D-LiDAR have been widely studied. In the localization process based on a high-precision point cloud map, the real-time observations from the 3D-LiDAR differ from the map data because the map is not updated in real time and the sensors are noisy, which can cause the localization to converge to a local optimum or fail to converge. This affects the accuracy of point cloud localization algorithms based on map matching.
At present, point cloud localization technologies mainly include map-matching algorithms based on the iterative closest point (ICP) algorithm combined with the normal distribution transform (NDT), and statistical algorithms based on Bayesian filtering, such as the particle filter (PF), the unscented Kalman filter (UKF), and the histogram filter (HF). Both kinds of algorithms usually locate vehicles using the matching degree between observation points and map data. When the ICP algorithm combined with NDT is used for localization, initial pose information is needed to narrow the search scope, and the observation data and the map data are then iteratively matched. If the distance error between the observation data and the map data becomes smaller after the iterations and falls below a certain threshold, the pose of the matched observation data is taken as the localization result. In [5], the NDT algorithm was used to achieve rough localization, and the ICP algorithm was then used for more accurate map matching to complete the localization. A method of building a map by extracting the vertical features of a point cloud map was proposed in [6]; it uses the ICP algorithm to match angle features extracted in real time with the vertical angle features of the map to achieve localization. Yoneda et al. [7] extracted map features through the feature quantity of scanned data based on cluster distribution, and used the ICP algorithm for localization. A method of matching and localization by extracting the features of rod-shaped objects was proposed in [8]. Additionally, there are also map-matching algorithms based on features such as height [9] and road edges [10,11,12,13]. However, these map-matching algorithms usually need to extract point cloud features, and they suffer from problems such as the high sensitivity of the initial localization value to noise, the time-consuming feature extraction and registration process, and poor localization accuracy. Therefore, it is hard for them to meet the AIoV requirement of high-precision real-time localization.
The statistical method is usually divided into two processes: localization prediction and update. In the prediction stage, the observation data are predicted from an initial pose and the system motion model; the matching degree between the predicted observation data and the map is then calculated to indicate its uncertainty, where a higher mismatch degree means greater uncertainty. In the update stage, the difference between the real observation data and the predicted observation data, together with the matching degree, is used to update the mean and variance of the pose, which are taken as the localization output. In [14,15], the inconsistency between high-precision point cloud map data and point cloud data collected in real time is expressed probabilistically, and the localization process is converted into these two steps, prediction and prediction update, to better estimate the uncertainty of localization. Owing to their excellent anti-noise ability and robustness across various scenes, filter-based localization algorithms have been widely studied in autonomous driving localization [16,17]. Castorena et al. [16] built a ground reflectivity map based on a point cloud and implemented localization with an extended Kalman filter (EKF) localization algorithm. To achieve better precision and comparable stability in localization, Lin et al. [17] proposed a particle-aided unscented Kalman filter (PAUKF) algorithm that updates the position measurement information based on an onboard sensor and a high-definition (HD) map.
In [18], the authors enhanced the particle weights by adding Kalman-filtered GNSS information to improve autonomous vehicle localization. Feng et al. [19] used a UKF-based SLAM algorithm to estimate the robot pose and feature map. Jia et al. [20] used point-to-line iterative closest point (PL-ICP) to register the laser scans of two adjacent frames and combined it with a Rao-Blackwellised particle filter (RBPF) for localization. For maximum performance, KF-based localization algorithms usually require a linear Gaussian system [21], whereas HF-based and PF-based filtering localization algorithms can adapt to arbitrary nonlinear non-Gaussian systems and achieve higher localization accuracy. Compared with the KF and HF algorithms, the nonlinear capability of the PF algorithm allows it to handle both linear and nonlinear systems without a specific posterior probability density function. Using rod-shaped objects in a point cloud map and a reflectivity map, Blanco et al. [22] designed a PF-based localization algorithm built on a pre-built high-precision map; the algorithm updates the particles according to the distance between each scanning point and the map until convergence, and the localization accuracy was significantly improved. An improved particle filter algorithm based on an optimized iterative Kalman filter was proposed in [23]; it uses fewer particles for sampling, which reduces the amount of computation and improves the real-time performance. In [24], a map-sharing strategy is proposed in which each particle carries only a small map of its surrounding environment. However, due to the inconsistency between the high-precision point cloud map and the point cloud data collected in real time, as well as the randomness of the particles, the calculated particle weights do not accurately reflect the correctness of the localization. Thus, autonomous driving point cloud localization based on the PF algorithm often suffers from slow convergence and particle degradation, which seriously reduce localization accuracy.
To solve the local-optimum problem caused by the randomness of particles in the PF algorithm, this paper proposes a localization algorithm framework based on prior information fusion. The framework utilizes the prior information between the observation data and the map to optimize the intermediate parameters of the algorithm, so as to make up for the deficiency of existing localization algorithm frameworks. Based on the proposed framework, an improved PF localization algorithm based on particle shift is proposed to improve localization accuracy. The contributions of this paper are summarized as follows:
- (1)
This paper proposes a localization system framework based on prior information fusion. The framework considers the prior information that the observation data are most similar to the map data at the real location of the vehicle in the point cloud map, and constructs a relationship model between this prior information and the state-transition particle set of the localization algorithm, which reduces the influence of occlusion and dynamic noise on the localization accuracy.
- (2)
To complete the framework design, a particle shift filter (PSF) localization algorithm is proposed. We calculate the similarity between the transferred point cloud and the map point cloud, resample the particles based on this similarity, and select the particles represented by the point clouds with higher similarity as the final localization particles.
The rest of this paper is structured as follows. The localization algorithm framework based on prior information fusion and the improved PF algorithm based on particle shift are presented in Section 2. Section 3 discusses the experimental verification and analyzes the errors. Finally, the conclusion is provided in Section 4.
2. Problem Statement and System Framework
2.1. Problem Statement
The current popular point cloud localization system framework usually takes the observation data, the point cloud map, and the initial localization data as input. Statistical algorithms such as PF filtering are then performed in the localization module, and the localization is estimated according to the matching degree, which is determined by the sum of the distances between corresponding points of the observation data and the point cloud map; the smaller this sum is, the higher the matching degree. However, in a real 3D environment, due to the complexity of autonomous driving environments and the randomness of the points in the map, point cloud localization algorithms usually judge the accuracy of the localization only through the distance error between the point clouds. The distance error, as a scalar, can measure the matching degree of two point clouds to a certain extent, but it cannot reflect their structural differences caused by initial localization errors and sensor errors, which leads to poor localization precision. To improve the localization precision, this paper proposes a localization system framework based on prior information fusion. The framework considers the prior information that the observation data are most similar to the map data at the real vehicle location in the point cloud map, and constructs a relationship model between this prior information and the localization algorithm. The prior information is then fused into the localization algorithm framework. The system framework is shown in Figure 1.
The framework of the point cloud localization system based on prior information fusion proposed in this paper includes two parts: localization processing and prior information processing. The localization processing includes data input, localization, and output, which is consistent with the current popular point cloud localization processing frameworks. The prior information processing includes similarity calculation, particle shift, particle selection, and fusion localization. First, the similarity calculation extracts the similarity between the observation point cloud and the map from the input data. The particle shift is then used to find particles close to the real localization in the particle update stage. Subsequently, particle selection is carried out based on the particle shift, such that the particles used for localization converge in the direction of greater similarity. Finally, fusion localization is achieved by particle resampling.
In the state prediction stage of localization, the particle filter samples m particles to form a particle pose set and then predicts and transfers the state of the particles to obtain a new particle set. Subsequently, the weight of each particle is calculated, and the particle set is resampled; the resampled particle set determines the final localization output of the particle filter algorithm. Therefore, if we can optimize the particle set after the transfer and improve its convergence and accuracy, the localization output will be improved. The basic idea of the PSF is thus to calculate the similarity between the transferred point cloud and the map point cloud: the higher the similarity is, the greater the particle weight is. We then resample the particles based on the similarity, and the particles represented by point clouds with higher similarity are selected as the final localization particles. This allows the particles used for localization to converge in the direction of greater similarity and improves the localization accuracy.
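As an illustration of this loop, the following Python sketch outlines one PSF iteration. The helper functions motion_model, shift_particle, and similarity are placeholders (assumptions for this example) standing in for the models detailed in the following subsections; the sketch is not the authors' implementation.

```python
import numpy as np

def psf_iteration(particles, control, scan, map_cloud,
                  motion_model, shift_particle, similarity):
    """One iteration of the particle shift filter (illustrative sketch).

    particles : (m, d) array of pose hypotheses
    control   : control/odometry input for the state transition
    scan      : observation point cloud in the vehicle frame
    map_cloud : high-precision point cloud map in global coordinates
    motion_model, shift_particle, similarity : placeholder helpers
    """
    m = len(particles)

    # State transition: predict each particle with the motion model.
    predicted = np.array([motion_model(p, control) for p in particles])

    # Particle shift: move each predicted particle toward the pose that
    # makes its transformed scan most similar to the map.
    shifted = np.array([shift_particle(p, scan, map_cloud) for p in predicted])

    # Weighting: similarity between the shifted scan and the map.
    weights = np.array([similarity(p, scan, map_cloud) for p in shifted])
    weights /= weights.sum()

    # Systematic (low-variance) resampling according to the weights.
    positions = (np.arange(m) + np.random.uniform()) / m
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                      # guard against round-off
    resampled = shifted[np.searchsorted(cumulative, positions)]

    # Localization output: mean pose and per-dimension variance.
    return resampled, resampled.mean(axis=0), resampled.var(axis=0)
```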
In this paper, the PSF algorithm mainly includes a state transition and an observation. In general, a dynamic system can be represented by the state transition probability $p(x_t \mid x_{t-1}, u_t)$ (prior probability) and the observation probability $p(z_t \mid x_t)$. In addition, the belief $\mathrm{bel}(x_t)$ (posterior probability) of the current system state $x_t$ is estimated based on the historical observations $z_{1:t}$ and the past control vectors $u_{1:t}$. According to Bayesian theory, the relationship between the prior probability and the posterior probability is as follows:
$$\mathrm{bel}(x_t) = p(x_t \mid z_{1:t}, u_{1:t}) = \eta\, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_t)\, \mathrm{bel}(x_{t-1})\, \mathrm{d}x_{t-1},$$
where $\eta$ is a normalization constant.
However, due to the influence of obstacles and noise, and because the change trend of the particle set is nonlinear, there are many errors in the calculation of $\mathrm{bel}(x_t)$. After particle resampling, during the particle renewal process, this may cause particle degradation and finally lead to a local optimum. Therefore, we seek to find the relationship between the particles and the true value, and to recalculate and update the particles so as to preserve their diversity and achieve more accurate localization. Let $\bar{X}_t$ be the particle set generated by the state transition, and let $X_t = \{x_t^{[1]}, \ldots, x_t^{[m]}\}$ be the particle set obtained after the weights are calculated and the particles are resampled, where $w_t^{[i]}$ is the weight of particle $x_t^{[i]}$. The output of localization is as follows:
$$\hat{x}_t = \sum_{i=1}^{m} w_t^{[i]}\, x_t^{[i]}, \qquad \sigma_t^{2} = \sum_{i=1}^{m} w_t^{[i]} \left( x_t^{[i]} - \hat{x}_t \right)^{2},$$
where $\hat{x}_t$ represents the prediction of the PF localization algorithm; the closer its value is to the true value, the smaller the error and the higher the localization accuracy. The variance $\sigma_t^{2}$ indicates the degree of convergence of the particles; the variance and the convergence of the particles are inversely related. As the variance decreases, the particle-to-particle distances also decrease, and the localization becomes more stable. According to the localization algorithm framework shown in Figure 1, we seek to design an optimization algorithm for the intermediate parameter $\bar{X}_t$, the particle set after the state transition, so as to obtain a smaller variance.
We denote each particle as a six-dimensional vector whose parameters contain the three-dimensional coordinates, the vehicle heading angle, the pitch angle, and the roll angle, recorded as $s = (x, y, z, \varphi, \theta, \psi)$. In order to calculate the particle weight, the observation point cloud obtained by the LiDAR on the vehicle must be transferred to obtain the observation point cloud in global coordinates. A particle can be shifted by
$$s' = R\,s + T,$$
where $R$ represents the rotation of the angular state of the vehicle and $T$ represents the translation of the coordinates of the particle. The point cloud corresponding to the particle is transformed by the same rotation and translation to obtain the observation point cloud in global coordinates. We then calculate the Euclidean distance between the transferred point cloud and the map point cloud to judge the similarity between the two: as the sum of the distances between the observation point cloud and the map point cloud decreases, the similarity increases correspondingly, and so does the weight. Finally, the point clouds with higher similarity are screened out through resampling, and the corresponding particles are taken as the final localization particles.
In this way, the particle set after the state transition is optimized, the particle set obtained after weight calculation and resampling converges better, and thus the localization accuracy is improved.
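To illustrate the scan transformation and the distance-based similarity described above, the following sketch scores a particle by the inverse of the mean nearest-neighbor distance between its transformed scan and the map. The ZYX Euler-angle convention, the helper names, and the use of a KD-tree for nearest-neighbor search are assumptions made for this example, not details taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def pose_to_matrix(x, y, z, yaw, pitch, roll):
    """Build a 4x4 homogeneous transform from a 6-DoF particle state."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    # ZYX (yaw-pitch-roll) rotation, a common but assumed convention.
    R = np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                  [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                  [-sp,   cp*sr,            cp*cr]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [x, y, z]
    return T

def particle_similarity(particle, scan, map_tree):
    """Similarity S: inverse of the mean nearest-neighbor distance."""
    T = pose_to_matrix(*particle)
    # Transform the (n, 3) scan from the vehicle frame to the global frame.
    global_scan = scan @ T[:3, :3].T + T[:3, 3]
    distances, _ = map_tree.query(global_scan)   # nearest map point per scan point
    return 1.0 / (distances.mean() + 1e-9)       # epsilon avoids division by zero

# Usage: build the KD-tree once per map, then score each particle.
# map_tree = cKDTree(map_points)                 # map_points: (N, 3) array
# S = particle_similarity((x, y, z, yaw, pitch, roll), scan_points, map_tree)
```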
2.2. Similarity Calculation and Particle Shift
The particle shift process matches observation data to the map in the particle update stage to make particles closer to the real localization. This is based on the following facts.
- (1)
Each particle is regarded as one possible localization point. The particle weight indicates the similarity between the observation point cloud and the map data around the localization point. The Euclidean distance, as a scalar, is used in the convergence process to measure the similarity between the particle and the true value.
- (2)
Due to the randomness of the particles and the complexity of the traffic environment, the localization indicated by a particle with high similarity is not exactly the true location, but such particles will be close to the true value.
In order to calculate the particle shift, this paper assumes that the state space is continuous and that the real localization point is included in this space. Ideally, as the localization accuracy increases, the similarity between the particles and the real state increases, so the point-to-point distances between the observation point cloud and the map become smaller. In this paper, the variable $S$ represents the similarity, defined as the inverse of the average distance between the observation point cloud and the map point cloud:
$$S = \left( \frac{1}{n} \sum_{i=1}^{n} \left\| p_i - q_i \right\| \right)^{-1},$$
where $p_i$ is a point in the observation point cloud and $q_i$ denotes the map point nearest to $p_i$. After the particles converge, the particles are transformed such that the value $S$ of each particle increases and the particles approach the true value. In order to calculate this transformation, the problem is converted into a least squares problem.
The observation point cloud is represented by the set $P = \{p_1, p_2, \ldots, p_n\}$, and the set of the closest points in the map corresponding to $P$ is represented by $Q = \{q_1, q_2, \ldots, q_n\}$. The centroid of the set $P$ is
$$\bar{p} = \frac{1}{n} \sum_{i=1}^{n} p_i,$$
and the centroid of the set $Q$ is
$$\bar{q} = \frac{1}{n} \sum_{i=1}^{n} q_i.$$
We define a transformation matrix as $[R \mid t]$, where $R$ is a $3 \times 3$ rotation matrix and $t$ is a translation vector. $R$ and $t$ are obtained by minimizing the opposite of the similarity $S$, which corresponds to the least squares problem
$$(R^{*}, t^{*}) = \arg\min_{R,\, t} \sum_{i=1}^{n} \left\| \left( R\, p_i + t \right) - q_i \right\|^{2}.$$
We then obtain the corresponding solution, i.e., the rotation and translation, through singular value decomposition. A particle shift is adopted when the similarity value $S$ increases after the transformation.
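A minimal sketch of the SVD-based solution (the standard centroid-alignment procedure, often called the Kabsch algorithm) is given below; the variable names are illustrative and the function is an example rather than the authors' implementation.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t aligning P (n,3) to Q (n,3),
    i.e. minimizing sum_i ||(R @ p_i + t) - q_i||^2 via SVD."""
    p_bar = P.mean(axis=0)                 # centroid of the observation points
    q_bar = Q.mean(axis=0)                 # centroid of the corresponding map points
    H = (P - p_bar).T @ (Q - q_bar)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

In the PSF, the shift given by (R, t) is applied to the particle and its point cloud, and the shift is kept only when the similarity computed with the shifted pose increases.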
2.3. Particle Selection
After a particle shifts, the map points closest to its observation point cloud change, and the similarity between the shifted particle and the true value may therefore decrease. In order to solve the problem that particles may fall into a local optimum after shifting, this paper introduces a simulated annealing algorithm to select the shifted particles, optimizing the particle shift while maintaining particle diversity.
The flow chart of the particle shift selection algorithm based on simulated annealing is shown in Figure 2. The observation point cloud carried by a shifted particle is represented by the point set $P'$, and the set of closest points between $P'$ and the map is represented by $Q'$. The similarity after the particle shift is represented by $S'$.
If $S' > S$, the shifted particle is valid, and it is added to the particle set $\bar{X}_t'$ in place of the original particle. If $S' \le S$, we compare the annealing acceptance value $\exp\!\left( (S' - S)/T_a \right)$, where $T_a$ is the annealing temperature, with the probability $p$. If $\exp\!\left( (S' - S)/T_a \right) > p$, we accept this particle shift; otherwise, no particle shift is required, and the particle remains unchanged. Through the above selection algorithm, the PSF algorithm can be further prevented from falling into a local optimum, and the problem of particle degradation can be alleviated. The particle selection procedure is described in Algorithm 1.
In Algorithm 1, $\bar{X}_t$ is the particle set obtained through prediction in the PSF algorithm, $W_t$ is the set of weights calculated from the similarity, $Z_t$ represents the point cloud collected by the vehicle in real time, $M$ represents the high-precision point cloud map, and $\mathrm{shift}(\cdot)$ represents the particle shift calculation described in Section 2.2. The output is the particle set modified by the particle shift algorithm.
Algorithm 1 Particle Selection.
Input: $\bar{X}_t$, $W_t$, $Z_t$, $M$
Output: the shifted particle set $\bar{X}_t'$
1: $\bar{X}_t' \leftarrow \bar{X}_t$
2: for each particle $s^{[i]}$ in $\bar{X}_t$ do
3:   $s'^{[i]} \leftarrow \mathrm{shift}(s^{[i]}, Z_t, M)$
4:   compute the similarity $S$ of $s^{[i]}$ and the similarity $S'$ of $s'^{[i]}$
5:   if $S' > S$ then
6:     replace $s^{[i]}$ with $s'^{[i]}$ in $\bar{X}_t'$ and update its weight in $W_t$
7:   else
8:     if $\exp\!\left( (S' - S)/T_a \right) > p$ then
9:       replace $s^{[i]}$ with $s'^{[i]}$ in $\bar{X}_t'$ and update its weight in $W_t$
10:    else
11:      do nothing
12:    end if
13:   end if
14: end for
15: return $\bar{X}_t'$
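A minimal Python rendering of Algorithm 1 is sketched below, assuming a fixed annealing temperature and a uniformly drawn acceptance probability; the helper functions shift_fn and similarity_fn stand in for the particle shift of Section 2.2 and the similarity measure, and the code is an illustration rather than the authors' implementation.

```python
import numpy as np

def select_particles(particles, scan, map_tree, shift_fn, similarity_fn,
                     temperature=1.0):
    """Simulated-annealing selection of shifted particles (cf. Algorithm 1)."""
    selected = []
    for s in particles:
        S_old = similarity_fn(s, scan, map_tree)          # similarity before the shift
        s_shifted = shift_fn(s, scan, map_tree)           # particle shift (Section 2.2)
        S_new = similarity_fn(s_shifted, scan, map_tree)  # similarity after the shift
        if S_new > S_old:
            selected.append(s_shifted)                    # better match: accept the shift
        elif np.exp((S_new - S_old) / temperature) > np.random.uniform():
            selected.append(s_shifted)                    # occasionally accept a worse shift
        else:
            selected.append(s)                            # keep the original particle
    return np.array(selected)
```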
2.4. PSF Based on Prior Information
There are four steps in the PSF algorithm: sampling, particle shift, weighting, and resampling. By shifting the particles toward the true value, the algorithm obtains a higher localization accuracy.
Figure 3 shows the PSF algorithm for localization combined with the high-precision point cloud map and the real-time observation data. The steps of the PSF localization algorithm are as follows:
- (1)
In the initialization stage, the original particle poses are obtained by uniform sampling in the possible localization area, and all particles have the same weight.
- (2)
In the state prediction stage of localization, the control data $u_t$ are used to transfer the state of the particles to obtain the particle poses after the transfer. Moreover, because the particle set is discrete in practical applications, the particles after the state transition can usually be used directly to form a particle set without reducing the number of particles, which is expressed as $\bar{X}_t$.
- (3)
Since the particle set after the state transition is mixed with a large amount of noise, the particles tend to diverge. The map $M$ and the measurement data are used in the particle shift algorithm to shift the particles before their weights are calculated and updated, so that the updated particle set is more convergent and particle degradation is prevented. The optimized particle set obtained after the shift is denoted as $\bar{X}_t'$.
- (4)
The particle weights of the shifted particle set $\bar{X}_t'$ are calculated.
- (5)
In the state update stage of localization, the weights of the shifted particles are normalized, and the particles are then resampled according to the weights. A new particle set $X_t$ is reconstructed.
- (6)
The cumulative mean and the variance of the output particle set $X_t$ are used as the localization estimation results. Starting again from Step (1), $X_t$ is then used as the original set of particle poses in the next iteration. We assume that the localization result is obtained at time $t$ and that $\bar{x}_t$, $\bar{y}_t$, and $\bar{z}_t$ are the mean values of the localization. Taking the x-axis as an example, the localization error is
$$e_t^{x} = \bar{x}_t - x_t^{g},$$
where $e_t^{x}$ represents the localization error on the x-axis at time $t$, $\bar{x}_t$ is the mean of the x-coordinates of the $n$ particles at time $t$, i.e., $\bar{x}_t = \frac{1}{n} \sum_{i=1}^{n} x_{t,i}$, and $x_t^{g}$ represents the true value of the localization on the x-axis at the current moment.
The cumulative mean error is obtained by adding the localization error values $e_1^{x}, e_2^{x}, \ldots, e_t^{x}$ over all time steps; it reflects the accumulated degree of the localization error. The smaller the cumulative localization error is, the higher the localization stability of the algorithm is. The cumulative mean error is given by
$$E_x = \sum_{k=1}^{t} \left| e_k^{x} \right|.$$
The cumulative variance represents the convergence performance of the particles in the particle filter algorithm. The smaller the variance is, the closer the particles are to the real localization, which means that the particle filter has a higher convergence performance. The cumulative variance of the x-coordinates of the $n$ particles up to time $t$ is defined as
$$V_x = \sum_{k=1}^{t} \frac{1}{n} \sum_{i=1}^{n} \left( x_{k,i} - \bar{x}_k \right)^{2},$$
where $x_{k,i}$ denotes the x-coordinate of the $i$-th particle at time $k$.
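The localization error, cumulative mean error, and cumulative variance can be computed as in the following sketch; the input format (per-time-step arrays of particle x-coordinates and ground-truth x positions) and the function name are assumptions made for illustration.

```python
import numpy as np

def cumulative_metrics(particle_x_per_step, truth_x_per_step):
    """Cumulative mean error E_x and cumulative variance V_x along the x-axis.

    particle_x_per_step : list of (n,) arrays, particle x-coordinates per time step
    truth_x_per_step    : list of floats, ground-truth x position per time step
    """
    E_x = 0.0
    V_x = 0.0
    for particles_x, truth_x in zip(particle_x_per_step, truth_x_per_step):
        mean_x = particles_x.mean()                     # mean x-coordinate at time k
        E_x += abs(mean_x - truth_x)                    # accumulate |e_k^x|
        V_x += ((particles_x - mean_x) ** 2).mean()     # accumulate per-step variance
    return E_x, V_x

# Usage with dummy data: three time steps, five particles each.
# xs = [np.array([1.0, 1.1, 0.9, 1.05, 0.95])] * 3
# gt = [1.0, 1.0, 1.0]
# print(cumulative_metrics(xs, gt))
```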