Article

Cooperative GNSS-RTK Ambiguity Resolution with GNSS, INS, and LiDAR Data for Connected Vehicles

1. GNSS Research Center, Wuhan University, Wuhan 430079, China
2. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
3. Engineering Research Center for Spatio-temporal Data Smart Acquisition and Application, Ministry of Education of China, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(6), 949; https://doi.org/10.3390/rs12060949
Submission received: 12 February 2020 / Revised: 13 March 2020 / Accepted: 13 March 2020 / Published: 15 March 2020

Abstract

Intelligent vehicles and connected vehicles have garnered increasing attention recently. Both require accurate vehicle positions during operation, which rely on navigation sensors such as the Global Navigation Satellite System (GNSS), the Inertial Navigation System (INS), and Light Detection And Ranging (LiDAR). GNSS is the key sensor for obtaining high-accuracy positions in the navigation system, because GNSS Real Time Kinematic (RTK) with correct ambiguity resolution (AR) can provide centimeter-level absolute positions. However, AR may fail in urban occlusion environments because of the limited satellite visibility for a single vehicle. The navigation data from multiple connected vehicles can improve the satellite geometry significantly, which helps the AR, especially in occlusion environments. In this work, the GNSS, INS, and LiDAR data from multiple connected vehicles are processed jointly to improve the GNSS RTK AR and to obtain high-accuracy positioning results, using a scan-to-map matching algorithm based on an occupancy likelihood map (OLM) for the relative position between the connected vehicles, a damped Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) method for the relative AR between the connected vehicles, and a joint RTK algorithm that solves the absolute positions of the vehicles with the relative position and relative ambiguity constraints. The experimental results show that, compared with traditional GNSS RTK methods, the proposed approach improves the AR for the connected vehicles with higher ratio values, success rates, and fixed rates, and achieves high-precision cooperative absolute positions, especially in occlusion environments such as below a viaduct.


1. Introduction

With the advance of computer science and 5G technology, intelligent transportation systems, autonomous vehicles, and connected vehicles are developing rapidly. An essential function of these systems is accurate positioning: their effectiveness heavily depends on high-accuracy navigation results, and the positions of the vehicles must be highly reliable to prevent failures during operation. Integrated navigation technology is commonly used for these systems; it fully exploits the complementary characteristics of different navigation subsystems and greatly improves the accuracy and reliability of the overall system. The integration of the Global Navigation Satellite System (GNSS) and the Inertial Navigation System (INS) is a traditional integrated navigation system. With the development of Light Detection And Ranging (LiDAR) technology and autonomous driving, LiDAR has also become more and more cost-effective and is now required equipment on autonomous vehicles for obstacle avoidance and positioning.
GNSS provides absolute positions and mainly determines the positioning accuracy of the integrated system. Besides GPS, the BeiDou Navigation Satellite System (BDS) has been on the rise in recent years and has shown great potential for positioning and timing [1]. GNSS Real Time Kinematic (RTK) is a widely applied high-accuracy kinematic positioning technology. It weakens or eliminates systematic errors (orbital error, clock error, tropospheric delay, ionospheric delay, etc.) by differential processing and obtains centimeter-level positions from carrier phase observations with correct ambiguity resolution (AR) [2,3,4]. The key issue of RTK is AR, which is usually solved in a two-step procedure [5,6,7,8,9]: the float ambiguity is first estimated from the pseudorange observations, and the integer ambiguity is then fixed. The most popular method to fix the ambiguity is the Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) [10]. Combining multiple satellite systems and multiple frequencies improves the satellite geometry and increases the fixed rate of AR and the accuracy of positioning [11]. However, when GNSS signals are blocked by high buildings or viaducts, the geometry of the observations becomes too weak to obtain fixed solutions with correct AR, and the AR fixed rate and positioning accuracy are degraded severely [12].
The Inertial Navigation System (INS) is an attractive augmentation of GNSS [13]. Besides position and velocity estimation, INS is able to provide attitude estimation. When combined with GNSS, the INS errors can be compensated using position and velocity updates from GNSS, and the INS can support consistent positioning even when loss-of-lock occurs in GNSS. The performance of INS is closely related to the quality of the inertial measurement unit (IMU), which contains a gyroscope and an accelerometer. The IMU suffers from drift errors that grow unbounded with operation time. With the advances of sensor technology, IMUs have become more and more cost-effective, and even an IMU based on a Micro-Electro-Mechanical System (MEMS) can achieve high performance. In a GNSS/INS integrated navigation system, a high-grade IMU can improve the attitude accuracy significantly but cannot maintain the position accuracy for a long time, so the position accuracy of the INS/GNSS integrated system highly depends on the GNSS [14].
Apart from GNSS and INS, LiDAR has also become common in intelligent transportation systems, autonomous vehicles, and connected vehicles. Many existing studies have explored the potential of LiDAR in positioning for autonomous driving [15,16,17,18,19,20,21]. According to whether the map is pre-built, LiDAR-based positioning can be divided into relative positioning and absolute positioning. The error of relative positioning accumulates with the running distance. Absolute positioning can obtain high-precision results, but it needs a pre-built map, which takes time and money to produce and makes it difficult to cover a large area in a short timeframe.
By exploiting the information coming from the above methods, integrated navigation systems have made considerable progress [22,23,24]. However, most existing works only consider one vehicle in applications. In recent years, connected vehicles have garnered great interest. With 5G technology and multivehicular wireless communication capabilities, navigation information can be shared among vehicles, and cooperative positioning has become an attractive topic [25,26,27,28]. The main goal of cooperative positioning is to improve the positioning accuracy of all the vehicles by sharing their information and locations.
There are two major existing methods for cooperative positioning: cooperative map matching (CMM) and positioning correction by relative distance constraint among vehicles. Assuming that all vehicles travel within lanes, CMM makes the positions of a group of vehicles best fit the map so that the positions are corrected. In 2015, Rohani et al. proposed a CMM method by which vehicles could apply the road constraints of other vehicles in their own map matching process and gain a significant improvement in their positioning [29]. Shen et al. further proved the capability of CMM to mitigate the biases of position results of a group of vehicles [26]. However, the positioning accuracy and robustness of CMM have a strong dependence on road constraints [30].
Besides CMM, connected vehicles are capable of measuring distances between each other, which provides relative distance constraints and improves the positioning accuracy of the vehicles. With the development of Dedicated Short-Range Communication (DSRC), the DSRC device has become required equipment on connected vehicles and can measure distance without additional cost. Moreover, DSRC devices are able to identify vehicles, making it easy to associate their received information and range measurements [28]. With these benefits, many researchers have explored DSRC-based cooperative positioning using Received Signal Strength (RSS) ranging, time-based ranging, and range-rate methods based on the Carrier Frequency Offset (CFO). Alam et al. used CFO-based range-rate measurements derived from the Doppler shift of the carrier of DSRC signals to improve GPS estimates [31]. Liu et al. further integrated CFO-based range-rate measurements of DSRC and GNSS pseudorange observations by a cubature Kalman filter for positioning improvements [27]. In addition to absolute positioning, Alam et al. also fused DSRC distance measurements and GNSS pseudorange observations by a Kalman filter to improve relative positioning between two vehicles [32]. Wang proposed a tight integration of DGPS, Ultra-Wide Band (UWB) ranging radio observations, and simulated bearing observations in a small network of vehicles or infrastructure points to improve relative positioning accuracy [33].
However, few researchers have investigated the performance of RTK AR when fusing sensor data from connected vehicles [34]. GNSS RTK is the most cost-effective way to obtain a high-precision position when the observation conditions are good. For connected vehicles, even if one vehicle's GNSS signal is partially blocked and its AR fails, the GNSS observations from multiple vehicles can be complementary. Fusing these GNSS observations can greatly improve the geometry of the GNSS observation equations, the fixed rate of the ambiguities, and the positioning accuracy of all the connected vehicles. For example, in Figure 1 the red and green vehicles are moving below a viaduct; the red one can only receive satellite signals from the left half of the sky and the green one only from the right half, so the two vehicles share complementary GNSS observations.
In this work, we focus on the cooperative positioning with GNSS, INS and LiDAR data from multiple vehicles to improve the GNSS-RTK AR and achieve precise positions for all connected vehicles. The main contributions of our work include the following:
(1) proposing a high-precision relative positioning method between vehicles based on a window Occupancy Likelihood Map (OLM) and a scan-to-map matching method by LiDAR;
(2) proposing a method for fixing the relative ambiguity between vehicles based on their relative positions;
(3) proposing a joint RTK ambiguity fixing method for connected vehicles to improve their absolute positioning based on (1) and (2).
In the following sections, we elaborate on our methods and conduct field experiments to evaluate our system.

2. Methods

In this research, GNSS RTK is treated as the main method for high-precision positioning in our navigation system, and correct RTK AR is an important requirement. Compared with traditional RTK AR, which uses GNSS observations from one rover receiver, we propose a joint RTK method that enhances the AR performance by fusing the observations from connected vehicles. First, the relative positions between the vehicles, calculated by the LiDAR OLM method together with attitude information from the INS, build position constraints between the vehicles. Taking advantage of this precise baseline, the double-differenced (DD) carrier phase measurements from the common satellites of the connected vehicles are used to solve their relative DD ambiguities by a damped LAMBDA method, which provides the ambiguity constraints. Finally, a joint RTK is applied for the final position estimation of the connected vehicles with the relative position and ambiguity constraints. Figure 2 illustrates the main procedures of our system. More details about the INS mechanization, relative positioning by LiDAR based on a window OLM, relative ambiguity resolution, and joint RTK are introduced in the following subsections. Table 1 outlines the main processing steps of our methods in pseudocode, where M, F, K, and H denote the procedures of generating the OLM, calculating the relative position, calculating the relative DD ambiguities, and calculating the absolute positions, respectively.
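For readers who prefer code, the per-epoch data flow of Table 1 can be summarized as a structural sketch; all names, types, and signatures below are illustrative assumptions rather than our actual implementation, and the three steps are detailed in Sections 2.2, 2.3, and 2.4.

```python
# Structural sketch of the per-epoch pipeline of Table 1 (procedures M, F, K, H).
# All names and signatures are hypothetical placeholders, not the authors' code.
from dataclasses import dataclass
import numpy as np

@dataclass
class VehicleEpoch:
    scan: np.ndarray       # current LiDAR scan (n, 3) in the l-frame
    olm: dict              # window OLM built from the last second of scans (M)
    attitude: np.ndarray   # INS roll/pitch/yaw at the m-frame origin (rad)
    dd_code: np.ndarray    # DD pseudoranges to the RTK reference station
    dd_phase: np.ndarray   # DD carrier phases to the RTK reference station

def relative_position(v1: VehicleEpoch, v2: VehicleEpoch) -> np.ndarray:
    """F: cross scan-to-map matching plus INS attitude (Section 2.2)."""
    ...

def relative_ambiguities(v1, v2, d_rp) -> np.ndarray:
    """K: damped LAMBDA on common-satellite DD phases (Section 2.3)."""
    ...

def joint_rtk(v1, v2, d_rp, n_ra):
    """H: joint RTK of both rovers with d_rp and n_ra constraints (Section 2.4)."""
    ...

def process_epoch(v1: VehicleEpoch, v2: VehicleEpoch):
    d_rp = relative_position(v1, v2)
    n_ra = relative_ambiguities(v1, v2, d_rp)
    return joint_rtk(v1, v2, d_rp, n_ra)
```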
Our method involves multiple coordinate systems:
(1) INS Navigation System (n-frame)
The n-frame is a local geodetic frame with the platform mass center as the coordinate origin. The X axis points East, the Y axis points North, and the Z axis completes a right-handed coordinate system. It is also called an east-north-up (ENU) system.
(2) Earth-Centered Earth-Fixed Coordinate System (e-frame)
The e-frame rotates with the Earth. Its coordinate origin is the Earth's centroid; the X axis points to the intersection of the equator and the prime meridian; the Z axis is the Earth's rotation axis and points to the North Pole; and the Y axis is perpendicular to the X–Z plane, forming a right-handed coordinate system.
(3) World Coordinate System (w-frame)
The w-frame is used to express the GNSS positioning results. The initial GNSS position is taken as the origin. The X-Y plane is the local horizontal plane, with the X axis pointing East and the Y axis pointing North. The Z axis is perpendicular to the X-Y plane, forming a right-handed coordinate system. The w-frame is parallel to the n-frame.
(4) IMU Coordinate System (b-frame)
The b-frame is dynamic and moves with the vehicle, with the IMU barycenter as the coordinate origin. The X axis points to the right along the IMU horizontal axis, the Y axis points forward along the IMU longitudinal axis, and the Z axis is perpendicular to the X–Y plane, forming a right-handed coordinate system.
(5) LiDAR Coordinate System (l-frame)
The l-frame moves with the vehicle, with the LiDAR measurement center as the coordinate origin, the X axis pointing to the right along the LiDAR horizontal axis, the Y axis pointing forward along the LiDAR longitudinal axis, and the Z axis being perpendicular to the X–Y plane, forming a right-handed coordinate system.
(6) Map Coordinate System (m-frame)
The map coordinate system is used by LiDAR OLM. This frame is the l-frame of the point where the map starts.
More details about the reference systems can be found in [35,36]. The b-frame and l-frame are precalibrated to achieve spatial synchronization. All the coordinate transformations in this paper are rigid-body transformations. The rigid-body transformation $R_i^j$ from the i-frame to the j-frame involves a translation $p_i^j$ and a quaternion $q_i^j$. The rigid-body transformation operations are defined as [35]:

$$\begin{aligned} R_j^i &= (R_i^j)^{-1} = \left(-(C_i^j)^T p_i^j,\ (q_i^j)^{-1}\right) \\ v^j &= R_i^j v^i = C_i^j v^i + p_i^j \\ R_k^i &= R_j^i R_k^j = \left(C_j^i p_k^j + p_j^i,\ q_j^i q_k^j\right) \end{aligned} \tag{1}$$

where $C_j^i$ is the direction cosine matrix of $q_j^i$ and $v^i$ is a vector in the i-frame.
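For concreteness, here is a minimal sketch of this transform algebra, assuming scipy's `Rotation` for the quaternion part; the class and its names are illustrative, not from the paper.

```python
# Hedged sketch of the rigid-body transform algebra in Equation (1):
# R_i^j is represented as a pair (p, q), with q handled by scipy.
import numpy as np
from scipy.spatial.transform import Rotation

class Rigid:
    def __init__(self, p, q: Rotation):
        self.p, self.q = np.asarray(p, float), q

    def apply(self, v):                       # v^j = C_i^j v^i + p_i^j
        return self.q.apply(v) + self.p

    def inverse(self):                        # (R_i^j)^{-1}
        qinv = self.q.inv()
        return Rigid(-qinv.apply(self.p), qinv)

    def compose(self, other):                 # R_k^i = R_j^i R_k^j
        return Rigid(self.q.apply(other.p) + self.p, self.q * other.q)

# Round-trip check: transforming a vector and back recovers it.
T = Rigid([1.0, 2.0, 0.5], Rotation.from_euler("zyx", [30, 5, 2], degrees=True))
v = np.array([4.0, -1.0, 0.3])
assert np.allclose(T.inverse().apply(T.apply(v)), v)
```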

2.1. INS Modelling

First, the inertial system motion model is introduced based on an INS error model [5]:

$$\delta \dot{v} = -(2\omega_{ie} + \omega_{en}) \times \delta v - \delta\psi \times f + \delta g + \varepsilon_a \tag{2}$$

$$\delta \dot{r} = -\omega_{en} \times \delta r + \delta v \tag{3}$$

$$\delta \dot{\psi} = -(\omega_{ie} + \omega_{en}) \times \delta\psi + \varepsilon_g \tag{4}$$

where $\delta r$, $\delta v$, and $\delta\psi$ are the error vectors of position, velocity, and attitude, respectively; $\varepsilon_a$ and $\varepsilon_g$ are the error vectors of the accelerometer and gyroscope of the IMU, respectively; $f$ is the specific force; $\delta g$ is the gravity uncertainty; $\omega_{ie}$ is the rotation rate of the Earth; and $\omega_{en}$ is the craft rate.
In this research, a state vector $\delta x$ with 15 states is constructed for this INS error model:

$$\delta x = \left[(\delta r^n)^T, (\delta v^n)^T, (\delta\psi^n)^T, (\varepsilon_a^n)^T, (\varepsilon_g^n)^T\right]^T \tag{5}$$

where the superscript $n$ denotes the n-frame and $T$ denotes the transpose. An error propagation model works alongside the system motion model to correct the IMU errors. A first-order Taylor series expansion is applied, and the error state vector and state-space model are defined as:

$$\delta \dot{x} = F\,\delta x + G u \tag{6}$$

$$\delta x_t = \phi_t\, \delta x_{t-1} + w_t \tag{7}$$

where $t$ is the time step, $F$ is the dynamic matrix, $G$ is a design matrix, $u$ is the forcing vector of white noise, and $w$ is the driven response of the input white noise. After the error state vector is solved, the predicted position and attitude in the n-frame at time $t$ are obtained.
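A minimal sketch of one prediction step under Equations (6) and (7) follows, assuming the common first-order discretization $\phi_t \approx I + F\,\Delta t$ and a generic process-noise matrix $Q$; both are standard EKF choices and not spelled out in the paper.

```python
# Sketch of discretizing the continuous error model (6) to the transition
# form (7), with covariance propagation as in a standard EKF prediction.
import numpy as np

def predict_error_state(dx, P, F, Q, dt):
    """One INS error-state prediction step.

    dx : (15,) error state [dr, dv, dpsi, eps_a, eps_g] in the n-frame
    P  : (15, 15) error-state covariance
    F  : (15, 15) continuous-time dynamic matrix at this epoch
    Q  : (15, 15) process-noise covariance (driving white noise w_t)
    """
    phi = np.eye(15) + F * dt          # first-order Taylor expansion of (7)
    dx_pred = phi @ dx
    P_pred = phi @ P @ phi.T + Q
    return dx_pred, P_pred
```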

2.2. Relative Positioning by INS-Aided LiDAR Based on a Window Occupancy Likelihood Map

To achieve the multivehicle cooperative positioning solution, the relative position of the vehicles needs to be estimated. Simultaneous localization and mapping (SLAM) based on environmental perception is a widely used method for obtaining accurate three-dimensional relative position information. Existing research on SLAM mostly focuses on the relative position between epochs. Here we draw on the idea of SLAM and propose a method for relative positioning between vehicles based on scan-to-map matching against a window OLM.
Scan-to-map is a front-end matching method in SLAM. It stores historical observation information in the form of grid maps and achieves relative positioning by matching the current scan to the map. Compared with scan-to-scan methods, such as Iterative Closest Point [37], Iterative Closest Line [38], and Polar Scan Matching [39], the scan-to-map method is more robust. Each vehicle can store a short-term window OLM, which has a small cumulative error, and a scan-to-map matching between vehicles then yields the relative position.
Our laser scan-to-map matching method with the OLM is based on Maximum Likelihood Estimation (MLE) and aims to find the optimal rigid-body transformation in the m-frame; more details can be found in our previous work [40]. The MLE matches the current scan $S_t$ to a grid-based OLM $M_{t-1}$, which is generated from all the previous scans from time step 1 to $t-1$ and stores the likelihood value of each grid cell in the 3D space region. According to Bayes' rule, assuming that the scan points in $S_t$ are independent, the joint likelihood of $S_t$ is calculated as:

$$P(S_t \mid M_{t-1}) = \prod_{x \in S_t} P(x \mid M_{t-1}) \tag{8}$$
where $P(x \mid M_{t-1})$ is the probability that the scan point $x \in S_t$ falls at that location according to the map $M_{t-1}$. Due to the dynamic movement of the platform, the l-frame varies over time. To match $S_t$ against $M_{t-1}$, the best rigid-body transformation $T$ is needed to transform $S_t$ from the current l-frame to the m-frame of the OLM; it is calculated by maximizing the likelihood $P(S_t \mid M_{t-1})$ according to:

$$T = \arg\max_T P(T S_t \mid M_{t-1}) \tag{9}$$

where $T S_t$ is the set of $S_t$ transformed by the rigid-body transformation $T$.
To find $T$, we apply a Gauss-Newton approach to the maximization in (9). Assume the rigid transformation is $T = (p_x, p_y, p_z, \theta, \gamma, \varphi)$, where $(p_x, p_y, p_z)$ is the translation and $(\theta, \gamma, \varphi)$ are the attitude angles. Given a starting estimate of $T$, we want to estimate $\Delta T$, which optimizes the error measurement as:

$$\sum_{i=1}^{n}\left[1 - M_{t-1}\left(S_t^i(T + \Delta T)\right)\right]^2 \rightarrow 0 \tag{10}$$
where $i$ indexes the $i$th of the $n$ scan points and $S(T)$ denotes the coordinates of a scan point after transformation $T$, expressed as:

$$S(T) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \cos\gamma & 0 & \sin\gamma \\ 0 & 1 & 0 \\ -\sin\gamma & 0 & \cos\gamma \end{bmatrix} \begin{bmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} s_x \\ s_y \\ s_z \end{bmatrix} + \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} \tag{11}$$
where $[s_x, s_y, s_z]^T$ are the coordinates of the scan point in the l-frame. Solving for $\Delta T$ yields the Gauss-Newton equation for the minimization problem:

$$\Delta T = \left(\sum_{i=1}^{n}\left[\nabla M_{t-1}(S_t^i(T))\, \frac{\partial S_t^i(T)}{\partial T}\right]^T \left[\nabla M_{t-1}(S_t^i(T))\, \frac{\partial S_t^i(T)}{\partial T}\right]\right)^{-1} \sum_{i=1}^{n}\left[\nabla M_{t-1}(S_t^i(T))\, \frac{\partial S_t^i(T)}{\partial T}\right]^T \left[1 - M_{t-1}(S_t^i(T))\right] \tag{12}$$

where $\nabla M_{t-1}(S_t^i(T))$ is the map gradient. Then $T = T + \Delta T$. The INS modelling of Section 2.1 also predicts a $T$. An extended Kalman filter (EKF) is used to fuse the INS model and the LiDAR scan-to-map matching. The EKF includes two parts: state prediction based on the system model and state update based on the measurements. In our work, the INS acts as the prediction model, because the output rate of the IMU is generally higher than that of the LiDAR. When a LiDAR observation is obtained, the error state vector is updated by the EKF and fed back to the INS model to estimate the final $T$. Many works have proved the advantages of fusing the INS and LiDAR models; more details can be found in [41].
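The scan-to-map step can be sketched as a small Gauss-Newton loop. In the sketch below the OLM is modelled as a callable returning per-point likelihoods in [0, 1], and the Jacobian of Equation (12) is approximated numerically for brevity; the paper uses the analytic map gradient and pose Jacobian instead.

```python
# Compact Gauss-Newton sketch for the scan-to-map step; `olm` is assumed to be
# a callable mapping an (n, 3) array of m-frame points to (n,) likelihoods.
import numpy as np

def euler_to_matrix(theta, gamma, phi):
    """R = Rx(theta) @ Ry(gamma) @ Rz(phi), matching Equation (11)'s order."""
    ct, st = np.cos(theta), np.sin(theta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    cp, sp = np.cos(phi), np.sin(phi)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cg, 0, sg], [0, 1, 0], [-sg, 0, cg]])
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def residuals(T, scan, olm):
    R = euler_to_matrix(*T[3:])
    pts = scan @ R.T + T[:3]          # S(T): rotate then translate each point
    return 1.0 - olm(pts)             # 1 - M_{t-1}(S_t^i(T)), Equation (10)

def gauss_newton_match(T0, scan, olm, iters=10, eps=1e-6):
    T = np.asarray(T0, float)
    for _ in range(iters):
        r = residuals(T, scan, olm)
        J = np.empty((len(r), 6))
        for k in range(6):            # numerical Jacobian, column by column
            dT = np.zeros(6); dT[k] = eps
            J[:, k] = (residuals(T + dT, scan, olm) - r) / eps
        T = T - np.linalg.solve(J.T @ J, J.T @ r)   # GN update, Equation (12)
    return T
```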
In the next step, $M_{t-1}$ is updated to $M_t$ with $S_t$ using a likelihood value determination method. The new likelihood values of the 27 grid cells in the OLM space around each scan point in 3D are calculated as:

$$P_x = \exp(-d(x, x_c)/\sigma), \quad P_y = \exp(-d(y, y_c)/\sigma), \quad P_z = \exp(-d(z, z_c)/\sigma) \tag{13}$$

where $x_c$, $y_c$, and $z_c$ are the x, y, and z coordinates of the center of each grid cell around the scan point, $d(\cdot,\cdot)$ denotes distance, and $\sigma$ is the standard deviation of the measurement noise of the LiDAR sensor. The corresponding likelihood value of each grid cell is updated as:

$$P^{+} = 1 - (1 - P^{-}) \times (1 - P_x \times P_y \times P_z) \tag{14}$$

where $P^{-}$ is the grid cell's prior likelihood value in $M_{t-1}$ and $P^{+}$ is its updated value. After all the scan points have updated the grid cells around them by Equations (13) and (14), the new OLM $M_t$ is generated.
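A sketch of this update follows, with a dictionary keyed by integer voxel indices standing in for the 3-D grid; the cell size and $\sigma$ values are illustrative assumptions.

```python
# Sketch of the likelihood update in Equations (13)-(14): each scan point
# raises the likelihood of the 27 surrounding voxels.
import numpy as np
from itertools import product

def update_olm(olm, scan_m, cell=0.2, sigma=0.05):
    """olm: dict voxel_index -> likelihood; scan_m: (n, 3) points, m-frame."""
    for x, y, z in scan_m:
        base = np.floor(np.array([x, y, z]) / cell).astype(int)
        for off in product((-1, 0, 1), repeat=3):       # 27 neighbours
            idx = tuple(base + off)
            cx, cy, cz = (np.array(idx) + 0.5) * cell   # cell centre
            px = np.exp(-abs(x - cx) / sigma)           # Equation (13)
            py = np.exp(-abs(y - cy) / sigma)
            pz = np.exp(-abs(z - cz) / sigma)
            prior = olm.get(idx, 0.0)
            olm[idx] = 1 - (1 - prior) * (1 - px * py * pz)  # Equation (14)
    return olm
```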
For our relative positioning, each vehicle generates at each epoch a window OLM that stores the LiDAR information within a time window; for example, the window OLM includes the historical scans of the last second. Our m-frame is therefore dynamic, which reduces both the accumulated error and the computation. Assume we have $S_t^1, M_t^1, T^1$ and $S_t^2, M_t^2, T^2$, where the superscripts 1 and 2 denote vehicle 1 and vehicle 2, respectively, with $T^1 = (p_x^{*1}, p_y^{*1}, p_z^{*1}, \theta^{*1}, \gamma^{*1}, \varphi^{*1})$ and $T^2 = (p_x^{*2}, p_y^{*2}, p_z^{*2}, \theta^{*2}, \gamma^{*2}, \varphi^{*2})$.
We apply Equation (9) to $S_t^2$ and $M_t^1$ to calculate $T^{12} = (p_x^{*12}, p_y^{*12}, p_z^{*12}, \theta^{*12}, \gamma^{*12}, \varphi^{*12})$ in the m-frame of vehicle 1. With $T^1$ and $T^{12}$ we can calculate the relative position $(d_x^{1m}, d_y^{1m}, d_z^{1m})$ of vehicle 1 and vehicle 2 in the m-frame of vehicle 1:

$$[d_x^{1m}, d_y^{1m}, d_z^{1m}]^T = [p_x^{*12}, p_y^{*12}, p_z^{*12}]^T - [p_x^{*1}, p_y^{*1}, p_z^{*1}]^T \tag{15}$$
The relative position $(d_x^{1m}, d_y^{1m}, d_z^{1m})$ is transformed to the n-frame as:

$$\begin{bmatrix} d_E^{1n} \\ d_N^{1n} \\ d_U^{1n} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_0 & -\sin\theta_0 \\ 0 & \sin\theta_0 & \cos\theta_0 \end{bmatrix} \begin{bmatrix} \cos\gamma_0 & 0 & \sin\gamma_0 \\ 0 & 1 & 0 \\ -\sin\gamma_0 & 0 & \cos\gamma_0 \end{bmatrix} \begin{bmatrix} \cos\varphi_0 & -\sin\varphi_0 & 0 \\ \sin\varphi_0 & \cos\varphi_0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} d_x^{1m} \\ d_y^{1m} \\ d_z^{1m} \end{bmatrix} \tag{16}$$
where $(\theta_0, \gamma_0, \varphi_0)$ is the attitude of the origin of the m-frame of vehicle 1 in the n-frame. Similarly, with $T^2$ and $T^{21}$, we can calculate the relative position $(d_E^{2n}, d_N^{2n}, d_U^{2n})$ in the n-frame from vehicle 2. The mean of the two relative positions is:

$$[D_E, D_N, D_U]^T = \left([d_E^{1n}, d_N^{1n}, d_U^{1n}]^T + [d_E^{2n}, d_N^{2n}, d_U^{2n}]^T\right)/2 \tag{17}$$

$(D_E, D_N, D_U)$ is taken as the final relative position of vehicle 1 and vehicle 2 in the n-frame; the superscript n of $(D_E, D_N, D_U)$ is omitted for simplicity in the following subsections. The relative position in the w-frame is equal to that in the n-frame, because their axis directions are almost the same in a small region.
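As a small illustration of Equations (16) and (17), the sketch below rotates each vehicle's m-frame relative position into the n-frame and averages the two estimates; the numeric offsets and attitudes are made up for the example.

```python
# Sketch of Equations (16)-(17). Numbers are illustrative only.
import numpy as np
from scipy.spatial.transform import Rotation

def m_to_n(d_m, theta0, gamma0, phi0):
    # Intrinsic X-Y-Z Euler sequence = Rx(theta) @ Ry(gamma) @ Rz(phi),
    # matching the matrix product in Equation (16).
    R = Rotation.from_euler("XYZ", [theta0, gamma0, phi0]).as_matrix()
    return R @ np.asarray(d_m, float)

d1 = m_to_n([12.1, 0.4, -0.1], 0.002, 0.001, 0.35)   # from vehicle 1's OLM
d2 = m_to_n([12.0, 0.5, -0.1], 0.001, 0.002, 0.34)   # from vehicle 2's OLM
D_E, D_N, D_U = (d1 + d2) / 2.0                      # Equation (17)
```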
The cross matching of LiDAR data from the vehicles yields the relative position between them. In our work, the scan-to-map matching method is superior to the scan-to-scan method for this cross matching: scan-to-scan requires that the vehicles observe the same features simultaneously, which is difficult in the outdoor environment, whereas matching one vehicle's laser scan to another vehicle's window OLM makes the feature matching more robust. If the OLM contains the same feature as the laser scan within the time window, the scan-to-map method can work successfully. Here we use a window OLM to ensure the efficiency and accuracy of the scan-to-map method. It would be possible to store all the historical scan information in the OLM, but updating a large map and matching the laser scan against it takes time. Moreover, the OLM accumulates errors over the running distance; a window OLM with a short time window reduces these cumulative errors and improves the accuracy of relative positioning.

2.3. Relative Ambiguity Resolution with Relative Position Constraint

After the relative position between vehicle 1 and vehicle 2 is determined, we integrate this constraint with the GNSS carrier phase measurements in the RTK method. Note that in this subsection one of vehicle 1 and vehicle 2 acts as a moving reference receiver and the other as a rover receiver. An accurate absolute position of the reference receiver is unnecessary; it is their relative position that matters. The effect of the reference receiver's absolute positioning error on the baseline solution is [33]:
$$\|\delta \Delta r\| \leq 10^{-9}\, \|\delta \Delta r_{ref}\| \cdot \|\Delta r\| \tag{18}$$

where $\Delta r$ is the baseline and $\delta \Delta r_{ref}$ is the absolute positioning error of the reference receiver. If the reference receiver has a five-kilometer positioning error using GPS standalone positioning, the maximum resulting relative positioning error on a 500 m baseline is about $10^{-8}$ m, which is negligible.
The carrier phase measurement is derived by counting the carrier phase cycles of the incoming carrier. In a general form, the DD pseudorange and carrier phase measurement model of RTK can be written as:

$$\begin{aligned} \Delta p_{r,s}^{i,j} &= \Delta \rho_{r,s}^{i,j} + \Delta \delta\rho_{r,s}^{i,j} + \Delta T_{r,s}^{i,j} + \Delta I_{r,s}^{i,j} + \varepsilon_{\Delta p} \\ \lambda \Delta \phi_{r,s}^{i,j} &= \Delta \rho_{r,s}^{i,j} + \Delta \delta\rho_{r,s}^{i,j} + \Delta T_{r,s}^{i,j} - \Delta I_{r,s}^{i,j} + \lambda \Delta N_{r,s}^{i,j} + \varepsilon_{\Delta \phi} \end{aligned} \tag{19}$$

where the $\Delta$ notation denotes double differencing; $p$ and $\phi$ are the pseudorange and carrier phase observables, respectively; the subscripts $r$ and $s$ denote the rover and reference receivers, respectively; the superscripts $i$ and $j$ denote the $i$th and $j$th satellites; $\lambda$ is the carrier wavelength; $\rho$ is the geometric range between satellite and receiver; $\delta\rho$ is the satellite orbital error; $T$ is the tropospheric error; $I$ is the ionospheric error; $N$ is the integer carrier phase ambiguity in cycles; and $\varepsilon$ represents the measurement error. DD mitigates the spatially correlated errors $T$ and $I$. $\Delta\delta\rho$, $\Delta T$, and $\Delta I$ amount to only a few millimeters over a one-kilometer baseline, so they are neglected in our short-range positioning application. $\varepsilon_\Delta$ includes the residual multipath and observation errors, is assumed to be white noise, and is accounted for in the stochastic model. Equation (19) is simplified to:
$$\Delta p_{r,s}^{i,j} = \Delta \rho_{r,s}^{i,j}, \qquad \lambda \Delta \phi_{r,s}^{i,j} = \Delta \rho_{r,s}^{i,j} + \lambda \Delta N_{r,s}^{i,j} \tag{20}$$
which can be linearized as:
$$\begin{aligned} \Delta p_{r,s}^{i,j} &= (l_r^j - l_r^i)\,dx + (m_r^j - m_r^i)\,dy + (n_r^j - n_r^i)\,dz + L_{r,s}^{i,j} \\ \lambda \Delta \phi_{r,s}^{i,j} &= (l_r^j - l_r^i)\,dx + (m_r^j - m_r^i)\,dy + (n_r^j - n_r^i)\,dz + \lambda \Delta N_{r,s}^{i,j} + L_{r,s}^{i,j} \end{aligned} \tag{21}$$
where the superscript $i$ denotes the reference satellite of the DD (recall that either vehicle 1 or vehicle 2 acts as the reference receiver in this method); $dx$, $dy$, and $dz$ are the baseline corrections in the e-frame; $l$, $m$, and $n$ are the components of the unit line-of-sight (LOS) vector from the receiver to the satellite; and $L_{r,s}^{i,j}$ is expressed as:
$$L_{r,s}^{i,j} = L_{r,s}^{j} - L_{r,s}^{i} = \rho_r^j - \rho_s^j - \rho_r^i + \rho_s^i \tag{22}$$
Using the initial position of the reference receiver $(x_0, y_0, z_0)$ in the e-frame and its geodetic latitude and longitude $(b_0, l_0)$, Equation (21) can be transformed to the w-frame:
$$\begin{bmatrix} \Delta p_{r,s}^{i,j} \\ \lambda \Delta \phi_{r,s}^{i,j} \end{bmatrix} = \begin{bmatrix} l_r^j - l_r^i & m_r^j - m_r^i & n_r^j - n_r^i \\ l_r^j - l_r^i & m_r^j - m_r^i & n_r^j - n_r^i \end{bmatrix} C_w^e \begin{bmatrix} de \\ dn \\ du \end{bmatrix} + \begin{bmatrix} 0 \\ \lambda \Delta N_{r,s}^{i,j} \end{bmatrix} + \begin{bmatrix} L_{r,s}^{i,j} \\ L_{r,s}^{i,j} \end{bmatrix} \quad \text{with} \quad C_w^e = \begin{bmatrix} -\sin l_0 & -\sin b_0 \cos l_0 & \cos b_0 \cos l_0 \\ \cos l_0 & -\sin b_0 \sin l_0 & \cos b_0 \sin l_0 \\ 0 & \cos b_0 & \sin b_0 \end{bmatrix} \tag{23}$$
where $(de, dn, du)$ is the baseline correction in the w-frame. Assuming that the receivers of vehicle 1 and vehicle 2 observe $n$ common satellites, we can construct $n-1$ equations of the form of Equation (23).
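The rotation $C_w^e$ in Equation (23) is the standard ENU-to-ECEF matrix; a small helper is sketched below, where the latitude and longitude values in the check are arbitrary examples.

```python
# Helper for the e-frame/w-frame mapping in Equation (23): C_w^e built from
# the reference latitude b0 and longitude l0 (radians).
import numpy as np

def enu_to_ecef_rotation(b0, l0):
    sb, cb = np.sin(b0), np.cos(b0)
    sl, cl = np.sin(l0), np.cos(l0)
    return np.array([
        [-sl, -sb * cl, cb * cl],   # columns: east, north, up axes in e-frame
        [ cl, -sb * sl, cb * sl],
        [0.0,       cb,      sb],
    ])

# Orthonormality check: C^T C = I.
C = enu_to_ecef_rotation(np.radians(30.5), np.radians(114.3))
assert np.allclose(C.T @ C, np.eye(3))
```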
Through the LiDAR scan-to-map matching described in the above subsection, the accurate relative position between the connected vehicles in the w-frame is known, and we incorporate this information into the least-squares adjustment through a damped LAMBDA method. A prior weight matrix $P_x$ for $(D_E, D_N, D_U)$ is added to the normal equations of the least-squares adjustment:
$$\begin{bmatrix} A^T P A + P_x & A^T P B \\ B^T P A & B^T P B \end{bmatrix} \begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} A^T P L \\ B^T P L \end{bmatrix} \tag{24}$$
where $X$ contains the baseline components and $Y$ the $n-1$ DD ambiguities; $A$ and $B$ are the coefficient matrices of $X$ and $Y$ according to Equation (23); $L$ collects all the $L_{r,s}^{i,j}$ terms; and $P$ is the weight matrix of the DD measurements in the least squares. Supposing that the standard deviations (STD) of the original pseudorange and carrier phase observables are the same for every satellite, namely $\delta_p$ and $\delta_\phi$ respectively, $P$ is:
$$P = \delta_0^2 D_s^{-1}, \qquad D_s^{-1} = \frac{1}{2\delta_s^2} \cdot \frac{1}{n} \begin{bmatrix} n-1 & -1 & \cdots & -1 \\ -1 & n-1 & \cdots & -1 \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & n-1 \end{bmatrix} \tag{25}$$
where $s$ stands for $p$ or $\phi$, and $\delta_0^2$ is the variance of unit weight. $D_E$, $D_N$, and $D_U$ are calculated by the LiDAR scan-to-map matching, and their precisions $\delta_x$, $\delta_y$, and $\delta_z$ are known. $P_x$ is expressed as:

$$P_x = \begin{bmatrix} \delta_0^2/\delta_x^2 & 0 & 0 \\ 0 & \delta_0^2/\delta_y^2 & 0 \\ 0 & 0 & \delta_0^2/\delta_z^2 \end{bmatrix} \tag{26}$$
Then, by least squares, we obtain the float solutions of the DD carrier phase ambiguities $\Delta N_{r,s}^{i,j}$ and their covariance matrix. Based on the float solution, the integer ambiguities are estimated by the LAMBDA method, which defines a search space from the information in the covariance matrix and searches for the integer ambiguities within this space. A main feature of the LAMBDA method is its decorrelation procedure, which transforms the original float ambiguity estimates into a new set of ambiguities while preserving their integer nature. The detailed implementation of the LAMBDA method can be found in [10].
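A sketch of solving the damped normal Equations (24) to (26) follows, assuming the linearization point of the baseline is already set from the LiDAR relative position so that the prior mean of $X$ is zero; the integer search itself (LAMBDA) is omitted.

```python
# Sketch of the damped normal equations (24): the LiDAR-derived baseline
# enters as a prior weight P_x on the baseline block. Shapes: A (m, 3),
# B (m, n-1), P (m, m), L (m,), Px (3, 3); all follow Equation (23).
import numpy as np

def damped_float_solution(A, B, P, L, Px):
    """Return float baseline X (3,), float DD ambiguities Y (n-1,), and their
    joint cofactor matrix; Px damps X toward the LiDAR baseline (zero prior)."""
    N = np.block([[A.T @ P @ A + Px, A.T @ P @ B],
                  [B.T @ P @ A,      B.T @ P @ B]])
    rhs = np.concatenate([A.T @ P @ L, B.T @ P @ L])
    Q = np.linalg.inv(N)              # covariance of [X, Y] up to sigma0^2
    sol = Q @ rhs
    return sol[:3], sol[3:], Q        # Y and Q[3:, 3:] then go to LAMBDA
```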

2.4. Joint RTK Algorithm for Connected Vehicles

In this part, both vehicle 1 and vehicle 2 are rover receivers and one RTK reference station is chosen. DD observation equations are formed for all satellites visible to vehicle 1 and the reference station, and for all satellites visible to vehicle 2 and the reference station, to jointly solve the absolute position estimates. The relative position and relative DD ambiguities described in the above subsections serve as constraints in our joint RTK. Figure 3 illustrates the GNSS satellites shown in Figure 2, the relative position, and the relative ambiguity between the connected vehicles. Note that the numbers and distribution of satellites in Figure 2 and Figure 3 are for illustration only; in real-world practice they can differ.
Supposing that satellite 4 is the reference satellite, the DD pseudorange and carrier phase observation equations in the e-frame are formed as:

$$\begin{aligned} \Delta p_{1,ref}^{4,i} &= (l_1^i - l_1^4)\,\delta x_1 + (m_1^i - m_1^4)\,\delta y_1 + (n_1^i - n_1^4)\,\delta z_1 + L_{1,ref}^{4,i} \\ \lambda \Delta \phi_{1,ref}^{4,i} &= (l_1^i - l_1^4)\,\delta x_1 + (m_1^i - m_1^4)\,\delta y_1 + (n_1^i - n_1^4)\,\delta z_1 + \lambda \Delta N_{1,ref}^{4,i} + L_{1,ref}^{4,i} \\ \Delta p_{2,ref}^{4,j} &= (l_2^j - l_2^4)\,\delta x_2 + (m_2^j - m_2^4)\,\delta y_2 + (n_2^j - n_2^4)\,\delta z_2 + L_{2,ref}^{4,j} \\ \lambda \Delta \phi_{2,ref}^{4,j} &= (l_2^j - l_2^4)\,\delta x_2 + (m_2^j - m_2^4)\,\delta y_2 + (n_2^j - n_2^4)\,\delta z_2 + \lambda \Delta N_{2,ref}^{4,j} + L_{2,ref}^{4,j} \end{aligned} \quad \text{with } i = 1, 2, 3, 5 \text{ and } j = 5, 6, 7, 8 \tag{27}$$
where the subscript $ref$ denotes the RTK reference station. Considering the relative position $(D_E, D_N, D_U)$ between the vehicles in the w-frame, Equation (27) is reformulated in the w-frame as:

$$\begin{aligned} \begin{bmatrix} \Delta p_{1,ref}^{4,i} \\ \lambda \Delta \phi_{1,ref}^{4,i} \end{bmatrix} &= \begin{bmatrix} l_1^i - l_1^4 & m_1^i - m_1^4 & n_1^i - n_1^4 \\ l_1^i - l_1^4 & m_1^i - m_1^4 & n_1^i - n_1^4 \end{bmatrix} C_w^e \begin{bmatrix} de_1 \\ dn_1 \\ du_1 \end{bmatrix} + \begin{bmatrix} 0 \\ \lambda \Delta N_{1,ref}^{4,i} \end{bmatrix} + \begin{bmatrix} L_{1,ref}^{4,i} \\ L_{1,ref}^{4,i} \end{bmatrix}, \quad i = 1, 2, 3, 5 \\ \begin{bmatrix} \Delta p_{2,ref}^{4,j} \\ \lambda \Delta \phi_{2,ref}^{4,j} \end{bmatrix} &= \begin{bmatrix} l_2^j - l_2^4 & m_2^j - m_2^4 & n_2^j - n_2^4 \\ l_2^j - l_2^4 & m_2^j - m_2^4 & n_2^j - n_2^4 \end{bmatrix} C_w^e \begin{bmatrix} de_2 \\ dn_2 \\ du_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \lambda \Delta N_{2,ref}^{4,j} \end{bmatrix} + \begin{bmatrix} L_{2,ref}^{4,j} \\ L_{2,ref}^{4,j} \end{bmatrix}, \quad j = 5, 6, 7, 8 \\ D_E &= e_{2,0} + de_2 - (e_{1,0} + de_1) \\ D_N &= n_{2,0} + dn_2 - (n_{1,0} + dn_1) \\ D_U &= u_{2,0} + du_2 - (u_{1,0} + du_1) \end{aligned} \tag{28}$$
where $(e_{1,0}, n_{1,0}, u_{1,0})$ and $(e_{2,0}, n_{2,0}, u_{2,0})$ are the initial positions of vehicle 1 and vehicle 2 in the w-frame, respectively. We then apply constrained indirect least squares to Equation (28) with the relative ambiguity constraints:

$$N_{RA} = \Delta N_{2,ref}^{4,5} - \Delta N_{1,ref}^{4,5} \tag{29}$$
where $N_{RA}$ is the relative ambiguity. Equation (29) is written in the form $W = C\,[X, Y]^T$, and Equation (24) becomes:

$$\begin{bmatrix} (A\ B)^T P (A\ B) & C^T \\ C & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ K \end{bmatrix} = \begin{bmatrix} (A\ B)^T P L \\ W \end{bmatrix} \tag{30}$$
where $K$ is the vector of Lagrange multipliers. Following the rules of constrained indirect least squares, the float solutions of the absolute positions, their variance-covariance matrix $Q = \begin{bmatrix} Q_{XX} & Q_{XY} \\ Q_{YX} & Q_{YY} \end{bmatrix}$, and the float carrier phase ambiguities of both vehicle 1 and vehicle 2 are calculated. The LAMBDA method is used again to fix all the ambiguities. The float position solutions are then corrected with the fixed ambiguities as:

$$\hat{X} = X - Q_{XY} Q_{YY}^{-1} (Y - \hat{Y}) \tag{31}$$

where $\hat{X}$ and $\hat{Y}$ denote the fixed solutions of the absolute positions and ambiguities, respectively.
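A compact sketch of the constraint-augmented system (30) and the back-substitution (31) follows; the matrix names mirror the equations and the shapes are illustrative assumptions.

```python
# Sketch of Equations (30)-(31). AB stacks the design matrices of both
# vehicles; C and W encode the relative-position/relative-ambiguity constraints.
import numpy as np

def constrained_float_solution(AB, P, L, C, W):
    n = AB.shape[1]                       # unknowns: positions + ambiguities
    m = C.shape[0]                        # number of constraints
    K = np.block([[AB.T @ P @ AB, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([AB.T @ P @ L, W])
    sol = np.linalg.solve(K, rhs)
    xy = sol[:n]                          # float positions and ambiguities
    Q = np.linalg.inv(K)[:n, :n]          # their (constrained) cofactor block
    return xy, Q

def fix_positions(X, Y, Y_fixed, Qxy, Qyy):
    """Equation (31): correct float positions with the fixed ambiguities."""
    return X - Qxy @ np.linalg.solve(Qyy, Y - Y_fixed)
```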
To validate the fixed ambiguities, the ratio test is used, and the success rate, an output of the LAMBDA method, is used to assess the probability of a correct position solution. The ratio test decides whether the integer ambiguity solution is sufficiently more likely than any other integer candidate. $\hat{Y}$ is the integer minimizer of $q(Y') = (Y - Y')^T Q_{YY}^{-1} (Y - Y')$; let $\check{Y}$ be the integer candidate that returns the second smallest value of $q$. The ratio is then computed as $q(\check{Y})/q(\hat{Y})$. Only if the ratio is sufficiently large is the fixed solution accepted; otherwise, the fixed solution is rejected in favor of the float solution. The success rate of AR is determined by the strength of the underlying GNSS model; the stronger the model, the higher the success rate [42].
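The acceptance logic can be sketched as below, assuming the integer search returns candidate ambiguity vectors sorted by their quadratic form; the threshold of 3 matches the value used in Section 3.

```python
# Sketch of the ratio-test acceptance logic for the fixed ambiguities.
import numpy as np

def ratio_test(Y_float, candidates, Qyy, threshold=3.0):
    """candidates: integer ambiguity vectors sorted by quadratic form q."""
    def q(Yc):
        d = Y_float - Yc
        return d @ np.linalg.solve(Qyy, d)
    q_best, q_second = q(candidates[0]), q(candidates[1])
    ratio = q_second / q_best
    # Accept the fixed solution only when it is clearly better than the
    # runner-up; otherwise fall back to the float solution.
    return (candidates[0], ratio) if ratio >= threshold else (None, ratio)
```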

3. Experimental Results

To evaluate our methods, we conducted field experiments. Our vehicles are shown in the red frames in Figure 4; GNSS, IMU, and LiDAR equipment is mounted on each vehicle, and the equipment types are given in Figure 4. Table 2 shows the parameters of the two types of IMU. For both vehicles, GNSS runs at 1 Hz and LiDAR at 10 Hz. The IMU of the Honeywell device in the white vehicle (denoted "vehicle 1" below) runs at 600 Hz, and the IMU of the NovAtel SPAN device in the blue vehicle (denoted "vehicle 2" below) runs at 100 Hz. The frequency of our ambiguity resolutions and position solutions follows that of GNSS, at 1 Hz. Experiments were conducted in an open area (Figure 4a) and below a viaduct (Figure 4b). Vehicles below a viaduct can share complementary GNSS observations, which is suitable for evaluating our method. However, it is hard to obtain continuously accurate reference positions below the viaduct, which makes it difficult to evaluate the performance of our integrated system there. Therefore, we only analyze the geometric distribution of the satellites above the vehicles' horizon in the experiments below the viaduct, and then choose measurements from the same satellites collected in the open area to simulate the observation conditions below the viaduct; the RTK results in the open area are taken as the ground truth of the absolute positions of the two vehicles.
Figure 5 shows the distributions of satellites for the two vehicles in the open area and below the viaduct. In our work, G19 and C22 are the reference satellites for GPS and BDS, respectively. The viaduct in Figure 4 is oriented directly south to north, and the two vehicles move on either side of the road below the viaduct. Because the viaduct blocks satellite signals from some angles of arrival, vehicle 1 in Figure 5b cannot receive satellite signals when the satellite elevation angle is less than 60 degrees and the azimuth angle is between 15 and 165 degrees, and vehicle 2 in Figure 5d cannot receive satellite signals when the elevation angle is less than 60 degrees and the azimuth angle is between 195 and 345 degrees. From Figure 5b,d we can see that they share seven common satellites, including the two reference satellites, at this epoch. Over the whole duration of our experiments below the viaduct, the number of common nonreference satellites of vehicle 1 and vehicle 2 varies from five to eight. The smaller this number, the more difficult it generally is to solve the relative position and ambiguities between the vehicles. Figure 5f shows all the visible satellites of vehicle 1 and vehicle 2, combining Figure 5b and Figure 5d, which makes sufficient satellites available to our joint RTK solution.
We first evaluate the performance of the LiDAR scan-to-map matching method by comparing the relative position between the vehicles calculated by the scan-to-map matching with the ground truth. We did not use a scan-to-scan matching method, because it requires that the two vehicles observe the same features simultaneously, which is difficult in practice. To improve the availability and reliability of the feature matching, the environment information observed in the last second was stored in the OLM; if the two vehicles observed the same features within that second, they could be matched successfully. It would have been possible to store all the historical environment information in the OLM, but for the efficiency of feature matching, a window OLM was generated using the LiDAR data of the previous 1 s, comprising ten laser scans at the 10 Hz LiDAR rate. Moreover, the window OLM helped keep the precision of relative positioning: the OLM accumulates errors over distance, and a short time window reduces the cumulative error.
Figure 6 shows the LiDAR point clouds of the window OLM of vehicle 1 and the current laser scan of vehicle 2 at one epoch. As it is hard to visualize the three-dimensional OLM, the LiDAR point clouds of the window OLM are shown instead. Vehicle 1 travelled on the right side of the viaduct piers and vehicle 2 on the left side. The current laser scan of vehicle 2 could be matched successfully to the window OLM of vehicle 1, as they shared common features of the viaduct piers.
Figure 7 shows the time series of the relative positioning bias in the n-frame in the east, north, and up directions. From Figure 7 we can see that the relative positioning is highly accurate, with a bias of a few centimeters in all three directions, which benefits the high-precision absolute positioning of the two connected vehicles. Note that in our work the precision of the vehicles' yaw angles is about 0.1 degrees and the baseline length is about 20 m, so the yaw angle error may cause a relative positioning error of about 2 cm.
Regarding the relative ambiguity resolution of Section 2.3, we compare the results obtained with the relative position constraint from LiDAR scan-to-map matching (denoted "with LiDAR" in Figure 8) and without this constraint (denoted "without LiDAR" in Figure 8). Figure 8 shows the ratio and success rate of the fixed relative ambiguities and the number of common nonreference satellites of vehicle 1 and vehicle 2 in four cases: in the open area without LiDAR, below the viaduct without LiDAR, in the open area with LiDAR, and below the viaduct with LiDAR. From Figure 8 we can see that in the open area the vehicles share more common satellites, so the success rates of the two open-area cases (the black and green lines in Figure 8) are high and close to 1. However, the ratio values of the open-area case without the LiDAR constraint are much smaller than those of the case with the LiDAR constraint, which means that the LiDAR constraint helps fix the ambiguities and makes the AR more reliable; even in the open area with sufficient available satellites, this constraint largely increases the ratio values. This is reasonable: if the relative position constraint is not considered, the vehicles' predicted positions are calculated from GNSS pseudorange measurements, which contain large errors, so the float solutions of the relative ambiguities may be unreliable and the ratio values small.
When the vehicles move below the viaduct, the number of visible common satellites is roughly halved. From the blue and red lines in Figure 8 we can see that when the relative position constraint is considered, both the success rate and the ratio values increase significantly. The mean ratio value of the case below the viaduct without the LiDAR constraint is only 0.23. In our work, we set the threshold of the ratio test to 3; therefore, in this case the relative ambiguities are not fixed correctly at most epochs. For the case with the LiDAR constraint, the mean ratio value is 14 and the ratio values at all epochs are larger than 3. In the first few epochs, when only five common nonreference satellites are available, the success rate is about 0.8 with the LiDAR constraint and about 0.02 without it. These results prove that our relative positioning method can largely improve the AR even when the connected vehicles share only a few common satellites.
Figure 9 shows the ratio values and success rates of the AR for absolute positioning by our joint RTK algorithm. Six cases are tested and analyzed in this part: joint RTK without the relative position constraint $d_{RP}$ or the relative ambiguity constraint $N_{RA}$ in the open area, joint RTK with $d_{RP}$ but without $N_{RA}$ in the open area, joint RTK with both $d_{RP}$ and $N_{RA}$ in the open area, and the same three settings below the viaduct. From Figure 9 we can see that in the open area with sufficient visible satellites (shown in Figure 5e), the success rate increases significantly if either $d_{RP}$ or $N_{RA}$ is considered, especially in the first few epochs when the number of visible satellites is relatively small. The success rate is 1 at most epochs when both $d_{RP}$ and $N_{RA}$ are considered, slightly higher than when only $d_{RP}$ is considered. The ratio values of the three open-area cases also differ clearly: the case with both $d_{RP}$ and $N_{RA}$ yields the largest ratio values, the case with only $d_{RP}$ yields intermediate values, and the case without either constraint yields the smallest values. The three cases below the viaduct (right columns in Figure 9) show a similar pattern. This behavior is reasonable: in absolute positioning, when $N_{RA}$ and $d_{RP}$ are considered, the accurate correlation among the relative ambiguities of the common satellites of the two vehicles helps establish a certain correlation for the noncommon satellites in the DD, which improves the geometric conditions of the observation equations. The relative position and relative ambiguity constraints also improve the accuracy of the float solution. These advantages improve the precision of the fixed ambiguities.
Once reliably fixed, the integer ambiguities turn the carrier phase observables into precise pseudoranges, and the fixed positioning solution is obtained. The biases between the fixed positioning solutions and the ground truth in the east, north, and height directions are plotted in Figure 10 for vehicle 1 and in Figure 11 for vehicle 2. The spikes with large biases in these figures correspond to epochs at which the ambiguities are not fixed (the ratio is less than 3) and the incorrect float solutions have large biases. In the open area, the case without $d_{RP}$ or $N_{RA}$ yields the worst positioning results, while the case with the $d_{RP}$ constraint and the case with both $d_{RP}$ and $N_{RA}$ have nearly equal biases in the three directions. This is because, as shown in Figure 9, both cases achieve a high success rate and all their ratio values are larger than 3, meaning the fixed ambiguities are reliable and produce high-precision positioning solutions; at most epochs these cases achieve centimeter-level precision. Below the viaduct, the case without $d_{RP}$ or $N_{RA}$ still yields the worst positioning results. At certain epochs the case with only $d_{RP}$ gives worse positions than the case with both constraints, but at most epochs both cases achieve centimeter-level precision in east and north, while the unconstrained case shows very large biases. To compare the performance of our joint RTK method with single-vehicle RTK positioning, we conducted a single positioning experiment for vehicle 1; the results are shown in Figure 12. The results of the single RTK positioning are quite similar to those of the unconstrained case in Figure 10, as expected. Table 3 shows the statistical positioning precisions and fixed rates of the six cases and of the single positioning for the two vehicles. The fixed rates in Table 3 prove that our approach largely increases the fixed rates below the viaduct, where the satellite observation conditions are worse.

4. Discussion

Connected vehicles build a network in which vehicles can share information and sensor data. Compared with a single vehicle, the observation information from the network is greatly increased, and the positioning accuracy of the connected vehicles can be significantly improved. An important prerequisite of our approach is a precise relative position between the connected vehicles. Even though our OLM-based matching method does not require that the vehicles observe the same environmental features at the same time, the vehicles are still required to observe the same features within a time window. In future work, the geometric shapes of features in the environment, such as cylindrical and rectangular buildings, will be considered, so that even if different parts of a feature are observed by the vehicles, the scan-to-map matching in our approach can still work via geometric constraints.
The relative position between the vehicles must be transformed from the m-frame to the w-frame, which requires the attitude of the LiDAR sensor calculated by the INS. In our experiments, IMUs with high-grade gyroscopes were used; therefore, the attitude error accumulates slowly and is neglected. In our work, the precision of the vehicles' yaw angles is about 0.1 degrees and the baseline length is about 20 m, so the yaw angle error may cause a relative positioning error of about 2 cm. In future work, we will test our method further with lower-grade IMUs.
Our cooperative GNSS-RTK ambiguity resolution model for two connected vehicles is relatively simple, and a more complete model will be investigated in the future to address the multivehicle networking environment. Relative positions will be calculated by our INS-aided LiDAR OLM method for multiple vehicles in the network, which will increase the computation cost. Our relative ambiguity resolution method with the relative position constraint and our joint RTK method are easily extended to a multivehicle networking environment, but the number of parameters grows with the number of vehicles. Therefore, if there are large numbers of vehicles in the network, solving Equations (23) and (28) becomes difficult. We will also focus on this issue in the future.

5. Conclusions

In this study, we proposed a cooperative GNSS-RTK AR method that shares sensor data between connected vehicles. A LiDAR scan-to-map matching method based on a window OLM was applied to the cross matching of LiDAR data from the vehicles, from which the precise relative position between the vehicles can be calculated. With this relative position constraint, a relative ambiguity resolution is proposed for the common satellites of the vehicles. Finally, a joint RTK method is applied to fuse the GNSS measurements from the connected vehicles and their relative ambiguities. Compared with the standard RTK AR of a single vehicle, our approach improves the performance of RTK AR in terms of ratio values, success rate, and fixed rate, and improves the positioning accuracy of the vehicles. In particular, even in a highly occluded environment, our method can still obtain satisfactory results when the vehicles share complementary GNSS observations. We investigated the performance of our approach under open sky and below a viaduct, a typical urban occlusion environment. The following conclusions can be drawn from the experimental results.
(1) The proposed cross matching method based on the window OLM can calculate a high-precision relative position between the connected vehicles, with a bias of a few centimeters in the east, north, and up directions. Because this method does not require that the vehicles observe the same features simultaneously, it is more reliable and robust.
(2) The baseline constraint helps fix the relative ambiguities by improving the success rate and ratio values. The approach works well even when the connected vehicles share only a few common satellites.
(3) In the absolute positioning, by taking advantage of the intervehicle position and ambiguity constraints, our joint RTK algorithm improves the geometric condition of the GNSS RTK observation equations in terms of ratio values, success rate, and fixed rate, so the absolute positioning precision is improved for both vehicles. With our method, both vehicles achieved centimeter-level position accuracy with a 98.61% fixed rate in the open area, while the standard GPS+BDS RTK obtained decimeter-level accuracy with a 95.83% fixed rate. In the highly occluded environment below the viaduct, the vehicles still achieved centimeter-level accuracy with a fixed rate of 97.22%, compared with decimeter-level accuracy and a fixed rate of 35.42% for the standard GPS+BDS RTK method.

Author Contributions

C.Q. designed and performed the experiments and analyzed the data in the paper; H.Z. and W.L. helped to analyze the data and write the paper; J.T. provided the hardware platform and performed the experiments; and B.L. and H.L. conceived the framework of this research. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program (No. 2018YFB1600600), the National Natural Science Foundation of China (Project No. 41801377) and the Fundamental Research Funds for the Central Universities (No. 20422019KF0034).

Acknowledgments

The experiment and equipment of this work have been supported by Wuhan University. All this support is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wanninger, L.; Beer, S. BeiDou satellite-induced code pseudorange variations: Diagnosis and therapy. GPS Solut. 2015, 19, 639–648.
2. Li, X.; Ge, M.; Dai, X.; Ren, X.; Fritsche, M.; Wickert, J.; Schuh, H. Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo. J. Geod. 2015, 89, 607–635.
3. Kaplan, E.; Hegarty, C. Understanding GPS: Principles and Applications; Artech House: London, UK, 2005.
4. Hofmann-Wellenhof, B.; Lichtenegger, H.; Wasle, E. GNSS–Global Navigation Satellite Systems: GPS, GLONASS, Galileo, and More; Springer Science & Business Media: Berlin, Germany, 2007.
5. Han, H.; Wang, J.; Wang, J.; Moraleda, A.H. Reliable partial ambiguity resolution for single-frequency GPS/BDS and INS integration. GPS Solut. 2017, 21, 251–264.
6. Teunissen, P.; Odolinski, R.; Odijk, D. Instantaneous BeiDou + GPS RTK positioning with high cut-off elevation angles. J. Geod. 2014, 88, 335–350.
7. Wu, S.; Zhao, X.; Zhang, L.; Pang, C.; Wu, M. Improving reliability and efficiency of RTK ambiguity resolution with reference antenna array: BDS + GPS analysis and test. J. Geod. 2019, 1–15.
8. Odolinski, R.; Teunissen, P.J.; Odijk, D. Combined BDS, Galileo, QZSS and GPS single-frequency RTK. GPS Solut. 2015, 19, 151–163.
9. Odolinski, R.; Teunissen, P.J. Low-cost, high-precision, single-frequency GPS–BDS RTK positioning. GPS Solut. 2017, 21, 1315–1330.
10. Teunissen, P.; De Jonge, P.; Tiberius, C. The LAMBDA method for fast GPS surveying. In Proceedings of the International Symposium GPS Technology Applications, Bucharest, Romania, 26 September 1995.
11. Li, B.; Qin, Y.; Liu, T. Geometry-based cycle slip and data gap repair for multi-GNSS and multi-frequency observations. J. Geod. 2019, 93, 399–417.
12. He, H.; Li, J.; Yang, Y.; Xu, J.; Guo, H.; Wang, A. Performance assessment of single- and dual-frequency BeiDou/GPS single-epoch kinematic positioning. GPS Solut. 2014, 18, 393–403.
13. Hide, C.; Moore, T.; Smith, M. Adaptive Kalman filtering for low-cost INS/GPS. J. Navig. 2003, 56, 143–152.
14. Ahmed, H.; Tahir, M. Terrain-based vehicle localization using low cost MEMS-IMU sensors. In Proceedings of the 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring), Nanjing, China, 15 May 2016; pp. 1–5.
15. Castorena, J.; Agarwal, S. Ground-edge-based LIDAR localization without a reflectivity calibration for autonomous driving. IEEE Robot. Autom. Lett. 2017, 3, 344–351.
16. Hata, A.Y.; Wolf, D.F. Feature detection for vehicle localization in urban environments using a multilayer LIDAR. IEEE Trans. Intell. Transp. Syst. 2015, 17, 420–429.
17. Lu, W.; Zhou, Y.; Wan, G.; Hou, S.; Song, S. L3-Net: Towards Learning Based LiDAR Localization for Autonomous Driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 6389–6398.
18. Hemann, G.; Singh, S.; Kaess, M. Long-range GPS-denied aerial inertial navigation with LIDAR localization. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 1659–1666.
19. Wolcott, R.W.; Eustice, R.M. Fast LIDAR localization using multiresolution Gaussian mixture maps. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Washington, DC, USA, 26–30 May 2015; pp. 2814–2821.
20. Wolcott, R.W.; Eustice, R.M. Robust LIDAR localization using multiresolution Gaussian mixture maps for autonomous driving. Int. J. Robot. Res. 2017, 36, 292–319.
21. Wolcott, R.W.; Eustice, R.M. Visual localization within LIDAR maps for automated urban driving. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 176–183.
22. Wan, G.; Yang, X.; Cai, R.; Li, H.; Zhou, Y.; Wang, H.; Song, S. Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 4670–4677.
23. Cai, H.; Hu, Z.; Huang, G.; Zhu, D.; Su, X. Integration of GPS, Monocular Vision, and High Definition (HD) Map for Accurate Vehicle Localization. Sensors 2018, 18, 3270.
24. Liggins, M., II; Hall, D.; Llinas, J. Handbook of Multisensor Data Fusion: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2017.
25. Bridgelall, R.; Huang, Y.; Zhang, Z.; Deng, F. Precision enhancement of pavement roughness localization with connected vehicles. Meas. Sci. Technol. 2016, 27, 025012.
26. Shen, M.; Sun, J.; Peng, H.; Zhao, D. Improving localization accuracy in connected vehicle networks using Rao-Blackwellized particle filters: Theory, simulations, and experiments. IEEE Trans. Intell. Transp. Syst. 2018.
27. Liu, J.; Cai, B.-G.; Wang, J. Cooperative localization of connected vehicles: Integrating GNSS with DSRC using a robust cubature Kalman filter. IEEE Trans. Intell. Transp. Syst. 2016, 18, 2111–2125.
28. Rohani, M.; Gingras, D.; Vigneron, V.; Gruyer, D. A new decentralized Bayesian approach for cooperative vehicle localization based on fusion of GPS and VANET based inter-vehicle distance measurement. IEEE Intell. Transp. Syst. Mag. 2015, 7, 85–95.
29. Rohani, M.; Gingras, D.; Gruyer, D. A novel approach for improved vehicular positioning using cooperative map matching and dynamic base station DGPS concept. IEEE Trans. Intell. Transp. Syst. 2015, 17, 230–239.
30. Shen, M.; Sun, J.; Zhao, D. The Impact of Road Configuration in V2V-Based Cooperative Localization: Mathematical Analysis and Real-World Evaluation. IEEE Trans. Intell. Transp. Syst. 2017, 1–10.
31. Alam, N.; Balaei, A.T.; Dempster, A.G. A DSRC Doppler-based cooperative positioning enhancement for vehicular networks with GPS availability. IEEE Trans. Veh. Technol. 2011, 60, 4462–4470.
32. Alam, N.; Balaei, A.T.; Dempster, A.G. Relative positioning enhancement in VANETs: A tight integration approach. IEEE Trans. Intell. Transp. Syst. 2012, 14, 47–55.
33. Wang, D. Cooperative V2X Relative Navigation Using Tight-Integration of DGPS and V2X UWB Range and Simulated Bearing. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, 2015.
34. Lassoued, K.; Bonnifait, P. Cooperative Localization for Autonomous Vehicles Sharing GNSS Measurements. In Cooperative Localization and Navigation: Theory, Research, and Practice; Gao, C., Guorong, Z., Hassen, F., Eds.; CRC Press: Boca Raton, FL, USA, 2019.
35. Chang, L.; Niu, X.; Liu, T.; Tang, J.; Qian, C. GNSS/INS/LiDAR-SLAM Integrated Navigation System Based on Graph Optimization. Remote Sens. 2019, 11, 1009.
36. Shin, E.-H. Estimation Techniques for Low-Cost Inertial Navigation. UCGE Rep. 2005, 20219.
37. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV: Control Paradigms and Data Structures, Boston, MA, USA, 30 April 1992; pp. 586–606.
38. Bosse, M.C. ATLAS: A Framework for Large Scale Automated Mapping and Localization. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2004.
39. Diosi, A.; Kleeman, L. Laser scan matching in polar coordinates with application to SLAM. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 3317–3322.
40. Qian, C.; Zhang, H.; Tang, J.; Li, B.; Liu, H. An Orthogonal Weighted Occupancy Likelihood Map with IMU-Aided Laser Scan Matching for 2D Indoor Mapping. Sensors 2019, 19, 1742.
  41. Tang, J.; Chen, Y.; Niu, X.; Wang, L.; Chen, L.; Liu, J.; Shi, C.; Hyyppä, J. LiDAR scan matching aided inertial navigation system in GNSS-denied environments. Sensors 2015, 15, 16710–16728. [Google Scholar] [CrossRef]
  42. Chen, W. Performance improvement for GPS single frequency kinematic relative positioning under poor satellite visibility. SpringerPlus 2016, 5, 574. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Illustration of two vehicles moving below a viaduct and Global Navigation Satellite System (GNSS) satellite positioning.
Figure 2. Flowchart of our methods; satellites with a red background are common satellites of vehicle 1 and vehicle 2.
Figure 3. GNSS information and constraints for joint Real Time Kinematic (RTK) positioning for connected vehicles.
Figure 4. Vehicles used in our experiments (marked with red frames) and the locations and types of their GNSS, inertial measurement unit (IMU), and Light Detection And Ranging (LiDAR) equipment: (a) open area; (b) below a viaduct.
Figure 5. Sky plots of available satellites from the multi-GNSS system above the vehicles' horizon at the first epoch for (a) vehicle 1 in the open area, (b) vehicle 1 below the viaduct, (c) vehicle 2 in the open area, (d) vehicle 2 below the viaduct, (e) vehicle 1 and vehicle 2 (the connected vehicles) in the open area, and (f) vehicle 1 and vehicle 2 (the connected vehicles) below the viaduct. The multi-GNSS system used includes GPS (in blue) and the BeiDou Navigation Satellite System (BDS) (in cyan). The serial numbers of GPS satellites start with the letter "G" and those of BDS with the letter "C".
Figure 6. The LiDAR point clouds of the window occupancy likelihood map (OLM) of vehicle 1 (black) and the current laser scan of vehicle 2 (red) at one epoch. The black vehicle represents vehicle 1 and the red vehicle represents vehicle 2.
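To make the scan-to-map matching step behind Figure 6 concrete, below is a minimal, self-contained Python sketch: vehicle 2's current scan is slid over vehicle 1's OLM, and the shift that maximizes the summed map likelihood is taken as the inter-vehicle relative position. The grid, wall geometry, noise-free scan, and brute-force translational search are illustrative assumptions, not the paper's OLM construction or optimizer.

```python
# A minimal sketch (synthetic data) of scan-to-OLM matching for the
# relative position between two connected vehicles.
import numpy as np
from scipy.ndimage import gaussian_filter

RES = 0.1                                  # grid resolution (m/cell)
olm = np.zeros((100, 100))                 # vehicle 1's likelihood map
olm[50, 20:80] = 1.0                       # a wall at y = 5.0 m, x = 2..8 m
olm = gaussian_filter(olm, sigma=2.0)      # smooth so likelihood decays off the wall

# Vehicle 2 sees the same wall in its own frame; its true offset from
# vehicle 1 is (1.0 m, 0.5 m), so the wall appears at y = 4.5 m, x = 1..7 m.
scan_x = np.linspace(1.0, 7.0, 60)
scan_y = np.full(60, 4.5)

def score(dx, dy):
    """Summed map likelihood of the scan shifted by candidate offset (dx, dy)."""
    col = np.round((scan_x + dx) / RES).astype(int)
    row = np.round((scan_y + dy) / RES).astype(int)
    ok = (row >= 0) & (row < 100) & (col >= 0) & (col < 100)
    return olm[row[ok], col[ok]].sum()

# Brute-force search over candidate relative positions.
best = max(((dx, dy) for dx in np.arange(0.0, 2.0, RES)
                     for dy in np.arange(0.0, 1.0, RES)),
           key=lambda c: score(*c))
print("estimated relative position:", best)  # ~ (1.0, 0.5)
```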
Figure 7. Time series of the bias of the relative positioning calculated by our LiDAR simultaneous localization and mapping (SLAM) method in the east, north, and up directions.
Figure 8. Time series of the ratio values, success rate, and number of common reference satellites between the two vehicles for four cases.
Figure 9. Time series of the ratio values, success rate, and number of common nonreference satellites between the two vehicles in absolute positioning for six cases.
Figure 10. Positioning errors in the east, north, and height directions of vehicle 1 in absolute positioning for six cases.
Figure 11. Positioning errors in the east, north, and height directions of vehicle 2 in absolute positioning for six cases.
Figure 12. Positioning errors in the east, north, and height directions of vehicle 1 in absolute positioning for our joint RTK algorithm and for single point positioning.
Table 1. Pseudocode of the main procedures in our methods.
Inputs at time step t:
 INS measurement of vehicle 1: L1
 INS measurement of vehicle 2: L2
 LiDAR measurement of vehicle 1: S1(1:t)
 LiDAR measurement of vehicle 2: S2(1:t)
 GNSS measurement of vehicle 1: G1
 GNSS measurement of vehicle 2: G2
 GNSS measurement of reference station: Gr
Methods:
 generate OLM for vehicle 1: OLM1 = M (L1, S1(1:t))
 generate OLM for vehicle 2: OLM2 = M (L2, S2(1:t))
 calculate relative position P12 between the two vehicles:
  P12 = F (OLM1, OLM2, S1(t), S2(t))
 calculate relative double-differenced (DD) ambiguities Amb12 of their common satellites: Amb12 = K (P12, G1, G2)
 calculate absolute position P1 of vehicle 1 and absolute position P2 of vehicle 2:
  (P1, P2) = H (P12, Amb12, G1, G2, Gr)
return P1 and P2
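As a worked illustration of step K (relative ambiguity resolution given the LiDAR-derived relative position), the self-contained sketch below simulates DD carrier phases for a few common satellites and recovers the integer ambiguities. The satellite geometry, baseline, and noise level are hypothetical, and simple rounding stands in for the damped LAMBDA search with least squares used in the paper; rounding suffices here only because the known baseline leaves the float ambiguities very close to integers.

```python
# A hedged, synthetic sketch of relative DD ambiguity resolution when the
# inter-vehicle baseline is already known from LiDAR scan matching.
import numpy as np

WL_L1 = 0.1903                              # GPS L1 wavelength (m)

# Hypothetical unit line-of-sight vectors: one reference satellite + 3 others.
e_ref = np.array([0.3, 0.4, 0.866])
e_sat = np.array([[-0.5, 0.2, 0.843],
                  [0.1, -0.7, 0.707],
                  [0.6, -0.1, 0.794]])
baseline = np.array([3.2, -1.5, 0.1])       # LiDAR-derived relative position (m)

# Simulate DD carrier phases (cycles): geometry + integer ambiguity + noise.
dd_range = (e_sat - e_ref) @ baseline       # DD geometric ranges (m)
true_N = np.array([7, -12, 3])              # true DD integer ambiguities
rng = np.random.default_rng(0)
dd_phase = dd_range / WL_L1 + true_N + rng.normal(0.0, 0.02, 3)

# With the baseline fixed by LiDAR, the float ambiguities fall near integers.
float_N = dd_phase - dd_range / WL_L1
print(np.round(float_N).astype(int))        # -> [  7 -12   3]
```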
Table 2. Characteristics of the gyroscopes and accelerometers of the Novatel SPAN PwrPak7-E1 and the Honeywell HG4930.

Device                    Sensor          In-Run Bias Variation at Constant Temperature   Random Walk   Max Input
Novatel SPAN PwrPak7-E1   Gyroscope       1°/h                                             0.0667°/√h    ±375°/s
                          Accelerometer   0.25 mg                                          55 µg/√Hz     ±10 g
Honeywell HG4930          Gyroscope       1°/h                                             0.09°/√h      ±200°/s
                          Accelerometer   0.3 mg                                           30 µg/√Hz     ±20 g
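For readers wiring these specifications into an INS error model, the short sketch below converts the Table 2 random-walk figures into per-sample white-noise standard deviations. The 200 Hz sampling rate and the standard unit conventions (°/√h ÷ 60 gives °/s/√Hz; µg/√Hz times g gives m/s²/√Hz) are our assumptions, not values stated in the paper.

```python
# A minimal sketch, assuming a 200 Hz IMU and conventional unit conversions,
# of turning spec-sheet random-walk values into per-sample noise sigmas.
import math

DT = 1.0 / 200.0                             # assumed IMU sampling interval (s)
G = 9.80665                                  # standard gravity (m/s^2)

# Novatel SPAN PwrPak7-E1 values from Table 2.
arw = 0.0667                                 # angular random walk (deg/sqrt(h))
vrw = 55e-6 * G                              # accel noise density (m/s^2/sqrt(Hz))

gyro_nd = arw / 60.0                         # deg/s/sqrt(Hz): deg/sqrt(h) / 60
gyro_std = gyro_nd / math.sqrt(DT)           # per-sample rate noise (deg/s)
accel_std = vrw / math.sqrt(DT)              # per-sample accel noise (m/s^2)

print(f"gyro per-sample std : {gyro_std:.4f} deg/s")
print(f"accel per-sample std: {accel_std:.5f} m/s^2")
```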
Table 3. Absolute positioning precision and fixed rate of all cases (fixed rate computed with a ratio-test threshold of 3).

Cases                                RMS (m), Vehicle 1       RMS (m), Vehicle 2       Fixed Rate (%)
                                     East   North   Height    East   North   Height
In the open area
  no constraints                     0.19   0.11    0.35      0.03   0.06    0.16      93.75
  with dRP constraint                0.05   0.05    0.05      0.03   0.05    0.05      98.61
  with dRP and NRA constraints       0.05   0.05    0.05      0.03   0.04    0.05      98.61
  single positioning                 0.16   0.11    0.21      –      –       –         95.83
Below the viaduct
  no constraints                     0.85   0.56    2.87      0.32   0.14    0.52      56.25
  with dRP constraint                0.09   0.08    0.23      0.09   0.09    0.23      95.13
  with dRP and NRA constraints       0.05   0.07    0.09      0.06   0.08    0.09      97.22
  single positioning                 0.95   0.58    2.96      –      –       –         35.42
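The "fixed rate" column counts epochs whose integer solution passes the ratio test at a threshold of 3. As a hedged illustration with made-up numbers, and with a plain comparison over explicitly listed candidates rather than the LAMBDA tree search, the test divides the weighted squared distance of the second-best integer candidate from the float solution by that of the best, and accepts the fix only if the ratio exceeds the threshold:

```python
# A minimal sketch of the ambiguity ratio test; the float solution, its
# covariance, and the candidate set are illustrative, not from the paper.
import numpy as np

def ratio_test(a_float, Q, candidates, threshold=3.0):
    """Accept the best integer candidate if it beats the runner-up by `threshold`."""
    Q_inv = np.linalg.inv(Q)
    d = a_float - candidates                    # residual per candidate
    q = np.einsum('ij,jk,ik->i', d, Q_inv, d)   # ||a_float - a||^2 in the Q metric
    best, second = np.argsort(q)[:2]
    ratio = q[second] / q[best]
    return (candidates[best] if ratio > threshold else None), ratio

a_float = np.array([4.9, -2.1])                 # float DD ambiguities (cycles)
Q = np.array([[0.010, 0.002],
              [0.002, 0.015]])                  # their covariance
cands = np.array([[5, -2], [4, -2], [5, -3]])   # integer candidates from a search
fixed, ratio = ratio_test(a_float, Q, cands)
print(fixed, round(ratio, 1))                   # [ 5 -2] is accepted (ratio >> 3)
```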
