Article

Kinematic and Dynamic Vehicle Model-Assisted Global Positioning Method for Autonomous Vehicles with Low-Cost GPS/Camera/In-Vehicle Sensors

1 The Joint Laboratory for Internet of Vehicles, Ministry of Education-China Mobile Communications Corporation, Xi’an 710064, China
2 Information & Engineering School, Chang’an University, Xi’an 710064, China
* Authors to whom correspondence should be addressed.
Sensors 2019, 19(24), 5430; https://doi.org/10.3390/s19245430
Submission received: 17 November 2019 / Revised: 2 December 2019 / Accepted: 3 December 2019 / Published: 9 December 2019
(This article belongs to the Special Issue GNSS Data Processing and Navigation)

Abstract

Real-time, precise and low-cost vehicular positioning systems associated with global continuous coordinates are needed for path planning and motion control in autonomous vehicles. However, existing positioning systems do not perform well in urban canyons, tunnels and indoor parking lots. To address this issue, this paper proposes a multi-sensor positioning system that combines a global positioning system (GPS), a camera and in-vehicle sensors assisted by kinematic and dynamic vehicle models. First, the system eliminates image blurring and removes false feature correspondences to ensure the local accuracy and stability of the visual simultaneous localisation and mapping (SLAM) algorithm. Next, the global GPS coordinates are transferred to a local coordinate system that is consistent with the visual SLAM process, and the GPS and visual SLAM tracks are calibrated with the improved weighted iterative closest point and least absolute deviation methods. Finally, an inverse coordinate system conversion is conducted to obtain the position in the global coordinate system. To improve the positioning accuracy, information from the in-vehicle sensors is fused with the interacting multiple-model extended Kalman filter based on kinematic and dynamic vehicle models. The developed algorithm was verified via intensive simulations and evaluated through experiments using KITTI benchmarks (A project of Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago) and data captured using our autonomous vehicle platform. The results show that the proposed positioning system improves the accuracy and reliability of positioning in environments in which the Global Navigation Satellite System is not available. The developed system is suitable for the positioning and navigation of autonomous vehicles.

1. Introduction

Continuous, accurate and high-integrity positioning information is critical for the operation of autonomous vehicles, including path planning and motion control functions. Several concerns and challenges remain regarding the positioning and navigation systems of autonomous vehicles, including the following:
(1) Perception in extreme weather: It is difficult for visual or light detection and ranging (LiDAR) sensors to identify lane lines when the road is covered with water or snow. Furthermore, in open areas, a sufficient number of road markers cannot be detected. A high-precision Global Navigation Satellite System (GNSS) is not affected by these factors and can thus provide improved perception to some extent.
(2) Safety and comfort: Due to the performance limits of vehicular sensors like cameras and LiDAR, the vehicle sometimes cannot detect the road curvature or slope. This can result in safety and comfort concerns related to sudden braking or sharp turns.
(3) Vehicle-to-everything applications: When crossing an intersection, the vehicle can accurately determine its own position and share it with other vehicles through vehicle-to-vehicle communication or vehicle-to-infrastructure communication, which are fundamental technologies for fully automated vehicle scheduling.
(4) Computational and storage load: Global positioning information contributes to the initialisation process of high-definition (HD) maps. There is no need for a vehicle to load the entire HD map at once; the vehicle only needs to load a certain part of map from the map database based on the vehicle’s current position, greatly reducing the storage and computational load.
GNSS includes four main systems, namely the American Global Positioning System (GPS), the Russian Glonass, the European Galileo and the Chinese BeiDou [1,2,3]. The GPS, for example, provides positioning information in the global coordinate system, but a civilian GPS has a low measurement frequency and a large error; thus, it cannot provide the accuracy and stability needed for autonomous vehicles. The GPS error is determined by three factors: (1) satellite orbit error and clock error at the launch stage; (2) refraction error during signal propagation, which is unavoidable because of the non-uniform ionosphere and troposphere; and (3) clock error and noise during signal transmission. In complex urban environments, the GPS error can reach tens of metres. To improve the accuracy of a common GPS, differential and real-time kinematic techniques have been developed. Although these strategies can improve the GPS positioning accuracy to the decimetre or centimetre level, the GPS signals are still blocked by tall buildings, tunnels, mountain roads, dense foliage and so on.
Some techniques can provide relatively precise positioning information even though they estimate the position in local coordinates. For example, simultaneous localisation and mapping (SLAM) involves constructing a model of the environment (the map) and estimating the state of the moving robot [4]. SLAM methods include visual SLAM (V-SLAM) and LiDAR SLAM. The map constructed by SLAM contains abundant driving assistance information, including road data and data on stable objects. Using these data, the navigation system can accurately identify the terrain, road contours and other information to guide the vehicle. The drawback of SLAM is that the map is difficult to save and update, and natural accumulative error cannot be eliminated. Thus, SLAM is a good choice for a short-distance positioning system rather than for a long-distance one. The in-vehicle sensor data (e.g., vehicle speed and steering angle) are easily obtained via a controller area network (CAN). Positioning based on in-vehicle sensors is a type of dead reckoning (DR) localisation, which results in a large amount of error in high-speed and tyre-slip scenarios [5]. Inertial measurement units (IMUs) can provide continuous pose information and are not affected by the environment. However, IMUs are also prone to integral errors due to drift error over time [6]. Moreover, precision-calibrated fibre-optic gyroscopes are too expensive for use in commercial autonomous vehicles. Positioning methods based on wireless networks calculate positions using information such as the time difference of the signal propagation or the signal strength. However, these methods are subject to errors caused by communication delays and the multipath effect [7]. An appropriate vehicle model that can cover various driving conditions will improve the accuracy of the positioning system [8].
Taking advantage of the complementary characteristics of a GPS, V-SLAM and in-vehicle sensors, this paper provides a continuous, accurate and high-integrity positioning method suitable for use in various challenging environments (e.g., urban areas with tall buildings, tunnels and overpasses). The new method was developed in consideration of computational complexity, cost and unexpected noise. The tightly coupled architecture was designed to update the global position measurement even when fewer than four satellites are available. In open spaces, image features are too sparse to stably and accurately estimate the position with V-SLAM; a GPS is more appropriate in this situation. In indoor environments where GPS signals are easily blocked, cameras can detect objects and provide abundant information. In this paper, two novel algorithms, image singular value decomposition (ISVD) and statistic filter of feature space displacement (SFFSD), are proposed to ensure the robustness and accuracy of the V-SLAM procedure in the local environment. Next, V-SLAM and a GPS are calibrated with regression analysis and error correction, and the calibration result is used to update the vehicle model-based positioning information. To account for various driving conditions, a kinematic vehicle model and a dynamic vehicle model are considered to refine the calibration result. Finally, we assessed the performance of the proposed algorithm from a practical perspective using simulations, KITTI benchmarks and experiments.
This paper is organised as follows. Section 2 provides a brief overview of related work. Section 3 discusses the problem framework and proposed positioning strategies, including the vehicle model set, V-SLAM and its calibration with a GPS and the interacting multiple model filter. Section 4 details the experiments and evaluations, and Section 5 presents the conclusions.

2. Related Work

Several positioning methods have been developed over the last five decades. Some of these methods are summarised below.

2.1. Global Navigation Satellite System (GNSS) Localisation

GNSS refers to all satellite navigation systems, including global and regional systems (e.g., the American GPS, the Russian Glonass system, the European Galileo system and the Chinese Beidou satellite navigation system) and related enhancement systems (e.g., the American Wide-Area Augmentation System, the European Geostationary Navigation Overlay System and the Japanese Multi-functional Transport Satellite Augmentation System) [9]. The GPS is currently the most common GNSS system due to its low cost, wide applicability and weather resistance. However, the GPS has several limitations. First, the satellite signals may be blocked in tunnels, mountain roads and streets surrounded by tall buildings, preventing the GPS receiver from receiving satellite signals. Second, satellite-based localisation performance is affected by several factors, including satellite clock drift, transmission through the atmosphere and multipath propagation. The typical localisation accuracy of the GPS is between 10 and 20 m, significantly larger than the width of a typical traffic lane. To reduce the error caused by clock drift and atmospheric transmission, a ground-based differential GPS (DGPS) was proposed. DGPS can significantly reduce the position estimation error, improving the localisation accuracy to several metres. However, the error caused by multipath propagation cannot be mitigated, and the accuracy and reliability of localisation remain insufficient for applications, such as collision warnings, platooning and automatic parking.

2.2. DR Localisation

DR is a classic localisation technique that is independent of the GNSS. For an object moving in two-dimensional space, if its initial position and all displacements at previous time points are known, the current position of the object can be calculated from the initial position and the cumulative displacement obtained by inertial sensors (e.g., odometers, gyroscopes, accelerometers and electronic compasses). There are two requirements for the implementation of DR localisation: (1) the initial position of the moving target must be known and (2) the distance and direction of the moving target at all moments must be obtained [10]. DR systems have several advantages, including high autonomy, high security, good resistance to radio interference and weather resistance. Furthermore, DR systems use only stand-alone inertial measurement components to calculate the position, speed and other navigation parameters. However, since DR localisation is an accumulative process, each estimated position of the target depends on the localisation of previous movements. Thus, the measurement and calculation errors accumulate over time, leading to continuous deterioration in accuracy. As a result, DR localisation is unsuitable for long-term operation. In addition, initial alignment in a DR system requires time, especially for position measurements [11].

2.3. Map Matching Localisation

Map matching is a software-based localisation correction method. By associating GPS localisation with road information in a digital map database, the vehicle position relative to the map can be determined [12]. Map matching applications are based on two assumptions: (1) all vehicles travel only on roads and (2) the accuracy of the digital map is higher than that of the estimated position of the vehicle on the road. When these conditions are met, the localisation trajectory is compared with the road information via an appropriate matching process to determine the road sections that the vehicle is most likely to travel along with the vehicle’s most likely position within these road sections. The performance of a map-matching algorithm depends strongly on the resolution of the digital map [13]. The digital map must have the correct network topology and high accuracy; otherwise, false matches will occur [14].

2.4. Mobile Radio-Based Localisation

Mobile radio-based localisation is a process of locating an object using a land-based wireless infrastructure. Generally, this process involves measuring the transmission parameters of radio waves (e.g., the difference in wave arrival time or phase or the variation in amplitude or frequency) travelling from known stationary objects to a moving target. Based on these parameters, the difference in distance between the known stationary objects and the target object, along with the moving direction of the target object, can be determined [15]. A common application of mobile radio localisation is the localisation of the mobile phone user by the American 911 telephone system. Other mobile radio-based localisation methods include localisation based on ultra-wideband and WiFi signals and cooperative localisation based on a Vehicular Ad-Hoc Network (VANET) [16,17,18]. However, mobile radio-based localisation requires an infrastructure with many roadside base stations, resulting in a high cost. Some researchers have incorporated SLAM into mobile radio-based localisation applications [19].

2.5. Vision/LiDAR-Based Localisation

Vision/LiDAR-based positioning has attracted considerable attention in recent years. Vision-based positioning has been widely explored because of its low cost and immunity to electromagnetic interference [20,21,22]. However, the resulting images are affected by changing light and image blurring (images captured during vehicle vibration or sharp turns), which break the projection relationship between the scene points and the image pixels. Most existing visual positioning methods suffer from a large cumulative error and non-real-time performance. Many studies [23,24,25] have reported the advantages of LiDAR-based positioning, including robustness in bad weather (variable light, rain, fog and snow). However, its cost and time-consuming nature restrict the large-scale application of LiDAR-based positioning. Global or local optimisation and loop-closure detection are tricky problems that need to be solved.

2.6. Multiple Sensor-Based Localisation

Approaches combining multiple sensor types are becoming the norm in practical positioning and navigation systems. Common multi-sensor systems combine a GPS and inertial navigation systems in loosely coupled or tightly coupled architectures [26]. However, professional, high-grade devices come with a high overall cost. Jo et al. [8] adopted a rule-based filter update method with a validation gate: the system did not update the GPS measurement when four or fewer satellites were in use. However, a positioning assessment based only on the number of satellites is unreliable, as the positioning result is also affected by GPS signal reflection and the multipath effect. Al Hage et al. [27] designed a tightly coupled GPS/odometer with fault diagnosis to mitigate GPS errors. Model-aided positioning was first applied in spaceflight position estimation, and its performance was evaluated. Some studies [5,8] have combined different in-vehicle sensors to improve the accuracy of the positioning system; however, most of these were designed for manned vehicles rather than driverless vehicles. In manned vehicles, metre- or sub-metre-level error, which results in occasional discontinuity or drift, is acceptable. In contrast, such errors are not acceptable in autonomous vehicles, as they seriously affect reliability and safety.

3. Proposed Fusion Positioning Strategy

In this section, we detail the proposed positioning strategy. Figure 1 shows the system structure. The vehicle models, which include kinematic and dynamic vehicle models, are explained first. Next, the V-SLAM algorithm and its calibration using the GPS track are presented. Then, the use of the calibration result to update the measurements is described. Finally, the in-vehicle sensors and vehicle models, and how they are employed to refine the calibration result, are discussed.

3.1. Vehicle Modelling

The incorporation of kinematic and dynamic vehicle models in the positioning method can compensate for a GPS signal loss. The kinematic vehicle model is suitable for low-speed and low-slip driving conditions, while the dynamic vehicle model is suitable for high-speed and large-slip motion. Herein, a vehicle motion model that combines the kinematic vehicle model and the dynamic vehicle model is established to account for numerous driving conditions.

3.1.1. Kinematic Vehicle Model

Although a rigid dynamic vehicle model is optimal for practical situations, the many degrees of freedom make the model complex, and the multi-parameter coupling is unfavourable for use in integrated navigation systems. According to the structure and characteristics of a front-wheel-steering vehicle, a four-wheeled vehicle can be simplified into a two-wheeled bicycle model [28]. As shown in Figure 2, in the bicycle model, the front and rear wheels are each represented by a single wheel. In Figure 2, G is the vehicle’s centre of gravity; O is the centre of the vehicle’s current motion; V_G is the velocity of G; δ is the front-wheel deflection; β is the corresponding slip angle of G; ψ and ψ + β are the heading and course angles of the vehicle, respectively; l_f and l_r are the distances from G to the front wheel (point F) and the rear wheel (point R), respectively; and r_F, r_G and r_R are the radii from the motion centre O to the front-wheel centre F, the centre of gravity G and the rear-wheel centre R, respectively.
The kinematic vehicle model assumes that no slip occurs between the ground and the wheels, which is accurate for vehicles moving at low speeds. In this case, the velocity directions at points F and R (V_F and V_R, respectively) are consistent with the directions of the front and rear wheels, respectively. This kinematic relationship is expressed by Equations (1)–(5):
\dot{X} = V_G \cos(\psi + \beta), (1)
\dot{Y} = V_G \sin(\psi + \beta), (2)
\dot{\psi} = \frac{V_G \cos(\beta) \tan(\delta)}{l_f + l_r}, (3)
\dot{V}_G = a, (4)
\beta = \tan^{-1}\left( \frac{l_r}{l_f + l_r} \tan(\delta) \right). (5)
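To make Equations (1)–(5) concrete, the kinematic bicycle model can be integrated with a simple forward-Euler step. The sketch below is illustrative only; the state layout, function name and parameter values are our own choices, not from the paper:

```python
from math import atan, cos, sin, tan

def kinematic_step(state, accel, delta, lf, lr, dt):
    """One forward-Euler step of the kinematic bicycle model, Equations (1)-(5).

    state = (X, Y, psi, v_g): global position, heading and speed of the
    centre of gravity G; delta is the front-wheel deflection in radians.
    """
    X, Y, psi, vg = state
    # Equation (5): slip angle of the centre of gravity
    beta = atan(lr / (lf + lr) * tan(delta))
    # Equations (1)-(4): state derivatives, integrated with an Euler step
    X += vg * cos(psi + beta) * dt
    Y += vg * sin(psi + beta) * dt
    psi += vg * cos(beta) * tan(delta) / (lf + lr) * dt
    vg += accel * dt
    return (X, Y, psi, vg)
```

With δ = 0 the slip angle β vanishes and the model reduces to straight-line motion, which is a quick sanity check of the equations.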

3.1.2. Dynamic Vehicle Model

Considering vehicle slip and lateral velocity caused by changes in road conditions, vehicle load, wind speed/direction, etc., a dynamic vehicle model is introduced to account for cases of high-speed motion in which the vehicle slides laterally but does not completely roll [29,30]. As shown in Figure 3, the tyre-slip angle is the angle between the wheel speed vector and the longitudinal wheel axis. Here, we assume that the tyre-slip angle is proportional to the lateral force acting on the tyre.
Equations (6) and (7) were derived according to the force and motion state of G:
F_y = F_{yf} + F_{yr} = m a_y = m \left( \dot{V}_y + V_x r \right), (6)
M_z = l_f F_{yf} - l_r F_{yr} = I_z \ddot{\psi}, (7)
where m is the mass of the vehicle; r = \dot{\psi} is the yaw rate, i.e., the rate of change of the heading angle ψ; I_z is the yaw moment of inertia; and V_x and V_y are the longitudinal and lateral velocities of the vehicle, respectively.
As shown in Figure 3, the dynamic vehicle model assumes that the tyre-slip angle is proportional to the lateral force that acts on the tyre:
F_{yf} = -2 C_f \alpha_f = -2 C_f \left( \beta + \frac{l_f \dot{\psi}}{V_x} - \delta \right), (8)
F_{yr} = -2 C_r \alpha_r = -2 C_r \left( \beta - \frac{l_r \dot{\psi}}{V_x} \right), (9)
where αf and αr are the slip angles relative to the front and rear wheels, respectively; and Cf and Cr are the cornering stiffnesses of the front and rear tyres, respectively.
By combining Equations (6) and (7) with Equations (8) and (9), the vehicle motion can be estimated by the dynamic vehicle model as follows:
\dot{X} = V_x \cos(\psi) - V_y \sin(\psi), (10)
\dot{Y} = V_x \sin(\psi) + V_y \cos(\psi), (11)
\dot{\psi} = r, (12)
\dot{V}_x = V_y r + \frac{1}{m} \left( F_{xr} - F_{yf} \sin\delta \right), (13)
\dot{V}_y = -V_x r + \frac{1}{m} \left( F_{yf} \cos\delta + F_{yr} \right), (14)
\dot{r} = \frac{1}{I_z} \left( l_f F_{yf} \cos\delta - l_r F_{yr} \right). (15)
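The dynamic model can be sketched the same way. The extracted equations leave the sign conventions ambiguous, so the sketch below assumes the common linear single-track formulation with slip angles α_f = (V_y + l_f r)/V_x − δ and α_r = (V_y − l_r r)/V_x; the function name and the parameter dictionary are our own:

```python
from math import cos, sin

def dynamic_step(state, fxr, delta, p, dt):
    """One forward-Euler step of a linear single-track dynamic model.

    state = (X, Y, psi, vx, vy, r); fxr is the longitudinal rear-tyre force;
    p holds the mass m, yaw inertia Iz, axle distances lf/lr and cornering
    stiffnesses Cf/Cr. Sign conventions are an assumption, not the paper's.
    """
    X, Y, psi, vx, vy, r = state
    # Linear tyre model: lateral force proportional to the slip angle
    fyf = -2.0 * p["Cf"] * ((vy + p["lf"] * r) / vx - delta)
    fyr = -2.0 * p["Cr"] * ((vy - p["lr"] * r) / vx)
    # State derivatives, then one explicit Euler update
    dX = vx * cos(psi) - vy * sin(psi)
    dY = vx * sin(psi) + vy * cos(psi)
    dvx = vy * r + (fxr - fyf * sin(delta)) / p["m"]
    dvy = -vx * r + (fyf * cos(delta) + fyr) / p["m"]
    dr = (p["lf"] * fyf * cos(delta) - p["lr"] * fyr) / p["Iz"]
    return (X + dX * dt, Y + dY * dt, psi + r * dt,
            vx + dvx * dt, vy + dvy * dt, r + dr * dt)
```

For straight driving (δ = 0, V_y = 0, r = 0) both tyre forces vanish and the lateral states stay at zero, which matches the intuition behind Equations (8) and (9).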

3.2. V-SLAM Algorithm

Herein, two novel algorithms are introduced into V-SLAM: ISVD, an anti-blurring algorithm, and SFFSD, an algorithm to remove feature outliers [31]. ISVD ensures that the system chooses the less-blurred images. SFFSD removes false matches from the initial putative feature correspondences. In this paper, these two algorithms are employed to enhance the performance (e.g., robustness, efficiency and accuracy) of the V-SLAM module.

3.2.1. ISVD Algorithm

The ISVD algorithm was designed to mitigate the effect of image blurring. ISVD is based on the principal component analysis algorithm, which represents an image matrix using the top k eigenvectors with the highest eigenvalues. The quality of an image can thus be judged from its singular values.
For an image matrix I \in \mathbb{R}^{m \times n}, there exist orthogonal matrices U \in \mathbb{R}^{m \times m} and V \in \mathbb{R}^{n \times n} such that I = U W V^T, where W = \mathrm{diag}(\delta_1, \delta_2, \ldots, \delta_n) and \delta_1 \geq \delta_2 \geq \cdots \geq \delta_n are the non-zero singular values. After the decomposition of the image matrix, a singular value histogram can be built from the non-zero singular values. The definition of d_{ISVD} is given by Equation (16):

d_{ISVD} = \frac{\sum_{i}^{n} S_i}{n}, \quad S_i = \begin{cases} 1, & \text{if } \delta_i \geq C_{thres} \\ 0, & \text{if } \delta_i < C_{thres} \end{cases}, (16)

where \delta_i is the i-th singular value of W and C_{thres} is a singular value threshold, set to 100 in this study.
To verify this idea, images with increasing degrees of blurring (Figure 4a–e) were obtained from the LIVE2 database [32]. According to ISVD, the singular value curves of images a–e are depicted in Figure 5, in which the threshold value C_{thres} is marked with a red line.
The calculation of the degree of image blurring using ISVD is summarised in Algorithm 1. The original image matrix is too large to decompose quickly; to speed up this procedure, the anti-blurring process was applied only to key frames, which is more efficient than the corresponding frame-by-frame process.
Algorithm 1: Calculation of degree of blurring using image singular value decomposition (ISVD).
Input: RGB image I
Output: Blur degree d_ISVD of image I
Initialisation: d_ISVD = 0, num = 0
1: Convert colour image I to grayscale I_gray
2: Compute the singular values W = diag(δ_1, δ_2, …, δ_n) via singular value decomposition of I_gray
3: For δ_i in W
4:   If δ_i ≥ C_thres
5:     num++
6:   End
7: End
8: Return blur degree d_ISVD = num/n
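Under the definition in Equation (16), a minimal NumPy sketch of the blur measure might look as follows. The function name is illustrative; only the threshold default of 100 follows the paper's C_thres:

```python
import numpy as np

def isvd_blur_degree(gray_image, c_thres=100.0):
    """Blur degree d_ISVD: fraction of singular values at or above c_thres.

    A sharp, texture-rich image spreads energy over many singular values,
    while a blurred image concentrates it in the first few, so a smaller
    d_ISVD indicates a more blurred image.
    """
    s = np.linalg.svd(np.asarray(gray_image, dtype=float), compute_uv=False)
    return float(np.count_nonzero(s >= c_thres)) / s.size
```

A full-resolution SVD is expensive, which is why the paper applies this test only to key frames rather than frame by frame.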

3.2.2. SFFSD Algorithm

Badino et al. [34] found that the tracking error in local invariant features between adjacent images follows a Laplacian distribution. Similarly, we found that the feature-matching error of successive image frames (I_{l,i−1}, I_{l,i}) or (I_{r,i−1}, I_{r,i}), along with the feature-tracking error of the left and right matched image frames (I_{l,i−1}, I_{r,i−1}) or (I_{l,i}, I_{r,i}) in the stereo model, conforms to a Laplacian distribution. Our experiments verified that the pixel shift of matched features, including both inliers and outliers, along the x-axis or y-axis fits a Laplacian distribution with a long tail, as shown in Figure 6. In contrast, the pixel shift of matched features considering only inliers follows a Laplacian pattern with a short tail. In Figure 7, the line fitting was conducted only with inliers; the fitted line has a short tail and a peaked bin. In these two figures, the x-axis indicates the displacement in pixels, and the y-axis is the number of features collected by histogram statistics.
Based on the above findings, we propose the SFFSD algorithm to remove the outliers and retain the inliers. Algorithm 2 briefly describes the SFFSD. Here, we present only the statistic filtering between two candidate matched images I_1 and I_2. In our experiments, the bin number n was set to 5; in this case, the corresponding features included 80% inliers. However, the value of n was allowed to vary, depending on the diversity of the surrounding environment. To further refine the matches after statistic filtering, circle matching was employed to verify them [35].
Algorithm 2: Statistic filtering of feature displacement
Input: Candidate matched images I_1, I_2
Output: Good feature matches VM_good
1: Detect features in I_1, I_2 to obtain descriptors M_des1, M_des2 and key points M_keys1, M_keys2
2: Match M_des1, M_des2 with a brute-force matcher and Hamming distance to obtain the original matches VM_matches
3: Calculate the key-point displacements for VM_matches in the x and y components: Δu = u_1 − u_2, Δv = v_1 − v_2
4: Create a 2D histogram of Δu and Δv to find the highest bins for mode approximation
5: Use the samples within the radius to estimate the parameters of the Laplacian distribution in x and y
6: Determine the min and max boundary values that include a given percentage (ratio = 0.9) of inliers, assuming a Laplacian distribution
7: Find the matches VM_u_mats according to the boundary in Step 6
8: Repeat Steps 6 and 7 to find the matches VM_v_mats
9: Calculate the common elements V_com of VM_u_mats and VM_v_mats
10: For id in VM_matches
11:   If id in V_com
12:     Push the corresponding element into VM_good
13:   End
14: End
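To make the statistic-filtering idea concrete, here is a small NumPy sketch that fits a Laplacian to each displacement component and keeps only the central probability mass. The ratio parameter mirrors Step 6 of Algorithm 2; the particular estimators (median for location, mean absolute deviation for scale) and the function name are our assumptions:

```python
import numpy as np

def sffsd_inliers(du, dv, ratio=0.9):
    """Inlier mask for feature matches based on their pixel displacements.

    du, dv: x- and y-displacements of each putative match. For each axis a
    Laplacian is fitted (location = median, scale = mean absolute deviation,
    the maximum-likelihood estimates), and matches outside the central
    `ratio` probability mass are rejected.
    """
    keep = np.ones(len(du), dtype=bool)
    for d in (np.asarray(du, float), np.asarray(dv, float)):
        mu = np.median(d)                      # Laplace location estimate
        b = np.mean(np.abs(d - mu)) + 1e-12    # Laplace scale estimate
        # |x - mu| <= t contains mass 1 - exp(-t/b), so solve for t:
        half_width = -b * np.log(1.0 - ratio)
        keep &= np.abs(d - mu) <= half_width
    return keep
```

Because the Laplacian's central-mass bound is exp-decaying, gross outliers in the long tail are rejected even when they inflate the scale estimate somewhat.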

3.3. V-SLAM Track and GPS Track Calibration

In this part, the V-SLAM track p_i is calibrated with the GPS trajectory q_i via regression analysis. This process includes three main steps: (1) coordinate system conversion, (2) time alignment and improved iterative closest point (ICP) calibration and (3) inverse coordinate system conversion.
In common applications, the GPS value is defined in the geocentric coordinate system. To perform regression analysis, the GPS value needs to be converted to the local spatial coordinate system. In this paper, we converted from the geocentric coordinate system to the “east, north, up” coordinate system via the Universal Transverse Mercator (UTM) projection, as shown in Equations (17) and (18):
x_{local} = k_0 \left\{ M + N \tan r \left[ \frac{A^2}{2} + \left( 5 - T + 9C + 4C^2 \right) \frac{A^4}{24} + \left( 61 - 58T + T^2 + 600C - 330 e_1^2 \right) \frac{A^6}{720} \right] \right\}, (17)
y_{local} = k_0 N \left[ A + (1 - T + C) \frac{A^3}{6} + \left( 5 - 18T + T^2 + 72C - 58 e_1^2 \right) \frac{A^5}{120} \right], (18)
where:
r = \frac{y \pi}{180},
T = \tan^2 r,
e_1 = \frac{\sqrt{a^2 - b^2}}{a},
e_2 = \frac{\sqrt{a^2 - b^2}}{b},
C = e_2^2 \cos^2 r,
A = (x - L_0) \frac{\pi}{180} \cos r,
N = \frac{a}{\sqrt{1 - e_1^2 \sin^2 r}},
m_0 = 1 - \frac{e_1^2}{4} - \frac{3 e_1^4}{64} - \frac{5 e_1^6}{256},
m_1 = \frac{3 e_1^2}{8} + \frac{3 e_1^4}{32} + \frac{45 e_1^6}{1024},
m_2 = \frac{15 e_1^4}{256} + \frac{45 e_1^6}{1024},
m_3 = \frac{35 e_1^6}{3072},
M = a \left( m_0 r - m_1 \sin(2r) + m_2 \sin(4r) - m_3 \sin(6r) \right),
where k_0 is a scale factor (0.9996 in this paper); x and y are the longitude and latitude in degrees, respectively; a (6,378,245.0) and b (6,356,863.0188) are the equatorial radius and polar radius, respectively; L_0 is the central meridian; e_1 is the first eccentricity (0.0818191908); f_e is the false easting value (500,000); and f_n is the false northing value (10,000,000).
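A numerically testable sketch of the forward projection, written with the standard UTM series expansion, is given below. The default ellipsoid constants here are the WGS-84 values and are illustrative only; the paper uses the Krassowsky values a = 6,378,245.0 and b = 6,356,863.0188:

```python
from math import cos, radians, sin, sqrt, tan

def utm_forward(lat_deg, lon_deg, lon0_deg,
                a=6378137.0, b=6356752.3142, k0=0.9996, fe=500000.0):
    """Forward UTM projection: (latitude, longitude) -> (easting, northing).

    lon0_deg is the central meridian L0 of the UTM zone. Implements the
    classic series expansion with the second eccentricity in the C terms.
    """
    e2 = (a * a - b * b) / (a * a)    # first eccentricity squared
    ep2 = (a * a - b * b) / (b * b)   # second eccentricity squared
    r = radians(lat_deg)
    A = radians(lon_deg - lon0_deg) * cos(r)
    T = tan(r) ** 2
    C = ep2 * cos(r) ** 2
    N = a / sqrt(1.0 - e2 * sin(r) ** 2)
    # Meridian arc length M from the equator to latitude r
    M = a * ((1 - e2 / 4 - 3 * e2 ** 2 / 64 - 5 * e2 ** 3 / 256) * r
             - (3 * e2 / 8 + 3 * e2 ** 2 / 32 + 45 * e2 ** 3 / 1024) * sin(2 * r)
             + (15 * e2 ** 2 / 256 + 45 * e2 ** 3 / 1024) * sin(4 * r)
             - (35 * e2 ** 3 / 3072) * sin(6 * r))
    easting = fe + k0 * N * (A + (1 - T + C) * A ** 3 / 6
                             + (5 - 18 * T + T * T + 72 * C - 58 * ep2) * A ** 5 / 120)
    northing = k0 * (M + N * tan(r) * (A * A / 2
                     + (5 - T + 9 * C + 4 * C * C) * A ** 4 / 24
                     + (61 - 58 * T + T * T + 600 * C - 330 * ep2) * A ** 6 / 720))
    return easting, northing
```

A point west of the central meridian yields an easting below the false-easting value of 500,000 m, which is a convenient sanity check.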
We improved the original ICP by accounting for a weight w_i at each timestamp. First, we calculated the centroid of the three-dimensional points generated by the V-SLAM algorithm and the centroid of the GPS points after the coordinate conversion. Next, the covariance matrix between the V-SLAM and GPS measurements was calculated, and a singular value decomposition was carried out to obtain the rotation and translation matrices. This process is summarised in Algorithm 3.
Algorithm 3: Improved weighted iterative closest point (ICP)
Input: V-SLAM track p_i, GPS track q_i, weights on timestamps w_i (w_i ≥ 0, i = 1, …, N)
Output: Rotation matrix R and translation vector t that minimise \sum_{i=1}^{N} w_i \| q_i - (R p_i + t) \|^2
1: Centroids \bar{p} = \sum_{i=1}^{N} w_i p_i / \sum_{i=1}^{N} w_i, \bar{q} = \sum_{i=1}^{N} w_i q_i / \sum_{i=1}^{N} w_i
2: Centred vectors x_i = q_i - \bar{q}, y_i = p_i - \bar{p}
3: Covariance matrix S = X W Y^T, where X and Y have x_i and y_i as columns, respectively, and W is a diagonal matrix with w_i on the diagonal
4: Singular value decomposition S = U \Sigma V^T
5: R = U V^T and t = \bar{q} - R \bar{p}
6: V-SLAM track after calibration in global coordinates: p_i' = R p_i + t
The traditional ICP method based on least squares is not robust against outliers (i.e., it tends to be affected by bad GPS signals, which are common in challenging outdoor environments). To improve the robustness of the calibration, least absolute deviations is used to eliminate the effect of outliers. The problem is solved via iteratively re-weighted least squares, whose iterations converge to the least absolute deviations solution; that is, \sum_{i=1}^{N} s_i \| q_i - (R p_i + t) \| is minimised, where s_i = \| p_i - p_{i-1} \|. The final weights are used as a measure of credibility, which forms part of the weight assigned to the timestamps. The credibility c_i for a timestamp is calculated using:

c_i = \frac{1}{\max\left( \phi, \| q_i - (R p_i + t) \| \right)},

where R and t are the rotation matrix and translation vector obtained in Algorithm 3, respectively, and \phi is the credibility distance bound. We update the weight using w_i = s_i c_i, and the loop continues until \sum_{i=1}^{N} w_i \| q_i - (R p_i + t) \|^2 < e.
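Algorithm 3 together with its iteratively re-weighted refinement can be sketched in a few lines of NumPy. The determinant correction guarding against reflections is a standard addition not spelled out in the text, and the phi default and function names are illustrative:

```python
import numpy as np

def weighted_icp(p, q, w):
    """Weighted Procrustes solve: R, t minimising sum_i w_i ||q_i - (R p_i + t)||^2."""
    w = np.asarray(w, float)
    p_bar = (w[:, None] * p).sum(0) / w.sum()
    q_bar = (w[:, None] * q).sum(0) / w.sum()
    P, Q = p - p_bar, q - q_bar
    S = (w[:, None] * P).T @ Q               # weighted covariance matrix
    U, _, Vt = np.linalg.svd(S)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, q_bar - R @ p_bar

def irls_icp(p, q, phi=0.5, iters=20):
    """Iteratively re-weighted least squares, converging towards least absolute deviations."""
    w = np.ones(len(p))
    for _ in range(iters):
        R, t = weighted_icp(p, q, w)
        residual = np.linalg.norm(q - (p @ R.T + t), axis=1)
        w = 1.0 / np.maximum(phi, residual)  # credibility-style re-weighting
    return R, t
```

On clean correspondences the weighted solve recovers the exact rigid transform; the IRLS wrapper matters only when some q_i are corrupted by bad GPS fixes.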
The last operation in this process is the inverse coordinate system conversion (i.e., inverse UTM projection). At this point, we could obtain the accurate global position information as follows:
x = L_0 + \frac{1}{\cos B_f} \left[ D - (1 + 2 T_f + C_f) \frac{D^3}{6} + \left( 5 + 28 T_f + 24 T_f^2 - 2 C_f - 3 C_f^2 + 8 e_2^2 \right) \frac{D^5}{120} \right],
y = B_f - \frac{N_f \tan B_f}{R_f} \left[ \frac{D^2}{2} - \left( 5 + 3 T_f + 10 C_f - 4 C_f^2 - 9 e_2^2 \right) \frac{D^4}{24} + \left( 61 + 90 T_f + 45 T_f^2 + 298 C_f - 3 C_f^2 - 252 e_2^2 \right) \frac{D^6}{720} \right],
where:
e = \frac{1 - b/a}{1 + b/a},
M_f = \frac{x_{local}}{k_0},
D = \frac{y_{local}}{k_0 N_f},
T_f = \tan^2(B_f),
C_f = e_2^2 \cos^2(B_f),
N_f = \frac{a}{\sqrt{1 - e_1^2 \sin^2(B_f)}},
R_f = \frac{a \left( 1 - e_1^2 \right)}{\left( 1 - e_1^2 \sin^2(B_f) \right)^{3/2}},
\phi = \frac{M_f}{a \left( 1 - e_1^2/4 - 3 e_1^4/64 - 5 e_1^6/256 \right)},
B_f = \phi + \left( \frac{3e}{2} - \frac{27 e^3}{32} \right) \sin(2\phi) + \left( \frac{21 e^2}{16} - \frac{55 e^4}{32} \right) \sin(4\phi) + \frac{151 e^3}{96} \sin(6\phi).

3.4. Interacting Multiple Model (IMM) Filter

The IMM filter is designed to calculate the weight of each model under changing external driving conditions and estimate the vehicle position. The IMM filter is composed of four parts: (1) interaction, (2) two extended Kalman filters (one for the kinematic vehicle model and a second for the dynamic vehicle model), (3) a model probability update and (4) the fusion of state and covariance estimation from the two models.

3.4.1. Interaction

The states from the kinematic and dynamic vehicle models are mixed with each other using the predicted model probability. The mixing probability $u_{k-1}^{j|i}$ is expressed as:

$$u_{k-1}^{j|i} = \pi_{ji}\, u_{k-1}^{j} / u_{k|k-1}^{i}, \quad (i, j = 1, 2),$$

where $u_{k-1}^{j}$ is the probability of model $j$, and $\pi_{ji}$ is the probability of the transition from model $j$ to model $i$, which is calculated from prior knowledge using a statistical method [32]. The indices $i, j = 1$ refer to the kinematic vehicle model, while $i, j = 2$ refer to the dynamic vehicle model. The predicted model probability $u_{k|k-1}^{i}$ is given by:

$$u_{k|k-1}^{i} = \sum_{j=1}^{2} \pi_{ji}\, u_{k-1}^{j}, \quad (i = 1, 2).$$

The mixed state $\bar{X}_{k-1|k-1}^{i}$ and covariance $\bar{P}_{k-1|k-1}^{i}$ of model $i$ are computed as:

$$\bar{X}_{k-1|k-1}^{i} = \sum_{j=1}^{2} u_{k-1}^{j|i}\, \hat{X}_{k-1|k-1}^{j}, \quad (i = 1, 2),$$

$$\bar{P}_{k-1|k-1}^{i} = \sum_{j=1}^{2} u_{k-1}^{j|i} \left[ \hat{P}_{k-1|k-1}^{j} + \Delta\bar{X}_{k-1}^{ij} \left(\Delta\bar{X}_{k-1}^{ij}\right)^{T} \right], \quad (i = 1, 2).$$

In Equations (45) and (46), $\Delta\bar{X}_{k-1}^{ij} = \bar{X}_{k-1|k-1}^{i} - \hat{X}_{k-1|k-1}^{j}$, where $\hat{X}_{k-1|k-1}^{j}$ and $\hat{P}_{k-1|k-1}^{j}$ are the state and covariance of model $j$ from the previous step, respectively.
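The interaction step described above can be sketched as follows for an arbitrary number of models. This is a minimal sketch, not the authors' implementation, and the transition matrix in the usage below is a hypothetical "sticky" choice rather than the statistically identified one from the paper.

```python
import numpy as np

def imm_interaction(PI, u_prev, X_prev, P_prev):
    """IMM mixing step: blend the per-model states and covariances with the
    mixing probabilities u^{j|i} before each model's EKF prediction.
    PI[j, i] is the probability of transitioning from model j to model i."""
    n = len(u_prev)
    u_pred = PI.T @ u_prev                    # predicted model probabilities
    X_mix, P_mix = [], []
    for i in range(n):
        mu = PI[:, i] * u_prev / u_pred[i]    # mixing probabilities u^{j|i}
        x_bar = sum(mu[j] * X_prev[j] for j in range(n))
        p_bar = np.zeros_like(P_prev[0])
        for j in range(n):
            dx = x_bar - X_prev[j]            # spread-of-means term
            p_bar += mu[j] * (P_prev[j] + np.outer(dx, dx))
        X_mix.append(x_bar)
        P_mix.append(p_bar)
    return X_mix, P_mix, u_pred
```

For example, with two models, a sticky transition matrix keeps each model's mixed state close to its own previous estimate while still borrowing a little from the other model.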

3.4.2. Extended Kalman Filter

The extended Kalman filter was adopted to predict and update the state and covariance of each model. The prediction and update steps were carried out based on the equations shown in Figure 8. The state vector is $X_k = (x_k, y_k, h_k, \psi_k, \beta_k, v_k)$, and the input vector is $u_k = (v_{k\_wheel}, \delta_k)$.
In Figure 8, $F_{k|k-1}^{i} = \left. \partial f^{i} / \partial x \right|_{x = \bar{X}_{k-1|k-1}^{i}}$ $(i = 1, 2)$ and $G_{k-1}^{i}$ are the Jacobian matrices of the process function $f_{k-1}^{i}(\cdot)$ with respect to $\bar{X}_{k-1|k-1}^{i}$ and $u_{k-1}^{i}$, respectively; $H$ is the Jacobian matrix of the measurement function; $Q_{k-1}^{i}$ is the covariance matrix of the process noise; $K_{k}^{i}$ is the Kalman gain associated with the measurement sensors; and $R_{k}^{i}$ is the covariance matrix of the measurement noise.
In the prediction step, the state $\bar{X}_{k|k-1}^{i}$ and covariance $\bar{P}_{k|k-1}^{i}$ are calculated from the mixed state $\bar{X}_{k-1|k-1}^{i}$ and its covariance $\bar{P}_{k-1|k-1}^{i}$. In the update step, the corrected state $\bar{X}_{k|k}^{i}$ and covariance $\bar{P}_{k|k}^{i}$ are obtained using the calibrated measurement from the GPS and V-SLAM module.
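The per-model predict/update cycle summarised in Figure 8 can be sketched generically as follows. This is a minimal sketch with a linear measurement model, not the paper's six-state vehicle filter; the toy constant-velocity system in the usage below is an assumption for illustration only.

```python
import numpy as np

def ekf_step(x, P, f, F, G, Q, z, H, R):
    """One predict/update cycle of the extended Kalman filter used for each
    vehicle model. f is the (nonlinear) process function; F and G are its
    Jacobians w.r.t. the state and the input/process noise; H is the
    measurement Jacobian; z is the calibrated GPS/V-SLAM measurement."""
    # Prediction
    x_pred = f(x)
    P_pred = F @ P @ F.T + G @ Q @ G.T
    # Update
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, S
```

The returned innovation covariance `S` is exactly the quantity needed by the model probability update in the next subsection.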

3.4.3. Model Probability Update

Each model probability is updated based on the model innovation. Assuming Gaussian statistics, the likelihood of model $i$ at time $k$ for the observation is calculated using:

$$\Lambda_{k}^{i} = \frac{\exp\left[-0.5\left(z_k - H\bar{X}_{k|k-1}^{i}\right)^{T}\left(S_k^{i}\right)^{-1}\left(z_k - H\bar{X}_{k|k-1}^{i}\right)\right]}{\sqrt{\left|2\pi S_k^{i}\right|}}, \quad (i = 1, 2),$$

with

$$u_k^{i} = \frac{\Lambda_k^{i}\, u_{k|k-1}^{i}}{\sum_{j=1}^{2} \Lambda_k^{j}\, u_{k|k-1}^{j}}, \quad (i = 1, 2),$$

where $S_k^{i}$ is the residual covariance matrix.
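The two equations above can be sketched directly; this is an illustrative implementation assuming each model supplies its predicted measurement $H\bar{X}_{k|k-1}^{i}$ and residual covariance $S_k^{i}$.

```python
import numpy as np

def update_model_probs(z, z_preds, Ss, u_pred):
    """Update the IMM model probabilities from each model's Gaussian
    innovation likelihood, then normalise."""
    n = len(u_pred)
    lam = np.empty(n)
    for i in range(n):
        r = z - z_preds[i]                            # innovation
        S = Ss[i]
        norm = np.sqrt(np.linalg.det(2 * np.pi * S))  # sqrt(|2*pi*S|)
        lam[i] = np.exp(-0.5 * r @ np.linalg.solve(S, r)) / norm
    u = lam * u_pred
    return u / u.sum()
```

A model whose predicted measurement sits close to the observation rapidly dominates the probability mass, which is what drives the kinematic/dynamic switching seen in the simulations.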

3.4.4. Estimation Fusion

The output state $\bar{X}_{k|k}$ and its covariance $\bar{P}_{k|k}$ are computed according to the Gaussian mixture equations:

$$\bar{X}_{k|k} = \sum_{i=1}^{2} u_k^{i}\, \bar{X}_{k|k}^{i},$$

with

$$\bar{P}_{k|k} = \sum_{i=1}^{2} u_k^{i} \left[ \bar{P}_{k|k}^{i} + \Delta\bar{X}_k^{i} \left(\Delta\bar{X}_k^{i}\right)^{T} \right],$$

where $\Delta\bar{X}_k^{i} = \bar{X}_{k|k} - \bar{X}_{k|k}^{i}$.
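The fusion step is a straightforward Gaussian mixture; a minimal sketch:

```python
import numpy as np

def imm_fuse(u, X, P):
    """Gaussian-mixture fusion of the per-model estimates into the IMM
    output. The spread-of-means term keeps the fused covariance consistent
    when the two models disagree about the state."""
    x_out = sum(ui * xi for ui, xi in zip(u, X))
    p_out = np.zeros_like(P[0])
    for ui, xi, pi in zip(u, X, P):
        dx = x_out - xi
        p_out += ui * (pi + np.outer(dx, dx))
    return x_out, p_out
```

Note that when the models produce different means, the fused covariance is strictly larger than the weighted average of the individual covariances, honestly reflecting the model uncertainty.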

4. Experiment and Results

Figure 9 shows the experimental platform. An autonomous vehicle, “Sinda,” was equipped with a high-resolution stereo camera rig (Basler Ace1600 GigE, Basler, Ahrensburg, Germany; image size 960 × 720 pixels), a LiDAR (Velodyne HDL-32E, Velodyne, CA, USA), a GPS receiver (NovAtel OEM718D, NovAtel, Calgary, Canada) and a micro-electro-mechanical system (MEMS)-based IMU (ADIS16465, Analog Devices, Norwood, MA, USA). The parameters of the vehicle are shown in Table 1.

4.1. Simulation

To ensure consistency, the same vehicle parameters were used in the simulations and the experiments. To verify the reasonableness of the established vehicle models, two typical scenes were simulated: a circular trajectory (Figure 10) and a U-turn trajectory (Figure 11). In the first simulation, the vehicle speed was set to approximately 55 km/h; due to the high level of tyre slip, the dynamic vehicle model was adopted for position estimation. In the second simulation, the vehicle speed was maintained at approximately 45 km/h. On straight segments, the kinematic vehicle model was consistent with the assumption of no tyre slip; however, this assumption broke down during sharp turns. When the vehicle returned to driving in a straight line, the model probability shifted back from the dynamic vehicle model to the kinematic model. The simulations demonstrate that the IMM could adapt to several driving conditions, which is vital for real-world applications.

4.2. Benchmark Dataset

Because the KITTI benchmark dataset lacks in-vehicle sensor data, the verification experiments were conducted using only the camera (Point Grey Flea 2 FL2-14S3C-C, FLIR Systems, Arlington, VA, USA) and GPS (OXTS RT3003, OXTS, Middleton Stoney, UK) data (No. 2011_10_03_drive_0027, Figure 12; No. 2011_10_03_drive_0034, Figure 13) from residential scenarios. When the vehicle made a sharp turn or moved through a cluttered environment, the GPS-based trajectory (blue line) had an unacceptable positioning error, while the trajectory of our method (red line) was closer to the real trajectory.
In urban areas, the GPS signal can be blocked or reflected by tall buildings or thick trees along the road. The city-scenario sequences 2011_09_29_drive_00117 (Figure 14), 2011_09_29_drive_0071 (Figure 15) and 2011_09_26_drive_0096 (Figure 16) were therefore used to verify the accuracy and stability of the positioning methods. The instability was magnified when the vehicle stopped to wait at traffic lights: there was a sudden jump in the GPS coordinates, while the proposed method remained stable and accurate. The vehicle-stop positions are marked with a stop sign on the trajectories. The abscissa and ordinate axes denote the longitude and latitude values, respectively.
The basic information from these comparisons is shown in Table 2. The travelled distance reported by the OXTS RT3003 ground truth on the KITTI benchmark differed from the distance estimated using our proposed method; the extra distance arose naturally from loss of the GPS signal or multipath effects. To quantify the relative errors between the methods, the root mean square errors (RMSEs) are also listed in Table 2.

4.3. Real Data

To evaluate the accuracy and reliability of the developed algorithm, outdoor tests were conducted using several methods: normal DGPS (green), DGPS/IMU (cyan), DGPS/IMU/in-vehicle sensors (blue) and DGPS/IMU/in-vehicle sensors/SLAM calibration (red). The DGPS/IMU/in-vehicle sensors result was regarded as the “ground truth” for calculating the RMSEs of the other methods. The experiments with real data were divided into three parts. First, experiments were conducted in bad GPS conditions, corresponding to the most challenging cases regularly encountered by an autonomous vehicle. Second, a short-distance experiment including different driving behaviours was conducted to test the adaptability of the system; these behaviours were selected to introduce unstable factors into the vehicle positioning system. Finally, a long-distance trajectory experiment was performed as a full test. The proposed global optimisation model eliminated the cumulative error arising from the incremental nature of methods such as odometry and dead reckoning. The abscissa and ordinate axes denote the longitude and latitude values, respectively.

4.3.1. Bad GPS Conditions

The most challenging driving conditions for GPS localisation are those in which the GPS signal is blocked or reflected. We experimentally tested the special case in which an autonomous vehicle stopped in a place where the GPS signal was blocked or reflected by buildings, tunnels or bridges, as shown in Figure 17 and Figure 18. In these cases, the GPS signal was neither stable nor precise, and the radius of the circle covering all GPS points was large for the normal DGPS, DGPS/IMU and DGPS/IMU/in-vehicle sensors methods. In contrast, the proposed method remained stable, owing to the local accuracy of the SLAM module. The comparison results in bad GPS conditions are shown in Table 3.

4.3.2. Short-Distance Trajectory with Different Driving Behaviours

To exhaustively explore the adaptability of the system, experiments involving different driving behaviours (e.g., quick starts and stops, sharp turns, reversing and driving in challenging conditions, such as on a rough road or in an indoor parking lot) were conducted. Quick starts and stops, along with sharp turns, caused tyre slip on road surfaces with low adhesion coefficients. As shown in Figure 19, the vehicle passed through an indoor parking lot and reversed into an empty parking space to test the algorithm's performance in a realistic scenario. With the exception of the proposed method, all positioning methods performed badly. Violent shaking introduced error into the IMU-based method, as shown in Figure 20. Unexpectedly, the DGPS-only method performed almost as well as the IMU-based positioning methods. The comparison results for the short-distance trajectory with various driving behaviours are shown in Table 4.

4.3.3. Long-Distance Trajectory in a Cluttered Environment

A long-distance experiment including many of the scenarios found on urban and suburban roads (e.g., traffic lights and waiting zones) was performed as a final complete test. The main purpose of this experiment was to evaluate the effect of the SLAM module on the positioning result. Because SLAM is a locally accurate but incremental positioning method, cumulative error is inevitable; the proposed global optimisation strategy could eliminate this error, as shown in Figure 21. After the vehicle stopped at the stop line, we verified that the positioning point and the stop line on the map coincided. The comparison results for the long-distance trajectory in a challenging environment are shown in Table 5.

5. Conclusions

The positioning and navigation system is a critical component of an autonomous vehicle. We have proposed a real-time, precise and low-cost vehicular positioning method. Common GPS and MEMS-based IMUs are affordable for use in commercial autonomous vehicles. For vehicles equipped with automatic braking systems and electronic stability control, the steering angle and the speeds of the four wheels can be directly obtained via the vehicle’s CAN bus. Thus, the proposed strategy can be applied in autonomous vehicles.
The main contribution of this paper is the combination of V-SLAM for local coordinates and GPS for global coordinates, exploiting their complementary properties. The ISVD and SFFSD algorithms were proposed to ensure accurate and reliable V-SLAM. The fusion was performed using an improved weighted ICP with least absolute deviations. The appropriate kinematic or dynamic vehicle model was selected according to the current state of the moving vehicle.
The performance of the proposed method was verified through simulations, benchmark tests and experimental scenarios. The relationship between the driving state and the vehicle model was analysed through various simulations. Using the KITTI benchmark database, we compared the ground-truth data with our proposed calibration method; the proposed method outperformed conventional methods in several special scenarios. In the experimental scenarios, an autonomous vehicle was driven through challenging environments (e.g., a tunnel and an interchange bridge). The results showed that the developed algorithm satisfies the accuracy and reliability requirements for autonomous vehicle positioning and navigation. In future work, the algorithm will be extended to more extreme driving conditions, such as icy pavements and long tunnels, and the developed positioning system will be applied to autonomous indoor parking and truck platooning. In addition, to promote the development and application of embedded positioning systems, computational efficiency should be comprehensively considered.

Author Contributions

Data curation, X.W. and C.C.; formal analysis, H.M., X.W., and C.C.; funding acquisition, X.Z.; methodology, H.M. and X.Z.; software, H.M.; supervision, X.Z.; writing – original draft, H.M.; writing – review and editing, X.W.

Funding

This research was supported in part by the National Natural Science Foundation of China (no. 61903046), Overseas Expertise Introduction Project for Discipline Innovation (no. B14043), and the Joint Laboratory for Internet of Vehicles, Ministry of Education-China Mobile Communications Corporation (no. 213024170015).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, X.; Zhang, X.; Ren, X.; Fritsche, M.; Wickert, J.; Schuh, H. Precise positioning with current multi-constellation global navigation satellite systems: GPS, GLONASS, Galileo and BeiDou. Sci. Rep. 2015, 5, 8328. [Google Scholar] [CrossRef] [PubMed]
  2. Pivarčiová, E.; Božek, P.; Turygin, Y.; Zajačko, I.; Shchenyatsky, A.; Václav, Š.; Císar, M.; Gemela, B. Analysis of control and correction options of mobile robot trajectory by an inertial navigation system. Int. J. Adv. Robot. Syst. 2018, 15, 1–10. [Google Scholar] [CrossRef]
  3. Pirník, R.; Hruboš, M.; Nemec, D.; Mravec, T.; Božek, P. Integration of inertial sensor data into control of the mobile platform. In Proceedings of the Federated Conference on Software Development and Object Technologies, Zilina, Slovakia, 19 November 2015; Springer: Berlin, Germany, 2016; pp. 271–282. [Google Scholar]
  4. Cadena, C.; Carlone, L.; Carrillo, H. Simultaneous localization and mapping: Present, future, and the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef] [Green Version]
  5. Li, X.; Xu, Q. A reliable fusion positioning strategy for land vehicles in GPS-denied environments based on low-cost sensors. IEEE Trans. Ind. Electron. 2017, 64, 3205–3215. [Google Scholar] [CrossRef]
  6. Kok, M.; Hol, J.D.; Schön, T.B. Using inertial sensors for position and orientation estimation. Found. Trends Signal. Process. 2017, 11, 1–153. [Google Scholar] [CrossRef] [Green Version]
  7. Zhang, C.; Qi, W.; Wei, L.; Chang, J.; Zhao, Y. Multipath error correction in radio interferometric positioning systems. arXiv 2017, arXiv:1702.07624. Available online: https://arxiv.org/abs/1702.07624 (accessed on 2 November 2018).
  8. Jo, K.; Chu, K.; Sunwoo, M. Interacting multiple model filter-based sensor fusion of GPS with in-vehicle sensors for real-time vehicle positioning. IEEE Trans. Intell. Transp. Syst. 2012, 13, 329–343. [Google Scholar] [CrossRef]
  9. Zaidi, A.S.; Suddle, M.R. Global navigation satellite systems: A survey. In Proceedings of the IEEE International Conference on Advances in Space Technologies, Islamabad, Pakistan, 2–3 September 2006; pp. 84–87. [Google Scholar]
  10. King, T.; Füßler, H.; Transier, M.; Effelsberg, W. Dead-reckoning for position-based forwarding on highways. In Proceedings of the 3rd International Workshop on Intelligent Transportation, Hamburg, Germany, 18–19 May 2006; pp. 199–204. [Google Scholar]
  11. Bevly, D.M.; Parkinson, B. Cascaded Kalman filters for accurate estimation of multiple biases, dead-reckoning navigation, and full state feedback control of ground vehicles. IEEE Trans. Control. Syst. Technol. 2007, 15, 199–208. [Google Scholar] [CrossRef]
  12. Chausse, F.; Laneurit, J.; Chapuis, R. Vehicle localization on a digital map using particles filtering. In Proceedings of the IEEE Intelligent Vehicles Symposium, Las Vegas, NV, USA, 6–8 June 2005; pp. 243–248. [Google Scholar]
  13. Jagadeesh, G.R.; Srikanthan, T.; Zhang, X.D. A map matching method for GPS based real-time vehicle location. J. Navig. 2004, 57, 429–440. [Google Scholar] [CrossRef]
  14. Wiest, J.; Deusch, H.; Nuss, D.; Reuter, S.; Fritzsche, M.; Dietmayer, K. Localization based on region descriptors in grid maps. In Proceedings of the IEEE Intelligent Vehicles Symposium Proceedings, Seoul, Korea, 28 June–1 July 2014; pp. 793–799. [Google Scholar]
  15. Sun, G.; Chen, J.; Guo, W.; Liu, K.J.R. Signal processing techniques in network-aided positioning: A survey of state-of-the-art positioning designs. IEEE Signal. Process. Mag. 2005, 22, 12–23. [Google Scholar] [CrossRef]
  16. Lee, J.Y.; Scholtz, R.A. Ranging in a dense multipath environment using an UWB radio link. IEEE J. Sel. Areas Commun. 2003, 20, 1677–1683. [Google Scholar] [CrossRef]
  17. Cheng, Y.C.; Chawathe, Y.; Lamarca, A.; Krumm, J. Accuracy characterization for metropolitan-scale Wi-Fi localization. In Proceedings of the International Conference on Mobile Systems, Applications, and Services, Seattle, WA, USA, 6–8 June 2005; pp. 233–245. [Google Scholar]
  18. Thangavelu, A.; Bhuvaneswari, K.; Kumar, K.; Senthilkumar, K.; Sivanandam, S.N. Location identification and vehicle tracking using VANET. In Proceedings of the International Conference on Signal Processing, Communications and Networking, Chennai, India, 22–24 February 2007; pp. 112–116. [Google Scholar]
  19. Gentner, C.; Jost, T.; Wang, W.; Zhang, S.; Dammann, A.; Fiebig, U.C. Multipath assisted positioning with simultaneous localization and mapping. IEEE Trans. Wirel. Commun. 2016, 15, 6104–6117. [Google Scholar] [CrossRef] [Green Version]
  20. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef] [Green Version]
  21. Wang, R.; Schworer, M.; Cremers, D. Stereo DSO: Large-scale direct sparse visual odometry with stereo cameras. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3923–3931. [Google Scholar]
  22. Min, H.; Zhao, X.; Xu, Z.; Zhang, L. Robust features and accurate inliers detection framework: application to stereo ego-motion estimation. KSII Trans. Internet Inf. Syst. 2017, 11, 302–320. [Google Scholar] [CrossRef]
  23. Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. In Proceedings of the Robotics: Science and Systems (RSS), Berkeley, CA, USA, 12–16 July 2014; pp. 109–111. [Google Scholar]
  24. Dubé, R.; Dugas, D.; Stumm, E.; Nieto, J.; Siegwart, R.; Cadena, C. Segmatch: Segment based place recognition in 3D point clouds. In Proceedings of the Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5266–5272. [Google Scholar]
  25. Koide, K.; Miura, J.; Menegatti, E. A Portable 3D LIDAR-based system for long-term and wide-area people behavior measurement. IEEE Trans. Hum. Mach. Syst. 2018. accepted. [Google Scholar]
  26. Falco, G.; Pini, M.; Marucco, G. Loose and tight GNSS/INS integrations: Comparison of performance assessed in real urban scenarios. Sensors 2017, 17, 255. [Google Scholar] [CrossRef]
  27. Al Hage, J.; El Najjar, M.E. Improved outdoor localization based on weighted Kullback-Leibler divergence for measurements diagnosis. IEEE Intell. Transp. Syst. Mag. 2018. [Google Scholar] [CrossRef]
  28. Han, S.; Wang, J. Land vehicle navigation with the integration of GPS and reduced INS: Performance improvement with velocity aiding. J. Navig. 2010, 63, 153–166. [Google Scholar] [CrossRef] [Green Version]
  29. Bryson, M.; Sukkarieh, S. Vehicle model aided inertial navigation for a UAV using low-cost sensors. In Proceedings of the Australasian Conference on Robotics and Automation, Canberra, Australia, 6–8 December 2004; pp. 1–9. [Google Scholar]
  30. Miller, I.; Campbell, M.; Huttenlocher, D. Map-aided localization in sparse global positioning system environments using vision and particle filtering. J. Field Robot. 2011, 28, 619–643. [Google Scholar] [CrossRef]
  31. Zhao, X.; Min, H.; Xu, Z.; Wu, X.; Li, X.; Sun, P. Image anti-blurring and statistic filter of feature space displacement: Application to visual odometry for outdoor ground vehicle. J. Sens. 2018. [Google Scholar] [CrossRef] [Green Version]
  32. Sheikh, H.R.; Wang, Z.; Cormack, L.; Bovik, A.C. LIVE Image Quality Assessment Database Release 2. Available online: http://live.ece.utexas.edu/research/quality (accessed on 23 April 2017).
  33. Zhao, X.; Min, H.; Xu, Z.; Wang, W. An ISVD and SFFSD-based vehicle ego-positioning method and its application on indoor parking guidance. Transp. Res. Part C Emerg. Technol. 2019, 108, 29–48. [Google Scholar] [CrossRef]
  34. Badino, H.; Yamamoto, A.; Kanade, T. Visual odometry by multi-frame feature integration. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 222–229. [Google Scholar]
  35. Geiger, A.; Ziegler, J.; Stiller, C. StereoScan: Dense 3D reconstruction in real-time. In Proceedings of the Intelligent Vehicles Symposium, Baden, Germany, 5–9 June 2011; pp. 963–968. [Google Scholar]
Figure 1. Structure of the overall algorithm architecture. GPS: global positioning system, IMU: inertial measurement unit, SLAM: simultaneous localisation and mapping.
Figure 2. Kinematic vehicle model.
Figure 3. Dynamic vehicle model.
Figure 4. Images with various degrees of blurring from the LIVE2 database. Reproduced from Zhao, X.; Min, H.; Xu, Z.; Wang, W. An ISVD and SFFSD-based vehicle ego-positioning method and its application on indoor parking guidance. Transportation Research Part C: Emerging Technologies, 2019; 108: 29–48. Copyright © 2019 Elsevier Masson SAS. All rights reserved [33].
Figure 5. Singular value curves for the images in Figure 4. Reproduced from Zhao, X.; Min, H.; Xu, Z.; Wang, W. An ISVD and SFFSD-based vehicle ego-positioning method and its application on indoor parking guidance. Transportation Research Part C: Emerging Technologies, 2019; 108: 29–48. Copyright © 2019 Elsevier Masson SAS. All rights reserved [33].
Figure 6. Feature displacement between frames, including inliers and outliers: (a) x-coordinate and (b) y-coordinate. Blue: measured displacement error [31]. Red: fitted Laplacian probability distribution function (pdf) with a long tail.
Figure 7. Feature displacement for inliers between frames: (a) x-coordinate and (b) y-coordinate. Blue: measured displacement error [32]. Red: fitted Laplacian pdf with a short tail.
Figure 8. Flowchart of the extended Kalman filter in the interacting multiple model (IMM).
Figure 9. Autonomous vehicle platform. Reproduced from Zhao, X.; Min, H.; Xu, Z.; Wang, W. An ISVD and SFFSD-based vehicle ego-positioning method and its application on indoor parking guidance. Transportation Research Part C: Emerging Technologies, 2019; 108: 29-48. Copyright © 2019 Elsevier Masson SAS. All rights reserved [33].
Figure 10. A circular trajectory and vehicle model probability.
Figure 11. A U-turn trajectory and vehicle model probability.
Figure 12. Residential, 2011_10_03_drive_0027. There was an error in the trajectory because the car moved into a cluttered environment, where the GPS signal was blocked or affected by multipath.
Figure 13. Residential, 2011_10_03_drive_0034. The ending point of the blue line tended to drift out of the lane, while the red line stayed inside the lane.
Figure 14. City, 2011_09_29_drive_00117. There was a large error in the marked area because of the tall buildings on both sides of the road.
Figure 15. City, 2011_09_29_drive_0071. The GPS signal was blocked when the vehicle passed through city roads with tall buildings.
Figure 16. City, 2011_09_26_drive_0096. At the position of the stop sign, the GPS signal was not very stable.
Figure 17. Tunnel scenario. To highlight the stability of the proposed positioning method, the vehicle moved slowly and stopped briefly at the end of the tunnel. The GPS signal was blocked or reflected. All GPS-based methods were affected, resulting in drift of several meters. In contrast, the proposed method was stable and accurate.
Figure 18. Interchange bridge scenario. The GPS signal was clearly blocked when the vehicle passed through the interchange bridge. Because of the IMU and in-vehicle sensors, the DGPS/IMU and DGPS/IMU/in-vehicle sensors methods could handle this challenge. In contrast, the DGPS-only method was inaccurate, as marked with a rectangle (The abscissa axis and ordinate axis denote the latitude and longitude values, respectively).
Figure 19. Scenario where the vehicle passes through an indoor parking lot and reverses into an empty parking space. The DGPS-only method lost the signal. The DGPS/IMU and DGPS/IMU/in-vehicle sensors methods resulted in a drift of approximately 6 m, which is unacceptable for autonomous vehicles.
Figure 20. Scenario involving violent shaking on a rough road. The shaking movement affected the IMU-based or in-vehicle sensor-based positioning, and the maximum error was approximately 1.2 m, as indicated by the yellow circle.
Figure 21. A 21-km trajectory that lasted about 47 min. All the methods had relatively poor performance except the proposed calibration method.
Table 1. Parameters of the experimental vehicle.

| Parameter | Symbol | Value | Unit |
|---|---|---|---|
| Vehicle mass | m | 1395 | kg |
| Yaw moment of inertia | Iz | 4192 | kg·m² |
| Distance from G to the front wheel F | a | 1.04 | m |
| Distance from G to the rear wheel R | b | 1.62 | m |
| Height of the centre of gravity | H | 0.54 | m |
| Wheel track | d | 1.52 | m |
| Rolling resistance coefficient | f | 0.02 | – |
| Rolling radius of the tyres | r | 0.335 | m |
| Frontal area | A | 1.8 | m² |
| Coefficient of air resistance | CD | 0.343 | – |
| Density of air | ρ | 1.206 | kg/m³ |
Table 2. Positioning results on the KITTI benchmark.

| Scenario | No. | Duration (s) | Distance (m): OXTS RT3003/Proposed Method | RMSE (m) |
|---|---|---|---|---|
| Residential | Figure 12 | 593.24 | 3734.375/3727.332 | 0.645 |
| Residential | Figure 13 | 584.79 | 5066.272/5051.748 | 1.265 |
| City | Figure 14 | 67.08 | 392.705/391.232 | 0.959 |
| City | Figure 15 | 130.90 | 341.491/333.710 | 1.868 |
| City | Figure 16 | 48.61 | 401.604/397.509 | 0.858 |
Table 3. Comparison results in bad GPS conditions.

| No. | Duration (s) | Method | Distance (m) | RMSE (m) |
|---|---|---|---|---|
| Figure 17 | 264.796 | GPS | 1685.755 | 13.137 |
| | | GPS + IMU | 1582.860 | 4.880 |
| | | GPS + IMU + CAN | 1589.760 | – |
| | | Proposed method | 1570.616 | 1.564 |
| Figure 18 | 158.069 | GPS | 1066.827 | 73.160 |
| | | GPS + IMU | 1045.851 | 1.765 |
| | | GPS + IMU + CAN | 1044.588 | – |
| | | Proposed method | 1038.825 | 1.229 |
Table 4. Comparison results of the short-distance trajectory with various driving behaviours.

| No. | Duration (s) | Method | Distance (m) | RMSE (m) |
|---|---|---|---|---|
| Figure 19 | 76.105 | GPS | 218.940 | 7.105 |
| | | GPS + IMU | 216.019 | 4.657 |
| | | GPS + IMU + CAN | 218.946 | – |
| | | Proposed method | 213.382 | 2.341 |
| Figure 20 | 60.885 | GPS | 136.538 | 1.491 |
| | | GPS + IMU | 142.303 | 1.139 |
| | | GPS + IMU + CAN | 131.749 | – |
| | | Proposed method | 130.997 | 0.941 |
Table 5. Comparison results of the long-distance trajectory in a challenging environment.

| No. | Duration (s) | Method | Distance (m) | RMSE (m) |
|---|---|---|---|---|
| Figure 21 | 2843.571 | GPS | 21,371.167 | 180.517 |
| | | GPS + IMU | 21,255.434 | 11.436 |
| | | GPS + IMU + CAN | 21,237.723 | – |
| | | Proposed method | 21,212.237 | 1.165 |

Share and Cite

MDPI and ACS Style

Min, H.; Wu, X.; Cheng, C.; Zhao, X. Kinematic and Dynamic Vehicle Model-Assisted Global Positioning Method for Autonomous Vehicles with Low-Cost GPS/Camera/In-Vehicle Sensors. Sensors 2019, 19, 5430. https://doi.org/10.3390/s19245430

