Article

Motion Constraints and Vanishing Point Aided Land Vehicle Navigation

1 School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
2 Department of Geomatics Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
* Author to whom correspondence should be addressed.
Micromachines 2018, 9(5), 249; https://doi.org/10.3390/mi9050249
Submission received: 15 April 2018 / Revised: 12 May 2018 / Accepted: 17 May 2018 / Published: 20 May 2018

Abstract

In the typical Inertial Navigation System (INS)/Global Navigation Satellite System (GNSS) setup for ground vehicle navigation, measures should be taken to maintain performance during GNSS signal outages, and aiding sensors are usually utilized to reduce the INS drift. A full motion constraint model is developed, allowing the online calibration of the INS frame with respect to (w.r.t.) the motion frame. To obtain better heading and lateral positioning performance, we propose the use of vanishing point (VP) observations of parallel lane markings from a single forward-looking camera to aid the INS. In the VP module, the relative attitude of the camera w.r.t. the road frame is derived from the VP coordinates, and the state-space model is developed with an augmented vertical attitude error state. Finally, the VP module is added to a modified motion constraint module in the Extended Kalman Filter (EKF) framework. Simulations and real-world experiments have shown the validity of the VP-based method and improved heading and cross-track position accuracy compared with the solution without VP. The proposed method can work jointly with conventional visual odometry to aid the INS for better accuracy and robustness.

1. Introduction

Accurate vehicular navigation is of great importance for several core functions of “smart cities”. It is used not only in the guidance, navigation, and control systems for autonomous driving, but also in V2X (vehicle-to-everything) technologies for efficient transportation and cooperative safety communications among vehicles. For example, positioning and heading information is shared across the V2V (vehicle-to-vehicle) network, according to the Cooperative Awareness Message (CAM) and Basic Safety Message (BSM) of the European Telecommunications Standards Institute (ETSI) and the US Society of Automotive Engineers (SAE), respectively [1,2]. The inertial navigation system (INS) is unique in producing a complete and continuous set of navigation states with high precision over a short time span. However, the positioning error grows considerably with time, especially when using low-cost MEMS inertial measurement units (IMUs). Therefore, the INS should be integrated with other aiding sensors. INS and Global Navigation Satellite System (GNSS) integration is commonly used for outdoor vehicle navigation. Nevertheless, GNSS may not be available in tunnels and can suffer from obstruction and multipath errors in city centers and mountainous regions. There is also the possibility of a GNSS receiver being jammed or spoofed [3,4].
Various aiding sensors can be used to mitigate the error growth of the INS in GNSS-denied environments, such as motion constraints (e.g., Non-holonomic Constraints (NHC) as a “virtual” sensor) [5], speed sensors (wheel odometers, Doppler radars) [6], LiDAR sensors [7], and digital maps for map-matching or map-aiding [8,9]. Recently, many researchers have proposed using vision sensors to aid the navigation system, either loosely coupled with a visual odometry (VO) module [10,11] or tightly coupled [12,13,14], with remarkable results. Apart from the commonly used point features in pose estimation, vanishing points extracted from parallel lines in the scene can be used to obtain attitude information, which is then fused with the INS in a loosely coupled manner. A simple and fundamental method for determining the rotation between two camera views was introduced in [15]: the rotation matrix between the two camera coordinate systems can be solved linearly when the coordinates of three non-collinear vanishing points are available in both images. An alternative method using only two dominant directions was presented in [16], based on the Rodrigues formula. Vanishing point-based attitude estimation has been investigated to help indoor pedestrian navigation, to improve VO performance in hallway environments [17,18,19], and for UAV navigation in urban areas with structured buildings [20,21].
For land vehicle applications, Kim et al. [22] showed improved accuracy when adding an omnidirectional-vision attitude module to INS/odometer integration for car navigation under a “Manhattan world” assumption [23] during GPS outages. These methods rely on the observation of parallel lines in structured buildings. However, there may not be sufficient building observations, for example, when driving through densely treed areas. In contrast, parallel lane markings or boundaries can generally be observed and detected on structured roads. Previous research has considered ground plane tracking [24], camera calibration [25], and the calculation of a vehicle’s in-lane lateral distance or prior road-map-based shape registration for better localization performance [26,27,28]. Basically, these methods use the observed lane markings for relative lateral positioning. Little work has been done on navigation aiding using the vanishing point of parallel lane markings. In our earlier work, we used VP measurements of parallel lane markings to aid the INS onboard a car [29]. The relative heading aiding was treated in an absolute way, i.e., as a true heading aiding method, which may cause a large positioning error after a certain time during GNSS outages: as the heading error grows, the measurement becomes less relevant to the true heading error.
For the estimator used in integrated navigation, the EKF (Extended Kalman Filter) is the most widely used, owing to its computational efficiency and the fact that the nonlinearities of the error system model and measurement model are moderate in common situations. There are other nonlinear filters that do not linearize the system model at all, such as the Unscented Kalman Filter (UKF) and particle filters (PF) [30,31,32]. For GPS/INS navigation systems, a performance comparison between the EKF and UKF was made in [31]; it reported that the EKF and UKF offer identical performance except in unrealistic situations, e.g., a sixty-kilometre initial position error. For the in-motion alignment of a low-cost INS with large initial attitude errors, the UKF outperforms the EKF [32]. Recently, a dual Kalman filter method [33,34] was developed to estimate input and state simultaneously, which is suitable for structural fatigue damage identification. However, in inertial-navigation-based multi-sensor integration, the inputs are the IMU sensor readings, which are already known, and the sensor biases are usually augmented into the state vector.
In this paper, a full motion constraint model and a VP module are developed to aid the INS in the EKF framework. We develop the VP aiding method based on the idea of the Stochastic Cloning Kalman filter [35,36], considering the relative nature of the attitude measurement; this estimation tool has also been applied to Micro Aerial Vehicle indoor navigation [37]. Using a sequential-update EKF, the proposed method can easily be incorporated into a conventional VO-based, loosely coupled, vision-aided inertial navigation system (VINS) to improve accuracy and robustness.
The paper is organized as follows. The coordinate systems involved are defined, and the relationship between the VP 2D coordinates and the relative attitude of the camera frame w.r.t. the road frame is established. Based on this relationship, the VP aiding module is developed with an augmented state. The VP module is then added to the existing motion constraint aided INS. We evaluate the proposed algorithm by Monte Carlo simulations and real-data experiments. Discussions and conclusions are presented at the end.

2. Relative Attitude from Vanishing Point Coordinates

2.1. Coordinate Systems Definition

The coordinate systems defined here follow the right-hand rule, as shown in Figure 1 [29].
  • Navigation frame (n-frame): for near-Earth navigation, it is defined as the local-level frame whose x, y, z axes point east, north, and up.
  • Vehicle motion frame (m-frame) [38]: the x-axis is parallel with the non-steered axle, pointing to the right; the y-axis points forward; and the z-axis points up. Both the y and z axes are in the vertical plane of symmetry. The origin, which is the measurement origin of the vehicle, is at road height, mid-way between the wheels of the non-steered axle.
  • IMU body frame (b-frame): x-axis points right, the y-axis points forwards and the z-axis points up. The origin is the measurement origin of the IMU.
  • Camera frame (c-frame): The camera is looking forward, so the z-axis points forward, x-axis points right, and y-axis points down.
  • Camera intermediate frame (c̄-frame): rotated 90° about the x-axis of the camera frame to obtain the camera intermediate frame. It is introduced to derive the relationship between the vanishing point coordinates and the relative attitude of the camera.
  • Road markings frame (r-frame): fixed to the road, rotating with the road terrain and direction. Suppose the vehicle is moving on a road with parallel lane markings. In the r-frame, the x and y axes are in the road plane, perpendicular to and along the lane markings, respectively; the z-axis is perpendicular to and points out from the road surface.

2.2. Camera Relative Attitude Derived from Vanishing Point Coordinates

A vanishing point (VP) is the point of intersection of the image projections of a set of parallel 3D lines, e.g., lane lines. Each set of parallel lines is associated with one VP in an image. As shown in Figure 2, the VP of a set of parallel lines is the intersection of the image plane with a ray through the camera center parallel to the world lines. The VP image coordinates are not affected by the relative translation, only by the relative rotation between the camera and the scene [39]. This property makes the VP a tool for determining the camera-to-scene relative attitude, which can be utilized as an aiding source for the INS. It is known that at least two sets of parallel lines are needed to determine all three degrees of freedom of a camera’s relative attitude [16].
Here we define some Euler angles and rotation matrices that are important for our proposed method. The rotation matrix from the r-frame to the c̄-frame ($C^{\bar{c}}_r$) can be represented by a set of Euler angles (θ, γ, ψ), which denote the relative pitch, relative roll, and relative yaw, respectively. The rotation order is as follows: first rotate by ψ about the z-axis of the r-frame, then by θ about the x-axis, and finally by γ about the y-axis. When the first two rotations are completed, as illustrated in Figure 2, we derive the image coordinates of the vanishing point:
$$\begin{aligned} x_{vp1} &= x_c + \frac{f\tan\psi}{\cos\theta} \\ y_{vp1} &= y_c + f\tan\theta \end{aligned} \tag{1}$$
where f is the focal length of the camera, and $x_c$ and $y_c$ are the coordinates of the principal point. After the third rotation, the final vanishing point coordinates ($x_{vp}$, $y_{vp}$) can be expressed as
$$\begin{bmatrix} x_{vp} - x_c \\ y_{vp} - y_c \end{bmatrix} = \begin{bmatrix} \cos\gamma & \sin\gamma \\ -\sin\gamma & \cos\gamma \end{bmatrix} \begin{bmatrix} x_{vp1} - x_c \\ y_{vp1} - y_c \end{bmatrix} \tag{2}$$
Therefore, we have
$$x_{vp} = x_c + \frac{f\tan\psi}{\cos\theta}\cos\gamma + f\tan\theta\sin\gamma \tag{3}$$
$$y_{vp} = y_c - \frac{f\tan\psi}{\cos\theta}\sin\gamma + f\tan\theta\cos\gamma \tag{4}$$
From Equations (3) and (4), one can see that there are two knowns ($x_{vp}$ and $y_{vp}$) and three unknowns (θ, γ, ψ); therefore, additional constraints are needed to solve the equations.
Here we derive the relative yaw angle from the vanishing point coordinates. Combining Equations (3) and (4) and cancelling out the relative yaw angle ψ, we have
$$(x_{vp} - x_c)\sin\gamma + (y_{vp} - y_c)\cos\gamma = f\tan\theta \tag{5}$$
From Equation (3), we can obtain the relative yaw of the c̄-frame w.r.t. the r-frame:
$$\psi = \arctan\frac{(x_{vp} - x_c - f\tan\theta\sin\gamma)\cos\theta}{f\cos\gamma} \tag{6}$$
Here we assume the relative roll is zero,
$$\hat{\gamma} = 0 \tag{7}$$
Then the relative pitch and yaw can be expressed as
$$\hat{\theta} = \arctan\frac{y_{vp} - y_c}{f} \tag{8}$$
$$\hat{\psi} = \arctan\frac{(x_{vp} - x_c)\cos\hat{\theta}}{f} \tag{9}$$
Note that the relative attitude here is that of the camera intermediate frame c̄ w.r.t. the road frame.
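For concreteness, the zero-roll extraction of Equations (8) and (9) reduces to a few lines of code. The following is a minimal Python sketch (the function and variable names are our own illustration, not from the paper); angles are returned in radians, and the focal length and pixel coordinates are assumed to be in the same units.

```python
import math

def relative_pitch_yaw(x_vp, y_vp, x_c, y_c, f):
    """Relative pitch and yaw (rad) of the camera intermediate frame w.r.t.
    the road frame from a single vanishing point, under the zero-roll
    assumption of Equation (7); f, x_c, y_c, x_vp, y_vp are in pixels."""
    theta = math.atan2(y_vp - y_c, f)                     # Eq. (8)
    psi = math.atan2((x_vp - x_c) * math.cos(theta), f)   # Eq. (9)
    return theta, psi
```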

2.3. Straight Lane Determination and Vanishing Point Detection

Lane marking detection has been widely researched in the autonomous driving literature and is gradually being incorporated into vehicle navigation modules [40]. In this paper, the measurement is derived from image observations of parallel straight lane markings, so straight lane detection should be performed before the VP extraction. The VP extraction is based on the commonly used Hough transform, which extracts dominant lines in the binary edge image.
The following provides more details on the straight lane detection module (shown in Figure 3). First, the region of interest (ROI) is selected from the original image, and the edges are detected using the Canny edge detector. The Hough transform is applied to the bottom ROI, where the markings are nearly straight, to obtain multiple straight-line parameters (in the ρ–θ polar-coordinate space), which are grouped into several lane-marking groups using DBSCAN (Density-Based Spatial Clustering of Applications with Noise) clustering. Only one lane marking in the bottom ROI is selected, and we obtain the lane-marking points of interest by searching the binary image from bottom to top with small regions along the main direction trend. The straight lane marking detection then consists of two parts: (1) curve fitting using the points of interest to provide an initial straight-or-curved decision, and (2) a delayed-decision mechanism based on the accumulated initial decisions across frames.
For the first part, the coordinates of the lane-marking points of interest are stored for a quadratic polynomial curve fit. The initial straight-lane criterion compares the estimated second-degree coefficient with a threshold. Erroneous points may be selected as points of interest, so, for example, a straight lane may be wrongly identified as curved. For this reason, we add the second part, a simple “delayed” decision mechanism, to obtain the final result. It works as follows: the initial “straight” and “curve” decisions are accumulated across image frames; at least $N_c$ curve detections indicate a curved lane, and at least $N_s$ straight-lane detections after a long curved segment determine a straight lane. The values of the thresholds ($N_c$ and $N_s$) are based on the velocity of the vehicle, the heading change, and so on.
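To make the two-part logic concrete, the Python sketch below outlines the per-frame decision and the delayed-decision accumulator. It is a simplified illustration under our own naming: the DBSCAN grouping and ROI bookkeeping are omitted, and the counts N_C and N_S are placeholder values rather than the paper’s tuned thresholds.

```python
import cv2
import numpy as np

A0_THRESHOLD = 1e-4   # second-degree coefficient threshold (value given in Section 5.1)
N_C, N_S = 3, 10      # assumed delayed-decision counts; tuned per vehicle speed

def detect_lines(roi_gray):
    """Canny edges + Hough transform on the bottom ROI; returns (rho, theta) lines."""
    edges = cv2.Canny(roi_gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=80)
    return [] if lines is None else [tuple(l[0]) for l in lines]

def initial_decision(xs, ys):
    """One-frame decision: fit x = a0*y^2 + a1*y + a2 to the selected
    lane-marking points and compare the curvature term with the threshold."""
    a0, _, _ = np.polyfit(ys, xs, 2)
    return 'straight' if abs(a0) < A0_THRESHOLD else 'curve'

class DelayedDecision:
    """Accumulate per-frame decisions; commit only after N_C/N_S agreements."""
    def __init__(self):
        self.curve_count = 0
        self.straight_count = 0
        self.state = 'curve'          # conservative initial state
    def update(self, decision):
        if decision == 'curve':
            self.curve_count += 1
            self.straight_count = 0
        else:
            self.straight_count += 1
            self.curve_count = 0
        if self.curve_count >= N_C:
            self.state = 'curve'
        elif self.straight_count >= N_S:
            self.state = 'straight'
        return self.state
```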

2.4. Measurement Uncertainty

Because the relative attitude measurement will be fused with the INS solution, it is essential to determine the measurement uncertainty. The error sources include the IMU-camera calibration error, the zero-roll assumption error, the vanishing point detection error, and so on. We assume accurate inter-sensor calibration, so the calibration error is not considered here.
We now investigate the measurement uncertainty caused by the zero-roll assumption error. As presented in the following subsection, relative yaw is the main aiding source, so we conduct a sensitivity analysis of the relative yaw w.r.t. the zero-roll assumption error.
From Equations (3), (4) and (6), we derive the partial derivative of ψ w.r.t. γ:
$$\frac{\partial\psi}{\partial\gamma} = \cos^2\psi\,(\tan\psi\tan\gamma - \sin\theta) \tag{10}$$
Assuming the zero-roll assumption error is Δγ, the induced relative yaw error is
$$\Delta\psi = \frac{\partial\psi}{\partial\gamma}\,\Delta\gamma = \cos^2\psi\,(\tan\psi\tan\gamma - \sin\theta)\,\Delta\gamma \tag{11}$$
We give an example to better illustrate how large the induced error can be. The ranges of the relative yaw and the relative pitch are set to [−20°, 20°] and [−3°, 3°], respectively. The true relative roll is set to 3° (which means the relative roll error is 3°). As shown in Figure 4, the maximum absolute value of Δψ is less than 0.2°, occurring at the maximum relative pitch and yaw values.
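This bound can be reproduced numerically. The snippet below is our own illustration: it evaluates Equation (11) over the same grid as Figure 4, where, since the assumed roll is zero, the roll in the tan γ term equals the assumption error itself.

```python
import numpy as np

def induced_yaw_error(psi, theta, gamma_err):
    """Relative yaw error induced by the zero-roll assumption, Eq. (11).
    All angles in radians; with zero assumed roll, the true roll equals
    the assumption error gamma_err."""
    return np.cos(psi) ** 2 * (np.tan(psi) * np.tan(gamma_err)
                               - np.sin(theta)) * gamma_err

# Grid used for Figure 4: yaw in [-20, 20] deg, pitch in [-3, 3] deg, roll error 3 deg.
psi, theta = np.meshgrid(np.radians(np.linspace(-20, 20, 81)),
                         np.radians(np.linspace(-3, 3, 13)))
err = induced_yaw_error(psi, theta, np.radians(3.0))
print(np.degrees(np.abs(err)).max())   # about 0.19 deg, below the 0.2 deg bound
```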
The error of VP detection usually comes from the Hough transform in the line detection; the empirical VP uncertainty is set at the pixel level. Measurement errors can also come from falsely identifying curved lane markings as straight, which should be avoided as much as possible. Due to the nature of inertial navigation, an incorrectly identified straight lane marking can lead to larger errors in the navigation solution than a missed straight line. Therefore, our strategy is to apply relatively strict thresholds, as illustrated in the “Straight Lane Determination” procedure in Figure 3. The influence of false detections is presented in the discussion section.

3. Sensor Fusion Process

This section describes the system model, measurement model, and EKF used for sensor integration using vehicle motion constraints, odometer information, and vanishing point observations. In this paper, we make the following assumptions about the system:
(1) The vehicle is moving on a road segment where the road boundaries or lane markings are parallel;
(2) The camera has been calibrated, i.e., the Interior Orientation Parameters (IOPs) are known. The relative rotation between the IMU frame and the camera frame is also known.

3.1. Filter State

The navigation state is represented by the position (latitude L, longitude λ, and height h), the velocity (east, north, and up velocities), and the attitude (pitch, roll, and heading). The differential equations of the position $P^n$, velocity $V^n$, and attitude matrix $C_b^n$ are
$$\dot{P}^n = \begin{bmatrix} \dot{L} \\ \dot{\lambda} \\ \dot{h} \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{1}{R_M + h} & 0 \\ \dfrac{1}{(R_N + h)\cos L} & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} V^n \tag{12}$$
$$\dot{V}^n = C_b^n f^b - (2\omega_{ie}^n + \omega_{en}^n)\times V^n + g^n \tag{13}$$
$$\dot{C}_b^n = C_b^n[\omega_{ib}^b\times] - [(\omega_{ie}^n + \omega_{en}^n)\times]\,C_b^n \tag{14}$$
where $C_b^n$ is the attitude matrix, $R_M$ and $R_N$ are the meridian and prime vertical radii of curvature, respectively, $f^b$ is the specific force vector measured by the accelerometers, and $\omega_{ib}^b$ is the angular rate vector of the body frame w.r.t. the inertial frame, measured by the gyroscopes. $\omega_{ie}^n$ is the Earth rotation rate vector w.r.t. the inertial frame, and $\omega_{en}^n$ is the angular rate of the navigation frame w.r.t. the ECEF frame. $g^n = [0\;\;0\;\;-g]^T$ is the gravity vector in the navigation frame, and g is the magnitude of the local gravity acceleration. The notation $[a\times]$ denotes the skew-symmetric matrix of a vector a. INS mechanization is the process of calculating the navigation states, propagated through Equations (12)–(14) by numerical integration, given the initial navigation states and the IMU measurements [41,42].
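As a rough illustration of the mechanization in Equations (12)–(14), the sketch below performs one first-order Euler step in the east-north-up frame. This is our own simplified code: a production mechanization would use higher-order integration, coning/sculling compensation, and re-orthonormalization of the attitude matrix.

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix [a x] of a 3-vector."""
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def ins_step(P, V, Cbn, f_b, w_ib, dt, RM, RN, g=9.80665, WE=7.292115e-5):
    """One simplified INS mechanization step, Equations (12)-(14).
    P = (L, lambda, h) in rad/rad/m; V = (Ve, Vn, Vu); Cbn = body-to-nav DCM."""
    L, lam, h = P
    w_ie = WE * np.array([0.0, np.cos(L), np.sin(L)])          # earth rate, n-frame
    w_en = np.array([-V[1] / (RM + h), V[0] / (RN + h),
                     V[0] * np.tan(L) / (RN + h)])             # transport rate
    g_n = np.array([0.0, 0.0, -g])
    P_dot = np.array([V[1] / (RM + h),
                      V[0] / ((RN + h) * np.cos(L)), V[2]])    # Eq. (12)
    V_dot = Cbn @ f_b - np.cross(2 * w_ie + w_en, V) + g_n     # Eq. (13)
    Cbn_dot = Cbn @ skew(w_ib) - skew(w_ie + w_en) @ Cbn       # Eq. (14)
    return P + P_dot * dt, V + V_dot * dt, Cbn + Cbn_dot * dt
```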
The navigation states can be considered the full states, and the associated errors are called error states. For EKF-based integrated navigation systems, the error states are estimated and used to correct the full states once the measurement update is accomplished [43]. For convenience, the filter state refers to the error state, denoted as X. For the normal INS-based EKF, the state is a 15-dimensional vector (not including the scale factors of the IMU sensors):
$$X_I = \begin{bmatrix} \delta V^{nT} & \phi^{nT} & \delta p^T & b_g^T & b_a^T \end{bmatrix}^T \tag{15}$$
where the navigation-state error vectors $\delta V^n$, $\phi^n$, and $\delta p$ are the velocity, attitude, and position errors, respectively. The INS error model describes the differential equations of the navigation-state errors. $b_g$ and $b_a$ are the bias vectors of the gyroscopes and accelerometers, each element of which is modeled as a first-order Gauss-Markov process.
Here we augment two groups of states, one for the motion constraints aiding and the other for the VP aiding.
$$X = \begin{bmatrix} X_I^T & \underbrace{\delta\beta_x \quad \delta\beta_z \quad K_{od} \quad \delta r^{bT}}_{\text{motion constraint}} & \underbrace{\phi_{U0}}_{\text{VP aiding}} \end{bmatrix}^T \tag{16}$$
where $\delta\beta_x$ and $\delta\beta_z$ are the misalignment errors between the b-frame and the m-frame about the x and z axes, $\delta r^b$ is the lever-arm error vector of the IMU measurement origin w.r.t. the origin of the m-frame, and $K_{od}$ is the scale-factor error of the odometer. $\phi_{U0}$ is the cloned value of the third component of $\phi^n$ at the beginning of the detection of a straight lane segment. Thus, the dimension of the system state is 22.
The reason we augment the error states $\delta\beta_x$, $\delta\beta_z$, $K_{od}$, and $\delta r^b$ is to improve the accuracy of the system. Specifically, these quantities may not be calibrated beforehand or may not be accurate, and they can change across driving scenarios due to the suspension system and varying tyre pressures. In this paper, $\delta\beta_x$, $\delta\beta_z$, $K_{od}$, and $\delta r^b$ are modeled as random constants. $\phi_{U0}$ is also modeled as a random constant because it does not propagate with time.

3.2. Motion Constraint Aiding

Here the Non-holonomic Constraint (NHC) is applied as a velocity constraint along the body x and z axes. It is based on the assumption that a land vehicle neither jumps off the ground nor slides sideways under normal conditions. The forward velocity can be derived from the wheel odometer if available. In this case, the odometer measurement and the pseudo-measurements of lateral and vertical velocity in the vehicle motion frame are formed as
$$\tilde{V}_{veh}^m = \begin{bmatrix} 0 \\ \tilde{V}_{od} \\ 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 + K_{od} & 0 \\ 0 & 0 & 1 \end{bmatrix} V_{veh}^m + v_V \tag{17}$$
where $V_{veh}^m$ is the vehicle velocity in the m-frame, $\tilde{V}_{veh}^m$ is the combined NHC pseudo-measurement and velocity-indicator measurement, and $v_V$ is the measurement noise vector.
Based on the equation of relative linear motion in [44], a basic relationship exists between IMU body frame velocity and vehicle frame velocity:
$$V_{veh}^m = C_b^m\left(V^b + [\omega_{eb}^b\times]\,r^b\right) \tag{18}$$
where $V^b$ is the IMU velocity in the b-frame, $[\omega_{eb}^b\times]$ is the skew-symmetric matrix of the angular rate of the b-frame w.r.t. the Earth-centered Earth-fixed (ECEF) frame, and $r^b$ and $C_b^m$ are the lever-arm vector and the rotation matrix between the m-frame and the b-frame.
Based on Equation (18), we can estimate $\hat{V}_{veh}^m$ from the INS as follows,
$$\hat{V}_{veh}^m = \hat{C}_b^m\left(\hat{C}_n^b\hat{V}^n + [\hat{\omega}_{eb}^b\times]\,\hat{r}^b\right) \tag{19}$$
where $\hat{V}^n$ and $\hat{C}_n^b$ are the INS-calculated velocity and attitude matrix, and $\hat{C}_b^m$ and $\hat{r}^b$ are estimated from calibration, or otherwise from an initial guess (for example, an identity matrix and a zero vector, respectively, for nearly aligned b and m frames).
The residual measurement is formed by subtracting the odometer/NHC pseudo-measurement from the INS-derived vehicle velocity:
$$Z_{NHC/OD} = \hat{V}_{veh}^m - \tilde{V}_{veh}^m \tag{20}$$
Neglecting second-order error terms, the measurement equation can be written as [29]
$$Z_{NHC/OD} = \hat{C}_n^m\,\delta V^n - \hat{C}_n^m[\hat{V}^n\times]\,\phi^n + \hat{C}_b^m(\hat{\omega}_{eb}^b\times)\,\delta r^b + \begin{bmatrix} 0 & -V_y & 0 \\ 0 & 0 & -V_y \\ V_y & 0 & 0 \end{bmatrix}\begin{bmatrix} \delta\beta_x \\ \delta\beta_z \\ K_{od} \end{bmatrix} + v_V \tag{21}$$
where $V_y$ is the forward speed measured by the speed sensor or calculated from the INS.
NHC aiding acts as a virtual sensor for land vehicle navigation and can be applied in most driving conditions. If there is no odometer in the system, we select the first and third rows of Equation (21) to form the measurement.
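A minimal sketch of the residual construction in Equations (19) and (20) is given below (our own Python illustration; names are hypothetical). It also reflects the row selection used when no odometer is present.

```python
import numpy as np

def nhc_od_residual(V_n, Cnb, Cbm, w_eb_b, r_b, v_od=None):
    """NHC/odometer measurement residual, Equations (19)-(20).
    Returns all three rows when an odometer speed v_od is available,
    otherwise only the lateral (x) and vertical (z) rows."""
    V_veh = Cbm @ (Cnb @ V_n + np.cross(w_eb_b, r_b))    # Eq. (19), INS prediction
    if v_od is None:
        return V_veh[[0, 2]]                             # NHC-only pseudo-measurements
    return V_veh - np.array([0.0, v_od, 0.0])            # Eq. (20)
```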

3.3. Vanishing Point Aiding

In Section 2, we calculated the relative attitude from the vanishing point coordinates. We now describe how this information can be used to aid the INS. As usual, we construct a measurement that is related to the state.
When the VP is detected for the first time (at time $t_0$) on a straight lane segment, we can calculate the attitude of this road segment ($\hat{C}_r^n$) w.r.t. the navigation frame:
$$\hat{C}_r^n = \hat{C}_{b,0}^n\,\tilde{C}_r^{b,0} = \hat{C}_{b,0}^n\,C_c^b\,(\tilde{C}_{c,0}^r)^T \tag{22}$$
where $\hat{C}_{b,0}^n$ is the INS attitude matrix at $t_0$, $C_c^b$ is the IMU-camera rotation matrix, and $\tilde{C}_{c,0}^r$ is the camera attitude matrix w.r.t. the road frame at time $t_0$, which is calculated by
$$\tilde{C}_{c,0}^r = \tilde{C}_{\bar{c},0}^r\,C_c^{\bar{c}} = f_{DCM}[\hat{\theta},\hat{\gamma},\hat{\psi}]_{t_0}\cdot f_{DCM}[90^\circ,0,0] \tag{23}$$
where $f_{DCM}(\cdot)$ is the function that calculates the direction cosine matrix from the Euler angles. The value of $\hat{C}_r^n$ is then stored for later use. As the vehicle moves on this road segment (at time t), we obtain the relative attitude from the VP measurement:
$$\tilde{C}_{c,t}^r = f_{DCM}[\hat{\theta},\hat{\gamma},\hat{\psi}]_t\cdot f_{DCM}[90^\circ,0,0] \tag{24}$$
Then we can derive the attitude matrix $\tilde{C}_{b_{VP}}^n$ from the VP module at time t:
$$\tilde{C}_{b_{VP}}^n = \hat{C}_r^n\,\tilde{C}_{c,t}^r\,C_b^c \tag{25}$$
The measurement matrix is constructed as:
$$Z_M = \hat{C}_b^n\,(\tilde{C}_{b_{VP}}^n)^T \tag{26}$$
where $\hat{C}_b^n$ is the current attitude matrix from the INS.
We then derive the relationship between the measurement matrix and the state under the small-error-angle assumption:
$$Z_M \approx (I - [\phi^n\times])\,C_b^n\,C_r^b\,(I + [\vartheta_{VP,t}^r\times])(I - [\vartheta_{VP,0}^r\times])\,C_{b,0}^r\,C_n^{b,0}\,(I + [\phi_0^n\times]) \tag{27}$$
Neglecting second-order error terms,
$$Z_M = I - [(\phi^n - \phi_0^n + \vartheta_{VP,0}^r - \vartheta_{VP,t}^r)\times] \tag{28}$$
where I is the 3 × 3 identity matrix, $\phi_0^n$ is the attitude error at time $t_0$, and $\vartheta_{VP,t}^r$ and $\vartheta_{VP,0}^r$ are the relative attitude errors from the VP at the current time t and at time $t_0$, respectively. The vector form of the measurement is
$$Z_V = \frac{1}{2}\begin{bmatrix} Z_M(3,2) - Z_M(2,3) \\ Z_M(1,3) - Z_M(3,1) \\ Z_M(2,1) - Z_M(1,2) \end{bmatrix} = \phi^n - \phi_0^n + \vartheta_{VP,0}^r - \vartheta_{VP,t}^r \tag{29}$$
Here we do not want to incorporate the erroneous pitch/roll information from the VP, which may deteriorate the pitch/roll estimation. Hence, the final measurement is the third component of $Z_V$:
$$Z_{VP} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} Z_V = \phi_U - \phi_{U0} + v_{VP} \tag{30}$$
where $v_{VP}$ is the measurement noise.
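The measurement construction in Equations (22)–(30) can be sketched as follows. This is a Python illustration under our own naming and elementary frame-rotation conventions; the actual sign and rotation-order conventions must match the implementation’s DCM definitions.

```python
import numpy as np

def _rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def _ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def _rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def C_r_c(theta, gamma, psi):
    """Camera-to-road DCM from the VP-derived Euler angles, Eqs. (23)-(24):
    the r-to-cbar DCM is Ry(gamma) Rx(theta) Rz(psi) (frame-rotation
    convention), and cbar is a 90 deg frame rotation about x from the camera."""
    return (_ry(gamma) @ _rx(theta) @ _rz(psi)).T @ _rx(np.pi / 2)

def road_attitude_at_t0(Cbn0_hat, Cbc, theta0, psi0):
    """Eq. (22): store the road-segment attitude when a straight lane is
    first detected (Cbc is the IMU-to-camera DCM C^c_b)."""
    return Cbn0_hat @ Cbc.T @ C_r_c(theta0, 0.0, psi0).T

def vp_heading_residual(Cbn_hat, Crn_hat, Cbc, theta, psi):
    """Eqs. (25)-(30): scalar VP measurement z = phi_U - phi_U0 + noise."""
    Cbn_vp = Crn_hat @ C_r_c(theta, 0.0, psi) @ Cbc       # Eq. (25)
    Zm = Cbn_hat @ Cbn_vp.T                               # Eq. (26)
    Zv = 0.5 * np.array([Zm[2, 1] - Zm[1, 2],
                         Zm[0, 2] - Zm[2, 0],
                         Zm[1, 0] - Zm[0, 1]])            # Eq. (29)
    return Zv[2]                                          # Eq. (30)
```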

3.4. State Augmentation Treatment

We treat $\phi_{U0}$ as an augmented state: it is the clone of $\phi_U$ at the very beginning of a detected straight lane segment. This is based on the idea of the Stochastic Cloning Kalman filter [35,36]. Stochastic Cloning deals with relative state measurements that depend on the current state as well as a previous state of the system. Its core is accounting for the interdependencies (cross-correlation terms) between the previous and current states. At the start time ($t_0$) of the relative measurement, the relevant state is cloned and augmented into the state vector, and the state covariance is also augmented, considering the cross-correlation between the original and cloned states. This essential step is often called “re-initialization”, because the cloned state has a life-time. During the life-time of the cloned state, the EKF will evolve the state covariance matrix properly as long as the cloned state is modeled as a random constant.
Denote the first 21 states as the normal states $X_n$ and the last (cloned) state as $X_c$, such that the state in Equation (16) is written as
$$X = [\,X_n^T \;\; X_c\,]^T \tag{31}$$
Once the start of a new straight line segment is detected, we re-initialize the cloned state augmentation. The re-initialization is established as follows:
  • Clone the value of $X_n(6)$ to $X_c$;
  • Set the state covariance as
$$P_k = \begin{bmatrix} I_{21} \\ J_{1\times21} \end{bmatrix} P_{nn} \begin{bmatrix} I_{21} \\ J_{1\times21} \end{bmatrix}^T = \begin{bmatrix} P_{nn} & P_{nc} \\ P_{nc}^T & P_{cc} \end{bmatrix} \tag{32}$$
where $P_{nn}$ is the covariance matrix of the normal states $X_n$, $P_{cc}$ is the variance of the cloned state $X_c$, and $P_{nc}$ is the cross-correlation between the normal and cloned states. $J_{1\times21}$ is the Jacobian of $X_c$ w.r.t. $X_n$:
$$J_{1\times21} = \begin{bmatrix} O_{1\times3} & 0 & 0 & 1 & O_{1\times15} \end{bmatrix} \tag{33}$$
From this calculation, we find that $P_{cc}$ is the (6, 6) element of $P_{nn}$, which is the variance of the attitude error about the up direction.
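The re-initialization in Equations (32) and (33) is a one-line covariance transform; a minimal sketch (our own illustration, with the heading-error state at zero-based index 5) is:

```python
import numpy as np

def reinitialize_clone(P_nn):
    """Covariance re-initialization for the cloned heading-error state,
    Equations (32)-(33). P_nn is the 21x21 normal-state covariance; the
    returned matrix is 22x22 with the cross-correlation terms filled in."""
    J = np.zeros((1, 21))
    J[0, 5] = 1.0                       # X_n(6): up component of attitude error
    T = np.vstack([np.eye(21), J])      # [I; J]
    return T @ P_nn @ T.T               # [[Pnn, Pnc], [Pnc^T, Pcc]]
```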
The flowchart of integrating the vanishing point module and the motion constraint module with the INS is illustrated in Figure 5. In the VP aiding module, when a set of new parallel straight lane markings is first detected, the road attitude of this segment is calculated and stored for future vehicle heading computation, and the state and covariance re-initialization is performed. Subsequent VP observations are then used to construct the measurement residual for the measurement update.

4. Results

The algorithm described in the preceding sections has been evaluated using simulations and real experiments; the results are given in this section. The simulations and experiments show the validity of using the VP of lane markings to mitigate the INS heading error and thereby achieve better positioning accuracy.

4.1. Simulation Results

We conducted Monte Carlo simulations to evaluate the performance of the VP-aided land vehicle navigation algorithm. Specifically, we compared two schemes: INS/NHC/Odometer integration with and without VP aiding. A trajectory lasting 356 s was designed and generated, containing several straight segments, turns, accelerations, and climbing maneuvers. As shown in Figure 6, the main trend is along the north direction. The IMU, camera, odometer, and GNSS parameters and the lever-arm and boresight errors of the IMU are listed in Table 1. The distance traveled is 1571 m during the GNSS outage, which starts at 80 s and lasts until the end of the simulation. The valid VP flags over time are shown in Figure 7, indicating when the VP is used as a measurement.
The estimated IMU-vehicle boresight misalignment and odometer scale factor are shown in Figure 8. It can be seen that the boresight misalignment angles $\beta_x$ and $\beta_z$ and the odometer scale factor $K_{od}$ are quickly estimated within the first tens of seconds. The reason is that the vehicle undergoes accelerations and decelerations during this period, and these maneuvers make the misalignment and scale factor observable [38]. The subsequent turns also benefit their observability, contributing to good estimation accuracy.
The east and north position errors, heading error, and vertical gyro bias estimation error of the two schemes, with 50 Monte Carlo trials and the 3-sigma slope, are illustrated in Figure 9 and Figure 10. We can see from the 3-sigma RMS error slope that the overall performance of INS/NHC/Odometer with VP aiding is better than that without VP aiding. For the final cross-track error (the east position error in this trajectory), the VP-based method achieves 0.32% DT (1σ), a 33% improvement compared with the no-VP method (0.48% DT, 1σ). The along-track errors (north direction) are almost the same, and relatively small, for both schemes, because the forward velocity information from the odometer is utilized, which benefits the forward positioning accuracy. As illustrated in Figure 10a,b, the VP-based method has better heading accuracy, and the heading error grows more slowly when the VP measurement is valid during the GNSS outage. This is the primary reason for the improved lateral positioning performance. For the vertical gyroscope bias estimation, we can see that when VP aiding is used, nearly half of the bias has been estimated by the end, which outperforms the method without VP aiding (Figure 10c,d).

4.2. Experimental Results

The vanishing point and NHC aided fusion algorithm described in the preceding section was tested using the “2011_09_26_drive_0028” and “2011_09_26_drive_0101” datasets (shortened to Path-0028 and Path-0101, respectively) from the KITTI benchmark dataset [45]. The raw IMU data (100 Hz) and rectified grayscale images (10 Hz, global shutter) are used to verify the algorithm. The biases of the gyroscopes and accelerometers are 36°/h and 1 mg, respectively. The reference is the navigation result from an IMU/GPS system with L1/L2 RTK positioning accuracy of 0.01 m, pitch/roll accuracy of 0.03°, and heading accuracy of 0.1° [46]. A rough value of the lever-arm between the IMU and the vehicle frame can be calculated from the IMU’s mounting position. The boresight angles between the IMU and the vehicle frame are unknown, so the initial values are assumed to be zero.
To illustrate the performance of the proposed algorithms, we simulate GPS outages on the trajectory. There are no wheel odometer or Doppler radar data in the dataset. For Path-0028, we compared the performance of the following navigation schemes.
(1) Free INS: only INS mechanization is performed to calculate the navigation states.
(2) INS/NHC: NHC is triggered every 1 s with a measurement noise of 0.1 m/s (1σ).
(3) INS/NHC/VP: the vanishing point aiding module is added to the system. It is triggered by straight lane-marking detection and a valid vanishing point measurement.
For Path-0101, we added an odometer output module derived from the reference data to perform INS/NHC/OD integration. The performance is compared among free INS, INS/NHC/OD, and INS/NHC/OD/VP.
In the VP detection, we select three Hough peaks in the image and take the three pairwise intersections of the corresponding lines for better accuracy and robustness; the vanishing point coordinates are determined by averaging the three intersections. A threshold test is conducted to detect wrong lane-marking selections, and a χ² statistic on the measurement residuals is used for blunder detection to further improve the robustness of the VP-based measurement update. The valid VP flags over time are recorded and shown in Figure 11 for Path-0028 and Path-0101, indicating when the VP can be used as a measurement.
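For illustration, averaging the pairwise intersections of the dominant Hough lines can be written as below (our own sketch; each line is in (ρ, θ) normal form, x cos θ + y sin θ = ρ).

```python
import numpy as np

def vp_from_hough_lines(lines):
    """Average the pairwise intersections of the dominant Hough lines
    (rho, theta) to get a robust vanishing-point estimate in pixels."""
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i], lines[j]
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) < 1e-9:
                continue                      # near-parallel pair, skip
            pts.append(np.linalg.solve(A, np.array([r1, r2])))
    return np.mean(pts, axis=0) if pts else None
```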

4.2.1. Results of Experiment #1: Path-0028

Path-0028 is approximately 776 m through dense trees and lasts 45 s (430 images), containing several direction changes, as shown in Figure 12a. Here, the GPS data were blocked for the whole trajectory except for the initial values of the navigation states.
The position estimation results of the three navigation schemes are shown in Figure 12, and a summary of the navigation errors is given in Table 2. We can see that INS/NHC integration improves the positioning accuracy by about 71% (the horizontal error drops from 14 m to 4 m). When the VP module is added, a further 33% improvement over the INS/NHC scheme is achieved, and the positioning error reduces to 0.32% DT. Pitch estimation improves when NHC is applied, while roll estimation does not; the reason is that NHC inherently relates the vertical velocity to the pitch angle, which makes the pitch angle observable. There is no improvement in pitch/roll estimation when VP is added to INS/NHC, since only relative yaw information is utilized. As illustrated in Figure 13, the heading error becomes smaller when the VP aiding works, which is the main reason for the increased positioning accuracy. The vertical gyroscope bias (not shown here) is estimated to a value of 3°/h when using VP. In this sense, we can see the complementary benefits to the INS of using motion constraints and VP observations together.
The estimates of the relative pose (lever-arm and boresight misalignment) between the IMU frame and the vehicle frame are presented in Figure 14. The misalignment about the pitch axis reaches about 0.4°. If the misalignment were not augmented into the states, the induced vertical velocity error would be 0.126 m/s at a speed of 18 m/s. One way to account for this is to increase the measurement noise variance of the NHC; however, the biased measurement error would still degrade the system performance. State augmentation has the potential to achieve improved results, provided the augmented states are observable.

4.2.2. Results of Experiment #2: Path-0101

Path-0101 lasts 91 s, and the total traveled distance is about 1236 m, with a standstill at the beginning. The simulated GNSS outage starts at 32 s and lasts until the end of the trajectory. The positioning results and navigation errors are shown in Figure 15 and Table 3, respectively. In the INS/NHC/OD scheme, the horizontal position error at the last epoch is improved by 61% compared with free INS. With the help of the VP module, the cross-track error (mostly in the north direction) is further improved by over 50% to an RMS of 0.153 m, and the heading RMS error is reduced from 0.21° to 0.167°.
The lane markings along the whole trajectory are generally straight, with several slightly curved segments. In the experiments, we find that if these slightly curved lane markings are treated as straight and their VP measurements are used, there are large variations in the heading and lateral position estimates, although the mean values of their errors are small. That is to say, a small change in the lane-marking direction will cause the heading estimate to drift in the opposite direction if the change is not detected. Thus, careful and strict treatment is required in the straight lane detection module, as discussed in the following section.

5. Discussion

5.1. Robustness of VP-Aiding Approach

There may still be circumstances in which a lane marking with small curvature is wrongly identified as a straight line. Assume the vehicle moves on a curved road while the camera detects the lane markings as straight. When the initial road frame $r_0$ is obtained, it is registered and acts as the reference frame while the camera is still observing the “same” segment. However, the true road frame changes along the curved road, as shown in Figure 16, from the $r_1$-frame to the $r_3$-frame, for instance. We now describe its influence on the Kalman filter. In this situation, the attitude matrix from the VP module described in Equation (25) changes to the following form:
$$\tilde{C}_{b_{VP}}^n = \hat{C}_{r_0}^n\,\tilde{C}_{c,t}^{r_i}\,C_b^c \tag{34}$$
where the subscript $r_0$ denotes the initial road frame and the superscript $r_i$ (i = 1, 2, …) the instantaneous road frame along the road. After the derivation, the measurement in Equation (30) can be written as
$$Z_{VP} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} Z_V = \phi_U - \phi_{U0} - \alpha_i + v_{VP} \tag{35}$$
where $\alpha_i$ is the angle from the $r_0$-frame to the $r_i$-frame about the vertical axis. The term $-\alpha_i + v_{VP}$ is considered the measurement noise, so the sensitivity analysis becomes an analysis of the effect of the measurement error $\alpha_i$ on the filter performance. The most benign situation is that the road lane markings are randomly curved, i.e., the direction changes slightly from one side to the other at random, such that $\alpha_i$ can be considered zero-mean white noise. The worst case is a road bending in one direction, causing $\alpha_i$ to grow continuously, which violates the Kalman filter assumptions: the heading error state will be erroneously estimated, and the z-axis gyro bias will also be affected. This kind of “soft” failure can be detected, for example, by the AIME (Autonomous Integrity Monitored Extrapolation) method, whose concept is to form the averaged normalized innovation from the last N measurements [4]. The AIME innovation test statistic for the filter at time k is [47]
$$s_k^2 = \delta z_\mu^T\,C_\mu^{-1}\,\delta z_\mu \tag{36}$$
where
$$C_\mu^{-1} = \sum_{i=k-N+1}^{k} C_i^{-1} \tag{37}$$
$$\delta z_\mu = C_\mu \sum_{i=k-N+1}^{k} C_i^{-1}\,\delta z_i \tag{38}$$
where $\delta z_i$ and $C_i$ are the filter innovation and its covariance at time i. The test statistic has a chi-square distribution with degrees of freedom equal to the measurement vector size, and the chi-square test is conducted for fault detection. Here, the test statistic $s_k^2$ has a chi-square distribution with one degree of freedom, since the VP measurement is one-dimensional. Two parameters must be set for the fault detection: the threshold $T_{AIME}$ to be compared with the test statistic $s_k^2$, and the sample size N.
If $s_k^2$ exceeds $T_{AIME}$, a soft fault is reported. The threshold $T_{AIME}$ can be calculated, or looked up in a table, for a given significance level (α) or confidence level (1 − α). Here we choose the usual significance level α = 0.05 (false-alarm rate), so that $T_{AIME} = 3.84$ with one degree of freedom. The selection of $T_{AIME}$ is a trade-off between detection sensitivity and false-alarm rate.
The selection of the sample size N is a trade-off between detection sensitivity and response time. The larger the sample size, the easier the detection; however, a faster response time is also desired so that the state estimates are less contaminated. In our setup, N is selected as 50 for the 10 Hz image rate, which means a 5-s time window for the fault detection. As shown in Table 4, this can detect even the very slowly drifting case (#4: radius = 8000 m) within the 5-s time window. Larger drifts can be detected more quickly if multiple test statistics with different sample sizes are computed.
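A minimal sketch of the scalar AIME test in Equations (36)–(38) is shown below (our own illustration with the values adopted here, N = 50 and T_AIME = 3.84).

```python
from collections import deque

class AimeDetector:
    """AIME soft-fault detector for the scalar VP measurement,
    Equations (36)-(38), with N = 50 and T_AIME = 3.84 (alpha = 0.05, 1 dof)."""
    def __init__(self, n=50, threshold=3.84):
        self.buf = deque(maxlen=n)       # (innovation, innovation variance) pairs
        self.threshold = threshold

    def update(self, dz, c):
        """Add one innovation dz with variance c; return True if a soft
        fault is detected (VP updates should then be rejected)."""
        self.buf.append((dz, c))
        inv_c_mu = sum(1.0 / ci for _, ci in self.buf)                    # Eq. (37)
        dz_mu = (1.0 / inv_c_mu) * sum(dzi / ci for dzi, ci in self.buf)  # Eq. (38)
        s2 = dz_mu ** 2 * inv_c_mu                                        # Eq. (36)
        return s2 > self.threshold
```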
To verify the robustness of the proposed algorithm, we simulate several trajectories, each consisting of a straight lane followed by a curved lane with a different radius (the worst case mentioned above). The straight and curved parts each last 120 s, at a constant speed of 10 m/s.
The sensors’ specifications are listed in Table 1. The GNSS is available during the first 80 s, while motion constraints are used throughout the trajectory. To get a better idea of how the curviness affects the proposed method, we examine three VP-aiding schemes, namely,
(1) No VP: only motion constraints are used to aid the INS;
(2) VP: the proposed VP-aiding method is used without soft-fault detection;
(3) VP & AIME: the proposed VP-aiding method is used and soft-fault detection (AIME) is utilized.
Monte Carlo simulations of 50 runs are conducted for each scheme and each trajectory. The corresponding curved-lane detection and heading accuracy results (1-sigma RMS errors at the endpoint) are presented in Table 4. The heading and vertical gyro bias estimation errors for trajectory #3 are illustrated in Figure 17 (without AIME) and Figure 18 (with AIME).
For the straight line detection algorithm (with the threshold of the second-degree term $a_0 = 1\times10^{-4}$ in Figure 3 in our case), we find that it reports the curved lane successfully when the curve radius is less than 3150 m (for example, the case with radius R = 3000 m). When the radius is larger (cases #2 to #4 in Table 4), the straight line detection algorithm wrongly detects the lane as straight. As a result, VP aiding exhibits very large heading and vertical-gyro bias estimation errors: the more curved the lane, the larger the errors. The heading error growth rate is approximately the turning rate on such a curved lane. For example, for R = 4584 m, the turning rate is 10 m/s ÷ 4584 m ≈ 450°/h, and the heading error growth rate is 14.18°/120 s ≈ 425.4°/h.
When we add the AIME soft-fault detection function (with an AIME detection sample size N of 50, i.e., 5 s of measurements), the heading and vertical-gyro bias errors drop significantly compared with VP without AIME. The heading error is slightly larger than in the scheme without VP, because VP still operates during the 5-s detection period. The faulty measurements directly influence the heading angle, while the vertical-gyro bias estimation is less affected by the short fault interval, so the vertical-gyro bias error is even smaller than in the scheme without VP in Table 4. As a result, AIME is suggested for soft-failure detection in the proposed VP-aided INS.

5.2. A Complement to Point-Based VO-Aiding

We emphasize that the proposed method is not necessarily superior to conventional point-based VO; it acts as a complementary method to aid the INS. One of the main advantages of the VP-aiding method is that it does not need to track or match features across frames. Compared with point features, lane markings are "higher-level" features containing road direction information; therefore, our method has the potential to incorporate a digital map to obtain absolute heading information for additional aiding, which is related to our previous work [29]. However, the disadvantages are also obvious: VP aiding only works in environments with parallel lines, and the lane VPs provide only one dimension of attitude information. Conventional VO aiding imposes no shape requirement on the scene and can output three-dimensional relative rotation and relative translation (up to scale in the monocular case).
The main challenge for conventional monocular VO is data degeneracy in the pose estimation. The essential (E) matrix is unstable when the feature points lie close to a plane (for example, on the ground plane) or when there is no motion or pure rotation of the camera. On the contrary, homography (H) computation requires the feature points to lie on a plane or pure rotation of the camera. To handle this, the degenerate configurations must be detected and the estimator switched automatically between E and H, which leads to additional computation cost [16].
As VP-aiding and VO-aiding utilize different environmental features, they can work together to improve accuracy and robustness of the vision-aided INS.

6. Conclusions

In this work, we have investigated an alternative approach to mitigating navigation error growth by using vanishing point observations to aid the INS for land vehicle navigation applications. The VP module, extracting information from parallel lane markings, is added to the modified NHC/odometer aiding module to further constrain the navigation errors during GNSS outages. The main contributions are the development of the VP-aided INS measurement model and of the motion constraint aiding module considering the body-to-vehicle frame misalignment. Monte Carlo simulations and real experiments have shown smaller heading and position drift when fusing VP measurements and motion constraints into the navigation system, owing to the complementary information these two aiding modules provide to the INS.

Author Contributions

Z.L. conceived and designed the experiments; Z.L. and C.Y. analyzed the data; Z.L. and Y.Q. contributed simulation tools; Z.L. wrote the paper. N.E.-S. supervised the whole work and provided very important technical feedback.

Acknowledgments

This work was supported by Naser El-Sheimy’s research funds from the NSERC and Canada Research Chairs programs. The first author would also like to thank the China Scholarship Council and the University of Calgary for their support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eckhoff, D.; Sofra, N.; German, R. A performance study of cooperative awareness in ETSI ITS G5 and IEEE WAVE. In Proceedings of the 2013 10th Annual Conference on Wireless On-Demand Network Systems and Services, WONS 2013, Banff, AB, Canada, 18–20 March 2013; pp. 196–200.
  2. European Telecommunications Standards Institute. Intelligent Transport Systems (ITS)—Vehicular Communications—Basic Set of Applications—Part 2: Specification of Cooperative Awareness Basic Service ETSI EN 302 637-2 V1.3.2; Technical Report; European Telecommunications Standards Institute: Sophia Antipolis, France, 2014.
  3. Petit, J.; Shladover, S.E. Potential Cyberattacks on Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2015, 16, 546–556.
  4. Liu, Y.; Fu, Q.; Liu, Z.; Li, S. GNSS spoofing detection ability of a loosely coupled INS/GNSS integrated navigation system for two integrity monitoring methods. In Proceedings of the 2017 International Technical Meeting of the Institute of Navigation, ITM 2017, Monterey, CA, USA, 30 January–2 February 2017.
  5. Dissanayake, G.; Sukkarieh, S.; Nebot, E.; Durrant-Whyte, H. The aiding of a low-cost strapdown inertial measurement unit using vehicle model constraints for land vehicle applications. IEEE Trans. Robot. Autom. 2001, 17, 731–747.
  6. Niu, X.; Nassar, S.; El-Sheimy, N. An Accurate Land-Vehicle MEMS IMU/GPS Navigation System Using 3D Auxiliary Velocity Updates. J. Inst. Navig. 2007, 54, 177–188.
  7. Atia, M.M.; Liu, S.; Nematallah, H.; Karamat, T.B.; Noureldin, A. Integrated indoor navigation system for ground vehicles with automatic 3-D alignment and position initialization. IEEE Trans. Veh. Technol. 2015, 64, 1279–1292.
  8. Yu, C.; El-Sheimy, N.; Lan, H.; Liu, Z. Map-Based Indoor Pedestrian Navigation Using an Auxiliary Particle Filter. Micromachines 2017, 8, 225.
  9. Attia, M.; Moussa, A.; El-Sheimy, N. Bridging integrated GPS/INS systems with geospatial models for car navigation applications. In Proceedings of the 23rd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS 2010), Portland, OR, USA, 21–24 September 2010; pp. 1697–1703.
  10. Tardif, J.P.; George, M.; Laverne, M.; Kelly, A.; Stentz, A. A new approach to vision-aided inertial navigation. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 18–22 October 2010; pp. 4161–4168.
  11. Schmid, K.; Ruess, F.; Suppa, M.; Burschka, D. State estimation for highly dynamic flying systems using key frame odometry with varying time delays. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 7–12 October 2012; pp. 2997–3004.
  12. Veth, M.J. Fusion of Imaging and Inertial Sensors for Navigation. Ph.D. Thesis, Air Force Institute of Technology, Dayton, OH, USA, 2008.
  13. Mourikis, A.I.; Roumeliotis, S.I. A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3565–3572.
  14. Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-based visual-inertial odometry using nonlinear optimization. Int. J. Robot. Res. 2015, 34, 314–334.
  15. Caprile, B.; Torre, V. Using vanishing points for camera calibration. Int. J. Comput. Vis. 1990, 4, 127–139.
  16. Bazin, J.C.; Demonceaux, C.; Vasseur, P.; Kweon, I.S. Motion estimation by decoupling rotation and translation in catadioptric vision. Comput. Vis. Image Underst. 2010, 114, 254–273.
  17. Keßler, C.; Ascher, C.; Flad, M.; Trommer, G.F. Multi-sensor indoor pedestrian navigation system with vision aiding. Gyroscopy Navig. 2012, 3, 79–90.
  18. Ruotsalainen, L.; Kuusniemi, H.; Bhuiyan, M.Z.H.; Chen, L.; Chen, R. A two-dimensional pedestrian navigation solution aided with a visual gyroscope and a visual odometer. GPS Solut. 2013, 17, 575–586.
  19. Camposeco, F.; Pollefeys, M. Using vanishing points to improve visual-inertial odometry. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 5219–5225.
  20. Williams, B.; Hudson, N.; Tweddle, B.; Brockers, R.; Matthies, L. Feature and pose constrained visual aided inertial navigation for computationally constrained aerial vehicles. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 431–438.
  21. Hwangbo, M.; Kanade, T. Visual-inertial UAV attitude estimation using urban scene regularities. In Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 2451–2458.
  22. Kim, S.B.; Bazin, J.C.; Lee, H.K.; Choi, K.H.; Park, S.Y. Ground vehicle navigation in harsh urban conditions by integrating inertial navigation system, global positioning system, odometer and vision data. IET Radar Sonar Navig. 2011, 5, 814.
  23. Bazin, J.C.; Demonceaux, C.; Vasseur, P.; Kweon, I. Rotation estimation and vanishing point extraction by omnidirectional vision in urban environment. Int. J. Robot. Res. 2012, 31, 63–81.
  24. Schwarze, T.; Lauer, M. Robust ground plane tracking in cluttered environments from egocentric stereo vision. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 2442–2447.
  25. Lee, B.; Zhou, J.; Ye, M.; Guo, Y. A Novel Approach to Camera Calibration Method for Smart Phones. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; Volume XLI-B5, pp. 49–54.
  26. Seo, Y.W.; Rajkumar, R.R. Use of a Monocular Camera to Analyze a Ground Vehicle’s Lateral Movements for Reliable Autonomous City Driving. In Proceedings of the IEEE IROS Workshop on Planning, Perception and Navigation for Intelligent Vehicles, Seattle, WA, USA, 26–30 May 2013; pp. 197–203.
  27. Tao, Z.; Bonnifait, P.; Fremont, V.; Ibanez-Guzman, J. Lane marking aided vehicle localization. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, ITSC, The Hague, The Netherlands, 6–9 October 2013; pp. 1509–1515.
  28. Cui, D.; Xue, J.; Zheng, N. Real-Time Global Localization of Robotic Cars in Lane Level via Lane Marking Detection and Shape Registration. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1039–1050.
  29. Liu, Z.; El-Sheimy, N.; Yu, C.; Qin, Y. Vanishing Point/Vehicle Motion Constraints Aided Ground Vehicle Navigation. In Proceedings of the 2017 International Technical Meeting of the Institute of Navigation, Monterey, CA, USA, 30 January–2 February 2017; pp. 292–300.
  30. Chatzi, E.N.; Smyth, A.W. The unscented Kalman filter and particle filter methods for nonlinear structural system identification with non-collocated heterogeneous sensing. Struct. Control Health Monit. 2009, 16, 99–123.
  31. Wendel, J.; Metzger, J.; Moenikes, R.; Maier, A.; Trommer, G.F. A performance comparison of tightly coupled GPS/INS navigation systems based on extended and sigma point Kalman filters. Navig. J. Inst. Navig. 2006, 53, 21–31.
  32. Shin, E.H.; El-Sheimy, N. Unscented Kalman Filter and Attitude Errors of Low-Cost Inertial Navigation Systems. Navigation 2007, 54, 1–9.
  33. Eftekhar Azam, S.; Chatzi, E.; Papadimitriou, C. A dual Kalman filter approach for state estimation via output-only acceleration measurements. Mech. Syst. Signal Process. 2015, 60, 866–886.
  34. Azam, S.E.; Chatzi, E.; Papadimitriou, C.; Smyth, A. Experimental validation of the Kalman-type filters for online and real-time state and input estimation. J. Vib. Control 2017, 23, 2494–2519.
  35. Roumeliotis, S.I.; Burdick, J.W. Stochastic cloning: A generalized framework for processing relative state measurements. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation, Washington, DC, USA, 11–15 May 2002; Volume 2, pp. 1788–1795.
  36. Mourikis, A.I.; Roumeliotis, S.I.; Burdick, J.W. SC-KF Mobile Robot Localization: A Stochastic-Cloning Kalman Filter for Processing Relative-State Measurements. IEEE Trans. Robot. 2007, 23, 717–730.
  37. Popp, M.; Crocoll, P.; Ruppelt, J.; Trommer, G.F. A Novel Multi Image Based Navigation System to Aid Outdoor–Indoor Transition Flights of Micro Aerial Vehicles. In Proceedings of the 27th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2014), Tampa, FL, USA, 8–12 September 2014; pp. 1595–1608.
  38. Liu, Z.; El-Sheimy, N.; Qin, Y. Low-cost INS/Odometer Integration and Sensor-to-sensor Calibration for Land Vehicle Applications. In Proceedings of the IAG/CPGPS International Conference on GNSS+ (ICG+ 2016), Shanghai, China, 27–30 July 2016.
  39. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
  40. Bar Hillel, A.; Lerner, R.; Levi, D.; Raz, G. Recent progress in road and lane detection: A survey. Mach. Vis. Appl. 2014, 25, 727–745.
  41. El-Sheimy, N. Inertial Surveying and INS/GPS Integration, Lecture Notes for ENGO 623 Course; Department of Geomatics Engineering, The University of Calgary: Calgary, AB, Canada, 2016.
  42. Qin, Y. Inertial Navigation, 2nd ed.; Press of Science: Beijing, China, 2014.
  43. Liu, Z.; El-Sheimy, N.; Qin, Y.; Yu, C.; Zhang, J. Partial State Feedback Correction for Smoothing Navigational Parameters. In China Satellite Navigation Conference (CSNC) 2016 Proceedings: Volume II, Changsha, China, 18–20 May 2016; Sun, J., Liu, J., Fan, S., Wang, F., Eds.; Springer Singapore: Singapore, 2016; pp. 461–472.
  44. Liu, Z.; Qin, Y.; Li, S.; Cui, X. A new IMU-based method for relative pose determination. In Proceedings of the 22nd Saint Petersburg International Conference on Integrated Navigation Systems, Saint Petersburg, Russia, 25–27 May 2015; pp. 425–428.
  45. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
  46. OXTS. RT3000 Brochure. Available online: https://www.oxts.com/app/uploads/2017/07/RT3000-brochure-170606.pdf (accessed on 25 December 2017).
  47. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; Artech House: Norwood, MA, USA, 2008.
Figure 1. Coordinate systems.
Figure 2. Vanishing point and relative attitude.
Figure 3. Straight lane detection procedure.
Figure 4. Zero-roll assumption induced relative yaw error distribution.
Figure 5. Flowchart of vanishing point and motion constraint aided INS.
Figure 6. Horizontal trajectory.
Figure 7. Valid vanishing point flags in the simulation.
Figure 8. Boresight misalignment and scale factor of odometer (50 Monte Carlo trial estimates using INS/NHC/OD/VP in thin solid lines; reference values in thick dashed green lines).
Figure 9. Position errors (50 Monte Carlo trials in solid lines and 3-sigma slope in dashed lines).
Figure 10. Heading and vertical gyro errors (50 Monte Carlo trials in solid lines and 3-sigma slope in dashed lines).
Figure 11. Valid vanishing point flags in the experiments.
Figure 12. Position estimation results (path-0028).
Figure 13. Attitude estimation error.
Figure 14. IMU-vehicle relative pose estimation.
Figure 15. Reference and calculated trajectory on Google Earth (path-0101).
Figure 16. Curved lane markings and instant road frames.
Figure 17. Heading and vertical-axis gyro bias estimation errors for trajectory No. 3 without AIME (50 trials and 3-sigma error bound).
Figure 18. Heading and vertical-axis gyro bias estimation errors for trajectory No. 3 when AIME is used (50 trials and 3-sigma error bound).
Table 1. Sensor Parameters.

| Sensors | Items | Values |
|---|---|---|
| Gyroscopes (200 Hz) | Bias | 36°/h (1σ) |
| | ARW | 0.6°/√h (1σ) |
| Accelerometers (200 Hz) | Bias | 1 mg (1σ) |
| | VRW | 0.05 m/s/√h (1σ) |
| Camera (10 Hz) | IOPs | Same as the experiment |
| | VP accuracy | 2 pixels (1σ) |
| Speed indicator (odometer) | Scale factor error | 0.001 (1σ) |
| | Noise standard deviation | 0.005 m/s (1σ) |
| Relative pose | IMU/vehicle misalignment | x-axis: 0.8°, z-axis: 1° (1σ) |
| | IMU/vehicle lever-arm | 0.1 m in three directions (1σ) |
| | Camera/vehicle boresight error | 1° for pitch (1σ) |
| GNSS (1 Hz, valid in first 80 s) | Position accuracy | 2 m (1σ) |
| | Velocity accuracy | 0.5 m/s (1σ) |
Table 2. Navigation Errors During GPS Outage (path-0028).

| | Free INS (Mean/Max/RMS) | INS/NHC (Mean/Max/RMS) | INS/NHC/VP (Mean/Max/RMS) |
|---|---|---|---|
| East position (m) | 0.918 / 4.337 / 1.731 | −0.893 / 2.097 / 1.086 | 0.701 / 1.582 / 0.840 |
| North position (m) | −4.337 / 13.23 / 5.670 | −1.327 / 3.313 / 1.616 | −0.839 / 2.139 / 1.026 |
| Height (m) | 0.063 / 0.558 / 0.282 | 0.091 / 0.566 / 0.267 | 0.072 / 0.547 / 0.271 |
| Pitch (°) | 0.096 / 0.216 / 0.108 | 0.012 / 0.103 / 0.038 | 0.009 / 0.103 / 0.038 |
| Roll (°) | −0.032 / 0.114 / 0.046 | −0.024 / 0.099 / 0.045 | −0.020 / 0.095 / 0.043 |
| Heading (°) | −0.241 / 0.409 / 0.254 | −0.248 / 0.423 / 0.262 | −0.189 / 0.423 / 0.206 |
| East velocity (m/s) | 0.089 / 0.221 / 0.124 | −0.044 / 0.132 / 0.069 | −0.031 / 0.122 / 0.059 |
| North velocity (m/s) | 0.315 / 0.824 / 0.372 | −0.080 / 0.171 / 0.091 | −0.055 / 0.132 / 0.072 |
| Up velocity (m/s) | 0.004 / 0.038 / 0.022 | 0.004 / 0.040 / 0.021 | 0.004 / 0.039 / 0.021 |
Table 3. Navigation Errors During GPS Outage (path-0101).

| | Free INS (Mean/Max/RMS) | INS/NHC/OD (Mean/Max/RMS) | INS/NHC/OD/VP (Mean/Max/RMS) |
|---|---|---|---|
| East position (m) | 2.768 / 7.076 / 3.561 | −1.970 / 4.336 / 2.355 | −1.877 / 4.122 / 2.241 |
| North position (m) | −4.337 / 13.23 / 5.670 | −1.327 / 3.313 / 1.616 | −0.839 / 2.139 / 1.026 |
| Height (m) | 0.063 / 0.558 / 0.282 | 0.091 / 0.566 / 0.267 | 0.072 / 0.547 / 0.271 |
| Pitch (°) | 0.096 / 0.216 / 0.108 | 0.012 / 0.103 / 0.038 | 0.009 / 0.103 / 0.038 |
| Roll (°) | −0.032 / 0.114 / 0.046 | −0.024 / 0.099 / 0.045 | −0.020 / 0.095 / 0.043 |
| Heading (°) | −0.241 / 0.409 / 0.254 | −0.248 / 0.423 / 0.262 | −0.189 / 0.423 / 0.206 |
| East velocity (m/s) | 0.089 / 0.221 / 0.124 | −0.044 / 0.132 / 0.069 | −0.031 / 0.122 / 0.059 |
| North velocity (m/s) | 0.315 / 0.824 / 0.372 | −0.080 / 0.171 / 0.091 | −0.055 / 0.132 / 0.072 |
| Up velocity (m/s) | 0.004 / 0.038 / 0.022 | 0.004 / 0.040 / 0.021 | 0.004 / 0.039 / 0.021 |
Table 4. Simulation Results of Different VP-aiding Schemes under Different Curve Radii (RMS Error).

| Trajectory No. | Curve Detected | No VP (Heading / z-Gyro Bias) | With VP (Heading / z-Gyro Bias) | VP & AIME (Heading / z-Gyro Bias) |
|---|---|---|---|---|
| #1: R = 3000 m | Success | 0.27° / 4.35°/h | 0.25° / 3.98°/h | – |
| #2: R = 4584 m | Failed | 0.27° / 4.67°/h | 14.18° / 128.99°/h | 0.34° / 4.20°/h |
| #3: R = 6000 m | Failed | 0.32° / 5.45°/h | 10.87° / 97.92°/h | 0.32° / 4.40°/h |
| #4: R = 8000 m | Failed | 0.28° / 4.72°/h | 8.12° / 73.54°/h | 0.31° / 3.99°/h |
