Article

A Tightly Coupled Visual-Inertial GNSS State Estimator Based on Point-Line Feature

1 Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
2 Research Institute of Tsinghua, Pearl River Delta, Guangzhou 510530, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(9), 3391; https://doi.org/10.3390/s22093391
Submission received: 1 April 2022 / Revised: 25 April 2022 / Accepted: 26 April 2022 / Published: 28 April 2022
(This article belongs to the Section Remote Sensors)

Abstract: Visual-inertial odometry (VIO) is known to suffer from drift and can only provide local coordinates. In this paper, we propose a tightly coupled GNSS-VIO system based on point-line features for robust and drift-free state estimation. Feature-based methods are not robust in complex areas such as those with weak or repeated textures. To deal with this problem, line features, which carry more environmental structure information, can be extracted. In addition, to eliminate the accumulated drift of VIO, we tightly fuse the GNSS measurements with visual and inertial information. GNSS pseudorange measurements are real-time and unambiguous but suffer from large errors. GNSS carrier phase measurements can achieve centimeter-level positioning accuracy, but resolving the whole-cycle ambiguity is complex and time-consuming, which degrades the real-time performance of a state estimator. To combine the advantages of the two measurements, we use the carrier phase smoothed pseudorange instead of the raw pseudorange for state estimation. Furthermore, because the GNSS receiver and the IMU are mounted at different positions, the extrinsic parameter calibration between them is crucial; our proposed system calibrates the extrinsic translation parameter between the GNSS receiver and the IMU in real-time. Finally, we show that the states represented in the ECEF frame are fully observable and that the tightly coupled GNSS-VIO state estimator is consistent. Experiments on public datasets demonstrate that our system improves positioning precision while remaining robust and real-time.

1. Introduction

Localization plays an important role in many applications such as robotics, unmanned driving, and unmanned aerial vehicles (UAVs). The fusion of information from multiple sensors for localization is the current mainstream method. The sensor fusion approaches have been widely studied for decades and can be divided into two streams: loosely coupled and tightly coupled approaches. Loosely coupled approaches [1,2] process the measurements of each sensor individually and then fuse all the results to obtain the final results. Tightly coupled approaches [3,4] combine measurements from all sensors and then use an estimator to process these measurements to obtain the final results. In general, the accuracy of tightly coupled approaches will be higher than that of loosely coupled approaches.
Visual odometry (VO) has received much attention due to the low price of cameras. Davison et al. [5] introduced MonoSLAM, the first monocular visual SLAM system, which extracts Shi-Tomasi corners [6]. MonoSLAM actively searches for matching point features within projected ellipses. However, because of its high computational complexity, MonoSLAM can only handle small-scale scenes. Klein et al. [7] proposed PTAM, the first optimization-based monocular visual SLAM system, which executes feature tracking and mapping as two independent tasks in two parallel threads.
Because PTAM is designed for small-scale AR scenes, it also has some disadvantages. For example, it can only handle small-scale scenes, and the tracked features are easy to lose. A milestone work after PTAM is ORB-SLAM [8] and ORB-SLAM2 [9]. ORB-SLAM improves PTAM by detecting and tracking ORB features and adding loop closure. These improvements greatly improve the positioning precision of VO. However, ORB-SLAM can only build sparse point cloud maps and cannot track keyframes during pure rotation, which also limits its application.
In addition, when encountering environments with repeated textures, point features cannot express environmental structure information well. On the contrary, line features contain more information related to the environmental structure and can overcome the interference caused by repeated textures. Therefore, point-line feature fusion can ensure that the VO system is more robust. He Yijia et al. [10] proposed PL-VIO, which is a VIO system based on point-line features. It uses the LSD line segment extraction algorithm [11] from OpenCV [12] to detect line features. However, LSD becomes the bottleneck for real-time performance due to its high computational cost [13], which causes the performance of PL-VIO to be seriously affected. To improve the speed and positioning precision of PL-VIO, Fu Qiang et al. [14] proposed PL-VINS, which is also a VIO framework based on point and line features. PL-VINS designs a modified LSD algorithm by studying hidden parameter tuning and length rejection strategy. The modified LSD runs at least three times faster than the LSD. PLF-VINS [15] is another VIO system based on point-line features, which introduces two methods for fusing point and line features. The first method is to use the positional similarity of point-line features to search for the relationship between point and line features. The second method is to fuse 3D parallel lines. The residuals formed by the two methods are then added to the VIO system. The results of PLF-VINS show that its positioning precision is greatly improved compared to some classic SLAM systems, such as OKVIS [4] and VINS-Mono [16].
Line features can improve the positioning precision of the VO system. However, in weakly textured or dark scenes, the system still cannot extract a sufficient number of point and line features, which can cause large positioning errors. To overcome this shortcoming, fusing IMU measurements with images is a very effective method. MSCKF [17,18] is a VIO system based on the EKF, which adds camera poses to the states of the system. When cameras observe landmarks, constraints can be formed between camera poses. The system is then updated by an observation model derived from geometric constraints. Since the number of camera poses is much smaller than the number of landmarks, this greatly reduces the time complexity of the system. VINS-Mono [16] is a tightly coupled, nonlinear optimization-based method that can obtain high-precision positioning results. Due to its loop closure and 4-DoF pose graph optimization, it can obtain accurate positioning results even in large-scale scenes. In addition, to improve the performance of ORB-SLAM and eliminate its errors, ORB-SLAM3 [19] adds IMU measurements on top of ORB-SLAM. ORB-SLAM3 is more robust and has higher positioning precision than ORB-SLAM.
The VIO system is more robust than the VO system. Nevertheless, the VIO system has four unobservable directions, namely x, y, z, and yaw, which lead to the accumulated drift of VIO. To eliminate the accumulated drift, an effective approach is to combine VIO with GNSS measurements. Lee et al. [20] demonstrated that the GPS-aided VIO system is fully observable in the ENU frame. GVINS [21] is a tightly coupled GNSS-VIO state estimator that combines VIO with GNSS pseudorange and Doppler frequency shift measurements, which achieves high positioning accuracy and is a state-of-the-art method. Li Xingxing et al. [22] introduced a semi-tightly coupled framework based on GNSS Precise Point Positioning (PPP) and stereo VINS. The system can use S-VINS to return accurately predicted positions in GNSS-unfriendly areas. Liu Jinxu et al. [23] also proposed a tightly coupled GNSS-VIO state estimator that fuses GNSS raw measurements with VIO. It drops all GNSS measurements in GNSS-degradation scenes, which limits its positioning precision.
However, there are many challenges in fusing the information from multiple sensors. First, the VIO system cannot extract enough point features in areas with repeated textures. Second, GNSS Single Point Positioning (SPP) uses pseudorange measurement; it can only achieve meter-level positioning accuracy. Therefore, if the pseudorange measurement and VIO are directly fused, the positioning accuracy is not greatly improved. Finally, since the GNSS receiver and IMU are fixed in different spatial positions, it is necessary to estimate the extrinsic parameter between them.
In response to the above problems, this paper proposes a new tightly coupled GNSS-VIO system. Our proposed system is drift-free and can provide global coordinates. The main contributions of this work are as follows:
  • To obtain environmental structure information and handle environments with repeated textures, we extract line features in addition to point features.
  • To combine the merits of the pseudorange and carrier phase measurements, we used the carrier phase smoothed pseudorange instead of the pseudorange measurement, which can make the GNSS-VIO system run in real-time and improve the positioning accuracy.
  • We demonstrate that the states represented in the ECEF frame are fully observable, and the tightly coupled GNSS-VIO state estimator is consistent.
The rest of this paper is organized as follows: Section 2 introduces the implementation method of GNSS-VIO in detail, including line features, GNSS raw measurements processing, and observability analysis. Section 3 summarizes the detailed structure of our system. Section 4 conducts experiments on public datasets. Finally, conclusions are given in Section 5.

2. Methods

2.1. Frames and Notations

The frames involved in our system consist of:
  • Sensor Frame: In our system, (·)^c, (·)^b, and (·)^r denote the camera frame, the body frame, and the GNSS receiver frame, respectively.
  • Local World Frame: The origin of the local world frame (·)^w is the position where the VIO system starts running, and the z-axis is gravity-aligned, as illustrated in Figure 1.
  • ENU Frame: The ENU frame is also called the East–North–Up frame. The x-, y-, and z-axis of the ENU frame point to the east, north, and up directions, respectively. In our system, the ENU frame has the same origin as the local world frame, and the z-axes of the two frames are aligned (Figure 1). We use (·)^N to represent the ENU frame.
  • Geodetic Frame: As shown in Figure 1, the geodetic coordinate (·)^G of a point p is represented by geodetic longitude L, geodetic latitude B, and geodetic height H. The geodetic longitude L of p is the angle between the reference meridian and the meridian that passes through p. The geodetic latitude B of p is the angle between the ellipsoid normal vector that passes through p and the projection of that normal vector into the equatorial plane. The geodetic height H of p is the minimum distance between p and the reference ellipsoid.
  • ECEF Frame: The Earth-Centered, Earth-Fixed (ECEF) frame (·)^E is fixed to Earth. As depicted in Figure 1, the origin of the ECEF frame is at the center of mass of Earth. The x-axis points to the intersection of the Equator and the Prime Meridian. The z-axis is perpendicular to the equatorial plane in the direction of the North Pole. The y-axis is chosen to form a right-handed coordinate system with the x- and z-axes.
In this paper, R_b^w and p_b^w represent the rotation and translation of the body frame with respect to the local world frame, and q_b^w is the corresponding quaternion form of the rotation R_b^w. v_b^w, b_a, and b_g denote the velocity of the origin of the body frame measured in the local world frame, the accelerometer bias, and the gyroscope bias, respectively. p_c^b and q_c^b stand for the extrinsic parameters between the camera and the IMU. δt_r and δṫ_r represent the receiver clock error and the receiver clock drift rate, respectively. φ is the yaw offset between the local world frame and the ENU frame. R_w^N is the rotation matrix of the local world frame with respect to the ENU frame. p_N^E denotes the translation of the ENU frame with respect to the ECEF frame. p_r^b is the extrinsic translation parameter between the GNSS receiver and the IMU. [·]_× denotes the skew-symmetric matrix of a 3D vector.

2.2. Line Feature

2.2.1. Plücker Coordinates

In our system, we describe a spatial line with Plücker coordinates. Given a spatial line L_w, its Plücker coordinates are represented by L_w = (n_w^T, d_w^T)^T, where n_w ∈ ℝ³ is the normal vector of the plane π_w determined by L_w and the origin, and d_w ∈ ℝ³ is the line direction vector. We can transform L_w in the local world frame to L_c in the camera frame by:

$$\mathcal{L}_c = \begin{bmatrix} R_w^c & [p_w^c]_\times R_w^c \\ 0 & R_w^c \end{bmatrix} \mathcal{L}_w = T_w^c \mathcal{L}_w, \tag{1}$$
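As a quick sanity check, the 6×6 line transformation above can be sketched with NumPy (a minimal illustration; the function and variable names are ours, not from the paper):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def plucker_transform(R_w_c, p_w_c, L_w):
    """Transform a Plucker line L_w = (n_w, d_w) from the world frame to the
    camera frame via the 6x6 matrix T_w^c of Eq. (1)."""
    T = np.zeros((6, 6))
    T[:3, :3] = R_w_c
    T[:3, 3:] = skew(p_w_c) @ R_w_c
    T[3:, 3:] = R_w_c
    return T @ L_w
```

A useful property for checking such an implementation is that n_c = q_c × d_c holds for any point q_c on the transformed line.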

2.2.2. Line Feature Triangulation

Assuming that L_w is observed by cameras c_i and c_j in the normalized image plane as z_{L_w}^{c_i} and z_{L_w}^{c_j}, the two line segments can be denoted by two endpoints, as shown in Figure 2. The two endpoints of z_{L_w}^{c_i} are s_{L_w}^{c_i} = [u_s^{c_i}, v_s^{c_i}, 1]^T and e_{L_w}^{c_i} = [u_e^{c_i}, v_e^{c_i}, 1]^T, and the two endpoints of z_{L_w}^{c_j} are s_{L_w}^{c_j} = [u_s^{c_j}, v_s^{c_j}, 1]^T and e_{L_w}^{c_j} = [u_e^{c_j}, v_e^{c_j}, 1]^T.
A 3D plane π can be modeled as:

$$\pi_x x + \pi_y y + \pi_z z + \pi_w = 0, \tag{2}$$

where n_π = [π_x, π_y, π_z]^T is the normal vector of π. For a point p_0 = [x_0, y_0, z_0]^T on the plane, we can obtain:

$$\pi_w = -(\pi_x x_0 + \pi_y y_0 + \pi_z z_0) = -n_\pi^T p_0, \tag{3}$$

According to Equations (2) and (3), we can obtain the planes π_i = [n_{π_i}^T, π_w^i]^T and π_j = [n_{π_j}^T, π_w^j]^T. As shown in Figure 2, the normal vector of π_i is n_{π_i} = s_{L_w}^{c_i} × e_{L_w}^{c_i}, and π_w^i can be computed from the optical center c_i = [0, 0, 0]^T as π_w^i = -n_{π_i}^T c_i = 0. To obtain π_j, we need to transform the two endpoints s_{L_w}^{c_j} and e_{L_w}^{c_j} to the camera frame c_i. The corresponding reprojected endpoints are s̃_{L_w}^{c_j} = R_{c_j}^{c_i} s_{L_w}^{c_j} + p_{c_j}^{c_i} and ẽ_{L_w}^{c_j} = R_{c_j}^{c_i} e_{L_w}^{c_j} + p_{c_j}^{c_i}, where R_{c_j}^{c_i} and p_{c_j}^{c_i} can be obtained by visual-inertial alignment. Similarly, the normal vector of π_j is n_{π_j} = s̃_{L_w}^{c_j} × ẽ_{L_w}^{c_j}, and π_w^j can be calculated from the translation p_{c_j}^{c_i} of the camera frame c_j with respect to the camera frame c_i as π_w^j = -n_{π_j}^T p_{c_j}^{c_i}.
After π_i and π_j are computed, we can obtain the Plücker coordinates of L_w from the dual Plücker matrix L_w^*:

$$\mathcal{L}_w^* = \begin{bmatrix} [d_w]_\times & n_w \\ -n_w^T & 0 \end{bmatrix} = \pi_i \pi_j^T - \pi_j \pi_i^T, \tag{4}$$
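The two-plane triangulation above can be sketched as follows (a minimal illustration; helper names are ours, and the planes are built from three points using Equations (2) and (3)):

```python
import numpy as np

def plane_from_points(a, b, c):
    """Plane pi = (n, w) with normal n and w = -n . p0, through points a, b, c."""
    n = np.cross(b - a, c - a)
    return np.hstack([n, -n @ a])

def line_from_planes(pi_i, pi_j):
    """Recover Plucker coordinates (n, d), up to a common scale, from the dual
    Plucker matrix L* = pi_i pi_j^T - pi_j pi_i^T of Eq. (4)."""
    Ld = np.outer(pi_i, pi_j) - np.outer(pi_j, pi_i)
    n = Ld[:3, 3]                                 # last column: normal part
    d = np.array([Ld[2, 1], Ld[0, 2], Ld[1, 0]])  # [d]_x block: direction part
    return n, d
```

Since Plücker coordinates are homogeneous, the recovered (n, d) equals the true line only up to a common (possibly negative) scale factor.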

2.2.3. Orthonormal Representation

Since spatial lines only have 4 DoF, representing them with Plücker coordinates is overparameterized. In contrast, the orthonormal representation (U, W) ∈ SO(3) × SO(2) is more suitable for nonlinear optimization. Let U = R(ψ) and W = R(φ) denote a 3D and a 2D rotation matrix, respectively; then we have:

$$U = R(\psi) = [u_1, u_2, u_3] = \left[ \frac{n_w}{\|n_w\|},\ \frac{d_w}{\|d_w\|},\ \frac{n_w \times d_w}{\|n_w \times d_w\|} \right], \tag{5}$$

$$W = R(\phi) = \begin{bmatrix} w_1 & -w_2 \\ w_2 & w_1 \end{bmatrix} = \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} = \frac{1}{\sqrt{\|n_w\|^2 + \|d_w\|^2}} \begin{bmatrix} \|n_w\| & -\|d_w\| \\ \|d_w\| & \|n_w\| \end{bmatrix}, \tag{6}$$

where ψ = [ψ_1, ψ_2, ψ_3]^T collects the 3D rotation angles around the x-, y-, and z-axis of the camera frame, and φ is the 2D rotation angle.
Therefore, we can define the orthonormal representation of a spatial line by a four-dimensional vector:

$$o = (\psi^T, \phi)^T, \tag{7}$$

In addition, given an orthonormal representation (U, W), the corresponding Plücker coordinates can be recovered by:

$$\mathcal{L}_w' = (w_1 u_1^T, w_2 u_2^T)^T = \frac{1}{\sqrt{\|n_w\|^2 + \|d_w\|^2}} \mathcal{L}_w, \tag{8}$$

where w_1 = cos(φ), w_2 = sin(φ), and u_i is the ith column of U. L_w' and L_w differ only by a scale factor, so they denote the same spatial line.
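A minimal sketch of the conversion between the two parametrizations (function names are ours, not from the paper):

```python
import numpy as np

def plucker_to_orthonormal(n, d):
    """4-DoF orthonormal representation (U, W) of a Plucker line (Eqs. (5)-(6))."""
    U = np.column_stack([n / np.linalg.norm(n),
                         d / np.linalg.norm(d),
                         np.cross(n, d) / np.linalg.norm(np.cross(n, d))])
    s = np.sqrt(np.linalg.norm(n) ** 2 + np.linalg.norm(d) ** 2)
    W = np.array([[np.linalg.norm(n), -np.linalg.norm(d)],
                  [np.linalg.norm(d),  np.linalg.norm(n)]]) / s
    return U, W

def orthonormal_to_plucker(U, W):
    """Inverse map of Eq. (8); recovers the line up to scale."""
    return np.hstack([W[0, 0] * U[:, 0], W[1, 0] * U[:, 1]])
```

The round trip returns the original Plücker coordinates divided by √(‖n‖² + ‖d‖²), i.e., the same spatial line up to scale.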

2.2.4. Line Feature Reprojection Residual

The line feature reprojection residual is defined in terms of the point-to-line distance. Given a spatial line L_c = (n_c^T, d_c^T)^T in the camera frame, it can be projected to the image plane by [24]:

$$l = [l_1, l_2, l_3]^T = \mathcal{K}_L n_c, \tag{9}$$

where K_L is the line projection matrix.
Finally, assume that L_w^j represents the jth spatial line L_j, which is observed by the ith camera frame c_i. Then the spatial line reprojection residual can be modeled as:

$$r_L(\tilde{z}_{L_j}^{c_i}, \mathcal{X}) = \begin{bmatrix} d(s_{L_j}^{c_i}, l_{L_j}^{c_i}) \\ d(e_{L_j}^{c_i}, l_{L_j}^{c_i}) \end{bmatrix}, \tag{10}$$

where s_{L_j}^{c_i} = [u_{s_j}^{c_i}, v_{s_j}^{c_i}, 1]^T and e_{L_j}^{c_i} = [u_{e_j}^{c_i}, v_{e_j}^{c_i}, 1]^T denote the two endpoints of the line segment projected onto the normalized image plane. d(s, l) is the point-to-line distance:

$$d(s, l) = \frac{s^T l}{\sqrt{l_1^2 + l_2^2}}, \tag{11}$$
For the corresponding Jacobian matrix, we followed the routine of [10].
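For illustration, the point-to-line residual above can be written as follows (a sketch with our own function names):

```python
import numpy as np

def point_line_distance(s, l):
    """Signed point-to-line distance; s = (u, v, 1), l = (l1, l2, l3)."""
    return (s @ l) / np.hypot(l[0], l[1])

def line_reprojection_residual(s_endpoint, e_endpoint, l):
    """Two-dimensional line reprojection residual of Eq. (10): the distances
    of both projected endpoints to the observed line."""
    return np.array([point_line_distance(s_endpoint, l),
                     point_line_distance(e_endpoint, l)])
```

For example, for the line l = (1, 0, -1) (the vertical line u = 1 in the normalized plane), the point (3, 5, 1) lies at distance 2.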

2.3. GNSS Measurements

GNSS measurements include pseudorange, carrier phase, and Doppler frequency shift.

2.3.1. Pseudorange Measurement

The pseudorange is defined as the measured distance obtained by multiplying the travel time of the satellite signal by the speed of light. Due to the influence of the satellite clock error, the receiver clock error, and the ionospheric and tropospheric delays, the pseudorange is prefixed with "pseudo" to distinguish it from the true distance from the satellite to the GNSS receiver. Although the pseudorange measurement only has meter-level positioning precision (about 10 m for the P code and 20 m to 30 m for the C/A code), it is real-time and has no ambiguity. Therefore, the pseudorange measurement is still very important for GNSS positioning. For a certain satellite s_j and a GNSS receiver r_k at time k, the pseudorange P̃_{r_k}^{s_j} can be modeled as:

$$\tilde{P}_{r_k}^{s_j} = \left\| p_{s_j}^E - p_{r_k}^E \right\| + c \left( \delta t_{r_k} - \delta t_{s_j} \right) + T_{r_k}^{s_j} + I_{r_k}^{s_j} + \varepsilon_{P_k}^j, \tag{12}$$

where p_{s_j}^E and p_{r_k}^E represent the positions of the satellite s_j and the GNSS receiver r_k at time k in the ECEF frame, respectively. c is the speed of light. δt_{s_j}, T_{r_k}^{s_j}, and I_{r_k}^{s_j} are the satellite clock error and the tropospheric and ionospheric delays, which can be computed from the satellite ephemeris and correction models. ε_{P_k}^j ~ N(0, σ_{P_k^j}²) denotes the multipath and random error of the pseudorange measurement, which follows a zero-mean Gaussian distribution.
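As a small illustration, the right-hand side of the pseudorange model above can be evaluated directly (a sketch; function and variable names are ours):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def pseudorange_model(p_sat_E, p_rcv_E, dt_r, dt_s, T, I):
    """Modeled pseudorange: geometric range, clock errors scaled by the speed
    of light, and tropospheric/ionospheric delays (all in SI units)."""
    return np.linalg.norm(p_sat_E - p_rcv_E) + C * (dt_r - dt_s) + T + I
```

Note that a 1 microsecond receiver clock error already contributes about 300 m to the pseudorange, which is why the clock terms must be estimated.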

2.3.2. Carrier Phase Measurement

Although pseudorange positioning is an important method for GNSS, its error is too large for applications that require high-precision positioning. In contrast, due to the short wavelength of the carrier (λ_{L1} ≈ 19 cm and λ_{L2} ≈ 24 cm), the carrier phase measurement can achieve centimeter-level positioning accuracy. However, since the carrier is a periodic sinusoidal signal and the GNSS receiver can only measure the fractional part of less than one wavelength, there is the problem of the whole-cycle ambiguity, which makes the positioning process time-consuming. The carrier phase is defined as the difference between the phase transmitted by the satellite and the reference phase generated by the GNSS receiver. Similar to the pseudorange measurement, the carrier phase measurement is also related to the positions of the satellite and the GNSS receiver. The carrier phase measurement can be modeled as:

$$\tilde{\Phi}_{r_k}^{s_j} = \left\| p_{s_j}^E - p_{r_k}^E \right\| + c \left( \delta t_{r_k} - \delta t_{s_j} \right) + T_{r_k}^{s_j} - I_{r_k}^{s_j} + \lambda N + \varepsilon_{\Phi_k}^j, \tag{13}$$

where λ is the carrier wavelength, and N is the whole-cycle ambiguity. ε_{Φ_k}^j ~ N(0, σ_{Φ_k^j}²) represents the multipath and random error of the carrier phase measurement, which follows a zero-mean Gaussian distribution.

2.3.3. Doppler Frequency Shift Measurement

The Doppler effect describes a phenomenon in wave propagation: the wavelength of the radiation emitted by an object changes due to the relative motion between the wave source and the observer. As the wave source moves toward the observer, the wave is compressed, so the wavelength becomes shorter and the frequency higher. Conversely, as the wave source moves away from the observer, the wavelength becomes longer and the frequency lower. The same effect occurs between a satellite and a GNSS receiver: as a satellite moves along its elliptical orbit around Earth, the relative motion between the satellite and the GNSS receiver changes the frequency of the received satellite signal. This frequency change is called the Doppler frequency shift, which can be modeled as:

$$\Delta \tilde{f}_{r_k}^{s_j} = -\frac{1}{\lambda} \left[ \kappa_k^T \left( v_{s_j}^E - v_{r_k}^E \right) + c \left( \delta \dot{t}_{r_k} - \delta \dot{t}_{s_j} \right) \right] + \varepsilon_{\Delta f_k}^j, \tag{14}$$

where κ_k = (p_{s_j}^E − p_{r_k}^E) / ‖p_{s_j}^E − p_{r_k}^E‖ is the unit vector pointing along the line of sight from the GNSS receiver r_k to the satellite s_j. v_{s_j}^E and v_{r_k}^E denote the velocities of the jth satellite and the receiver at time k in the ECEF frame, respectively. δṫ_{s_j} is the satellite clock drift rate, which can be calculated from the satellite ephemeris. ε_{Δf_k}^j ~ N(0, σ_{Δf_k^j}²) represents the multipath and random error of the Doppler frequency shift measurement, which follows a zero-mean Gaussian distribution.

2.4. Carrier Phase Smoothed Pseudorange

As mentioned above, the pseudorange measurement is fast and efficient but can only achieve meter-level positioning precision. In contrast, the carrier phase measurement can achieve centimeter-level positioning precision but requires solving the whole-cycle ambiguity, which is complex and time-consuming. Therefore, to combine the merits of the two measurements, we can use the carrier phase smoothed pseudorange (smoothed pseudorange for short) to improve the positioning precision. In general, the positioning precision of the smoothed pseudorange measurement is several times higher than that of the pseudorange measurement. For a single-frequency receiver, when the whole-cycle ambiguity and the ionospheric delay are nearly constant within a period of time, the pseudorange can be smoothed using the carrier phase. The Hatch filter is the most widely used method for carrier phase smoothing; it assumes that the ionospheric delay is nearly constant across GNSS epochs and then averages the whole-cycle ambiguity and ionospheric delay over multiple epochs to improve the positioning accuracy of the pseudorange measurement. The carrier phase smoothed pseudorange based on the Hatch filter can be modeled as:

$$\bar{\rho}_{r_k}^{s_j} = \frac{1}{k_m} \tilde{P}_{r_k}^{s_j} + \left( 1 - \frac{1}{k_m} \right) \left( \bar{\rho}_{r_{k-1}}^{s_j} + \tilde{\Phi}_{r_k}^{s_j} - \tilde{\Phi}_{r_{k-1}}^{s_j} \right) + \varepsilon_{\rho_k}^j, \quad 1 \le k_m \le m, \tag{15}$$

where m is the smoothing interval of the Hatch filter, usually 20 to 200 epochs. ε_{ρ_k}^j ~ N(0, σ_{ρ_k^j}²) denotes the multipath and random error of the carrier phase smoothed pseudorange measurement, which also follows a zero-mean Gaussian distribution. According to Equation (15), the variance of the smoothed pseudorange is related to the variances of the pseudorange and carrier phase measurements. Assuming that the measurement errors at different times are independent, the variance of the smoothed pseudorange can be modeled as:

$$\sigma_{\rho_k^j}^2 = \frac{1}{k_m^2} \sigma_{P_k^j}^2 + \left( 1 - \frac{1}{k_m} \right)^2 \left( \sigma_{\Phi_k^j}^2 + \sigma_{\Phi_{k-1}^j}^2 \right), \tag{16}$$
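The Hatch filter recursion of Equation (15) can be sketched as follows (a minimal illustration; names are ours, and the noise term is omitted since the filter is applied to measurements):

```python
import numpy as np

def hatch_filter(P, Phi, m=100):
    """Carrier phase smoothed pseudorange via the Hatch filter of Eq. (15).
    P, Phi: per-epoch pseudorange and carrier phase measurements in meters."""
    rho = float(P[0])
    smoothed = [rho]
    for k in range(1, len(P)):
        k_m = min(k + 1, m)  # epoch counter, capped at the smoothing interval m
        rho = P[k] / k_m + (1.0 - 1.0 / k_m) * (rho + Phi[k] - Phi[k - 1])
        smoothed.append(rho)
    return np.array(smoothed)
```

Because the carrier phase tracks the range change very precisely, the pseudorange noise is averaged down as the epoch counter grows, which is exactly the variance reduction expressed in Equation (16).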

2.5. Factor Graph Representation

The factor graph of our system is shown in Figure 3, which shows the factors in a sliding window, including point feature factors, line feature factors, IMU factors, carrier phase smoothed pseudorange factors, and Doppler frequency shift factors. Visual observation consists of the point and line features detected by our system. In nonlinear optimization, the states are optimized according to the residuals of these factors. The states of our system include:
$$
\begin{aligned}
\mathcal{X} &= \left[ x_0, x_1, \ldots, x_n, \mu_0, \mu_1, \ldots, \mu_i, o_0, o_1, \ldots, o_j, p_c^b, q_c^b, \varphi, p_N^E, p_r^b \right], \\
x_k &= \left[ p_{b_k}^w, v_{b_k}^w, q_{b_k}^w, b_a, b_g, \delta t_{r_k}, \delta \dot{t}_{r_k} \right], \quad k = 0, 1, \ldots, n, \\
o &= (\psi^T, \phi)^T,
\end{aligned} \tag{17}
$$

where n is the sliding window size, i is the number of point features, and j is the number of line features. μ_i is the inverse depth of the ith point feature in the sliding window.
The IMU preintegration and point feature residuals can be obtained according to [16], and the line feature residual can be obtained from Equation (10). In the following, we compute the smoothed pseudorange and Doppler frequency shift residuals in detail. The position of the receiver at time k in the ECEF frame is:
$$p_{r_k}^E = R_N^E R_w^N p_{r_k}^w + p_N^E, \tag{18}$$

where p_{r_k}^w is the position of the receiver in the local world frame, and it can be modeled as:

$$p_{r_k}^w = R_{b_k}^w p_r^b + p_{b_k}^w, \tag{19}$$

R_w^N is the rotation matrix of the local world frame with respect to the ENU frame. Since the two frames are gravity-aligned, the 1-DoF R_w^N is only related to the yaw offset φ and can be modeled as:

$$R_w^N = \begin{bmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{20}$$
R_N^E represents the rotation matrix of the ENU frame with respect to the ECEF frame, which is determined by the longitude L and latitude B of p_N^E. Given a position p_N^E = [X, Y, Z]^T in the ECEF frame, it can be expressed in the geodetic frame as:

$$
\begin{aligned}
L &= \arctan\left( Y / X \right), \\
B &= \arctan\left( \frac{Z \left( N_r + H \right)}{\sqrt{X^2 + Y^2} \left( N_r (1 - e^2) + H \right)} \right), \\
H &= \frac{Z}{\sin B} - N_r (1 - e^2), \quad N_r = \frac{a}{\sqrt{1 - e^2 \sin^2 B}}, \quad e^2 = \frac{a^2 - b^2}{a^2},
\end{aligned} \tag{21}
$$

where a and b denote the semimajor and semiminor axes of the reference ellipsoid, respectively, N_r is the radius of curvature in the prime vertical, and e is the eccentricity, which is determined by a and b. Since B, H, and N_r depend on one another, Equation (21) is solved iteratively.
After the longitude L and latitude B of p_N^E are computed, R_N^E can be obtained by:

$$R_N^E = \begin{bmatrix} -\sin L & -\sin B \cos L & \cos B \cos L \\ \cos L & -\sin B \sin L & \cos B \sin L \\ 0 & \cos B & \sin B \end{bmatrix}, \tag{22}$$
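The geodetic conversion and the ENU-to-ECEF rotation above can be sketched as follows (a minimal illustration assuming the WGS-84 ellipsoid; names are ours, and fixed-point iteration is one common way to solve the coupled equations, valid away from the equator and poles):

```python
import numpy as np

# WGS-84 reference ellipsoid semi-axes (assumed values)
A_SEMI = 6378137.0
B_SEMI = 6356752.314245
E2 = (A_SEMI**2 - B_SEMI**2) / A_SEMI**2

def geodetic_to_ecef(L, B, H):
    """Forward map used to verify the iterative inverse below."""
    Nr = A_SEMI / np.sqrt(1 - E2 * np.sin(B)**2)
    return np.array([(Nr + H) * np.cos(B) * np.cos(L),
                     (Nr + H) * np.cos(B) * np.sin(L),
                     (Nr * (1 - E2) + H) * np.sin(B)])

def ecef_to_geodetic(p, iters=20):
    """Fixed-point iteration for longitude L, latitude B, and height H."""
    X, Y, Z = p
    L = np.arctan2(Y, X)
    rho = np.hypot(X, Y)
    B = np.arctan2(Z, rho * (1 - E2))  # initial latitude guess
    H = 0.0
    for _ in range(iters):
        Nr = A_SEMI / np.sqrt(1 - E2 * np.sin(B)**2)
        H = Z / np.sin(B) - Nr * (1 - E2)
        B = np.arctan2(Z * (Nr + H), rho * (Nr * (1 - E2) + H))
    return L, B, H

def rotation_N_to_E(L, B):
    """ENU-to-ECEF rotation matrix R_N^E."""
    sL, cL, sB, cB = np.sin(L), np.cos(L), np.sin(B), np.cos(B)
    return np.array([[-sL, -sB * cL, cB * cL],
                     [ cL, -sB * sL, cB * sL],
                     [0.0,       cB,      sB]])
```

Round-tripping a geodetic coordinate through the forward and inverse maps recovers it to sub-millimeter accuracy within a few iterations, since the iteration contracts roughly by a factor of e² per step.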
According to Equation (15), the smoothed pseudorange residual can be represented as:

$$r_\rho(\tilde{z}_{r_k}^{s_j}, \mathcal{X}) = \frac{1}{k_m} \tilde{P}_{r_k}^{s_j} + \left( 1 - \frac{1}{k_m} \right) \left( \bar{\rho}_{r_{k-1}}^{s_j} + \tilde{\Phi}_{r_k}^{s_j} - \tilde{\Phi}_{r_{k-1}}^{s_j} \right) - \bar{\rho}_{r_k}^{s_j}, \tag{23}$$
In nonlinear optimization, it is necessary to calculate the Jacobian matrix of the smoothed pseudorange residual with respect to the states. From Equation (23), the Jacobian matrix of the smoothed pseudorange can be obtained as:

$$J_\rho = \left[ \frac{\partial r_\rho}{\partial p_{b_{k-1}}^w},\ \frac{\partial r_\rho}{\partial R_{b_{k-1}}^w},\ \frac{\partial r_\rho}{\partial p_{b_k}^w},\ \frac{\partial r_\rho}{\partial R_{b_k}^w},\ \frac{\partial r_\rho}{\partial \delta t_{r_{k-1}}},\ \frac{\partial r_\rho}{\partial \delta t_{r_k}},\ \frac{\partial r_\rho}{\partial \varphi},\ \frac{\partial r_\rho}{\partial p_N^E},\ \frac{\partial r_\rho}{\partial p_r^b} \right], \tag{24}$$
with
$$
\begin{aligned}
\frac{\partial r_\rho}{\partial p_{b_{k-1}}^w} &= -\left( 1 - \frac{1}{k_m} \right) \frac{\left( p_{s_j}^E - p_{r_{k-1}}^E \right)^T}{\left\| p_{s_j}^E - p_{r_{k-1}}^E \right\|} R_N^E R_w^N, \\
\frac{\partial r_\rho}{\partial R_{b_{k-1}}^w} &= \left( 1 - \frac{1}{k_m} \right) \frac{\left( p_{s_j}^E - p_{r_{k-1}}^E \right)^T}{\left\| p_{s_j}^E - p_{r_{k-1}}^E \right\|} R_N^E R_w^N R_{b_{k-1}}^w [p_r^b]_\times, \\
\frac{\partial r_\rho}{\partial p_{b_k}^w} &= \frac{\left( p_{s_j}^E - p_{r_k}^E \right)^T}{\left\| p_{s_j}^E - p_{r_k}^E \right\|} R_N^E R_w^N, \\
\frac{\partial r_\rho}{\partial R_{b_k}^w} &= -\frac{\left( p_{s_j}^E - p_{r_k}^E \right)^T}{\left\| p_{s_j}^E - p_{r_k}^E \right\|} R_N^E R_w^N R_{b_k}^w [p_r^b]_\times, \\
\frac{\partial r_\rho}{\partial \delta t_{r_{k-1}}} &= c \left( 1 - \frac{1}{k_m} \right), \qquad \frac{\partial r_\rho}{\partial \delta t_{r_k}} = -c, \\
\frac{\partial r_\rho}{\partial \varphi} &= \frac{\left( p_{s_j}^E - p_{r_k}^E \right)^T}{\left\| p_{s_j}^E - p_{r_k}^E \right\|} R_N^E \dot{R}_w^N p_{r_k}^w - \left( 1 - \frac{1}{k_m} \right) \frac{\left( p_{s_j}^E - p_{r_{k-1}}^E \right)^T}{\left\| p_{s_j}^E - p_{r_{k-1}}^E \right\|} R_N^E \dot{R}_w^N p_{r_{k-1}}^w, \\
\frac{\partial r_\rho}{\partial p_N^E} &= \frac{\left( p_{s_j}^E - p_{r_k}^E \right)^T}{\left\| p_{s_j}^E - p_{r_k}^E \right\|} - \left( 1 - \frac{1}{k_m} \right) \frac{\left( p_{s_j}^E - p_{r_{k-1}}^E \right)^T}{\left\| p_{s_j}^E - p_{r_{k-1}}^E \right\|}, \\
\frac{\partial r_\rho}{\partial p_r^b} &= \frac{\left( p_{s_j}^E - p_{r_k}^E \right)^T}{\left\| p_{s_j}^E - p_{r_k}^E \right\|} R_N^E R_w^N R_{b_k}^w - \left( 1 - \frac{1}{k_m} \right) \frac{\left( p_{s_j}^E - p_{r_{k-1}}^E \right)^T}{\left\| p_{s_j}^E - p_{r_{k-1}}^E \right\|} R_N^E R_w^N R_{b_{k-1}}^w,
\end{aligned} \tag{25}
$$
where
$$\dot{R}_w^N = \frac{d R_w^N}{d \varphi} = \begin{bmatrix} -\sin\varphi & -\cos\varphi & 0 \\ \cos\varphi & -\sin\varphi & 0 \\ 0 & 0 & 0 \end{bmatrix}, \tag{26}$$
The derivation details are provided in Appendix A.
In Equation (14), the velocity of the receiver in the ECEF frame is transformed from that in the local world frame:
$$v_{r_k}^E \approx R_N^E R_w^N v_{b_k}^w, \tag{27}$$
Therefore, the Doppler frequency shift residual can be obtained by:

$$r_{\Delta f}(\tilde{z}_{r_k}^{s_j}, \mathcal{X}) = \frac{1}{\lambda} \left[ \kappa_k^T \left( v_{s_j}^E - v_{r_k}^E \right) + c \left( \delta \dot{t}_{r_k} - \delta \dot{t}_{s_j} \right) \right] + \Delta \tilde{f}_{r_k}^{s_j}, \tag{28}$$
Similar to the smoothed pseudorange, the Jacobian matrix of the Doppler frequency shift is:
$$J_{\Delta f} = \left[ \frac{\partial r_{\Delta f}}{\partial p_{b_k}^w},\ \frac{\partial r_{\Delta f}}{\partial R_{b_k}^w},\ \frac{\partial r_{\Delta f}}{\partial v_{b_k}^w},\ \frac{\partial r_{\Delta f}}{\partial \delta \dot{t}_{r_k}},\ \frac{\partial r_{\Delta f}}{\partial \varphi},\ \frac{\partial r_{\Delta f}}{\partial p_N^E},\ \frac{\partial r_{\Delta f}}{\partial p_r^b} \right], \tag{29}$$
with
$$
\begin{aligned}
\frac{\partial r_{\Delta f}}{\partial p_{b_k}^w} &= -\frac{\left( v_{s_j}^E - v_{r_k}^E \right)^T \left( I \left\| p_{s_j}^E - p_{r_k}^E \right\|^2 - \left( p_{s_j}^E - p_{r_k}^E \right) \left( p_{s_j}^E - p_{r_k}^E \right)^T \right)}{\lambda \left\| p_{s_j}^E - p_{r_k}^E \right\|^3} R_N^E R_w^N, \\
\frac{\partial r_{\Delta f}}{\partial R_{b_k}^w} &= \frac{\left( v_{s_j}^E - v_{r_k}^E \right)^T \left( I \left\| p_{s_j}^E - p_{r_k}^E \right\|^2 - \left( p_{s_j}^E - p_{r_k}^E \right) \left( p_{s_j}^E - p_{r_k}^E \right)^T \right)}{\lambda \left\| p_{s_j}^E - p_{r_k}^E \right\|^3} R_N^E R_w^N R_{b_k}^w [p_r^b]_\times, \\
\frac{\partial r_{\Delta f}}{\partial v_{b_k}^w} &= -\frac{1}{\lambda} \kappa_k^T R_N^E R_w^N, \qquad \frac{\partial r_{\Delta f}}{\partial \delta \dot{t}_{r_k}} = \frac{c}{\lambda}, \\
\frac{\partial r_{\Delta f}}{\partial \varphi} &\approx -\frac{1}{\lambda} \kappa_k^T R_N^E \dot{R}_w^N v_{b_k}^w, \qquad \frac{\partial r_{\Delta f}}{\partial p_N^E} \approx 0, \\
\frac{\partial r_{\Delta f}}{\partial p_r^b} &= -\frac{\left( v_{s_j}^E - v_{r_k}^E \right)^T \left( I \left\| p_{s_j}^E - p_{r_k}^E \right\|^2 - \left( p_{s_j}^E - p_{r_k}^E \right) \left( p_{s_j}^E - p_{r_k}^E \right)^T \right)}{\lambda \left\| p_{s_j}^E - p_{r_k}^E \right\|^3} R_N^E R_w^N R_{b_k}^w,
\end{aligned} \tag{30}
$$
The corresponding derivation rule is similar to the Jacobian matrix of the smoothed pseudorange and will not be repeated here.
After the residuals of all factors are obtained, the system optimizes the states with the Ceres solver [25]. The cost function of our system is:

$$\min_{\mathcal{X}} \Bigg\{ \left\| r_p - H_p \mathcal{X} \right\|^2 + \sum_{k \in B} \left\| r_B(\tilde{z}_{b_{k+1}}^{b_k}, \mathcal{X}) \right\|_{P_{b_{k+1}}^{b_k}}^2 + \sum_{(l,j) \in C} \alpha\left( \left\| r_C(\tilde{z}_l^{c_j}, \mathcal{X}) \right\|_{P_l^{c_j}}^2 \right) + \sum_{(i,j) \in L} \alpha\left( \left\| r_L(\tilde{z}_{L_j}^{c_i}, \mathcal{X}) \right\|_{P_{L_j}^{c_i}}^2 \right) + \sum_{(k,j)} \left\| r_\rho(\tilde{z}_{r_k}^{s_j}, \mathcal{X}) \right\|_{\sigma_\rho^2}^2 + \sum_{(k,j)} \left\| r_{\Delta f}(\tilde{z}_{r_k}^{s_j}, \mathcal{X}) \right\|_{\sigma_{\Delta f}^2}^2 \Bigg\}, \tag{31}$$

where r_p is the prior residual from marginalization. B is the set of IMU preintegration measurements in the sliding window. C and L are the sets of point and line features in the sliding window, respectively. α is the Cauchy robust function used to suppress outliers.

2.6. GNSS-IMU Calibration

p_r^b is the extrinsic translation parameter between the GNSS receiver and the IMU. After our system performs ENU origin estimation and yaw estimation, we can calibrate p_r^b. We estimate the initial value of p_r^b for nonlinear optimization through the following optimization problem:

$$\min_{p_r^b} \sum_{(k,j)} \left\| r_{\Delta f}(\tilde{z}_{r_k}^{s_j}, \mathcal{X}) \right\|_{\sigma_{\Delta f}^2}^2, \tag{32}$$
After the GNSS-IMU calibration is successfully initialized, our system performs the nonlinear optimization.

2.7. Observability Analysis of Tightly Coupled GNSS-VIO System

A SLAM system can be described using a state equation and an output equation, where the input and output are the external variables of the system, and the state comprises the internal variables of the system. If the states of the system can be completely represented by the output, the system is fully observable; otherwise, the system is not fully observable. Observability plays a very important role in the state estimation problem: if some states of the system are not observable, the positioning precision of the system will degrade over long trajectories. The observability of the system can be characterized by the observability matrix; if the dimension of the null space of the observability matrix is equal to 0, then the system is fully observable. To facilitate the observability analysis of our proposed system, some simplifications were required. First, the accelerometer and gyroscope biases were not included in the states, because the biases are observable and they do not change the results of the observability analysis [26]. Second, we considered a single point and line feature [27]. Third, the translation parameter p_r^b was assumed to be successfully calibrated. Then the discrete-time linear error state model and residual of the system are:

$$\delta x_{k+1} = \Phi_k \delta x_k + w_k, \qquad r_k = H_k \delta x_k + n_k, \tag{33}$$
where δ x k is the error state, and r k is the residual. Φ k and H k are the error-state transition matrix and the measurement Jacobian matrix, respectively. w k and n k represent the system noise process and the measurement noise process, respectively. The noise process is modeled as a Zero-mean white Gaussian process.
According to [28], the observability matrix can be obtained as:
$$M = \begin{bmatrix} H_k \\ H_{k+1} \Phi_k \\ \vdots \\ H_{k+t} \Phi_{k+t-1} \cdots \Phi_k \end{bmatrix}, \tag{34}$$

In Equation (34), the observability matrix is defined as a function of the error-state transition matrix Φ_k and the measurement Jacobian matrix H_k.
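As an illustration, Equation (34) can be assembled numerically and observability checked through the rank (equivalently, the null-space dimension) of M. This is a sketch with our own function names:

```python
import numpy as np

def observability_matrix(Hs, Phis):
    """Stack H_k, H_{k+1} Phi_k, ..., H_{k+t} Phi_{k+t-1} ... Phi_k row-wise,
    as in Eq. (34). Hs has t+1 measurement Jacobians; Phis has t transitions."""
    T = np.eye(Hs[0].shape[1])      # running product of transition matrices
    rows = [Hs[0] @ T]
    for H, Phi in zip(Hs[1:], Phis):
        T = Phi @ T
        rows.append(H @ T)
    return np.vstack(rows)
```

The system is fully observable when the null space of M is trivial, i.e., when rank(M) equals the state dimension. For example, a constant-velocity system with position-only measurements needs at least two epochs before its velocity becomes observable.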
Therefore, given the linearized system in Equation (33), its observability can be demonstrated according to Equation (34). The proof is as follows:
Theorem 1.
The states represented in the ECEF frame are fully observable.
Proof of Theorem 1.
The simplified states include:
$$x_k^E = \left[ p_{b_k}^E, v_{b_k}^E, q_{b_k}^E, \mu, o, \varphi, p_N^E \right], \tag{35}$$
Generally, the raw accelerometer and gyroscope measurements of the IMU can be modeled as:

$$\tilde{a}_b = R_w^b \left( a^w + g^w \right) + b_a + n_a, \qquad \tilde{\omega}_b = \omega_b + b_g + n_g, \tag{36}$$

where ã_b and ω̃_b are the accelerometer and gyroscope measurements expressed in the body frame, and a^w and g^w are the platform acceleration and the gravity vector expressed in the local world frame.
Given two time instants, position, velocity, and orientation states can be propagated by the IMU measurements:
$$\begin{aligned}
\mathbf{p}_{b_{k+1}}^w &= \mathbf{p}_{b_k}^w + \mathbf{v}_{b_k}^w \Delta t_k + \iint_{t \in [t_k, t_{k+1}]} \left( \mathbf{R}_{b_t}^w \left( \tilde{\mathbf{a}}_{b_t} - \mathbf{b}_a \right) - \mathbf{g}^w \right) \mathrm{d}t^2, \\
\mathbf{v}_{b_{k+1}}^w &= \mathbf{v}_{b_k}^w + \int_{t \in [t_k, t_{k+1}]} \left( \mathbf{R}_{b_t}^w \left( \tilde{\mathbf{a}}_{b_t} - \mathbf{b}_a \right) - \mathbf{g}^w \right) \mathrm{d}t, \\
\mathbf{q}_{b_{k+1}}^w &= \int_{t \in [t_k, t_{k+1}]} \mathbf{q}_{b_t}^w \otimes \begin{bmatrix} 0 \\ \tfrac{1}{2} \left( \tilde{\boldsymbol{\omega}}_{b_t} - \mathbf{b}_g \right) \end{bmatrix} \mathrm{d}t,
\end{aligned} \tag{37}$$
where $\otimes$ denotes the quaternion multiplication operation.
Equation (37) is the continuous-time state propagation model. To analyze the observability of the system, it was necessary to discretize this model. Therefore, we used the Euler method to compute the discrete-time counterpart of Equation (37):
$$\begin{aligned}
\mathbf{p}_{b_{k+1}}^w &= \mathbf{p}_{b_k}^w + \mathbf{v}_{b_k}^w \Delta t_k + \tfrac{1}{2} \left( \mathbf{R}_{b_k}^w \left( \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \right) - \mathbf{g}^w \right) \Delta t_k^2, \\
\mathbf{v}_{b_{k+1}}^w &= \mathbf{v}_{b_k}^w + \left( \mathbf{R}_{b_k}^w \left( \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \right) - \mathbf{g}^w \right) \Delta t_k, \\
\mathbf{q}_{b_{k+1}}^w &= \mathbf{q}_{b_k}^w \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} \left( \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \right) \Delta t_k \end{bmatrix},
\end{aligned} \tag{38}$$
According to Equation (38), the error propagation equation of position, velocity, and orientation can be obtained by:
$$\begin{bmatrix} \delta \mathbf{p}_{b_{k+1}}^w \\ \delta \mathbf{v}_{b_{k+1}}^w \\ \delta \boldsymbol{\theta}_{b_{k+1}}^w \end{bmatrix} = \begin{bmatrix} \mathbf{I} & \Delta t_k \mathbf{I} & -\tfrac{1}{2} \mathbf{R}_{b_k}^w \lfloor \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \rfloor_\times \Delta t_k^2 \\ \mathbf{0} & \mathbf{I} & -\mathbf{R}_{b_k}^w \lfloor \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \rfloor_\times \Delta t_k \\ \mathbf{0} & \mathbf{0} & \mathbf{I} - \lfloor \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \rfloor_\times \Delta t_k \end{bmatrix} \begin{bmatrix} \delta \mathbf{p}_{b_k}^w \\ \delta \mathbf{v}_{b_k}^w \\ \delta \boldsymbol{\theta}_{b_k}^w \end{bmatrix} + \mathbf{w}_k, \tag{39}$$
The details of Equation (39) are provided in Appendix B.
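As a sanity check on the Euler discretization of Equation (38), the sketch below (our own illustration, not the paper's code; it keeps orientation as a rotation matrix and uses placeholder bias and gravity values) propagates one IMU step:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def propagate(p, v, R, a_m, w_m, b_a, b_g, g_w, dt):
    """One Euler step of the discrete model: acceleration is rotated into
    the world frame, gravity removed, and orientation updated with the
    first-order exponential map (matching the quaternion update)."""
    acc = R @ (a_m - b_a) - g_w            # world-frame acceleration
    p_next = p + v * dt + 0.5 * acc * dt**2
    v_next = v + acc * dt
    R_next = R @ (np.eye(3) + skew((w_m - b_g) * dt))
    return p_next, v_next, R_next

# Example: stationary IMU measuring only gravity -> state stays put.
g_w = np.array([0.0, 0.0, 9.81])
p, v, R = np.zeros(3), np.zeros(3), np.eye(3)
p1, v1, R1 = propagate(p, v, R, a_m=g_w, w_m=np.zeros(3),
                       b_a=np.zeros(3), b_g=np.zeros(3), g_w=g_w, dt=0.01)
print(p1, v1)  # both remain zero
```

The stationary case is a useful unit test: any sign error in the gravity or bias terms immediately shows up as spurious motion.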
Similarly, when the states are represented in the ECEF frame, Equation (39) still holds, namely:
$$\begin{bmatrix} \delta \mathbf{p}_{b_{k+1}}^E \\ \delta \mathbf{v}_{b_{k+1}}^E \\ \delta \boldsymbol{\theta}_{b_{k+1}}^E \end{bmatrix} = \begin{bmatrix} \mathbf{I} & \Delta t_k \mathbf{I} & -\tfrac{1}{2} \mathbf{R}_{b_k}^E \lfloor \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \rfloor_\times \Delta t_k^2 \\ \mathbf{0} & \mathbf{I} & -\mathbf{R}_{b_k}^E \lfloor \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \rfloor_\times \Delta t_k \\ \mathbf{0} & \mathbf{0} & \mathbf{I} - \lfloor \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \rfloor_\times \Delta t_k \end{bmatrix} \begin{bmatrix} \delta \mathbf{p}_{b_k}^E \\ \delta \mathbf{v}_{b_k}^E \\ \delta \boldsymbol{\theta}_{b_k}^E \end{bmatrix} + \mathbf{w}_k, \tag{40}$$
Therefore, the discrete-time error state model of Equation (35) can be obtained:
$$\begin{bmatrix} \delta \mathbf{p}_{b_{k+1}}^E \\ \delta \mathbf{v}_{b_{k+1}}^E \\ \delta \boldsymbol{\theta}_{b_{k+1}}^E \\ \delta \mu_{k+1} \\ \delta \mathbf{o}_{k+1} \\ \delta \varphi_{k+1} \\ \delta \mathbf{p}_{N(k+1)}^E \end{bmatrix} = \begin{bmatrix} \mathbf{I}_3 & \Delta t_k \mathbf{I}_3 & \boldsymbol{\Phi}_k^1 & \mathbf{0}_{3 \times 9} \\ \mathbf{0}_3 & \mathbf{I}_3 & \boldsymbol{\Phi}_k^2 & \mathbf{0}_{3 \times 9} \\ \mathbf{0}_3 & \mathbf{0}_3 & \boldsymbol{\Phi}_k^3 & \mathbf{0}_{3 \times 9} \\ \mathbf{0}_{9 \times 3} & \mathbf{0}_{9 \times 3} & \mathbf{0}_{9 \times 3} & \mathbf{I}_9 \end{bmatrix} \begin{bmatrix} \delta \mathbf{p}_{b_k}^E \\ \delta \mathbf{v}_{b_k}^E \\ \delta \boldsymbol{\theta}_{b_k}^E \\ \delta \mu_k \\ \delta \mathbf{o}_k \\ \delta \varphi_k \\ \delta \mathbf{p}_{N(k)}^E \end{bmatrix} + \mathbf{w}_k, \qquad \delta \mathbf{x}_{k+1}^E = \boldsymbol{\Phi}_k \, \delta \mathbf{x}_k^E + \mathbf{w}_k, \tag{41}$$
where
$$\boldsymbol{\Phi}_k^1 = -\tfrac{1}{2} \mathbf{R}_{b_k}^E \lfloor \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \rfloor_\times \Delta t_k^2, \qquad \boldsymbol{\Phi}_k^2 = -\mathbf{R}_{b_k}^E \lfloor \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \rfloor_\times \Delta t_k, \qquad \boldsymbol{\Phi}_k^3 = \mathbf{I} - \lfloor \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \rfloor_\times \Delta t_k, \tag{42}$$
Since Φ k is an upper triangular matrix, the error-state transition matrix Φ k + t , k from time step k to k + t is also an upper triangular matrix, namely:
$$\boldsymbol{\Phi}_{k+t,k} = \boldsymbol{\Phi}_{k+t-1} \cdots \boldsymbol{\Phi}_{k+1} \boldsymbol{\Phi}_k = \begin{bmatrix} \mathbf{I}_3 & \left( t_{k+t-1} - t_{k-1} \right) \mathbf{I}_3 & \boldsymbol{\Phi}_{13}^{k+t} & \mathbf{0}_{3 \times 9} \\ \mathbf{0}_3 & \mathbf{I}_3 & \boldsymbol{\Phi}_{23}^{k+t} & \mathbf{0}_{3 \times 9} \\ \mathbf{0}_3 & \mathbf{0}_3 & \boldsymbol{\Phi}_{33}^{k+t} & \mathbf{0}_{3 \times 9} \\ \mathbf{0}_{9 \times 3} & \mathbf{0}_{9 \times 3} & \mathbf{0}_{9 \times 3} & \mathbf{I}_9 \end{bmatrix}, \tag{43}$$
where Φ 13 k + t , Φ 23 k + t , and Φ 33 k + t are nonzero entries.
For the measurement Jacobian matrix, our system consists of visual observations and GNSS measurements. GNSS measurements include pseudorange, carrier phase, and Doppler frequency shift, and any of them yields the same results for observability analysis. Therefore, in the following, we used the smoothed pseudorange measurement for observability analysis.
The Jacobian matrix of the smoothed pseudorange measurement with respect to the states in Equation (35) can be obtained by:
$$\begin{aligned}
\mathbf{H}_{k+t}^g &= \begin{bmatrix} -\mathbf{p}_{k+t} & \mathbf{0}_{1 \times 11} & -\mathbf{p}_{k+t} \mathbf{R}_N^E \dot{\mathbf{R}}_w^N \mathbf{p}_{r_{k+t}}^w + \left( 1 - \tfrac{1}{k_m} \right) \mathbf{p}_{k+t-1} \mathbf{R}_N^E \dot{\mathbf{R}}_w^N \mathbf{p}_{r_{k+t-1}}^w & -\mathbf{p}_{k+t} + \left( 1 - \tfrac{1}{k_m} \right) \mathbf{p}_{k+t-1} \end{bmatrix} \\
&= \begin{bmatrix} \mathbf{h}_{k+t}^1 & \mathbf{0}_{1 \times 11} & h_{k+t}^2 & \mathbf{h}_{k+t}^3 \end{bmatrix}, \\
\mathbf{p}_{k+t} &= \frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k+t}}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k+t}}^E \right\|}, \qquad \mathbf{p}_{k+t-1} = \frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k+t-1}}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k+t-1}}^E \right\|},
\end{aligned} \tag{44}$$
The derivation details are similar to those of the smoothed pseudorange Jacobian computed in Appendix A. Since the smoothed pseudorange measurement does not involve the velocity, the orientation, the inverse depth of the point feature, or the orthonormal representation of the line feature, the corresponding entries of the Jacobian matrix are equal to zero.
In addition, we can obtain the Jacobian matrix of the visual observations by:
$$\mathbf{H}_{k+t}^v = \begin{bmatrix} \mathbf{0}_{2 \times 9} & \dfrac{\partial \mathbf{r}_C}{\partial \delta \mu_{k+t}} & \mathbf{0}_{2 \times 4} & \mathbf{0}_{2 \times 4} \\ \mathbf{0}_{2 \times 9} & \mathbf{0}_{2 \times 1} & \dfrac{\partial \mathbf{r}_L}{\partial \delta \mathbf{o}_{k+t}} & \mathbf{0}_{2 \times 4} \end{bmatrix}, \tag{45}$$
where r C δ μ k + t and r L δ o k + t can be obtained from [10].
Therefore, we can obtain the entry of the observability matrix:
$$\begin{bmatrix} \mathbf{H}_{k+t}^v \\ \mathbf{H}_{k+t}^g \end{bmatrix} \boldsymbol{\Phi}_{k+t,k} = \begin{bmatrix} \mathbf{0}_{4 \times 3} & \mathbf{0}_{4 \times 3} & \mathbf{0}_{4 \times 3} & \boldsymbol{\Pi} & \mathbf{0}_{4 \times 1} & \mathbf{0}_{4 \times 3} \\ \mathbf{h}_{k+t}^1 & \mathbf{h}_{k+t}^1 \left( t_{k+t-1} - t_{k-1} \right) & \mathbf{h}_{k+t}^1 \boldsymbol{\Phi}_{13}^{k+t} & \mathbf{0}_{1 \times 5} & h_{k+t}^2 & \mathbf{h}_{k+t}^3 \end{bmatrix}, \tag{46}$$
where
$$\boldsymbol{\Pi} = \begin{bmatrix} \dfrac{\partial \mathbf{r}_C}{\partial \delta \mu_{k+t}} & \mathbf{0}_{2 \times 4} \\ \mathbf{0}_{2 \times 1} & \dfrac{\partial \mathbf{r}_L}{\partial \delta \mathbf{o}_{k+t}} \end{bmatrix}, \tag{47}$$
According to Equation (46), we can clearly observe that the dimension of the null space of M is equal to zero, which means the states represented in the ECEF frame are fully observable and the tightly coupled GNSS-VIO state estimator is consistent. □
The fact that the tightly coupled GNSS-VIO system is fully observable means that even if the system runs in long trajectories, the accumulated error can be eliminated. By leveraging the global measurements from GNSS, our system can achieve high-precision and drift-free positioning compared with the VIO system, which has four unobservable directions.

3. System Overview

The architecture of our proposed system is shown in Figure 4.
The proposed system implements four threads including data input, preprocessing, initialization, and nonlinear optimization. As shown in Figure 4, the white block diagrams represent the work that has been implemented by VINS-Mono [16] and GVINS [21], and the green block diagrams represent improvements we made. The inputs of our system are image, IMU, and GNSS measurements. In the preprocessing step, point and line feature detection and tracking were performed, IMU measurements were preintegrated, and the pseudorange measurements were smoothed by the carrier phase. In the initialization step, we followed the routine of VINS-Mono [16] for VI-Alignment. After VI-Alignment was completed, we performed GNSS initialization, which was divided into four stages: ENU Origin Estimation (Coarse), Yaw Estimation, ENU Origin Estimation (Fine), and GNSS-IMU Calibration. The first three stages were implemented in GVINS [21]. Finally, nonlinear optimization was performed. Nonlinear optimization will optimize the states in Equation (17) by leveraging the residuals and Jacobian matrices of different factors, which were computed in Section 2.5.

4. Experimental Results

Our experiments were conducted on the public GVINS-Dataset [29], which captures scenes from the Hong Kong University of Science and Technology. The measurements were collected by a helmet equipped with a VI-Sensor and a u-blox ZED-F9P GNSS receiver. The sports field sequence captures an athletic track that the device followed for five laps; it is an open outdoor environment where the satellites are well locked and the RTK solution remains fixed. The complex environment sequence is a challenging indoor–outdoor route: point and line features can hardly be detected in overly bright or dim scenes, and the GNSS signal is highly corrupted or unavailable in cluttered or indoor sections (about 25 m). The overall distance of the complex environment sequence is over 3 km. We compared our proposed system with open-source SLAM systems, including GVINS and VINS-Mono, where VINS-Mono was evaluated both with and without loop closure. For comparison, we transformed the trajectories of VINS-Mono from the local world frame to the ECEF frame. Furthermore, since RTK provides centimeter-level positioning precision, we used the RTK solution as the ground truth for our trajectories.

4.1. Sports Field

We plotted the trajectories of the sports field sequence on Google Earth, as shown in Figure 5. VINS-Mono without loop closure suffers from accumulated drift in all three directions, which leads to the worst positioning precision among the compared methods; the drift grows with each lap around the sports field. VINS-Mono with loop closure significantly improves the positioning precision, but an obvious drift remains in the yaw direction, mainly because VINS-Mono has four unobservable directions. In contrast, since GVINS is fully observable in the ECEF frame, it has a smaller positioning error and is drift-free; as depicted in Figure 5, its trajectory is very close to RTK. Our method is also a tightly coupled GNSS-VIO system, fully observable in the ECEF frame and the ENU frame, so its positioning result is likewise very accurate.
To quantitatively analyze the results, we compared the positioning errors of GVINS and our method in detail, as visualized in Figure 6. The errors of both GVINS and our system fluctuate within a small range in the ENU and ECEF frames. The positioning errors of our method are less than 1 m in all three directions of the ENU frame, and compared with GVINS, the error of our system is smaller in the east and north directions. Moreover, the error of our method is smooth and stable in the ECEF frame, because we smooth the pseudorange measurements with the carrier phase. We also list the RMSE and Spherical Error Probable (SEP) of the positioning errors of GVINS and our method in the ENU and ECEF frames. From Table 1, the RMSE of our positioning errors is lower than that of GVINS except in the up direction. The SEP is the radius of a sphere containing 50% of the estimated positions, so it also evaluates positioning precision: the smaller the SEP, the higher the precision. The SEP of our method in the ECEF frame is lower than that of GVINS, which shows that our method is more accurate in the ECEF frame.
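Both metrics can be computed directly from per-epoch error vectors. The sketch below (our own illustration with synthetic numbers, not the paper's evaluation code) computes the per-axis RMSE and the 50th-percentile 3-D error radius used as SEP:

```python
import numpy as np

def rmse_per_axis(err):
    """Per-axis RMSE of an (N, 3) array of position errors [m]."""
    return np.sqrt(np.mean(err**2, axis=0))

def sep(err):
    """Spherical Error Probable: radius of the sphere containing 50%
    of the 3-D position errors, i.e. the median error norm [m]."""
    return np.percentile(np.linalg.norm(err, axis=1), 50)

# Tiny deterministic example: two error samples of 3 m and 4 m.
err = np.array([[3.0, 0.0, 0.0],
                [0.0, 4.0, 0.0]])
print(rmse_per_axis(err))          # per-axis RMSE
print(sep(err))                    # median of the norms [3, 4] -> 3.5
```

Because SEP is a quantile of the full 3-D error norm, it can rank two methods differently than per-axis RMSE does, which is why both are reported in Tables 1 and 2.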

4.2. Complex Environment

We then conducted experiments on the complex environment sequence, comparing VINS-Mono, GVINS, RTK, and our proposed method. As shown in Figure 7, since VINS-Mono has four unobservable directions, its trajectory drifts substantially; one section deviates from RTK by about 34 m, which is unacceptable. In contrast, since GVINS and our proposed system are tightly coupled GNSS-VIO systems, there is theoretically no accumulated drift. Moreover, because one section of the trajectory was collected indoors, the GNSS receiver could not obtain measurements there, which causes the RTK solution to fail. Even when GNSS measurements are unavailable, however, our system continues to work as VIO, and our trajectory remains accurate.
To further compare the performance of GVINS and our method, we analyzed the positioning errors of the two methods on the complex environment sequence, as shown in Figure 8. Note that we used the RTK solution as the ground truth; when the receiver is occluded, the satellite signal cannot be received and the RTK solution is invalid, so we only compared segments where the RTK solution is available. Due to the complexity of the sequence, the overall positioning precision is inferior to that on the sports field. The RMSE and SEP of GVINS and our method are shown in Table 2. The positioning errors of GVINS in the east and north directions are slightly lower than those of our method, while the positioning error of our method in the up direction is much lower than that of GVINS. The SEP metric also shows that our method outperforms GVINS on this sequence.
Furthermore, since the ENU frame is gravity-aligned, we analyzed the 2D trajectory error in the East–North plane, as shown in Figure 9. The trajectory projected onto the East–North plane is shown in red, and the z-axis represents the positioning error at the corresponding point, displayed as a scatter plot. In some bright or dim scenes, only a few point and line features can be detected, which affects the positioning precision of the system. In addition, buildings, indoor environments, and trees block satellite signals, which also degrades performance. The positioning error in the complex environment is therefore larger than that in the sports field, and the maximum error exceeds 2 m. As depicted in Figure 9, the positioning error increases suddenly at the curves of the trajectory due to the large change in orientation, while the error along segments with small curvature is smoother, which shows that our method remains effective even in complex scenes.

4.3. Observability Analysis

In Section 2.7, we demonstrated that the states represented in the ECEF frame are fully observable. However, according to [20], if the states are represented in the local world frame, the GNSS-VIO system still has four unobservable directions. Therefore, we can verify the observability analysis with experimental results. Figure 10 shows the 3D trajectories of the complex environment sequence represented in the ECEF frame and in the local world frame; we zoomed in on the trajectories of turning and of climbing stairs for convenience of analysis. From Figure 10b, since the states are not fully observable in the local world frame, the two trajectories of climbing the same stairs drift apart; even with GNSS measurements included, the two trajectories remain inconsistent. In contrast, because the states represented in the ECEF frame are fully observable, the two trajectories of climbing the same stairs are consistent and very close. The same conclusion holds for the turning trajectories in Figure 10: there is an obvious drift between the two turning trajectories in the local world frame, while this does not occur in the ECEF frame. In addition, the consistency of the system shows that it can still achieve high-precision positioning even in large-scale scenarios, as illustrated in Figure 10a.

4.4. Smoothed Interval

Equation (15) requires the receiver to continuously lock the carrier phase. If the receiver loses lock, the smoothed pseudorange is corrupted by any cycle slip that occurs and must be reinitialized. It is therefore necessary to investigate the influence of different smoothed intervals on the positioning precision of our system. The positioning error with six different settings is illustrated in Figure 11. When the smoothed interval is set to 200 epochs, the system has the largest positioning error in the three directions of the ENU frame: the receiver loses lock frequently within such a long interval, cycle slips occur, and the positioning error increases. Reducing the smoothed interval mitigates the effect of losing lock, which improves the precision of the carrier phase smoothed pseudorange; with smoothed intervals of 5, 10, 20, and 50 epochs, our system has smaller errors in the east and north directions than with intervals of 100 or 200 epochs. However, if the smoothed interval is too small, the positioning precision in the up direction cannot be further improved and may even become worse. As shown in Figure 11, with a smoothed interval of 5 epochs, the positioning error in the up direction is the largest among the settings, because over such a short interval the carrier phase cannot smooth the pseudorange effectively, and the smoothed pseudorange degenerates toward the raw pseudorange measurement.
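The smoothing recursion and its reset-on-loss-of-lock behavior can be sketched as a classic Hatch-style filter. The following is a hypothetical standalone implementation (the class, its interface, and the cycle-slip flag are our own illustration, not the paper's code), with the window length `m` corresponding to the smoothed interval studied above:

```python
class HatchFilter:
    """Carrier-phase smoothed pseudorange with window length m (epochs).
    On a cycle slip / loss of lock, the filter must be reinitialized."""

    def __init__(self, m):
        self.m = m
        self.k = 0            # epochs since (re)initialization, capped at m
        self.rho_s = None     # smoothed pseudorange [m]
        self.phi_prev = None  # previous carrier-phase range [m]

    def update(self, rho, phi, cycle_slip=False):
        if cycle_slip or self.rho_s is None:
            # Reinitialize: fall back to the raw pseudorange.
            self.k, self.rho_s = 1, rho
        else:
            self.k = min(self.k + 1, self.m)
            w = 1.0 / self.k
            # Blend raw pseudorange with the phase-predicted range.
            self.rho_s = w * rho + (1 - w) * (self.rho_s + phi - self.phi_prev)
        self.phi_prev = phi
        return self.rho_s

# With noise-free, consistent inputs the smoothed value tracks the range.
f = HatchFilter(m=20)
out = [f.update(rho=1000.0 + i, phi=500.0 + i) for i in range(5)]
print([round(x, 6) for x in out])  # [1000.0, 1001.0, 1002.0, 1003.0, 1004.0]
```

A large `m` averages away more pseudorange noise but is more exposed to cycle slips within the window; a very small `m` keeps `w` close to 1, so the output degenerates toward the raw pseudorange, matching the behavior observed in Figure 11.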

4.5. Line Feature Tracking Threshold

In the nonlinear optimization stage, line features that are continuously observed in fewer keyframes than a threshold are filtered out. This threshold has a great impact on robustness and positioning precision. As shown in Figure 12, a threshold of 2 means that line segments observed by two or more consecutive keyframes are added to the factor graph; however, such briefly tracked segments are not stable enough for nonlinear optimization, which reduces the positioning precision. In contrast, with a threshold of 3 or 4, the proposed system obtains more accurate positioning results in the ENU frame, because line segments observed by three or four consecutive keyframes are more stable and better suited for nonlinear optimization.
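The keyframe-count filter itself is straightforward; a minimal sketch (illustrative data structure and function name of our own, not the paper's implementation) with a threshold of 3:

```python
def filter_line_tracks(tracks, threshold=3):
    """Keep only line features observed in at least `threshold`
    consecutive keyframes; short, unstable tracks are dropped
    before being added to the factor graph."""
    return {lid: kfs for lid, kfs in tracks.items() if len(kfs) >= threshold}

# line id -> list of keyframe indices in which the line was observed
tracks = {0: [4, 5], 1: [2, 3, 4], 2: [0, 1, 2, 3, 4]}
print(sorted(filter_line_tracks(tracks, threshold=3)))  # [1, 2]
```

Raising the threshold trades the number of line constraints in the graph for their stability, which is exactly the trade-off visible in Figure 12.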

5. Conclusions

In this paper, we proposed a tightly coupled GNSS-VIO system based on point-line features. First, since line features contain more environmental structure information, we added them to the system. Second, the pseudorange measurement achieves only meter-level positioning precision but is fast and unambiguous, whereas the carrier phase measurement achieves centimeter-level precision but requires solving the time-consuming whole-cycle ambiguity. We therefore combined the advantages of the two measurements and replaced the pseudorange with the carrier phase smoothed pseudorange, which greatly improves the performance of our system. Third, we considered the extrinsic translation parameter between the GNSS receiver and the IMU, and our system calibrates this parameter in real-time. Finally, we demonstrated that if the states are represented in the ECEF frame, the tightly coupled GNSS-VIO system is fully observable. Experiments on public datasets show that our system achieves high-precision, robust, and real-time localization. In the future, we will further focus on improved methods for tightly coupled GNSS-VIO systems.

Author Contributions

Conceptualization, B.D. and K.Z.; methodology, B.D. and K.Z.; validation, B.D. and K.Z.; investigation, B.D.; writing—original draft preparation, B.D.; writing—review and editing, B.D. and K.Z.; All authors have read and agreed to the published version of the manuscript.

Funding

This research work was supported by the Key-Area Research and Development Program of Guangdong Province (No. 2020B0909050003).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

To derive the Jacobian matrix of the smoothed pseudorange residual with respect to the states, Equation (15) is expanded into:
$$\begin{aligned}
\bar{\rho}_{r_k}^{s_j} = {} & \tfrac{1}{k_m} \left( \left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_k}^E \right\| + c \left( \delta t_{r_k} - \delta t^{s_j} \right) + T_{r_k}^{s_j} + I_{r_k}^{s_j} \right) \\
& + \left( 1 - \tfrac{1}{k_m} \right) \Big[ \bar{\rho}_{r_{k-1}}^{s_j} + \left( \left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_k}^E \right\| + c \left( \delta t_{r_k} - \delta t^{s_j} \right) + T_{r_k}^{s_j} - I_{r_k}^{s_j} + \lambda N_k \right) \\
& \qquad\qquad - \left( \left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right\| + c \left( \delta t_{r_{k-1}} - \delta t^{s_j} \right) + T_{r_{k-1}}^{s_j} - I_{r_{k-1}}^{s_j} + \lambda N_{k-1} \right) \Big] + \varepsilon_{\rho_k}^{j}, \tag{A1}
\end{aligned}$$
The derivatives can be obtained by the chain rule. For the translation $\mathbf{p}_{b_{k-1}}^w$ and the rotation matrix $\mathbf{R}_{b_{k-1}}^w$, we can calculate them by:

$$\begin{aligned}
\frac{\partial \mathbf{r}_\rho}{\partial \mathbf{p}_{b_{k-1}}^w} &= \frac{\partial \mathbf{r}_\rho}{\partial \mathbf{p}_{r_{k-1}}^E} \frac{\partial \mathbf{p}_{r_{k-1}}^E}{\partial \mathbf{p}_{r_{k-1}}^w} \frac{\partial \mathbf{p}_{r_{k-1}}^w}{\partial \mathbf{p}_{b_{k-1}}^w} = \left( 1 - \tfrac{1}{k_m} \right) \frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right\|} \mathbf{R}_N^E \mathbf{R}_w^N \cdot \mathbf{I}, \\
\frac{\partial \mathbf{r}_\rho}{\partial \mathbf{R}_{b_{k-1}}^w} &= \frac{\partial \mathbf{r}_\rho}{\partial \mathbf{p}_{r_{k-1}}^E} \frac{\partial \mathbf{p}_{r_{k-1}}^E}{\partial \mathbf{p}_{r_{k-1}}^w} \frac{\partial \mathbf{p}_{r_{k-1}}^w}{\partial \mathbf{R}_{b_{k-1}}^w} = -\left( 1 - \tfrac{1}{k_m} \right) \frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right\|} \mathbf{R}_N^E \mathbf{R}_w^N \mathbf{R}_{b_{k-1}}^w \lfloor \mathbf{p}_r^b \rfloor_\times, \tag{A2}
\end{aligned}$$
The derivatives with respect to the states at time k are similar to those at time k − 1 and are not repeated here. In the following, we introduce the Jacobian of the smoothed pseudorange with respect to the yaw offset $\varphi$:
$$\begin{aligned}
\frac{\partial \mathbf{r}_\rho}{\partial \varphi} &= \tfrac{1}{k_m} \frac{\partial \tilde{P}_{r_k}^{s_j}}{\partial \mathbf{p}_{r_k}^E} \frac{\partial \mathbf{p}_{r_k}^E}{\partial \varphi} + \left( 1 - \tfrac{1}{k_m} \right) \left( \frac{\partial \tilde{\Phi}_{r_k}^{s_j}}{\partial \mathbf{p}_{r_k}^E} \frac{\partial \mathbf{p}_{r_k}^E}{\partial \varphi} - \frac{\partial \tilde{\Phi}_{r_{k-1}}^{s_j}}{\partial \mathbf{p}_{r_{k-1}}^E} \frac{\partial \mathbf{p}_{r_{k-1}}^E}{\partial \varphi} \right) \\
&= \frac{\partial \tilde{P}_{r_k}^{s_j}}{\partial \mathbf{p}_{r_k}^E} \frac{\partial \mathbf{p}_{r_k}^E}{\partial \varphi} - \left( 1 - \tfrac{1}{k_m} \right) \frac{\partial \tilde{\Phi}_{r_{k-1}}^{s_j}}{\partial \mathbf{p}_{r_{k-1}}^E} \frac{\partial \mathbf{p}_{r_{k-1}}^E}{\partial \varphi} \\
&= -\frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_k}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_k}^E \right\|} \mathbf{R}_N^E \dot{\mathbf{R}}_w^N \mathbf{p}_{r_k}^w + \left( 1 - \tfrac{1}{k_m} \right) \frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right\|} \mathbf{R}_N^E \dot{\mathbf{R}}_w^N \mathbf{p}_{r_{k-1}}^w, \tag{A3}
\end{aligned}$$
where
$$\dot{\mathbf{R}}_w^N = \frac{\mathrm{d} \mathbf{R}_w^N}{\mathrm{d} \varphi} = \begin{bmatrix} -\sin\varphi & \cos\varphi & 0 \\ -\cos\varphi & -\sin\varphi & 0 \\ 0 & 0 & 0 \end{bmatrix}, \tag{A4}$$
The Jacobian matrix of the smoothed pseudorange with respect to the translation p N E can be calculated by:
$$\begin{aligned}
\frac{\partial \mathbf{r}_\rho}{\partial \mathbf{p}_N^E} &= \tfrac{1}{k_m} \frac{\partial \tilde{P}_{r_k}^{s_j}}{\partial \mathbf{p}_{r_k}^E} \frac{\partial \mathbf{p}_{r_k}^E}{\partial \mathbf{p}_N^E} + \left( 1 - \tfrac{1}{k_m} \right) \left( \frac{\partial \tilde{\Phi}_{r_k}^{s_j}}{\partial \mathbf{p}_{r_k}^E} \frac{\partial \mathbf{p}_{r_k}^E}{\partial \mathbf{p}_N^E} - \frac{\partial \tilde{\Phi}_{r_{k-1}}^{s_j}}{\partial \mathbf{p}_{r_{k-1}}^E} \frac{\partial \mathbf{p}_{r_{k-1}}^E}{\partial \mathbf{p}_N^E} \right) \\
&= \frac{\partial \tilde{P}_{r_k}^{s_j}}{\partial \mathbf{p}_{r_k}^E} \frac{\partial \mathbf{p}_{r_k}^E}{\partial \mathbf{p}_N^E} - \left( 1 - \tfrac{1}{k_m} \right) \frac{\partial \tilde{\Phi}_{r_{k-1}}^{s_j}}{\partial \mathbf{p}_{r_{k-1}}^E} \frac{\partial \mathbf{p}_{r_{k-1}}^E}{\partial \mathbf{p}_N^E} \\
&= -\frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_k}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_k}^E \right\|} \cdot \mathbf{I} + \left( 1 - \tfrac{1}{k_m} \right) \frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right\|} \cdot \mathbf{I}, \tag{A5}
\end{aligned}$$
Similarly, the Jacobian matrix of the smoothed pseudorange with respect to the extrinsic translation parameter p r b can be calculated by:
$$\begin{aligned}
\frac{\partial \mathbf{r}_\rho}{\partial \mathbf{p}_r^b} &= \tfrac{1}{k_m} \frac{\partial \tilde{P}_{r_k}^{s_j}}{\partial \mathbf{p}_{r_k}^E} \frac{\partial \mathbf{p}_{r_k}^E}{\partial \mathbf{p}_{r_k}^w} \frac{\partial \mathbf{p}_{r_k}^w}{\partial \mathbf{p}_r^b} + \left( 1 - \tfrac{1}{k_m} \right) \left( \frac{\partial \tilde{\Phi}_{r_k}^{s_j}}{\partial \mathbf{p}_{r_k}^E} \frac{\partial \mathbf{p}_{r_k}^E}{\partial \mathbf{p}_{r_k}^w} \frac{\partial \mathbf{p}_{r_k}^w}{\partial \mathbf{p}_r^b} - \frac{\partial \tilde{\Phi}_{r_{k-1}}^{s_j}}{\partial \mathbf{p}_{r_{k-1}}^E} \frac{\partial \mathbf{p}_{r_{k-1}}^E}{\partial \mathbf{p}_{r_{k-1}}^w} \frac{\partial \mathbf{p}_{r_{k-1}}^w}{\partial \mathbf{p}_r^b} \right) \\
&= \frac{\partial \tilde{P}_{r_k}^{s_j}}{\partial \mathbf{p}_{r_k}^E} \frac{\partial \mathbf{p}_{r_k}^E}{\partial \mathbf{p}_{r_k}^w} \frac{\partial \mathbf{p}_{r_k}^w}{\partial \mathbf{p}_r^b} - \left( 1 - \tfrac{1}{k_m} \right) \frac{\partial \tilde{\Phi}_{r_{k-1}}^{s_j}}{\partial \mathbf{p}_{r_{k-1}}^E} \frac{\partial \mathbf{p}_{r_{k-1}}^E}{\partial \mathbf{p}_{r_{k-1}}^w} \frac{\partial \mathbf{p}_{r_{k-1}}^w}{\partial \mathbf{p}_r^b} \\
&= -\frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_k}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_k}^E \right\|} \mathbf{R}_N^E \mathbf{R}_w^N \mathbf{R}_{b_k}^w + \left( 1 - \tfrac{1}{k_m} \right) \frac{\left( \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right)^\mathrm{T}}{\left\| \mathbf{p}_{s_j}^E - \mathbf{p}_{r_{k-1}}^E \right\|} \mathbf{R}_N^E \mathbf{R}_w^N \mathbf{R}_{b_{k-1}}^w, \tag{A6}
\end{aligned}$$

Appendix B

We rewrite the discrete-time propagation equation of position in Equation (38):
$$\mathbf{p}_{b_{k+1}}^w = \mathbf{p}_{b_k}^w + \mathbf{v}_{b_k}^w \Delta t_k + \tfrac{1}{2} \left( \mathbf{R}_{b_k}^w \left( \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \right) - \mathbf{g}^w \right) \Delta t_k^2, \tag{A7}$$
To derive the derivative of the position at time k + 1 with respect to the orientation at time k, we add a small perturbation to the rotation matrix at time k:
$$\begin{aligned}
\frac{\partial \delta \mathbf{p}_{b_{k+1}}^w}{\partial \delta \boldsymbol{\theta}_{b_k}^w} &= \tfrac{1}{2} \Delta t_k^2 \lim_{\delta \boldsymbol{\theta}_{b_k}^w \to 0} \frac{\mathbf{R}_{b_k}^w \exp\left( \lfloor \delta \boldsymbol{\theta}_{b_k}^w \rfloor_\times \right) \left( \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \right) - \mathbf{R}_{b_k}^w \left( \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \right)}{\delta \boldsymbol{\theta}_{b_k}^w} \\
&\approx \tfrac{1}{2} \Delta t_k^2 \lim_{\delta \boldsymbol{\theta}_{b_k}^w \to 0} \frac{\mathbf{R}_{b_k}^w \left( \mathbf{I} + \lfloor \delta \boldsymbol{\theta}_{b_k}^w \rfloor_\times \right) \left( \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \right) - \mathbf{R}_{b_k}^w \left( \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \right)}{\delta \boldsymbol{\theta}_{b_k}^w} \\
&= \tfrac{1}{2} \Delta t_k^2 \lim_{\delta \boldsymbol{\theta}_{b_k}^w \to 0} \frac{\mathbf{R}_{b_k}^w \lfloor \delta \boldsymbol{\theta}_{b_k}^w \rfloor_\times \left( \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \right)}{\delta \boldsymbol{\theta}_{b_k}^w} = -\tfrac{1}{2} \mathbf{R}_{b_k}^w \lfloor \tilde{\mathbf{a}}_{b_k} - \mathbf{b}_a \rfloor_\times \Delta t_k^2, \tag{A8}
\end{aligned}$$
The derivative of the orientation at time k + 1 with respect to the orientation at time k follows the same routine as Equation (A8). We add a small perturbation to both sides of the orientation propagation equation:
$$\mathbf{q}_{b_{k+1}}^w \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} \delta \boldsymbol{\theta}_{b_{k+1}}^w \end{bmatrix} = \mathbf{q}_{b_k}^w \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} \delta \boldsymbol{\theta}_{b_k}^w \end{bmatrix} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} \left( \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \right) \Delta t_k \end{bmatrix}, \tag{A9}$$
then
$$\begin{aligned}
\begin{bmatrix} 1 \\ \tfrac{1}{2} \delta \boldsymbol{\theta}_{b_{k+1}}^w \end{bmatrix} &= \left( \mathbf{q}_{b_{k+1}}^w \right)^{*} \otimes \mathbf{q}_{b_k}^w \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} \delta \boldsymbol{\theta}_{b_k}^w \end{bmatrix} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} \left( \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \right) \Delta t_k \end{bmatrix} \\
&= \begin{bmatrix} 1 \\ \tfrac{1}{2} \left( \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \right) \Delta t_k \end{bmatrix}^{-1} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} \delta \boldsymbol{\theta}_{b_k}^w \end{bmatrix} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} \left( \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \right) \Delta t_k \end{bmatrix} = \begin{bmatrix} 1 \\ \tfrac{1}{2} \mathbf{R} \, \delta \boldsymbol{\theta}_{b_k}^w \end{bmatrix}, \tag{A10}
\end{aligned}$$
where $\mathbf{R} = \exp\left( -\lfloor \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \rfloor_\times \Delta t_k \right)$. Considering only the imaginary part of Equation (A10), we have:
$$\delta \boldsymbol{\theta}_{b_{k+1}}^w = \mathbf{R} \, \delta \boldsymbol{\theta}_{b_k}^w = \exp\left( -\lfloor \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \rfloor_\times \Delta t_k \right) \delta \boldsymbol{\theta}_{b_k}^w \approx \left( \mathbf{I} - \lfloor \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \rfloor_\times \Delta t_k \right) \delta \boldsymbol{\theta}_{b_k}^w, \tag{A11}$$
Therefore, according to Equation (A11), we can obtain:
$$\frac{\partial \delta \boldsymbol{\theta}_{b_{k+1}}^w}{\partial \delta \boldsymbol{\theta}_{b_k}^w} = \mathbf{I} - \lfloor \tilde{\boldsymbol{\omega}}_{b_k} - \mathbf{b}_g \rfloor_\times \Delta t_k, \tag{A12}$$
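The first-order approximation in Equation (A11) can be checked numerically. The sketch below (an illustrative verification with arbitrary values, not part of the original derivation) compares the exact SO(3) exponential, via the Rodrigues formula, against the linearized Jacobian of Equation (A12):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(phi):
    """Rodrigues formula: matrix exponential of skew(phi)."""
    t = np.linalg.norm(phi)
    if t < 1e-12:
        return np.eye(3) + skew(phi)
    a = phi / t
    return (np.cos(t) * np.eye(3) + np.sin(t) * skew(a)
            + (1.0 - np.cos(t)) * np.outer(a, a))

# delta_theta_{k+1} = exp(-skew((w - b_g) * dt)) @ delta_theta_k
w, dt = np.array([0.1, -0.2, 0.3]), 0.01    # bias-corrected gyro rate, step
dth = np.array([1e-3, 2e-3, -1e-3])         # small orientation error
exact = exp_so3(-w * dt) @ dth
approx = (np.eye(3) - skew(w * dt)) @ dth   # Equation (A12) applied to dth
print(np.max(np.abs(exact - approx)) < 1e-6)  # True: they agree to 1st order
```

The residual is on the order of ‖ωΔt‖², which is why the linearization is accurate for the small rotation increments of a single IMU step.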

References

  1. Weiss, S.; Siegwart, R. Real-time metric state estimation for modular vision-inertial systems. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 4531–4537. [Google Scholar]
  2. Kneip, L.; Weiss, S.; Siegwart, R. Deterministic initialization of metric state estimation filters for loosely-coupled monocular vision-inertial systems. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 2235–2241. [Google Scholar]
  3. Bloesch, M.; Burri, M.; Omari, S.; Hutter, M.; Siegwart, R. Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback. Int. J. Robot. Res. 2017, 36, 1053–1072. [Google Scholar] [CrossRef] [Green Version]
  4. Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-based visual–inertial odometry using nonlinear optimization. Int. J. Robot. Res. 2015, 34, 314–334. [Google Scholar] [CrossRef] [Green Version]
  5. Davison, A.J.; Reid, I.D.; Molton, N.D. MonoSLAM: Real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Shi, J. Good features to track. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 21–23 June 1994; pp. 593–600. [Google Scholar]
  7. Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234. [Google Scholar]
  8. Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef] [Green Version]
  9. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef] [Green Version]
  10. He, Y.; Zhao, J.; Guo, Y.; He, W.; Yuan, K. PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features. Sensors 2018, 18, 1159. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Grompone von Gioi, R.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 32, 722–732. [Google Scholar]
  12. Kaehler, A.; Bradski, G. Learning OpenCV 3: Computer vision in C++ with the OpenCV Library; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2016. [Google Scholar]
  13. Yang, Y.; Geneva, P.; Eckenhoff, K.; Huang, G. Visual-inertial odometry with point and line features. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2447–2454. [Google Scholar]
  14. Fu, Q.; Wang, J.; Yu, H.; Ali, I.; Guo, F.; He, Y.; Zhang, H. PL-VINS: Real-time monocular visual-inertial SLAM with point and line features. arXiv 2020, arXiv:2009.07462. [Google Scholar]
  15. Lee, J.; Park, S.Y. PLF-VINS: Real-time monocular visual-inertial SLAM with point-line fusion and parallel-line fusion. IEEE Robot. Autom. Lett. 2021, 6, 7033–7040. [Google Scholar] [CrossRef]
  16. Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef] [Green Version]
  17. Mourikis, A.I.; Roumeliotis, S.I. A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation (ICRA), Roma, Italy, 10–14 April 2007; pp. 3565–3572. [Google Scholar]
  18. Li, M.; Mourikis, A.I. Improving the accuracy of EKF-based visual-inertial odometry. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), St Paul, MN, USA, 14–18 May 2012; pp. 828–835. [Google Scholar]
  19. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
  20. Lee, W.; Eckenhoff, K.; Geneva, P.; Huang, G. Intermittent GPS-aided VIO: Online initialization and calibration. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–4 June 2020; pp. 5724–5731. [Google Scholar]
  21. Cao, S.; Lu, X.; Shen, S. GVINS: Tightly Coupled GNSS–Visual–Inertial Fusion for Smooth and Consistent State Estimation. IEEE Trans. Robot. 2022, 1–18. [Google Scholar] [CrossRef]
  22. Li, X.; Wang, X.; Liao, J.; Li, X.; Li, S.; Lyu, H. Semi-tightly coupled integration of multi-GNSS PPP and S-VINS for precise positioning in GNSS-challenged environments. Satell. Navig. 2021, 2, 1. [Google Scholar] [CrossRef]
  23. Liu, J.; Gao, W.; Hu, Z. Optimization-based visual-inertial SLAM tightly coupled with raw GNSS measurements. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 11612–11618. [Google Scholar]
  24. Zhang, G.; Lee, J.H.; Lim, J.; Suh, I.H. Building a 3-D Line-Based Map Using Stereo SLAM. IEEE Trans. Robot. 2015, 31, 1364–1377. [Google Scholar] [CrossRef]
  25. Agarwal, S.; Mierle, K. Ceres Solver. Available online: http://ceres-solver.org (accessed on 25 April 2022).
  26. Jones, E.S.; Soatto, S. Visual-inertial navigation, mapping and localization: A scalable real-time causal approach. Int. J. Robot. Res. 2011, 30, 407–430. [Google Scholar] [CrossRef]
  27. Roumeliotis, S.I.; Kottas, D.G.; Guo, C.; Hesch, J. Observability-Constrained Vision-Aided Inertial Navigation. U.S. Patent 9,243,916, 26 January 2016. [Google Scholar]
  28. Li, M.; Mourikis, A.I. High-precision, consistent EKF-based visual-inertial odometry. Int. J. Robot. Res. 2013, 32, 690–711. [Google Scholar] [CrossRef]
  29. Cao, S.; Lu, X. GVINS-Dataset. Available online: https://github.com/HKUST-Aerial-Robotics/GVINS-Dataset (accessed on 25 April 2022).
Figure 1. An illustration of the ECEF, Geodetic, ENU, and local world frames.
Figure 2. Line feature triangulation.
Figure 3. Factor graph illustration of our system.
Figure 4. A block diagram illustrating the workflow of our proposed system.
Figure 5. The trajectories of VINS-Mono, GVINS, RTK, and our proposed method in the sports field.
Figure 6. The positioning errors of GVINS and our proposed system in the sports field. The left three rows are the positioning errors in the east, north, and up directions of the ENU frame. The right three rows are the positioning errors in the X, Y, and Z directions of the ECEF frame.
Figure 7. The trajectories of VINS-Mono, GVINS, RTK, and our proposed method in complex environments.
Figure 8. The positioning errors of GVINS and our proposed system in the complex environment.
Figure 9. The 2D trajectory error of the complex environment on the East–North plane.
Figure 10. Comparison of the 3D trajectories represented in different frames. (a) is the ECEF frame and (b) is the local world frame.
Figure 11. The positioning errors with different smoothed interval settings (m = 5, 10, 20, 50, 100, and 200).
Figure 12. The positioning errors of different line feature tracking thresholds.
Table 1. RMSE [m] and SEP [m] statistics of GVINS and our proposed method in the sports field.

| Method | East RMSE (ENU) | North RMSE (ENU) | Up RMSE (ENU) | SEP (ENU) | X RMSE (ECEF) | Y RMSE (ECEF) | Z RMSE (ECEF) | SEP (ECEF) |
|---|---|---|---|---|---|---|---|---|
| GVINS | 0.661 | 0.477 | 0.296 | 0.683 | 0.834 | 0.766 | 0.581 | 1.002 |
| Proposed | 0.524 | 0.360 | 0.725 | 0.761 | 0.522 | 0.440 | 0.466 | 0.651 |
Table 2. RMSE [m] and SEP [m] statistics of GVINS and our proposed method in the complex environment.

| Method | East RMSE (ENU) | North RMSE (ENU) | Up RMSE (ENU) | SEP (ENU) |
|---|---|---|---|---|
| GVINS | 1.091 | 0.782 | 3.016 | 2.540 |
| Proposed | 1.230 | 1.404 | 2.499 | 2.390 |