Article

Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

1 School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
2 Ministry of Education Key Laboratory of 3D Information Acquisition and Application, Capital Normal University, Beijing 100089, China
* Author to whom correspondence should be addressed.
Sensors 2015, 15(11), 28402-28420; https://doi.org/10.3390/s151128402
Submission received: 26 August 2015 / Revised: 21 October 2015 / Accepted: 3 November 2015 / Published: 11 November 2015
(This article belongs to the Section Remote Sensors)

Abstract

To address the high computational cost of the traditional Kalman filter in SINS/GPS integrated navigation, a practical optimization algorithm based on offline derivation and parallel processing, exploiting the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that plenty of invalid operations are avoided by offline derivation using a block-matrix technique. For further efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, as a purely numerical approach, the method needs no precision-losing transformation/approximation of the system model, and its accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theory is needed, the algorithm can easily be transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

1. Introduction

In many SINS/GPS integrated navigation systems, common system models cannot meet the use conditions of the classical Kalman filter because of "colored noise". To still be able to use the optimal estimation method, it is necessary to expand the system state model [1,2]. However, as the state dimension increases, the filtering computation rapidly becomes expensive and unstable, and the so-called "dimension disaster" may even break out. To address this problem, scholars have reported various optimization algorithms that lower the computational cost and/or enhance the numerical robustness. Typical optimization algorithms include square-root information filtering (SRIF) [3], U-D decomposition filtering [4], singular value decomposition (SVD) filtering [5], and their improved versions [6,7,8]. These modified filters, besides greatly reducing the time consumption, theoretically guarantee the positive definiteness of the covariance and effectively avoid numerical divergence, so they are widely applied in the theoretical design of higher-order systems. Nevertheless, they require the support of relatively complex matrix theory in their derivation and may pose certain difficulties in engineering. Moreover, although these algorithms decrease computational costs, their complexity is still $O(n^3)$; with further expansion of the state dimensions, the above decomposition algorithms, as general methods, are also challenged on their real-time capability.

Recently, researchers have paid much attention to non-general methods, which are designed for specific applications or based on specific models. For instance, aiming at integrated navigation applications, reduced-order Kalman filters (RDKF) [9,10,11,12] were proposed to ease the computational load. The idea of these filters is to reduce the model dimension by theoretical/engineering methods. As the filter dimension $n$ is vital to the computation time, the reduction of the state order directly benefits real-time performance. However, the decrease of the state order also brings a partial loss of accuracy; therefore, these methods may not be suitable for some high-accuracy applications. Another kind of effective algorithm optimizes the floating-point operations mainly by taking advantage of the sparse matrices (e.g., the transition matrix $\Phi_n$ in SINS/GPS) during the filtering computation [13,14,15]. In these algorithms, the so-called matrix accumulative method (MAM) or other online methods are used, and the algorithmic complexity can be reduced to $O(s^2 - u^2)$, where $s$ and $u$ represent the numbers of nonzero and unity elements in $\Phi_n$, respectively. As the matrices in a high-order model normally contain substantial numbers of zero elements, the computational time can be curtailed to an ideal level by this sparse-matrix-based approach. Meanwhile, unlike RDKF, the sparse-matrix-based methods do not change the system model and need little engineering approximation, so they perform better in terms of accuracy control. But these methods also have their own limitations:
(1) Though they avoid massive unnecessary multiply-by-zero operations, the methods still need an extra $O(n^3)$ tests for zero elements in each matrix multiplication;
(2) Methods based on MAM, a kind of online method, cannot seek any deeper optimization online, and their efficiency is totally dependent on the number of zeros in the real-time matrix. To enhance the optimization efficiency, the usual way is to set more elements to zero by approximation, but this comes at the cost of a certain accuracy loss.
In response to the above problems, we present here a new highly efficient, accuracy-lossless, and engineering-tractable optimization algorithm based on offline derivation and a parallel method exploiting the numerical characteristics of SINS/GPS. In comparison with other algorithms, the proposed method offers several advantages: (i) compared with the general algorithms, it requires little complex deduction and is easily understandable; (ii) compared with the RDKF optimization method, it needs little engineering approximation and is more accurate; and (iii) compared with filters based on MAM, it is free of zero-testing operations and can deduce some stronger conclusions, so it is more efficient. Emphasizing engineering simplicity, we introduce our optimization method around the classical Kalman filter and a loosely-coupled model of SINS/GPS. It is noted here that the chosen model does not imply any constraint on the application scope of our method: the offline and parallel method, as a purely numerical optimization, is equally applicable to the extended filter and other complex SINS/GPS models once similar analysis of the specific model is done. On the other hand, since our method transforms no models or equations (the two distinct differences with normal filtering computations are: (i) unnecessary floating-point operations are directly avoided; and (ii) decoupled operations are processed in parallel), its precision and robustness are in theory equal to those of the classical Kalman filter. Owing to this fact, our later discussion focuses on efficiency rather than robustness or accuracy.

Section 2 gives a brief account of closed-loop Kalman filtering and a widely-used loosely-coupled model in SINS/GPS. In Section 3, the details of the offline derivation and the parallel method based on a block-matrix technique are described. Section 4 and Section 5 deal with the statistics on computational costs and the filter performance on error estimation.

2. Loosely-Coupled SINS/GPS Model Using the Classical Kalman Filter

Depending on the way of integration, SINS/GPS systems are classified into two modes: (i) loose mode; and (ii) tight mode. The loose mode, which uses the GPS output to adjust the SINS errors, is opted for in this research for its distinctive features of high redundancy and easy realization. The emphasis is placed on a common kind of loose model, i.e., position-velocity integration with 18 states and six observations.

2.1. State Equation

By using the indirect method, which treats parametric errors rather than the parameters themselves as the estimation variables, the state equation is established as Equation (1):
$$\dot{x}(t) = A(t)x(t) + G(t)\omega(t) \tag{1}$$
where $x(t)$ represents the state vector, including the attitude errors $\gamma$, the velocity errors $\delta v$, the position errors $(\delta L\ \ \delta\lambda\ \ \delta h)^T$, and the "colored noise" errors: the gyro constant drift $\varepsilon_c$, the Markov process drift $\varepsilon_r$, and the accelerometer drift $a$; $\omega(t)$ denotes the collected system noise vector, including the gyro drift white noise $\omega_g$, the Markov driving white noise $\omega_r$, and the accelerometer drift white noise $\omega_a$; $A(t)$ is the state transition matrix and $G(t)$ is the noise coefficient matrix. Combining references [16,17,18,19,20,21,22] with some modifications, the variables of the state equation can be described by Equations (2)–(5):
$$x(t) = (\gamma_x\ \gamma_y\ \gamma_z\ \ \ \delta v_x\ \delta v_y\ \delta v_z\ \ \ \delta L\ \delta\lambda\ \delta h\ \ \ \varepsilon_{cx}\ \varepsilon_{cy}\ \varepsilon_{cz}\ \ \ \varepsilon_{rx}\ \varepsilon_{ry}\ \varepsilon_{rz}\ \ \ a_x\ a_y\ a_z)^T \tag{2}$$

$$\omega(t) = (\omega_{gx}\ \omega_{gy}\ \omega_{gz}\ \ \ \omega_{rx}\ \omega_{ry}\ \omega_{rz}\ \ \ \omega_{ax}\ \omega_{ay}\ \omega_{az})^T \tag{3}$$

$$A(t) = \begin{bmatrix}
A_{11} & A_{12} & A_{13} & C_B^N & C_B^N & O_{3\times3} \\
A_{21} & A_{22} & A_{23} & O_{3\times3} & O_{3\times3} & C_B^N \\
O_{3\times3} & A_{32} & A_{33} & O_{3\times3} & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & A_{IMU22} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & A_{IMU33}
\end{bmatrix}_{18\times18} \tag{4}$$

$$G(t) = \begin{bmatrix}
C_B^N & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & I_{3\times3} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & I_{3\times3}
\end{bmatrix}_{18\times9} \tag{5}$$
In matrices (4) and (5), all blocks are 3rd-order square matrices. The blocks $O_{3\times3}$, $I_{3\times3}$, and $C_B^N$ stand for the zero matrix, the identity matrix, and the attitude matrix, respectively. The other blocks are described in detail in [16,17,18,19,20,21,22]. In particular, we highlight here two kinds of nonzero blocks, $A_{i3}$ ($i = 1,2,3$) and $A_{IMUii}$ ($i = 2,3$), whose special single-nonzero-column and diagonal structures may contribute to deeper optimization:
$$A_{i3} = \begin{bmatrix} * & 0 & 0 \\ * & 0 & 0 \\ * & 0 & 0 \end{bmatrix},\ i = 1,2,3;\qquad A_{IMUii} = \begin{bmatrix} * & 0 & 0 \\ 0 & * & 0 \\ 0 & 0 & * \end{bmatrix},\ i = 2,3 \tag{6}$$
In addition, the variance $Q(t)$ of the noise vector $\omega(t)$ is described as:
$$Q(t) = \begin{bmatrix} Q_{11} & O_{3\times3} & O_{3\times3} \\ O_{3\times3} & Q_{22} & O_{3\times3} \\ O_{3\times3} & O_{3\times3} & Q_{33} \end{bmatrix}_{9\times9} \tag{7}$$
where block $Q_{ii}$ ($i = 1,2,3$) is also diagonal.
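For concreteness, the 18-state layout above can be mirrored as an index map in code. The following is a minimal C sketch; the names are illustrative, not taken from the paper:

```c
/* Illustrative index map of the 18-dimensional error-state vector x(t).
   The grouping follows Equation (2); all names are hypothetical. */
enum StateIndex {
    ATT_X = 0, ATT_Y,  ATT_Z,    /* attitude errors (gamma)              */
    VEL_X,     VEL_Y,  VEL_Z,    /* velocity errors (delta v)            */
    POS_L,     POS_LAM, POS_H,   /* position errors (dL, dlambda, dh)    */
    GYRO_CX,   GYRO_CY, GYRO_CZ, /* gyro constant drift (epsilon_c)      */
    GYRO_RX,   GYRO_RY, GYRO_RZ, /* gyro Markov drift (epsilon_r)        */
    ACC_X,     ACC_Y,  ACC_Z,    /* accelerometer drift (a)              */
    STATE_DIM                    /* = 18                                 */
};
```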

2.2. Measurement Equation

We adopt the position-velocity integration model and define the filter observation vector $z_{obs}(t)$ as follows [16,22]:
$$z_{obs}(t) = \begin{bmatrix}
(L_{INS} - L_{GPS})R_n \\
(\lambda_{INS} - \lambda_{GPS})R_e\cos L \\
h_{INS} - h_{GPS} \\
v_{xINS} - v_{xGPS} \\
v_{yINS} - v_{yGPS} \\
v_{zINS} - v_{zGPS}
\end{bmatrix} = \begin{bmatrix}
R_n\delta L + N_x \\
R_e\cos L\,\delta\lambda + N_y \\
\delta h + N_z \\
\delta v_x + M_x \\
\delta v_y + M_y \\
\delta v_z + M_z
\end{bmatrix} \tag{8}$$
Then, we define the measurement equation as follows:
$$z(t) = H(t)x(t) + n(t) \tag{9}$$
where $n(t)$ represents the measurement noise vector, $R(t)$ denotes the corresponding variance matrix, and $H(t)$ is the coefficient matrix. According to [16,17,18,19,20,21,22], $n(t)$, $R(t)$ and $H(t)$ are described as follows:
$$n(t) = (N_x, N_y, N_z, M_x, M_y, M_z)^T \tag{10}$$

$$R(t) = \mathrm{diag}\{\sigma_{px}^2, \sigma_{py}^2, \sigma_{pz}^2, \sigma_{vx}^2, \sigma_{vy}^2, \sigma_{vz}^2\} = \begin{bmatrix} R_{11} & O_{3\times3} \\ O_{3\times3} & R_{22} \end{bmatrix} \tag{11}$$

$$H(t) = \begin{bmatrix}
O_{3\times3} & O_{3\times3} & H_{13} & O_{3\times3} & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & I_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3}
\end{bmatrix}_{6\times18} \tag{12}$$
where:
$$H_{13} = \mathrm{diag}\{R_n,\ R_e\cos L,\ 1\} \tag{13}$$

2.3. Closed-Loop Kalman Filtering

In order to fulfill the requirements of closed-loop Kalman filtering, the state equation (Equation (1)) and the measurement equation (Equation (9)) are converted to the discrete forms:
$$x_n = \phi_nx_{n-1} + u_{cn} + \Gamma_n\omega_n \tag{14}$$

$$z_n = H_nx_n + n_n \tag{15}$$
where the transition matrix $\phi_n$ is converted from $A(t)$; the noise driving matrix $\Gamma_n$ is converted from $G(t)$; the coefficient matrix $H_n$ is converted from $H(t)$; and the control vector $u_{cn} = (\gamma_c^T\ \ \delta v_c^T\ \ \delta L_c\ \delta\lambda_c\ \delta h_c\ \ \varepsilon_{cc}^T\ \ \varepsilon_{rc}^T\ \ a_c^T)^T$ is added for real-time calibration of the SINS parameters.
After discretization, the update of $x_n$ is done recursively as follows:
$$\begin{aligned}
\tilde{x}_n(-) &= \phi_n\tilde{x}_{n-1}(+c) \\
\tilde{z}_n &= H_n\tilde{x}_n(-) \\
\tilde{x}_n(+e) &= \tilde{x}_n(-) + K_n\left(z_{obsn} - \tilde{z}_n\right) \\
\tilde{x}_n(+c) &= \tilde{x}_n(+e) + u_{cn}
\end{aligned} \tag{16}$$
where "~" denotes a parameter to be estimated; $(-)$ represents the value before estimation; $(+e)$ the value after estimation; $(+c)$ the value after calibration with the vector $u_{cn}$; $z_{obsn}$ the real observation vector; and $K_n$ the Kalman filter gain.
To minimize the filtering error, the state vector $\tilde{x}_n(+c)$ is permanently set to zero:
$$\tilde{x}_n(+c) \equiv 0 \tag{17}$$
Substituting Equation (17) into Equation (16) yields:
$$u_{cn} = -\tilde{x}_n(+e) = -K_nz_{obsn} \tag{18}$$
Apparently, $u_{cn}$ becomes the final output of the Kalman filter with control vector. As $z_{obsn}$ is obtained almost directly by measurement, $K_n$ appears to be the key to $u_{cn}$ and even to the filter. According to Kalman filtering theory, $K_n$ can be computed recursively as follows:
$$P_n(-) = \phi_nP_{n-1}(+e)\phi_n^T + \Gamma_nQ_n\Gamma_n^T \tag{19}$$

$$K_n = P_n(-)H_n^T\left(H_nP_n(-)H_n^T + R_n\right)^{-1} \tag{20}$$

$$P_n(+e) = \left(I - K_nH_n\right)P_n(-) \tag{21}$$
where $P_n$ is the covariance; $Q_n$ and $R_n$ are the discretized forms of $Q(t)$ and $R(t)$, respectively. In fact, Equations (18)–(21) constitute the main process of the closed-loop Kalman filter. This process is generally subdivided into two parts: time propagation (Equation (19)) and measurement updating (Equations (18), (20) and (21)). In summary, the process flow of the Kalman filter in SINS/GPS is depicted in Figure 1. Since most matrices involved in Figure 1 or Equations (19)–(21) are either 18 × 18 or 18 × 6, a large-scale, time-consuming computation arises in each recursive cycle. Therefore, optimization of this computation is essential to the real-time properties of SINS/GPS.
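For reference, the whole cycle of Equations (16)–(21) can be sketched as a dense, unoptimized C routine; this is the $O(n^3)$ baseline that Section 3 accelerates. The sketch is ours and purely illustrative (it is not the authors' DSP code), and it assumes $\Gamma_nQ_n\Gamma_n^T$ is supplied precomputed:

```c
/* Minimal dense closed-loop Kalman filter cycle (Equations (16)-(21)).
   Illustrative sketch only; no sparsity or symmetry is exploited here. */
#include <string.h>

#define N 18  /* state dimension       */
#define M 6   /* measurement dimension */

/* C = A (ra x ca) times B (ca x cb), row-major */
void mul(int ra, int ca, int cb, const double *A, const double *B, double *C)
{
    for (int i = 0; i < ra; i++)
        for (int j = 0; j < cb; j++) {
            double s = 0.0;
            for (int k = 0; k < ca; k++) s += A[i*ca + k] * B[k*cb + j];
            C[i*cb + j] = s;
        }
}

/* B = A transposed (r x c  ->  c x r) */
void tr(int r, int c, const double *A, double *B)
{
    for (int i = 0; i < r; i++)
        for (int j = 0; j < c; j++) B[j*r + i] = A[i*c + j];
}

/* In-place Gauss-Jordan inverse of an m x m matrix (no pivoting: the 6x6
   matrix H P H^T + R is symmetric positive definite in practice). */
void inv(int m, double *A)
{
    double I[M*M] = {0};
    for (int i = 0; i < m; i++) I[i*m + i] = 1.0;
    for (int col = 0; col < m; col++) {
        double p = A[col*m + col];
        for (int j = 0; j < m; j++) { A[col*m+j] /= p; I[col*m+j] /= p; }
        for (int i = 0; i < m; i++) {
            if (i == col) continue;
            double f = A[i*m + col];
            for (int j = 0; j < m; j++) {
                A[i*m+j] -= f*A[col*m+j]; I[i*m+j] -= f*I[col*m+j];
            }
        }
    }
    memcpy(A, I, sizeof(double)*m*m);
}

/* One filter cycle: updates P in place and outputs u_c = -K z_obs
   (Equation (18)); GQGt stands for Gamma_n Q_n Gamma_n^T. */
void kf_cycle(double P[N*N], const double Phi[N*N], const double GQGt[N*N],
              const double H[M*N], const double R[M*M],
              const double z_obs[M], double u_c[N])
{
    double T1[N*N], T2[N*N], PhiT[N*N], HT[N*M], PHt[N*M], S[M*M], K[N*M], HP[M*N];

    /* Equation (19): P(-) = Phi P(+e) Phi^T + Gamma Q Gamma^T */
    mul(N, N, N, Phi, P, T1);
    tr(N, N, Phi, PhiT);
    mul(N, N, N, T1, PhiT, P);
    for (int i = 0; i < N*N; i++) P[i] += GQGt[i];

    /* Equation (20): K = P(-) H^T (H P(-) H^T + R)^{-1} */
    tr(M, N, H, HT);
    mul(N, N, M, P, HT, PHt);
    mul(M, N, M, H, PHt, S);
    for (int i = 0; i < M*M; i++) S[i] += R[i];
    inv(M, S);
    mul(N, M, M, PHt, S, K);

    /* Equation (21): P(+e) = P(-) - K H P(-) */
    mul(M, N, N, H, P, HP);
    mul(N, M, N, K, HP, T2);
    for (int i = 0; i < N*N; i++) P[i] -= T2[i];

    /* Equations (17)-(18): with x(+c) held at zero, u_c = -K z_obs */
    for (int i = 0; i < N; i++) {
        double s = 0.0;
        for (int j = 0; j < M; j++) s += K[i*M + j] * z_obs[j];
        u_c[i] = -s;
    }
}
```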
Figure 1. Iterative closed-loop Kalman filtering process.

3. Computational Optimization

Consistent with Figure 1, optimizations are implemented sequentially in the updating processes of $\phi_n$, $\Gamma_nQ_n\Gamma_n^T$, $P_n(-)$, $K_n$, and $P_n(+e)$, where most of the computational time is consumed.

3.1. Compute $\phi_n$

According to [16,17,18,19,20,21,22], we can calculate $\phi_n$ as follows:
$$\phi_n = e^{\int_{t_{n-1}}^{t_n}A(t)dt} \approx I + T_nA_n + \frac{T_n^2}{2!}A_n^2 + \frac{T_n^3}{3!}A_n^3 \tag{22}$$
Here, the third-order Taylor expansion is selected for two main reasons:
(1) Equation (22) meets the accuracy requirements of most mid-to-high-accuracy systems;
(2) Equation (22) allows a universal optimization conclusion to be deduced.
It is not difficult to prove that $\phi_n$ has a uniform numerical structure regardless of whether the third-order or a higher-order expansion is chosen. On the other hand, lower-order expansions are obviously special cases of the third-order one. Therefore, the conclusions based on the third-order expansion are suitable for models based on other expansion orders too.
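As an illustration of Equation (22), a direct (still unoptimized) computation of $\phi_n$ from the dense $A_n$ could look as follows; mul() is the dense multiply from the sketch in Section 2.3, and this baseline is exactly what the block-level derivation in the rest of this subsection avoids:

```c
/* Sketch of Equation (22): third-order Taylor approximation of Phi_n.
   T is the propagation interval T_n; all 18x18 products are dense here. */
void mul(int ra, int ca, int cb, const double *A, const double *B, double *C);

void phi_from_A(const double A[18*18], double T, double Phi[18*18])
{
    double A2[18*18], A3[18*18];
    mul(18, 18, 18, A, A, A2);   /* A_n^2 */
    mul(18, 18, 18, A2, A, A3);  /* A_n^3 */
    for (int i = 0; i < 18; i++)
        for (int j = 0; j < 18; j++) {
            int k = i*18 + j;
            /* Phi = I + T A + (T^2/2!) A^2 + (T^3/3!) A^3 */
            Phi[k] = (double)(i == j) + T*A[k]
                   + (T*T/2.0)*A2[k] + (T*T*T/6.0)*A3[k];
        }
}
```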
As the computation of Equation (22) is centered on the 18 × 18 matrix $A_n = A(t_n)$, it is necessary to inspect the numerical characteristics of $A_n$ before computational optimization: matrix $A_n$, in which only 64 of the total 324 elements are nonzero, is clearly a sparse matrix. A substantial reduction of useless multiply-by-zero operations could be achieved by directly expanding $A_n$ and operating on every element in the deduction (namely, "direct derivation"), but because of the high order, direct derivation is quite complicated to handle and is not advisable in practice. Fortunately, the 3rd-order-block form of $A_n$, having 23 zero blocks among the total of 36 blocks (see Equation (4)), is still structurally a sparse matrix. Therefore, we can carry out our offline optimization using an indirect method, in which the blocks instead of the elements are treated as the atomic units of the deduction. In this way, we largely decrease the derivation complexity while retaining most of the efficiency of the direct method. For these reasons, the indirect method is applied to all the derivation processes in this paper. In addition, the nonzero blocks $A_{i3}$ and $A_{IMUii}$ (see Equation (6)), which need few operations in multiplication by any 3rd-order block (no more than $3^2$ floating-point multiplications and $2\times3$ additions), are also introduced into the derivation for higher efficiency.
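To see why these special blocks are cheap, consider the following sketch (hypothetical helper names): multiplying a 3 × 3 block $B$ by a single-nonzero-column block costs $3^2$ multiplications and $2\times3$ additions, and multiplying by a diagonal block costs $3^2$ multiplications with no additions:

```c
/* B * A_i3, where A_i3's only nonzero column is a (see Equation (6)):
   9 multiplications and 6 additions; the other two columns stay zero. */
void mul_by_single_col(const double B[3][3], const double a[3], double C[3][3])
{
    for (int r = 0; r < 3; r++) {
        C[r][0] = B[r][0]*a[0] + B[r][1]*a[1] + B[r][2]*a[2];
        C[r][1] = 0.0;
        C[r][2] = 0.0;
    }
}

/* B * diag(d) (e.g., A_IMUii): 9 multiplications, no additions. */
void mul_by_diag(const double B[3][3], const double d[3], double C[3][3])
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            C[r][c] = B[r][c] * d[c];   /* right-multiplication scales columns */
}
```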
It is a fact that manual derivation with the indirect method is still a cumbersome job in view of the multiple matrix multiplications (e.g., $A_n^2$, $A_n^3$), so we perform the derivation with a computer program instead. With the powerful symbolic-computation functions of well-known math software, e.g., MATLAB and Maple, the complicated derivation can easily be accomplished. After the necessary programming, the final derivation results are shown in Equations (23) and (24):
$$A_n^2 = \begin{bmatrix}
S_{11} & S_{12} & S_{13} & S_{14} & S_{15} & S_{16} \\
S_{21} & S_{22} & S_{23} & S_{24} & S_{25} & S_{26} \\
S_{31} & S_{32} & S_{33} & O_{3\times3} & O_{3\times3} & S_{36} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & S_{55} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & S_{66}
\end{bmatrix}_{18\times18} \tag{23}$$

$$A_n^3 = \begin{bmatrix}
T_{11} & T_{12} & T_{13} & T_{14} & T_{15} & T_{16} \\
T_{21} & T_{22} & T_{23} & T_{24} & T_{25} & T_{26} \\
T_{31} & T_{32} & T_{33} & T_{34} & T_{35} & T_{36} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & T_{55} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & T_{66}
\end{bmatrix}_{18\times18} \tag{24}$$
Also, the computation of the blocks $S_{ij}$ and $T_{ij}$ is described in a computerized sequence as follows:
$$\begin{aligned}
S_{ij} &= \sum_{k=1}^{3}A_{ik}A_{kj}, && i,j = 1,2,3 \\
S_{ij} &= A_{i(j/2-1)}C_B^N, && i = 1,2,3;\ j = 4,6 \\
S_{i5} &= S_{i4}, && i = 1,2 \\
S_{15} &= S_{15} + \left(C_B^NA_{IMU22}\right)_{Sim} \\
S_{26} &= S_{26} + \left(C_B^NA_{IMU33}\right)_{Sim} \\
S_{55} &= \left(A_{IMU22}^2\right)_{Sim} \\
S_{66} &= \left(A_{IMU33}^2\right)_{Sim} \\
T_{ij} &= \sum_{k=1}^{3}A_{ik}S_{kj}, && i = 1,2,3;\ j = 1,2,\ldots,6 \\
T_{15} &= T_{15} + \left(C_B^NA_{IMU22}^2\right)_{Sim} \\
T_{26} &= T_{26} + \left(C_B^NA_{IMU33}^2\right)_{Sim} \\
T_{55} &= \left(A_{IMU22}S_{55}\right)_{Sim} \\
T_{66} &= \left(A_{IMU33}S_{66}\right)_{Sim}
\end{aligned} \tag{25}$$
where all products involving the blocks $A_{31}$, $S_{34}$ or $S_{35}$ are zero blocks $O_{3\times3}$ and need no real-time computation; products involving the special blocks, e.g., $A_{i3}$, $S_{i3}$ (single-nonzero-column blocks) and $A_{IMUii}$, $S_{55}$, $S_{66}$ (diagonal blocks), are simplified, as partly indicated by the symbol $(\cdot)_{Sim}$.
Substituting $A_n$ (Equation (4)), $A_n^2$ (Equation (23)), and $A_n^3$ (Equation (24)) into Equation (22) yields:
$$\phi_n = \begin{bmatrix}
\phi_{11} & \phi_{12} & \phi_{13} & \phi_{14} & \phi_{15} & \phi_{16} \\
\phi_{21} & \phi_{22} & \phi_{23} & \phi_{24} & \phi_{25} & \phi_{26} \\
\phi_{31} & \phi_{32} & \phi_{33} & \phi_{34} & \phi_{35} & \phi_{36} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & I_{3\times3} & O_{3\times3} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & \phi_{55} & O_{3\times3} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & \phi_{66}
\end{bmatrix}_{18\times18} \tag{26}$$
where:
$$\begin{aligned}
\phi_{ij} &= T_nA_{ij} + \frac{T_n^2}{2!}S_{ij} + \frac{T_n^3}{3!}T_{ij}, && i \in [1,3],\ j \in [1,6],\ i \neq j \\
\phi_{ii} &= I_{3\times3} + T_nA_{ii} + \frac{T_n^2}{2!}S_{ii} + \frac{T_n^3}{3!}T_{ii}, && i = 1,2,3,5,6
\end{aligned} \tag{27}$$
Here, $\phi_{13}$ and $\phi_{23}$ are single-nonzero-column blocks; $\phi_{33}$ is the sum of $I_{3\times3}$ and a single-nonzero-column block; $\phi_{55}$ and $\phi_{66}$ are diagonal blocks. These blocks will be specially exploited in other processes.
In conclusion, with the offline derivation based on the 3rd-order-block method, nearly half of the blocks $\phi_{ij}$ are proved to be constant blocks, $O_{3\times3}$ or $I_{3\times3}$ (see Equation (26)); thus, the computations on them are simply eliminated. Additionally, by using the properties of the special blocks $A_{i3}$, $S_{i3}$, $T_{i3}$, $A_{jj}$, $S_{jj}$, $T_{jj}$ and introducing the intermediate variables $A_n^2$ and $A_n^3$, the real-time computations on the other half can be further reduced.
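A sketch of how Equation (27) might be realized blockwise is given below (the blk type and names are illustrative): each non-constant block of $\phi_n$ is assembled from the corresponding $A$, $S = A^2$, and $T = A^3$ blocks, so no 18 × 18 product appears at run time:

```c
typedef struct { double m[3][3]; } blk;   /* illustrative 3x3 block type */

/* One block of Equation (27); on_diag = 1 adds the I term for phi_ii. */
blk phi_block(const blk *A, const blk *S, const blk *T3, double T, int on_diag)
{
    blk out;
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            out.m[r][c] = (double)(on_diag && r == c)
                        + T*A->m[r][c]
                        + (T*T/2.0)*S->m[r][c]
                        + (T*T*T/6.0)*T3->m[r][c];
    return out;
}
```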

3.2. Compute $\Gamma_nQ_n\Gamma_n^T$

According to [16,17,18,19,20,21,22], the second-order approximation of $\Gamma_nQ_n\Gamma_n^T$ is computed as follows:
$$\Gamma_nQ_n\Gamma_n^T = \frac{T_n}{2}\left(Q_{1n} + \phi_nQ_{1n}\phi_n^T\right) \tag{28}$$
where:
$$Q_{1n} = G(t)Q(t)G(t)^T = \begin{bmatrix}
C_B^NQ_{11}(C_B^N)^T & O_{3\times9} & O_{3\times3} & O_{3\times3} \\
O_{9\times3} & O_{9\times9} & O_{9\times3} & O_{9\times3} \\
O_{3\times3} & O_{3\times9} & Q_{22} & O_{3\times3} \\
O_{3\times3} & O_{3\times9} & O_{3\times3} & Q_{33}
\end{bmatrix}_{18\times18} \tag{29}$$
Obviously, $Q_{1n}$ and $\Gamma_nQ_n\Gamma_n^T$ are both symmetric. Thus, for the matrix $\Gamma_nQ_n\Gamma_n^T$, only the upper (or lower) triangular elements need to be computed in real time. Substituting $Q_{1n}$ (Equation (29)) into Equation (28) yields:
$$\Gamma_nQ_n\Gamma_n^T = \begin{bmatrix}
Q_{n11} & Q_{n12} & Q_{n13} & O_{3\times3} & Q_{n15} & Q_{n16} \\
Q_{n12}^T & Q_{n22} & Q_{n23} & O_{3\times3} & Q_{n25} & Q_{n26} \\
Q_{n13}^T & Q_{n23}^T & Q_{n33} & O_{3\times3} & Q_{n35} & Q_{n36} \\
O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} \\
Q_{n15}^T & Q_{n25}^T & Q_{n35}^T & O_{3\times3} & Q_{n55} & O_{3\times3} \\
Q_{n16}^T & Q_{n26}^T & Q_{n36}^T & O_{3\times3} & O_{3\times3} & Q_{n66}
\end{bmatrix}_{18\times18} \tag{30}$$
where:
$$\begin{aligned}
Q_{nij} &= \frac{T_n}{2}\left[\phi_{i1}Q_{11}\phi_{j1}^T + \phi_{i5}\left(Q_{22}\phi_{j5}^T\right)_{Sim} + \phi_{i6}\left(Q_{33}\phi_{j6}^T\right)_{Sim}\right], && i,j = 1,2,3;\ i \le j \\
Q_{n11} &= Q_{n11} + \frac{T_n}{2}Q_{11} \\
Q_{nij} &= \frac{T_n}{2}\left(\phi_{ij}Q_{j-3,j-3}\phi_{jj}\right)_{Sim}, && i = 1,2,3;\ j = 5,6 \\
Q_{n55} &= \frac{T_n}{2}\left(Q_{22} + \phi_{55}Q_{22}\phi_{55}\right)_{Sim} \\
Q_{n66} &= \frac{T_n}{2}\left(Q_{33} + \phi_{66}Q_{33}\phi_{66}\right)_{Sim}
\end{aligned} \tag{31}$$
In Equation (31), $Q_{n55}$ and $Q_{n66}$ are apparently diagonal blocks that may be used to simplify other processes. As stated above, we optimize the computation of $\Gamma_nQ_n\Gamma_n^T$ mainly by applying some special numerical properties in the offline derivation (e.g., the sparsity of $G(t)$, the simplicity of the diagonal blocks $Q_{22}$, $Q_{33}$, $\phi_{55}$, $\phi_{66}$, and the symmetry of $\Gamma_nQ_n\Gamma_n^T$), and much of the computation time is saved.
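The symmetry argument can be realized with a simple mirroring step, sketched below for an 18 × 18 row-major array (illustrative code): only the upper triangle is produced by Equation (31), and the lower triangle is filled by copying rather than by floating-point products:

```c
/* Fill the lower triangle of a symmetric 18x18 matrix from its upper
   triangle: copies instead of multiplications and additions. */
void mirror_lower(double M[18*18])
{
    for (int i = 1; i < 18; i++)
        for (int j = 0; j < i; j++)
            M[i*18 + j] = M[j*18 + i];
}
```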

3.3. Compute $P_n(-)$

For simplicity, $P_n(-)$ and $P_n(+e)$ will be written in the unified form $P_n$ (this is allowed because $P_n(-)$ and $P_n(+e)$ are actually the same covariance matrix in different states and share the same memory). Similar to $\phi_n$ or $\Gamma_nQ_n\Gamma_n^T$, $P_n$ is written in 3rd-order-block-matrix form:
$$P_n = \begin{bmatrix}
P_{n,11} & P_{n,12} & \cdots & P_{n,16} \\
P_{n,21} & P_{n,22} & \cdots & P_{n,26} \\
\vdots & \vdots & \ddots & \vdots \\
P_{n,61} & P_{n,62} & \cdots & P_{n,66}
\end{bmatrix}_{18\times18} \tag{32}$$

According to its definition, $P_n$ is a symmetric matrix:

$$P_{n,ij} = P_{n,ji}^T,\quad i,j = 1,2,\ldots,6 \tag{33}$$
Thus, only the lower (or upper) triangular part needs substantive computation. Setting aside $\Gamma_nQ_n\Gamma_n^T$ at first, Equation (19) is written as:
$$P_n(-) = \phi_nP_{n-1}(+e)\phi_n^T \tag{34}$$
Substituting Equations (26) and (32) yields:
$$\begin{aligned}
P_{n,11} &= \sum_{i=1}^{6}\Big\{\phi_{1i}\Big[\sum_{j=1}^{6}\big(P_{n-1,ij}\phi_{1j}^T\big)\Big]_{Temp}\Big\} \\
P_{n,21} &= \sum_{i=1}^{6}\Big\{\phi_{2i}\Big[\sum_{j=1}^{6}\big(P_{n-1,ij}\phi_{1j}^T\big)\Big]_{Temp}\Big\} \\
&\ \ \vdots \\
P_{n,61} &= \Big[\phi_{66}\sum_{j=1}^{6}\big(P_{n-1,6j}\phi_{1j}^T\big)_{Rep}\Big]_{Sim} \\
P_{n,22} &= \sum_{i=1}^{6}\Big\{\phi_{2i}\Big[\sum_{j=1}^{6}\big(P_{n-1,ij}\phi_{2j}^T\big)\Big]_{Temp}\Big\} \\
P_{n,32} &= \sum_{i=1}^{6}\Big\{\phi_{3i}\Big[\sum_{j=1}^{6}\big(P_{n-1,ij}\phi_{2j}^T\big)\Big]_{Temp}\Big\} \\
&\ \ \vdots \\
P_{n,62} &= \Big[\phi_{66}\sum_{j=1}^{6}\big(P_{n-1,6j}\phi_{2j}^T\big)_{Rep}\Big]_{Sim} \\
&\ \ \vdots \\
P_{n,66} &= \big(\phi_{66}P_{n-1,66}\phi_{66}\big)_{Sim}
\end{aligned} \tag{35}$$
where the symbol $(\cdot)_{Temp}$ represents intermediate items that are temporarily stored in memory to avoid repeated computation; the symbol $(\cdot)_{Rep}$ represents repeated items that have been computed before and are available by directly accessing $(\cdot)_{Temp}$; and the symbol $(\cdot)_{Sim}$ indicates that the special properties of $\phi_{i3}^T$ are exploited to perform further optimization.
After Equation (35) is done, $\Gamma_nQ_n\Gamma_n^T$ should be added. Substituting Equations (30) and (32) into Equation (19) yields:
$$P_{n,ij} = P_{n,ij} + Q_{nij},\quad i,j = 1,2,3,5,6;\ i \le j \tag{36}$$
In summary, the computation of $P_n(-)$ is divided into two steps: Equation (34) and Equation (36). As the latter involves few computations, optimization mainly focuses on the former. By employing the sparsity of $\phi_n$, the symmetry of $P_n(-)$, and the repeatability of the intermediate items, the real-time performance of calculating Equation (34) is greatly improved.
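The Temp/Rep mechanism of Equation (35) can be sketched as follows (the blk type and helpers are illustrative, and the zero-block skipping of Equation (26) is omitted for brevity): the inner block sums are computed once per block column and then reused by every output block of that column:

```c
#include <string.h>

typedef struct { double m[3][3]; } blk;   /* illustrative 3x3 block type */

void blk_zero(blk *X) { memset(X->m, 0, sizeof X->m); }

/* X += A * B */
void blk_mac(blk *X, const blk *A, const blk *B)
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            for (int k = 0; k < 3; k++)
                X->m[r][c] += A->m[r][k] * B->m[k][c];
}

/* X += A * B^T */
void blk_mac_T(blk *X, const blk *A, const blk *B)
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            for (int k = 0; k < 3; k++)
                X->m[r][c] += A->m[r][k] * B->m[c][k];
}

/* One block column c of phi * P * phi^T (Equation (35)), lower triangle only:
   temp[i] = sum_j P_{n-1,ij} phi_{cj}^T is built once (the "Temp" items) and
   reused by every output row r >= c (the "Rep" items). */
void p_minus_column(const blk P_prev[6][6], const blk Phi[6][6],
                    int c, blk temp[6], blk P_col[6])
{
    for (int i = 0; i < 6; i++) {
        blk_zero(&temp[i]);
        for (int j = 0; j < 6; j++)
            blk_mac_T(&temp[i], &P_prev[i][j], &Phi[c][j]);
    }
    for (int r = c; r < 6; r++) {
        blk_zero(&P_col[r]);
        for (int i = 0; i < 6; i++)
            blk_mac(&P_col[r], &Phi[r][i], &temp[i]);
    }
}
```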

3.4. Compute $K_n$

Substituting Equations (11), (12) and (32) into the inversion part of Equation (20) yields:
$$H_nP_n(-)H_n^T + R_n = \begin{bmatrix}
H_{13}P_{n,33}H_{13} + R_{11} & H_{13}P_{n,23}^T \\
P_{n,23}H_{13} & P_{n,22} + R_{22}
\end{bmatrix}_{6\times6} \tag{37}$$
Since $H_nP_n(-)H_n^T + R_n$ has a low order, few computations are needed for its inversion (the complexity is $O(6^3)$, which is negligible in comparison with the other procedures), so the details of the inversion are not discussed here. We use the symbol $D$ to represent the result of the inversion directly:
$$D = \left[H_nP_n(-)H_n^T + R_n\right]^{-1} \tag{38}$$
Matrix $D$ is easily proved to be symmetric and can be written in 3rd-order-block-matrix form as:
$$D = \begin{bmatrix} D_{11} & D_{12} \\ D_{12}^T & D_{22} \end{bmatrix}_{6\times6} \tag{39}$$

Then $K_n$ can be computed as:

$$K_n = \begin{bmatrix}
P_{n,12}D_{12}^T + P_{n,13}H_{13}D_{11} & P_{n,12}D_{22} + P_{n,13}H_{13}D_{12} \\
P_{n,22}D_{12}^T + P_{n,23}H_{13}D_{11} & P_{n,22}D_{22} + P_{n,23}H_{13}D_{12} \\
\vdots & \vdots \\
P_{n,62}D_{12}^T + P_{n,63}H_{13}D_{11} & P_{n,62}D_{22} + P_{n,63}H_{13}D_{12}
\end{bmatrix}_{18\times6} \tag{40}$$
Define $K_n$ as:

$$K_n = \begin{bmatrix}
K_{11} & K_{12} \\
K_{21} & K_{22} \\
\vdots & \vdots \\
K_{61} & K_{62}
\end{bmatrix}_{18\times6} \tag{41}$$
then:
$$K_{ij} = P_{n,i2}D_{j2}^T + P_{n,i3}H_{13}D_{1j},\quad i = 1,2,\ldots,6;\ j = 1,2 \tag{42}$$
The derivation result shows that, by using the block-matrix optimization technique, only a simple process (Equations (37), (38) and (42)) is needed for the $K_n$ update. Compared with the complicated Equation (20), which needs multiple 18th-order matrix products, our method is much simpler and more efficient in computation.
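A sketch of Equation (42) in block form follows, reusing the illustrative blk helpers from the earlier sketch; scale_cols stands for right-multiplication by the diagonal $H_{13}$:

```c
/* B = B * diag(d): right-multiplication by a diagonal block scales columns. */
void scale_cols(blk *B, const double d[3])
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++) B->m[r][c] *= d[c];
}

/* Equation (42): each 3x3 block of K_n needs only two block products.
   Indices are 0-based: P[i][1] is P_{n,i2}, P[i][2] is P_{n,i3},
   D[j][1] is D_{j2}, D[0][j] is D_{1j}. */
void gain_blocks(const blk P[6][6], const blk D[2][2],
                 const double H13diag[3], blk K[6][2])
{
    for (int i = 0; i < 6; i++)
        for (int j = 0; j < 2; j++) {
            blk t = P[i][2];                          /* P_{n,i3}            */
            scale_cols(&t, H13diag);                  /* P_{n,i3} H13        */
            blk_zero(&K[i][j]);
            blk_mac_T(&K[i][j], &P[i][1], &D[j][1]);  /* + P_{n,i2} D_{j2}^T */
            blk_mac(&K[i][j], &t, &D[0][j]);          /* + P_{n,i3} H13 D_{1j} */
        }
}
```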

3.5. Compute $P_n(+e)$

Transform Equation (21) into:
$$P_n(+e) = P_n(-) - K_nH_nP_n(-) \tag{43}$$
Obviously, the computational pressure mainly comes from $K_nH_nP_n(-)$. Fortunately, $K_nH_nP_n(-)$ is a symmetric matrix (an evident inference from Equation (43), given that $P_n(+e)$ and $P_n(-)$ are both symmetric), so only its upper (or lower) triangular part needs real-time updating. Define $K_nH_nP_n(-)$ as the covariance increment $\Delta P_n$:
$$\Delta P_n = K_nH_nP_n(-) = \begin{bmatrix}
\Delta P_{11} & \Delta P_{12} & \cdots & \Delta P_{16} \\
\Delta P_{21} & \Delta P_{22} & \cdots & \Delta P_{26} \\
\vdots & \vdots & \ddots & \vdots \\
\Delta P_{61} & \Delta P_{62} & \cdots & \Delta P_{66}
\end{bmatrix}_{18\times18} \tag{44}$$
Substituting Equations (12), (32) and (41) yields:
$$\Delta P_{ij} = K_{i1}H_{13}P_{n,3j} + K_{i2}P_{n,2j},\quad i,j = 1,2,\ldots,6;\ i \le j \tag{45}$$
then:
$$P_{n,ij} = P_{n,ij} - \Delta P_{ij},\quad i,j = 1,2,\ldots,6;\ i \le j \tag{46}$$
In conclusion, taking advantage of the symmetry of $P_n(+e)$ and the sparsity of $H_n$, we deduce a simple form for computing $P_n(+e)$: Equations (45) and (46). Equation (46) is apparently costless, as only matrix subtraction is involved. Equation (45) involves matrix products but requires no computations on most blocks of $P_n$ (except the blocks in the 2nd and 3rd rows) and proves to be highly efficient.
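Equations (45) and (46) can be sketched as below (illustrative helpers, full symmetric storage of $P$ assumed; blk_sub subtracts one 3 × 3 block from another). The increment is formed for the whole upper triangle first and subtracted afterwards, so the blocks of $P$ read by Equation (45) are not overwritten prematurely:

```c
/* B = diag(d) * B: left-multiplication by a diagonal block scales rows. */
void scale_rows(blk *B, const double d[3])
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++) B->m[r][c] *= d[r];
}

void blk_sub(blk *X, const blk *Y)   /* X -= Y */
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++) X->m[r][c] -= Y->m[r][c];
}

/* Delta P_ij = K_i1 H13 P_3j + K_i2 P_2j (Equation (45)), upper triangle
   only; the subtraction of Equation (46) runs as a second pass. */
void p_plus_update(blk P[6][6], const blk K[6][2], const double H13diag[3])
{
    blk dP[6][6];
    for (int i = 0; i < 6; i++)
        for (int j = i; j < 6; j++) {
            blk t = P[2][j];                 /* P_{n,3j}: block row 3 is index 2 */
            scale_rows(&t, H13diag);         /* H13 * P_{n,3j} */
            blk_zero(&dP[i][j]);
            blk_mac(&dP[i][j], &K[i][0], &t);        /* + K_i1 H13 P_3j */
            blk_mac(&dP[i][j], &K[i][1], &P[1][j]);  /* + K_i2 P_2j     */
        }
    for (int i = 0; i < 6; i++)
        for (int j = i; j < 6; j++)
            blk_sub(&P[i][j], &dP[i][j]);    /* Equation (46) */
}
```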

3.6. Parallel Computation

The optimization results (see Table 2, Table 3, Table 4, Table 5 and Table 6) reveal that the computation of $P_n(-)$ costs nearly half of the processing time and becomes the bottleneck for further efficiency. The low efficiency mainly arises from the fact that $P_n(-)$ becomes increasingly dense and provides few zero blocks after the initial filtering cycles. In low-accuracy applications, some researchers force certain matrix elements to zero with engineering approximations to lower the computational cost. However, for accuracy-demanding applications, methods with both high accuracy and high efficiency are still sought. Parallel computation is one feasible approach.
Before discussing the parallel method, we inspect the whole computation process in order to clarify the dependency of every parameter on $P_n(-)$. The numerical relationship between $P_n(-)$ and the other parameters can be consulted in Table 1.
Except for $P_n(+e)$, all parameters are either irrelevant to $P_{n,ij}$ or relevant only to the 2nd and 3rd block rows (columns) of $P_n(-)$. Taking this fact into account, we can call the 2nd and 3rd block rows (columns) the "useful" data and treat the rest as "useless" (it is emphasized here that "useless" refers only to the requirements of most computation processes, not that the data are really useless and need no updates).
Furthermore, because $P_n(-)$ is a symmetric matrix, the "useful" blocks are in fact only the 2nd and 3rd block columns. If "$\times$" represents the "useless" blocks, $P_n(-)$ can be written as:
$$P_n = \begin{bmatrix}
\times & P_{n,12} & P_{n,31} & \times & \times & \times \\
\times & P_{n,22} & P_{n,32} & \times & \times & \times \\
\times & P_{n,32} & P_{n,33} & \times & \times & \times \\
\times & P_{n,42} & P_{n,34} & \times & \times & \times \\
\times & P_{n,52} & P_{n,35} & \times & \times & \times \\
\times & P_{n,62} & P_{n,36} & \times & \times & \times
\end{bmatrix}_{18\times18} \tag{47}$$
Table 1. Dependency of every real-time parameter on $P_n(-)$ blocks.

| $\phi_n$, $Q_n$, $P_n(-)$ | $K_n$ (Equations (40) and (41)) | $P_n(+e)$ (Equation (43)) | $\Delta P_{ij}$ (Equation (45)) |
|---|---|---|---|
| Irrelevant to $P_{n,ij}$ | Dependent on $P_{n,i2}$, $P_{n,i3}$ only | Dependent on $P_{n,ij}$ | Dependent on $P_{n,2i}$, $P_{n,3i}$ only |
In measurement updating (Equations (18), (20) and (21)), all parameters except $P_n(+e)$ are decoupled from the "useless" data of Equation (19), which belongs to time propagation. Moreover, even for the coupled $P_n(+e)$, its time-consuming part $\Delta P_{ij}$ is independent of the "useless" blocks. In other words, with appropriate modification of measurement updating and time propagation, we can produce a new decoupling mechanism between the two procedures. The modification contains three steps:
  • Step 1. Subdivide time propagation: classify the blocks of $P_n(-)$ into two kinds, "useful" data and "useless" data;
  • Step 2. Subdivide measurement updating: split the update of $P_n(+e)$ into two processes, computing $\Delta P_{ij}$ (Equation (45)) and adding it to $P_n(-)$ (Equation (46));
  • Step 3. Restructure measurement updating: rebuild the measurement-updating process around $\Delta P_{ij}$ (instead of $P_n(+e)$) together with Equations (18) and (21).
With the above steps, the restructured process and the "useless"-data updating process are completely numerically decoupled and can be computed in parallel. In this way, the efficiency bottleneck in the $P_n(-)$ update is broken.
Furthermore, in order to achieve parallel computation on the "useful" data as well, we similarly subdivide the process of updating the "useful" data into two steps: computing $\phi_nP_{n-1}(+e)\phi_n^T$ (Equation (34)) and adding $\Gamma_nQ_n\Gamma_n^T$ (Equation (36)). Apparently, computing $\phi_nP_{n-1}(+e)\phi_n^T$ and updating $\Gamma_nQ_n\Gamma_n^T$ (Equation (31), which belongs to time propagation) can be processed in parallel. In summary, the process of Kalman filtering with parallel computation is depicted in Figure 2.
It is pointed out here that the parallel method in this paper differs fundamentally from some other parallel methods [23,24]. The others owe their decoupling to a so-called "mandatory delay", which may cause a certain loss of accuracy. On the contrary, the decoupling in this paper needs neither "mandatory delay" nor engineering approximation, and is achieved mainly by introducing the "useful" data and subdividing the computation process. Essentially, it is the numerical characteristics of SINS/GPS that make this special decoupling possible. Therefore, the parallel processing in this paper is an accuracy-lossless method.
In terms of efficiency, the parallel optimization removes from the filter's critical path the execution time needed to update both $\Gamma_nQ_n\Gamma_n^T$ and the "useless" blocks (918 multiplications and 603 additions for $\Gamma_nQ_n\Gamma_n^T$, and 1089 multiplications and 1107 additions for the "useless" blocks). In comparison with the non-parallel method, the efficiency of the parallel method increases by about 25%.
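The decoupled structure of Figure 2 maps naturally onto two concurrent tasks. A minimal POSIX-threads sketch is given below; the function names are illustrative placeholders, not the authors' implementation, and the paper's DSP realization may organize the concurrency differently:

```c
#include <pthread.h>

/* Placeholder stages; each would operate on shared filter state. */
static void update_useful_columns(void) { /* block columns 2-3 of P(-): Equations (35)-(36) */ }
static void apply_delta_p(void)         { /* Equation (46): P(+e) = P(-) - Delta P          */ }

static void *useless_blocks_task(void *arg)
{
    /* propagate the P(-) blocks outside the "useful" columns (time propagation) */
    (void)arg;
    return NULL;
}

static void *measurement_task(void *arg)
{
    /* K_n and Delta P from the "useful" columns only (Equations (37)-(42), (45)) */
    (void)arg;
    return NULL;
}

void filter_cycle_parallel(void)
{
    pthread_t t_useless, t_meas;

    update_useful_columns();   /* shared input of both branches */

    pthread_create(&t_useless, NULL, useless_blocks_task, NULL);
    pthread_create(&t_meas,    NULL, measurement_task,    NULL);
    pthread_join(t_useless, NULL);
    pthread_join(t_meas,    NULL);

    apply_delta_p();           /* rejoin: subtract Delta P, completing P(+e) */
}
```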
Figure 2. Filtering process with parallel computation.

4. Optimization Efficiency

According to the derivation results, we can evaluate the optimization efficiency by counting and comparing the operations needed by the proposed method and by the existing algorithms. In detail, the discussion of the counts proceeds for each parameter-updating procedure respectively. As matrix multiplication is mainly involved, the calculation focuses on the floating-point multiplications and additions. Table 2, Table 3, Table 4, Table 5 and Table 6 show the statistical details for each parameter. Table 7 and Table 8 then give the global efficiency and the comparison with other general algorithms, respectively. As shown in the tables, the offline-derivation and parallel method saves about 90% of the computational time and 66% of the memory space while keeping the required accuracy level. Compared with the general decomposition optimization algorithms, the proposed method needs no complicated matrix theory while producing higher efficiency. Under the condition that high hardware precision has guaranteed the numerical robustness, our algorithm is quite simple and practical.
Table 2. Optimization efficiency of the $\phi_n$ updating process.

| | Offline-Derivation Method | Traditional Method |
|---|---|---|
| Multiplication | $42\times3^3 + 36\times3^2 + 13\times3\times3^2 = 1809$ (I), 10% | $3\times18^3 + 3\times18^2 = 18468$ (II) |
| Addition | $42\times2\times3^2 + 9\times2\times3 + 13\times2\times3^2 = 1044$ (I), 6% | $3\times17\times18^2 + 3\times18^2 = 17496$ (II) |
| Memory | $32\times3^2\times8$ Byte = 2.3 KB (III) | $3\times18^2\times8$ Byte = 7.6 KB (III) |

In (I), $42\times3^3$ ($42\times2\times3^2$), $36\times3^2$ ($9\times2\times3$) and $13\times3\times3^2$ ($13\times2\times3^2$) represent the multiplication (addition) counts of the items in Equation (25) without special blocks, of the items in Equation (25) with special blocks, and of the items in Equation (27), respectively; only the $3^2$ and $3^3$ items are counted. In (II), the first item represents the operation count of the 18th-order matrix multiplications, and the second that of the multiplications by scalars or matrix additions. In (III), data are assumed to be double-precision (8-byte word length). The percentages represent the efficiency in comparison with the traditional method.
Table 3. Optimization efficiency of the $\Gamma_nQ_n\Gamma_n^T$ updating process.

| | Offline-Derivation Method | Traditional Method |
|---|---|---|
| Multiplication | $24\times3^3 + 18\times3^2 + 12\times3^2 = 918$, 6% | $2\times18^3 + 18^2 + 18\times6^2 + 6\times18^2 = 14580$ |
| Addition | $24\times2\times3^2 + 9\times2\times3 + 13\times3^2 = 603$, 4% | $2\times17\times18^2 + 18^2 + 5\times6\times18 + 17\times18\times6 = 13716$ |
| Memory | $13\times3^2\times8$ Byte = 0.9 KB | $3\times18^2\times8$ Byte = 7.6 KB |
Table 4. Optimization efficiency of the $P_n(-)$ updating process.

| | Offline-Derivation Method | Traditional Method |
|---|---|---|
| Multiplication | $120\times3^3 + 37\times3^2 = 3573$, 31% | $2\times18^3 = 11664$ |
| Addition | $120\times2\times3^2 + 18\times2\times3 + 134\times3^2 = 3474$, 31% | $2\times17\times18^2 + 18^2 = 11340$ |
| Memory | $60\times3^2\times8$ Byte = 4.2 KB | $3\times18^2\times8$ Byte = 7.6 KB |
Table 5. Optimization efficiency of the $K_n$ updating process.

| | Offline-Derivation Method | Traditional Method |
|---|---|---|
| Multiplication | $24\times3^3 + 6\times3^2 = 702$, 11% | $18\times18\times6 + 18\times6\times6 + 6\times18\times18 + 6\times18\times6 = 5184$ |
| Addition | $24\times2\times3^2 + 12\times3^2 = 540$, 11% | $17\times18\times6 + 17\times6\times6 + 5\times18\times6 + 17\times18\times6 = 4824$ |
| Memory | $16\times3^2\times8$ Byte = 1.1 KB | $(3\times6\times18 + 6\times6)\times8$ Byte = 2.8 KB |
Table 6. Optimization efficiency of the $P_n(+e)$ updating process.

| | Offline-Derivation Method | Traditional Method |
|---|---|---|
| Multiplication | $42\times3^3 + 6\times3^2 = 1188$, 15% | $6\times18\times18 + 18^3 = 7776$ |
| Addition | $42\times2\times3^2 + 42\times3^2 = 1134$, 15% | $5\times18^2 + 17\times18^2 + 18^2 = 7452$ |
| Memory | $16\times3^2\times8$ Byte = 1.1 KB | $(3\times6\times18 + 6\times6)\times8$ Byte = 2.8 KB |
Table 7. Optimization efficiency of the offline-derivation and parallel method.

| | Offline-Derivation & Parallel Method | Traditional Method |
|---|---|---|
| Multiplication | $1809 + 2484 + 702 + 1188 = 6183$ (IV), 10.7% | $18468 + 14580 + 11664 + 5184 + 7776 = 57672$ |
| Addition | $1044 + 2475 + 540 + 1134 = 5193$ (IV), 9.5% | $17496 + 13716 + 11340 + 4824 + 7452 = 54828$ |
| Memory | $2.3 + 0.9 + 4.2 + 1.1 + 1.1 = 9.6$ KB | $7.6 + 7.6 + 7.6 + 2.8 + 2.8 = 28.4$ KB |

In (IV), 2484 (2475) is the number of multiplications (additions) in the "useful"-blocks updating.
Table 8. Comparison of the offline-derivation & parallel method and general optimization filters.

| | Multiplication | Addition |
|---|---|---|
| SRIF filtering | $\frac{7}{6}\times18^3 + 36\times18^2 = 18468$ (V) | $\frac{7}{6}\times18^3 + 36\times18^2 = 18468$ (V) |
| U-D decomposition filtering | $\frac{1}{2}\times18^3 + \frac{1}{2}\times18^2 + 10\times18^2 + 8\times18 = 6462$ (VI) | $\frac{1}{2}\times18^3 + \frac{1}{2}\times18^2 + 9\times18^2 + 9\times18 = 6156$ (VI) |
| SVD filtering | $26\times18^3 + 78\times18^2 = 176904$ (VII) | $26\times18^3 + 78\times18^2 = 176904$ (VII) |
| Offline-derivation and parallel method | $2484 + 702 + 1188 = 4374$ (VIII) | $2475 + 540 + 1134 = 4149$ (VIII) |

(V) is explained in Reference [3], (VI) in References [25,26], and (VII) in Reference [5]. Note that (VI) relies on the precondition that $\phi_n$ is already an upper-triangular matrix in 9th-order-block form (which is precisely a conclusion of the offline derivation); otherwise, 9702 multiplications and 9396 additions are actually needed. Concerning (VIII), the procedure of computing $\phi_n$ is not counted, since the general algorithms apply no optimization to $\phi_n$.

5. Simulation

To evaluate the validity of the method, we implemented a practical loosely-coupled SINS/GPS program in Code Composer Studio v5.5 (CCS, a well-known development tool for DSP platforms). The program comprises three modules:
(1) Sensor data sampling module.
(2) SINS algorithm module, which implements the calculation of attitude, velocity and position independently (only the inertial sensor data are needed).
(3) Kalman filter module, for data fusion. The system model is designed according to Section 2, and the computation process is programmed using the offline and parallel method described in Section 3. In addition, for accuracy evaluation, the classical KF is also implemented as a control group.
With the developed program, we can simulate the filtering process and evaluate the performance of our method in CCS. The simulation parameters and conditions are set as follows:
(1) The TMS320C6722, a well-known floating-point DSP, is chosen in CCS as the computing device.
(2) The carrier of the navigation system is assumed to be static. Under this assumption, the gyros/accelerometers detect no valid rotation/velocity rate aside from device noise; therefore, noise constitutes the sampling data driving the SINS/GPS program. As the carrier is static, the velocity/position should stay at the zero/initial values in theory, so this assumption greatly facilitates the evaluation of the program results.
(3) The velocity/position measured by GPS is constantly set to:

$$v_{GPS} \equiv (0, 0, 0)^T,\quad (L\ \ \lambda\ \ h)_{GPS}^T \equiv (0.6977686\ \ 2.0306152\ \ 90.0)^T$$

The velocity/position calculated by SINS is initialized as:

$$v_{SINS} = (0, 0, 0)^T,\quad (L\ \ \lambda\ \ h)_{SINS}^T = (0.6977688370723981\ \ 2.030615370778482\ \ 91.0)^T$$

(4) IMU data rate (noise frequency): 100 Hz; SINS velocity/position updating rates: 50/20 Hz; filtering rate: 10 Hz.
With the above settings, we run the program in CCS for 100 s and record the navigation outputs (e.g., the X-axis velocity and the latitude). The simulation results are illustrated in Figure 3 and Figure 4.
Figure 3. Latitude calculation by different methods.
Figure 4. Velocity calculation by different methods.
The results show that the proposed method effectively restrains the unbounded error accumulation of SINS and keeps the estimated variables around their mean values. In terms of estimation performance, our method is nearly on par with the classical KF (the proposed-method curve does not completely overlap the classical-KF one because fewer calculations also mean less accumulated truncation error). This conclusion agrees with the theoretical analysis: our numerical method, needing no modification of the system model and no engineering approximation, causes little damage to the estimation accuracy. From the point of view of real-time computation, however, our method is much more efficient than the classical KF: in the DSP program, the proposed method needs 64,820 system clocks per filtering cycle, while the classical KF needs 390,480 clocks. If a clock frequency of 200 MHz, the typical frequency of the TMS320C6722, is chosen, then our method costs only about 324 μs per filtering operation. Such a level of real-time efficiency is highly valuable to many accurate navigation applications.

6. Conclusions

Starting from a common classical-Kalman-filter-based SINS/GPS model (position-velocity integration with 18 states and six measurements), we present an optimization algorithm based on the system's numerical characteristics. The algorithm employs a block-matrix technique in offline derivation, where special blocks (e.g., zero blocks, diagonal blocks, etc.) are used to simplify the calculation. In this way, plenty of invalid multiply-by-zero and repeated operations are avoided offline. Furthermore, aiming at the time-consuming update of $P_n(-)$, a novel parallel method is implemented by defining "useful" data and subdividing the computational process, and the entire filtering procedure is greatly accelerated. The statistical analysis and simulation results reveal that the offline algorithm, coupled with the parallel method, can reduce the CPU processing time by 90% and the memory usage by 66%. Compared with several general decomposition optimization algorithms, the proposed method requires no complex matrix theory yet achieves higher efficiency. The distinguishing feature of the algorithm is that no modification of the system model and no engineering approximation are needed, so it causes little accuracy loss relative to the base filter. It is pointed out that, although the derivation is based on a specific integration model, the algorithm, as a purely numerical optimization approach, can be transplanted to other advanced SINS/GPS models or other extended filters. With the powerful symbolic-computation functions of MATLAB, researchers can manipulate formulas involving high-order matrices in an easy way. Consequently, the proposed method is an engineering-tractable approach with high efficiency and high precision for SINS/GPS integrated navigation systems.

Author Contributions

Shaoxing Hu and Shike Xu developed the overall algorithm and wrote the paper. Shike Xu coded the program and performed the simulation. Wangdu Hu analyzed the experimental data. Aiwu Zhang reviewed and revised the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mohinder, S.G.; Angus, P.A. Global Navigation Satellite Systems, Inertial Navigation, and Integration; John Wiley & Sons Inc.: New York, NY, USA, 2013. [Google Scholar]
  2. Chui, C.K.; Chen, G. Kalman Filtering with Real-Time Applications; Tsinghua University Press: Beijing, China, 2013. [Google Scholar]
  3. Bierman, G.J. A comparison of discrete linear filtering algorithms. IEEE Trans. Aerosp. Electron. Syst. 1973, 9, 28–37. [Google Scholar] [CrossRef]
  4. Thornton, C.L.; Bierman, G.J. UDUT covariance factorization for Kalman filtering. Control Dyn. 1980, 16, 177–248. [Google Scholar]
  5. Wang, L.; Libert, G.; Manneback, P. Kalman filter algorithm based on singular value decomposition. In Proceedings of the 31st IEEE Conference on Decision and Control, Tucson, AZ, USA, 16–18 December 1992.
  6. Boncelet, C.; Dickinson, B. An extension of the SRIF Kalman filter. IEEE Trans. Autom. Control 1987, 32, 176–179. [Google Scholar] [CrossRef]
  7. Tsyganova, J.V.; Kulikova, M.V. State sensitivity evaluation within UD based array covariance filters. IEEE Trans. Autom. Control 2013, 58, 2944–2950. [Google Scholar] [CrossRef]
  8. Tang, Y.; Deng, Z.; Manoj, K.K.; Chen, D. A practical scheme of the sigmapoint Kalman filter for high-dimensional systems. J. Adv. Model. Earth Syst. 2014, 6, 21–37. [Google Scholar] [CrossRef]
  9. He, X.; Chen, Y.; Iz, H.B. A reduced-order model for integrated GPS/INS. IEEE Aerosp. Electron. Syst. Mag. 1998, 13, 40–45. [Google Scholar] [CrossRef]
  10. Huang, J.; Tan, H.S. A low-order DGPS-based vehicle positioning system under urban environment. IEEE ASME Trans. Mechatron. 2006, 11, 567–575. [Google Scholar] [CrossRef]
  11. Papazoglou, M.; Tsioras, C. Integrated SAR/GPS/INS for Target Geolocation Improvement. J. Comput. Model. 2014, 4, 267–298. [Google Scholar]
  12. Mutambara, A.G. Decentralized Estimation and Control for Multisensor Systems; CRC Press: Boca Raton, FL, USA, 1998. [Google Scholar]
  13. Zhu, Q.J.; Yan, G.M. A Rapid Computation Method for Kalman Filtering in Vehicular SINS/GPS Integrated System. Appl. Mech. Mater. 2012, 182, 541–545. [Google Scholar] [CrossRef]
  14. Wei, C.; Fang, J.; Sheng, J. Fast data fusion method for integrated navigation system and hardware in loop simulation. J. Beijing Univ. Aeronaut. Astronaut. 2006, 11, 1281–1285. (In Chinese) [Google Scholar]
  15. Holmes, S.; Klein, G.; Murray, D.W. An O(N²) Square Root Unscented Kalman Filter for Visual Simultaneous Localization and Mapping. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 1251–1263. [Google Scholar] [CrossRef] [PubMed]
  16. González, R.; Giribet, J.I.; Patiño, H.D. An approach to benchmarking of loosely coupled low-cost navigation systems. Math. Comput. Model. Dyn. Syst. 2015, 21, 272–287. [Google Scholar] [CrossRef]
  17. Quinchia, A.G.; Falco, G.; Falletti, E.; Dovis, F.; Ferrer, C. A comparison between different error modeling of MEMS applied to GPS/INS integrated systems. Sensors 2013, 13, 9549–9588. [Google Scholar] [CrossRef] [PubMed]
  18. Falco, G.; Einicke, G.A.; Malos, J.T.; Dovis, F. Performance analysis of constrained loosely coupled GPS/INS integration solutions. Sensors 2012, 12, 15983–16007. [Google Scholar] [CrossRef] [PubMed]
  19. Solimeno, A. Low-cost INS/GPS Data Fusion with Extended Kalman Filter for Airborne Applications. Master’s Thesis, Universidade Technica de Lisboa, Lisboa, Portugal, 2007. [Google Scholar]
  20. Farrell, J.A.; Barth, M. The Global Positioning Systems and Inertial Navigation; McGraw-Hill: New York, NY, USA, 1999. [Google Scholar]
  21. Wang, H. GPS Navigation Principle and Application; Science Press: Beijing, China, 2010. (In Chinese) [Google Scholar]
  22. Wu, T.; Ma, L. Strapdown Inertial Navigation System Application Analysis; National Defence Industry Press: Beijing, China, 2011. (In Chinese) [Google Scholar]
  23. Brown, D.W.; Gaston, F.M.F. The design of parallel square-root covariance Kalman filters using algorithm engineering. Integr. VLSI J. 1995, 20, 101–119. [Google Scholar] [CrossRef]
  24. Rosén, O.; Medvedev, A. Parallelization of the Kalman filter on multicore computational platforms. Control Eng. Pract. 2013, 21, 1188–1194. [Google Scholar] [CrossRef]
  25. Bierman, G.J. Measurement Updating Using the U-D Factorization. Automatica 1976, 12, 375–382. [Google Scholar] [CrossRef]
  26. Bierman, G.J. Efficient Time Propagation of U-D Covariance Factors. IEEE Trans. Autom. Control 1981, 26, 890–894. [Google Scholar] [CrossRef]
