Article

A Gradually Linearizing Kalman Filter Bank Designing for Product-Type Strong Nonlinear Systems

School of Affiliation, Hangzhou Dianzi University, Hangzhou 310018, China
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(5), 714; https://doi.org/10.3390/electronics11050714
Submission received: 12 January 2022 / Revised: 19 February 2022 / Accepted: 21 February 2022 / Published: 25 February 2022
(This article belongs to the Topic Artificial Intelligence in Sensors)

Abstract

Our study aimed to improve the poor performance of existing filters, such as the EKF, UKF and CKF, which results from their weak ability to approximate nonlinear systems. This paper proposes a new extended Kalman filter bank for a class of product-type strong nonlinear systems composed of system state variables, time-varying parameters and nonlinear basic functions. Firstly, the nonlinear basic functions are defined as hidden variables corresponding to the system state variables, so that the strong nonlinear systems can be described in a simplified form. Secondly, we discuss building two dynamic models that relate the future values of the parameters and the hidden variables to their current values, based on the given prior information. Thirdly, we describe how an extended Kalman filter bank is designed by gradually linearizing the strong nonlinear systems about the system state variables, time-varying parameters and hidden variables, respectively. The first extended Kalman filter, for the future hidden variables, is designed by using the current estimates of the state variables, parameters and hidden variables. The second extended Kalman filter, for the future parameters, is designed by using the estimates of the current state variables and parameters, as well as the future hidden variables. The third extended Kalman filter, for the future state variables, is designed by using the estimates of the current state variables, as well as the future parameters and hidden variables. Fourthly, digital simulation experiments verify the effectiveness of this method.

1. Introduction

With the development of the times, filtering theory has played an important role in many fields at the domestic and international level, especially in national defense and the military, with applications such as tracking navigation, signal processing, automatic control and target tracking [1,2,3,4,5,6]. R.E. Kalman proposed a dynamic recursive state estimation method called the Kalman filter (KF) in 1960 [7]. The KF takes into account the statistical characteristics of the estimated and observed quantities, and it designs an optimal filter based on the minimum mean square error to solve the problems of state estimation and target tracking in linear Gaussian systems [8,9]. However, in practical applications, the KF is difficult to apply to nonlinear models [10,11]. Bucy proposed the extended Kalman filter (EKF) method to solve nonlinear problems [12]. The EKF is a function linearization method for nonlinear system models: it retains the Taylor expansion of the nonlinear function up to the first-order term, thus solving the nonlinear Kalman filtering problem [13]. The EKF method has been successfully applied to state estimation and target tracking in the fields of vehicle control [14], traffic dynamics [15] and power systems [16]. However, since all the higher-order terms in the Taylor expansion are discarded, a large truncation error occurs in the EKF [17]. Moreover, as the nonlinearity of the system increases, the filtering performance gradually decreases or even diverges. Later, Zhou et al. proposed the strong tracking filter (STF) to make up for the limitations of the EKF. This method introduces a fading factor and calculates its optimal value by forcing the residual sequence to be orthogonal, which improves the estimation accuracy and robustness of the filter. Subsequently, Julier et al. [18] proposed the unscented Kalman filter (UKF) method.
In contrast to the function approximation method, the UKF uses a sampling method to approximate the probability distribution of a nonlinear function, thereby solving the nonlinear filtering problem. However, the UKF can achieve at most a second-order polynomial approximation to the nonlinear model, and the filtering effect gradually declines as the degree of nonlinearity increases. The cubature Kalman filter (CKF) was then developed on the basis of the UKF [19,20]. Its sampling-point selection is optimized relative to the UKF: it is based on the idea of sampling points on a spherical surface, and it realizes the two key steps of transforming the Gaussian integral into spherical-radial form and applying the third-order spherical-radial criterion. However, the error covariance matrix in the CKF must be positive definite; otherwise, the CKF cannot proceed. In short, the existing nonlinear methods mainly follow two approaches: Taylor expansion and sampling approximation. For a general nonlinear function, both approaches provide less than second-order approximation ability.
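The first-order truncation that limits the EKF can be made concrete with a small numerical sketch. The scalar model below (x(k+1) = sin x + w, y = x² + v), the noise levels and the initial values are hypothetical illustrations, not taken from this paper; the Jacobians cos(x̂) and 2x̂ are exactly the first-order Taylor terms the EKF keeps, and everything above first order is discarded.

```python
import numpy as np

# Hypothetical scalar system: x(k+1) = sin(x(k)) + w,  y(k) = x(k)^2 + v.
# The EKF keeps only the first-order Taylor terms, so the linearizations are
# A = cos(x_hat) and H = 2*x_pred; all higher-order terms are truncated.

def ekf_step(x_hat, P, y, Q=0.01, R=0.1):
    # Predict with the first-order linearization of f(x) = sin(x)
    A = np.cos(x_hat)
    x_pred = np.sin(x_hat)
    P_pred = A * P * A + Q
    # Update with the first-order linearization of h(x) = x^2
    H = 2.0 * x_pred
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (y - x_pred ** 2)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x_hat, P = ekf_step(0.5, 1.0, y=0.3)
```

The truncation error mentioned above is exactly the gap between sin(x) and its tangent line at x̂, which grows with the system's nonlinearity.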
The joint estimation of the state and parameters, in order to identify the time-varying parameters in the model, is also an important topic. In the existing methods, the parameters introduced into the system are often handled by expanding the state variables, but this method undoubtedly increases the degree of nonlinearity of the original nonlinear function. Therefore, the abovementioned nonlinear Kalman filtering methods, such as the EKF, STF and UKF, have clearly insufficient filtering capabilities for such situations and encounter great difficulties in estimation performance. To address this, Jiang et al. [21] used the estimation result of the least-squares method as the initial value of the extended filtering algorithm and proposed a joint state and parameter estimation method for polynomial systems based on nonlinear filtering. Wen et al. [22] combined strong tracking filter theory with multi-sensor data fusion technology, obtained a state estimation result based on global information fusion, and established a new method for multi-sensor joint state and parameter estimation based on the strong tracking filter. The abbreviations of the Kalman filtering methods used in this article are shown in Table 1.
In addition, the research team of Wen and Sun et al. has achieved a series of results in the design of high-order extended Kalman filters [23,24,25]. Wang et al. also proposed a high-order extended Kalman filter design method based on the maximum correntropy criterion [26]. Liu et al. proposed a design method for a high-order extended Kalman filter based on the Kronecker product transform [27].
In this paper, for a class of nonlinear system models with multiplicative hidden variables and parameters, a step-by-step state and parameter estimation method different from the above methods is proposed. Through step-by-step linearization, each variable is solved in turn with the help of a Three-Stage Kalman Filter (TSKF); the step-by-step estimation of the system state and parameters is realized, and the filtering accuracy is effectively improved.
The main contributions of this paper are as follows: (1) adopting the idea of splitting, the multiplicative basic function term is defined as a hidden variable of the system and regarded as a time-varying parameter of the system, which formally simplifies the system model; and (2) adopting the idea of gradual linearization, Kalman filters are designed for the hidden variables, time-varying parameters and state variables, respectively, thus improving the filtering effect for each weakly nonlinear factor.

2. A Strong Tracking Filter (STF) Method for Joint Estimation of the State and Parameters in a Nonlinear System

Parameter estimation plays an important role in fault detection and diagnosis. In an actual controlled process, the process parameters often change randomly over time, and the law of their variation cannot be obtained. These time-varying parameters must be estimated and tested with high precision to ensure the normal operation of the system. Therefore, it is necessary to establish a joint state and parameter estimation method for nonlinear time-varying stochastic systems. Under normal circumstances, the state and parameters are estimated jointly by expanding the state variables and applying the standard extended Kalman filter.

2.1. Description of Nonlinear Systems with Real Varying Parameters to Be Estimated

Consider the following type of nonlinear system:
x ( k + 1 ) = f ( x ( k ) , θ ( k ) ) + w x ( k )
y ( k + 1 ) = h ( k + 1 , x ( k + 1 ) , θ ( k + 1 ) ) + v ( k + 1 )
where the n-dimensional state vector is as follows:
x ( k ) = [ x 1 ( k ) , , x i ( k ) , , x n ( k ) ] T
m-dimensional observation vector:
y ( k + 1 ) = [ y 1 ( k + 1 ) , , y i ( k + 1 ) , , y m ( k + 1 ) ] T
n-dimensional nonlinear function vector:
f ( x ( k ) , θ ( k ) ) = [ f 1 ( x ( k ) , θ ( k ) ) , , f n ( x ( k ) , θ ( k ) ) ] T
p-dimensional time-varying parameter vector to be estimated:
θ ( k ) = [ θ 1 ( k ) , , θ i ( k ) , , θ p ( k ) ] T
Among the above variables, the state variables, x ( k ) , and the parameter variables, θ ( k ) , need to be estimated; f ( x ( k ) , θ ( k ) ) in Equation (1) is the state-transition nonlinear function with time-varying parameters θ ( k ) , and h ( k + 1 , x ( k + 1 ) , θ ( k + 1 ) ) in Equation (2) is the observation nonlinear function with time-varying parameters θ ( k + 1 ) . Moreover, w x ( k ) and v ( k + 1 ) are independent Gaussian white noises of the corresponding dimensions.
Firstly, in order to realize real-time estimation of state x ( k ) and parameters θ ( k ) , we introduce the following auxiliary equation:
θ ( k + 1 ) = θ ( k ) + w θ ( k )
We then establish the corresponding expansion vector:
x e ( k ) = [ x T ( k ) ,   θ T ( k ) ] T
w e ( k ) = [ w x T ( k ) ,   w θ T ( k ) ] T
Then the system (1)–(3) can be written in the following equivalent form:
x e ( k + 1 ) = f e ( x e ( k ) , k ) + w e ( k )
y ( k + 1 ) = h e ( k + 1 , x e ( k + 1 ) ) + v ( k + 1 )
where
f e ( x e ( k ) , k ) = [ f ( x ( k ) , θ ( k ) , k ) θ ( k ) ]
h e ( k + 1 , x e ( k + 1 ) ) = h ( k + 1 , x ( k + 1 ) , θ ( k + 1 ) )
It is assumed that the initial value of the state x e ( 0 ) has the following statistical characteristics:
E { x e ( 0 ) } = x ^ e 0 ;   E { [ x e ( 0 ) x ^ e 0 ] [ x e ( 0 ) x ^ e 0 ] T } = P 0
The initial values of noise w e ( k )   , v ( k + 1 )   , and the extended dimension state x e ( 0 ) have the following characteristics:
E { w e ( k ) } = 0 ;    E { w e ( k ) w e T ( k ) } = Q e ( k ) E { v ( k + 1 ) } = 0 ;    E { v ( k + 1 ) v T ( k + 1 ) } = R ( k + 1 ) E { w e ( k ) v T ( k + 1 ) } = 0 ,    E { w e ( k ) x e T ( 0 ) } = 0 ;    E { v ( k + 1 ) x e T ( 0 ) } = 0
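The dimension expansion of Equations (6) and (7) can be sketched in a few lines; the dimensions, state values and covariance values below are hypothetical placeholders.

```python
import numpy as np

# Sketch of the dimension expansion of Equations (6)-(7): the parameter theta
# follows the random walk theta(k+1) = theta(k) + w_theta(k) of Equation (3),
# and the augmented state is x_e = [x; theta].  Dimensions and covariance
# values below are hypothetical placeholders.
n, p = 2, 1
x = np.array([1.0, -0.5])
theta = np.array([0.3])

x_e = np.concatenate([x, theta])              # expansion vector, Eq. (6)

Q_x = 0.01 * np.eye(n)                        # covariance of w_x(k)
Q_theta = 0.001 * np.eye(p)                   # covariance of w_theta(k)
Q_e = np.block([[Q_x, np.zeros((n, p))],      # covariance of w_e = [w_x; w_theta]
                [np.zeros((p, n)), Q_theta]])
```

The block-diagonal structure of Q_e reflects the assumption that the state noise and the parameter noise are mutually independent.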
Remark 1.
The strongly nonlinear systems considered in this paper are those in which the second-order and higher derivatives of the system equations still have a significant effect.

2.2. Design of STF for Joint Estimation of State and Parameter in the Nonlinear System

This section introduces the strong tracking filter (STF) for the joint estimation of the state and parameters of the dimension-expanded nonlinear system of Equations (4) and (5). Suppose that the estimated value of the expanded state variable, x e ( k )   , and the estimation error covariance matrix have been obtained:
x ^ e ( k k ) = E { x e ( k ) x ^ e 0 , y ( 1 ) , y ( 2 ) , , y ( k ) } P e ( k k ) = E { [ x e ( k ) x ^ e ( k k ) ] [ x e ( k ) x ^ e ( k k ) ] T }
and assume that the observed value y ( k + 1 ) at time k + 1 is known. The goal of this section is to obtain the estimated value of the state and the estimated error covariance matrix, as follows:
x ^ e ( k k ) , P e ( k k ) y ( k + 1 ) x ^ e ( k + 1 k + 1 ) , P e ( k + 1 k + 1 )
Therefore, design the following nonlinear system (4) and (5) of the state and parameter joint estimation of the strong tracking filter (STF):
x ^ e ( k + 1 k + 1 ) = x ^ e ( k + 1 k ) + K e ( k + 1 ) y ˜ e ( k + 1 k )
where the measurement forecast estimation error is as follows:
y ˜ e ( k + 1 k ) = y ( k + 1 ) y ^ e ( k + 1 k ) = y ( k + 1 ) h e ( x ^ e ( k + 1 k ) )
The key to the design of the strong tracking filter (STF) is to obtain the gain matrix, K ( k + 1 ) , which must satisfy the following orthogonality principle:
E { [ x e ( k ) x ^ e ( k k ) ] [ x e ( k ) x ^ e ( k k ) ] T } = min
E { y ˜ ( k + j ) y ˜ T ( k ) } = 0 , k = 1 , 2 , , N , j = 1 , 2 , , N
Equation (14) is the performance index of the extended Kalman filter. Equation (15) expresses the orthogonality of the innovation: the residual sequences at different times are required to be mutually orthogonal; that is, the orthogonality principle of Equation (15) is introduced on the basis of the EKF. When the model is accurate, the filter operates normally; the STF then degenerates to an EKF based on the performance index (14), and (15) has no adjustment effect. When the model is uncertain, the state estimate of the filter deviates from the true state of the system, which is inevitably reflected in the output residual. The gain matrix is then adjusted online so that Equation (15) is forced to remain valid; the residual sequence stays mutually orthogonal, and the filter maintains accurate tracking of the actual system state.
The specific steps to establish STF are as follows:
x ^ e ( k + 1 k ) = f e ( x ^ e ( k k ) )
P e ( k + 1 k ) = λ ( k ) A e ( x ^ e ( k k ) ) P e ( k k ) A T ( x ^ e ( k k ) ) + Q e ( k )
K e ( k + 1 ) = P e ( k + 1 k ) H e T ( x ^ e ( k + 1 k ) ) × [ H e ( x ^ e ( k + 1 k ) ) P e ( k + 1 k ) H e T ( x ^ e ( k + 1 k ) ) + R ( k + 1 ) ] − 1
λ ( k ) in Equation (17) is the adaptive fading factor of the system, and we have
λ ( k ) = { λ 1 ( k ) , λ 1 ( k ) 1 1 , λ 1 ( k ) < 1 ,
where
λ 1 ( k ) = T r ( N ( k ) ) T r ( M ( k ) )
N ( k ) = V 1 ( k ) − H e ( x ^ e ( k + 1 k ) ) Q e ( k ) H e T ( x ^ e ( k + 1 k ) ) − l ( k ) R e ( k + 1 )
M ( k ) = H e ( x ^ e ( k + 1 k ) ) A e ( x ^ ( k k ) ) P e ( k k )               × A e T ( x ^ e ( k k ) ) H e T ( x ^ e ( k + 1 k ) )
A e ( x ^ e ( k k ) ) = f e ( x e ( k ) ) x e x e ( k ) = x ^ e ( k k )
H e ( x ^ e ( k + 1 k ) ) = h e ( x ( k + 1 ) ) x e x e ( k + 1 ) = x ^ e ( k + 1 k )
V 1 ( k ) = { y ˜ e ( 1 ) y ˜ e T ( 1 ) , k = 0 ρ V 1 ( k 1 ) + y ˜ e ( k ) y ˜ e T ( k ) 1 + ρ , k 1
where ρ is the forgetting factor of STF, and l ( k ) is the weakening factor.
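The fading-factor computation of Equations (19)–(23) can be sketched as follows; N(k) and M(k) are assumed to be already assembled from the Jacobians and covariances, and the forgetting-factor value is a hypothetical choice.

```python
import numpy as np

# Sketch of the adaptive fading factor of Equations (19)-(23).  N(k) and M(k)
# are assumed to be already assembled; the arguments below are hypothetical.

def fading_factor(N, M):
    # lambda_1(k) = Tr(N(k)) / Tr(M(k)), clipped from below at 1 (Eq. 19-20)
    lam1 = np.trace(N) / np.trace(M)
    return lam1 if lam1 >= 1.0 else 1.0

def innovation_covariance(V_prev, y_tilde, k, rho=0.95):
    # Recursive residual covariance V1(k) of Eq. (23);
    # rho is the forgetting factor of the STF.
    outer = np.outer(y_tilde, y_tilde)
    if k == 0:
        return outer
    return (rho * V_prev + outer) / (1.0 + rho)
```

Clipping λ(k) at 1 means the fading factor only ever inflates the predicted covariance in Equation (17), never shrinks it, so the STF can fall back to the plain EKF when the model matches the data.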
The core of the orthogonality principle is Equation (15). When Equation (14) is replaced by other performance indices, other similar orthogonality principles can be obtained, giving the corresponding filter the properties of a strong tracking filter.
Remark 2.
Degradation mechanism analysis: STF → EKF → KF.
(1) 
A strong tracking filter (STF) is composed of Equations (14)–(27); when λ ( k ) ≡ 1 in Equation (19), the STF degenerates into the traditional EKF.
(2) 
The nonlinear functions in Equations (1) and (2) degenerate into the following:
f ( x ( k ) , θ ( k ) ) : = A ( k ) x ( k )
h ( k + 1 , x ( k + 1 ) , θ ( k + 1 ) ) : = H ( k + 1 ) x ( k + 1 )
Then the STF degenerates into the traditional Kalman filter (KF).

3. A Class of Nonlinear Systems Described by the Product of States, Parameters and Nonlinear Functions

3.1. Description of Non-Linear System

The following is a strong nonlinear system described by the product of states, parameters and nonlinear functions:
x i ( k + 1 ) = j = 1 n a i j x j ( k ) × θ i ( k ) × f i ( x ( k ) ) + w i ( k ) ;   i = 1 , 2 , , n
z i ( k + 1 ) = h i ( 1 ) ( x ( k + 1 ) ) × h i ( 2 ) ( θ i ( k + 1 ) ) × h i ( 3 ) ( f i ( x ( k ) ) )                 + v i ( k + 1 ) ; i = 1 , 2 , , m
where k is the discrete time, x ( k ) R n is the system state vector, θ ( k ) R p is the time-varying parameter vector to be estimated and z ( k + 1 ) R m is the observation vector of the state. Moreover, f i ( x ( k ) ) is the basic multiplicative factor of the state variables; h i ( 1 ) ( x ( k + 1 ) ) is the nonlinear function of the state x ( k + 1 ) , h i ( 2 ) ( θ ( k + 1 ) ) is the nonlinear function of the parameter θ ( k + 1 ) and h i ( 3 ) ( f i ( x ( k ) ) ) is the compound nonlinear function of the state x ( k ) .
Assumption 1.
w ( k ) and v ( k + 1 ) are uncorrelated white noise sequences, and they meet the following conditions:
E { w ( k ) } = 0 , E { v ( k ) } = 0
E { w ( k ) w T ( j ) } = Q ( k ) δ k , j
E { v ( k ) v T ( j ) } = R ( k + 1 ) δ k , j ;
When k = j , δ k , j = 1 ; otherwise, δ k , j = 0 .
Assumption 2.
The initial state value and the system noise are independent of each other, and they meet the following conditions:
E { w ( k ) v T ( k ) } = 0 ,   E { x ( 0 ) v T ( k ) } = 0 ,   E { x ( 0 ) w 1 T ( k ) } = 0 E { x ( 0 ) } = x ^ 0 ,   E { [ x ( 0 ) x ^ 0 ] [ x ( 0 ) x ^ 0 ] T } = P 0 ,   P 0 0  
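A minimal simulation of the product-type dynamics of Equation (28) may help fix ideas. The coefficient matrix a, the choice of basic functions f_i(x) = cos(x_i), the fixed parameter vector and the noise level are all hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the product-type dynamics of Equation (28) for n = 2.  The
# coefficient matrix a, the basic functions f_i(x) = cos(x_i) and the fixed
# parameter vector theta are hypothetical.
a = np.array([[0.8, 0.1],
              [0.0, 0.7]])
theta = np.array([1.0, 0.9])

def step(x, w):
    # x_i(k+1) = (sum_j a_ij x_j(k)) * theta_i(k) * f_i(x(k)) + w_i(k)
    return (a @ x) * theta * np.cos(x) + w

x = np.array([0.5, -0.2])
for _ in range(10):
    x = step(x, 0.01 * rng.standard_normal(2))
```

Each component of the next state is a triple product of a linear combination of the states, a parameter and a nonlinear basic function, which is exactly the structure the gradual linearization below exploits.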

3.2. Gradual Linearization of the Strongly Nonlinear Systems

First, define the basic nonlinear function f i ( x ( k ) , k ) as the hidden variable associated with the system state variable, x ( k ) :
α i ( k ) : = f i ( x ( k ) , k ) ,   i = 1 , 2 , , n
Then Equations (28) and (29) can be written as follows:
x i ( k + 1 ) = j = 1 n a i j x j ( k ) × θ i ( k ) × α i ( k ) + w i ( k ) ; i = 1 , 2 , , n
z i ( k + 1 ) = h i ( 1 ) ( x ( k + 1 ) ) × h i ( 2 ) ( θ ( k + 1 ) ) × h i ( 3 ) ( α ( k + 1 ) ) + v i ( k + 1 ) ; i = 1 , 2 , , m
(1) Pseudo-linearized representation for hidden variables:
x ( k + 1 ) = A α ( x ( k ) , θ ( k ) ) × α ( k ) + w ( k )
z ( k + 1 ) = H α ( h ( 1 ) ( x ( k + 1 ) ) , h ( 2 ) ( θ ( k + 1 ) ) ) × h ( 3 ) ( α ( k + 1 ) ) + v α ( k + 1 )
where
A α ( x ( k ) , θ ( k ) ) = d i a g { j = 1 n a 1 j x j ( k ) θ 1 ( k ) , , j = 1 n a i j x j ( k ) θ i ( k ) , , j = 1 n a n j x j ( k ) θ n ( k ) }
H α ( h ( 1 ) ( x ( k + 1 ) , h ( 2 ) ( θ ( k + 1 ) ) ) = d i a g { h 1 ( 1 ) ( x ( k + 1 ) ) × h 1 ( 2 ) ( θ ( k + 1 ) ) ,       , h i ( 1 ) ( x ( k + 1 ) ) × h i ( 2 ) ( θ ( k + 1 ) ) , , h m ( 1 ) ( x ( k + 1 ) ) × h m ( 2 ) ( θ ( k + 1 ) ) }
(2) Pseudo-linear representation for time-varying parameters:
x ( k + 1 ) = A θ ( x ( k ) , α ( k ) ) × θ ( k ) + w θ ( k )
z ( k + 1 ) = H θ ( h ( 1 ) ( x ( k + 1 ) ) , h ( 3 ) ( α ( k + 1 ) ) ) × h ( 2 ) ( θ ( k + 1 ) ) + v ( k + 1 )
where
A θ ( x ( k ) , α ( k ) ) = d i a g { j = 1 m a 1 j x j ( k ) α 1 ( k ) , , j = 1 m a i j x j ( k ) α i ( k ) , , j = 1 m a n j x j ( k ) α n ( k ) }
H θ ( h ( 1 ) ( x ( k + 1 ) , h ( 3 ) ( α ( k + 1 ) ) ) = d i a g { h 1 ( 1 ) ( x ( k + 1 ) ) × h 1 ( 3 ) ( α ( k + 1 ) ) ,       , h i ( 1 ) ( x ( k + 1 ) ) × h i ( 3 ) ( α ( k + 1 ) ) , , h m ( 1 ) ( x ( k + 1 ) ) × h m ( 3 ) ( α ( k + 1 ) ) }
(3) Pseudo-linearized representation of state variables:
x ( k + 1 ) = A x ( θ ( k ) , α ( k ) ) × x ( k ) + w ( k )
z ( k + 1 ) = H x ( h ( 2 ) ( θ ( k + 1 ) , h ( 3 ) ( α ( k + 1 ) ) ) × h ( 1 ) ( x ( k + 1 ) + v x ( k + 1 )
where
A x ( θ ( k ) , α ( k ) ) = [ a 11 θ 1 ( k ) α 1 ( k ) a 1 i θ 1 ( k ) α 1 ( k ) a 1 n θ 1 ( k ) α 1 ( k ) a i 1 θ i ( k ) α i ( k ) a i i θ i ( k ) α i ( k ) a i n θ i ( k ) α i ( k ) a n 1 θ n ( k ) α n ( k ) a n i θ n ( k ) α n ( k ) a n n θ n ( k ) α n ( k ) ]
H x ( h ( 2 ) ( θ ( k + 1 ) , h ( 3 ) ( α ( k + 1 ) ) ) = d i a g { h 1 ( 2 ) ( θ ( k + 1 ) ) × h 1 ( 3 ) ( α ( k + 1 ) ) ,       , h i ( 2 ) ( θ ( k + 1 ) ) × h i ( 3 ) ( α ( k + 1 ) ) , , h m ( 2 ) ( θ ( k + 1 ) ) × h m ( 3 ) ( α ( k + 1 ) ) }
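Because the three pseudo-linear factorizations above describe the same dynamics, the products A_α α, A_θ θ and A_x x must coincide. The short sketch below builds the matrices of Equations (36), (40) and (44) from hypothetical coefficients and current estimates.

```python
import numpy as np

# Sketch of the three pseudo-linear system matrices of Equations (36), (40)
# and (44).  The coefficients a_ij and the current estimates below are
# hypothetical placeholders.
a = np.array([[0.8, 0.1],
              [0.2, 0.7]])
x_hat = np.array([0.5, -0.2])
theta_hat = np.array([1.0, 0.9])
alpha_hat = np.array([0.88, 0.98])

lin = a @ x_hat                              # sum_j a_ij x_j(k) for each row i
A_alpha = np.diag(lin * theta_hat)           # Eq. (36): alpha factored out
A_theta = np.diag(lin * alpha_hat)           # Eq. (40): theta factored out
A_x = a * (theta_hat * alpha_hat)[:, None]   # Eq. (44): row i scaled by theta_i * alpha_i
```

Checking that A_alpha @ alpha_hat, A_theta @ theta_hat and A_x @ x_hat agree is a convenient sanity test when implementing the three factorizations.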
Remark 3.
The abovementioned linear processes for hidden variables, α ( k ) ; time-varying parameter variables, θ ( k ) ; and system state variables, x ( k ) , are all pseudo-linear processes. This is because the abovementioned processes are only equivalent transformation processes in the original system space and do not make any substantial changes. Therefore, they need to be re-linearized in the expanded space.
(4) The linear dynamic modeling of parametric variables θ ( k ) and hidden state variables α ( k ) .
This subsection establishes the following linear dynamic correlation model, with time as the variable, between θ ( k + 1 ) , α ( k + 1 ) and θ ( k ) , α ( k ) , x ( k ) :
α ( k + 1 ) = S α ( k ) x ( k ) + G α ( k ) α ( k ) + L α ( k ) θ ( k ) + w α ( k )
θ ( k + 1 ) = S θ ( k ) x ( k ) + G θ ( k ) α ( k ) + L θ ( k ) θ ( k ) + w θ ( k )
where
S θ ( k ) ,   G θ ( k ) , L θ ( k ) ;   S α ( k ) ,   G α ( k ) , L α ( k )
They are the system matrices to be identified, with appropriate dimensions; w θ ( k ) and w α ( k ) are the modeling error vectors.
Remark 4.
Based on the strong tracking filter (STF) introduced in Section 2, the estimated sequences of the state variables, x ( k ) ; time-varying parameter variables, θ ( k ) ; and hidden variables, α ( k ) , can be obtained as follows:
x ^ ( k k ) ,   θ ^ ( k k ) ,   α ^ ( k k ) ;   k = 1 , 2 , , M
(1) Use the least-square method to determine the statistical characteristics of model parameters and modeling errors:
Based on the Equations (46) and (47), the estimated value of the matrix parameter to be estimated can be obtained by using the linear least-squares method and is written as follows:
S ^ θ ,   G ^ θ , L ^ θ ;   S ^ α ,   G ^ α , L ^ α
Using Equations (46), (47) and (50), we can obtain the following:
θ ˜ ( k + 1 ) = θ ( k + 1 ) θ ^ ( k + 1 )           = θ ( k + 1 ) S ^ θ x ( k ) G ^ θ α ( k ) L ^ θ θ ( k )
α ˜ ( k + 1 ) = α ( k + 1 ) α ^ ( k + 1 )           = α ( k + 1 ) S ^ α x ( k ) G ^ α α ( k ) L ^ α θ ( k )
Then the statistical characteristics of modeling errors of w θ ( k ) and w α ( k ) can be determined separately with the help of the sequence { θ ˜ ( k ) } , { α ˜ ( k ) } .
(2) Use Multidimensional Taylor Network to determine the statistical characteristics of model parameters and modeling errors.
The Multidimensional Taylor Network, which has been used effectively in practice, can be borrowed to determine the estimated values of the parameters S θ ( k ) ,   G θ ( k ) , L θ ( k ) , S α ( k ) ,   G α ( k ) and L α ( k ) in (46) and (47). The method of Equations (51) and (52) can then be used to determine the statistical characteristics of the modeling errors w θ ( k ) and w α ( k ) , respectively.
(3) Use the idea of random walk to determine the statistical characteristics of model parameters and modeling errors.
In the absence of any prior information, set the following:
S θ ( k ) = 0 ,   G θ ( k ) = 0 , L θ ( k ) = I ;   S α ( k ) = 0 ,    G α ( k ) = I , L α ( k ) = 0
Both θ ( k ) and α ( k ) are regarded as random processes conforming to random walks, and the methods of Equations (51) and (52) are used to determine the statistical characteristics of the modeling errors w θ ( k ) and w α ( k ) .
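Under the random-walk option of Equation (53), the residuals of Equations (51) and (52) reduce to first differences of the estimate sequence, so the modeling-error statistics become simple sample statistics. The parameter-estimate sequence below is a hypothetical illustration.

```python
import numpy as np

# Sketch of the random-walk option of Equation (53): with S_theta = 0,
# G_theta = 0 and L_theta = I, the model reduces to
# theta(k+1) = theta(k) + w_theta(k), so the modeling-error statistics of
# Equations (51)-(52) become sample statistics of first differences.
# The estimate sequence is hypothetical.
theta_seq = np.array([[0.30], [0.32], [0.29], [0.33], [0.31]])

residuals = theta_seq[1:] - theta_seq[:-1]   # theta~(k+1) under the random walk
Q_theta = np.cov(residuals.T, bias=True)     # sample covariance of w_theta(k)
```

The same construction with G_α = I yields the covariance of w_α(k) from the hidden-variable estimate sequence.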

4. The Design of Three-Stage Kalman Filter for Estimating State, Parameters and Hidden Variables

Assume that the initial values x ( 0 ) , θ ( 0 ) , α ( 0 ) of the estimated state, parameters and hidden variables have the following statistical characteristics:
E { x ( 0 ) } = x ^ 0 ;   E { [ x ( 0 ) − x ^ 0 ] [ x ( 0 ) − x ^ 0 ] T } = P x 0 E { θ ( 0 ) } = θ ^ 0 ;   E { [ θ ( 0 ) − θ ^ 0 ] [ θ ( 0 ) − θ ^ 0 ] T } = P θ 0 E { α ( 0 ) } = α ^ 0 ;   E { [ α ( 0 ) − α ^ 0 ] [ α ( 0 ) − α ^ 0 ] T } = P α 0
The process noises, w x ( k ) , w θ ( k ) and w α ( k ) ; the observation noises, v x ( k ) , v θ ( k ) and v α ( k ) ; and the initial values, x ( 0 ) , θ ( 0 ) and α ( 0 ) , are all statistically independent of each other.
Given the sequence of observations
y ( 1 ) ,   y ( 2 ) ,   , y ( k )
we further obtain the following:
x ^ ( k k ) = E { x ( k ) x ^ 0 , y ( 1 ) , y ( 2 ) , , y ( k ) } P x ( k k ) = E { [ x ( k ) x ^ ( k k ) ] [ x ( k ) x ^ ( k k ) ] T }
θ ^ ( k k ) = E { θ ( k ) θ ^ 0 , y ( 1 ) , y ( 2 ) , , y ( k ) } P θ ( k k ) = E { [ θ ( k ) − θ ^ ( k k ) ] [ θ ( k ) − θ ^ ( k k ) ] T }
α ^ ( k k ) = E { α ( k ) α ^ 0 , y ( 1 ) , y ( 2 ) , , y ( k ) } P α ( k k ) = E { [ α ( k ) α ^ ( k k ) ] [ α ( k ) α ^ ( k k ) ] T }
The goal of this section is as follows:
x ^ ( k k ) , P x ( k k ) ;   θ ^ ( k k ) , P θ ( k k ) ;   α ^ ( k k ) , P α ( k k ) → y ( k + 1 ) → α ^ ( k + 1 k + 1 ) , P α ( k + 1 k + 1 )
α ^ ( k + 1 k + 1 ) → y ( k + 1 ) → θ ^ ( k + 1 k + 1 ) , P θ ( k + 1 k + 1 )
α ^ ( k + 1 k + 1 ) , θ ^ ( k + 1 k + 1 ) → y ( k + 1 ) → x ^ ( k + 1 k + 1 ) , P x ( k + 1 k + 1 )
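The estimation chain above can be expressed as a structure-only sketch: the three update functions below are hypothetical placeholders for the filters of the following subsections, and only the ordering and data flow (α first, then θ, then x, each stage conditioning on what the earlier stages produced) are illustrated.

```python
# Structure-only sketch of the three-stage estimation chain summarized above.
# The three update functions are hypothetical placeholders for the filters of
# the following subsections; only the ordering and data flow are shown.

def alpha_update(x_hat, theta_hat, alpha_hat, y):
    return alpha_hat          # stage 1: would apply the Section 4.1 filter

def theta_update(x_hat, theta_hat, alpha_new, y):
    return theta_hat          # stage 2: would apply the Section 4.2 filter

def x_update(x_hat, theta_new, alpha_new, y):
    return x_hat              # stage 3: would apply the state filter

def three_stage_step(x_hat, theta_hat, alpha_hat, y):
    alpha_new = alpha_update(x_hat, theta_hat, alpha_hat, y)
    theta_new = theta_update(x_hat, theta_hat, alpha_new, y)
    x_new = x_update(x_hat, theta_new, alpha_new, y)
    return x_new, theta_new, alpha_new
```

The same observation y(k+1) is reused at every stage, while the conditioning set grows from stage to stage, exactly as in the chain above.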

4.1. The Linear Kalman Filter Design for Solving Hidden Variables α ( k + 1 )

The pseudo-linearized system composed of Equations (46) and (35) is rewritten as follows:
α ( k + 1 ) = S α ( k ) x ( k ) + G α ( k ) α ( k ) + L α ( k ) θ ( k ) + w α ( k )
z ( k + 1 ) = H α ( h ( 1 ) ( x ( k + 1 ) ) , h ( 2 ) ( θ ( k + 1 ) ) ) × h ( 3 ) ( α ( k + 1 ) ) + v α ( k + 1 )
and suppose that the initial value α ( 0 ) of the parameter has the following statistical characteristics:
E { α ( 0 ) } = α ^ 0 ;   E { [ α ( 0 ) α ^ 0 ] [ α ( 0 ) α ^ 0 ] T } = P α 0
The initial values of noise v α ( k + 1 )   , w α ( k )   and parameters α ( 0 )   have the following characteristics:
E { w α ( k ) } = 0 ;   E { v α ( k + 1 ) } = 0 ;   E { w α ( k ) v α T ( k + 1 ) } = 0 E { w α ( k ) α T ( 0 ) } = 0 ;   E { v α ( k + 1 ) α T ( 0 ) } = 0
The goal of this section is to achieve the following:
x ^ ( k k ) , P x ( k k ) ;   θ ^ ( k k ) , P θ ( k k ) ;   α ^ ( k k ) , P α ( k k ) → y ( k + 1 ) → α ^ ( k + 1 k + 1 ) , P α ( k + 1 k + 1 )
Under the conditions of known x ^ ( k k ) , θ ^ ( k k ) , α ^ ( k k ) and the online observation y ( k + 1 ) , the linearized description of the hidden variables α ( k + 1 ) in Equation (61) is used to construct the Kalman filter that finds the following:
α ^ ( k + 1 k + 1 ) , P α ( k + 1 k + 1 )

The Design of the Linear Kalman Filter to Estimate the Value of the Hidden Variable α ( k + 1 )

To design a linear filter estimator for solving hidden variables, we have the following:
α ^ ( k + 1 k + 1 ) = E { α ( k + 1 ) x ^ ( k k ) , θ ^ ( k k ) , α ^ ( k k ) , y ( k + 1 ) }                          = α ^ ( k + 1 k ) + K α ( k + 1 ) [ y ( k + 1 ) − y ^ α ( k + 1 k ) ] P α ( k + 1 k + 1 ) = E { [ α ( k + 1 ) − α ^ ( k + 1 k + 1 ) ] [ α ( k + 1 ) − α ^ ( k + 1 k + 1 ) ] T }
where Equations (91)–(107) follow in sequence.
(1) Hidden variable forecast estimates and forecast estimate error covariance matrix:
Based on Equation (84), the predicted estimated value of the hidden variable can be obtained as follows:
α ^ ( k + 1 k ) = S α ( k ) x ^ ( k k ) + G α ( k ) α ^ ( k k ) + L α ( k ) θ ^ ( k k )
and forecast estimate error:
α ˜ ( k + 1 k ) = α ( k + 1 ) − α ^ ( k + 1 k )                   = S α ( k ) x ( k ) + G α ( k ) α ( k ) + L α ( k ) θ ( k ) + w α ( k )                   − S α ( k ) x ^ ( k k ) − G α ( k ) α ^ ( k k ) − L α ( k ) θ ^ ( k k )                   = S α ( k ) x ˜ ( k k ) + G α ( k ) α ˜ ( k k ) + L α ( k ) θ ˜ ( k k ) + w α ( k )
and predictive estimate error covariance matrix:
P α ( k + 1 k ) = E { α ˜ ( k + 1 k ) α ˜ T ( k + 1 k ) }                     = λ α ( k ) [ S α ( k ) P x ( k k ) S α T ( k ) + G α ( k ) P α ( k k ) G α T ( k )                        + L α ( k ) P θ ( k k ) L α T ( k ) ] + Q α ( k )
(2) Prediction estimates and prediction estimation errors of measured values of hidden variables:
Based on Equation (61), the predicted estimated value for the measured value of the hidden variable can be obtained as follows:
z ^ ( k + 1 k ) = H α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) ) ×   h ( 3 ) ( α ^ ( k + 1 k ) )
and forecast estimation error vector:
z ˜ α ( k + 1 k ) = z ( k + 1 ) − z ^ α ( k + 1 k )                      = H α ( h ( 1 ) ( x ( k + 1 ) ) , h ( 2 ) ( θ ( k + 1 ) ) ) × h ( 3 ) ( α ( k + 1 ) ) + v ( k + 1 )                         − H α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) ) × h ( 3 ) ( α ^ ( k + 1 k ) )                      ≈ H α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) )                         × [ h ( 3 ) ( α ( k + 1 ) ) − h ( 3 ) ( α ^ ( k + 1 k ) ) ] + v ( k + 1 )                      ≈ H α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) )                         × H α ( 3 ) ( α ^ ( k + 1 k ) ) α ˜ ( k + 1 k ) + v α ( k + 1 )                      = H ¯ α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) , α ^ ( k + 1 k ) )                         × α ˜ ( k + 1 k ) + v α ( k + 1 )
and predictive estimate error covariance matrix:
P α , z z ( k + 1 k ) = E { z ˜ α ( k + 1 k ) z ˜ α T ( k + 1 k ) } = ( H ¯ α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) , α ^ ( k + 1 k ) ) ) P α ( k + 1 k )    × ( H ¯ α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) , α ^ ( k + 1 k ) ) ) T + R α ( k + 1 )
where
H ¯ α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) , α ^ ( k + 1 k ) )                 : = H α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) ) × H α ( 3 ) ( α ^ ( k + 1 k ) )
Remark 5.
It is explained as follows in the derivation process of Equation (70).
(1) The first “≈” is due to the approximation used:
H α ( h ( 1 ) ( x ( k + 1 ) ) , h ( 2 ) ( θ ( k + 1 ) ) ) ≈ H α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) )
(2) The second “≈” is due to the approximation of the first-order Taylor expansion:
h ( 3 ) ( α ( k + 1 ) ) − h ( 3 ) ( α ^ ( k + 1 k ) ) ≈ H α ( 3 ) ( α ^ ( k + 1 k ) ) α ˜ ( k + 1 k )
where
H α ( 3 ) ( α ^ ( k + 1 k ) ) = ∂ h ( 3 ) ( α ( k + 1 ) ) / ∂ α ( k + 1 ) | α ( k + 1 ) = α ^ ( k + 1 k )
(3) In the derivation process of Equation (70), the noise term v ( k + 1 ) changes to v α ( k + 1 ) ; this is caused by the new modeling errors introduced at the two “≈” signs.
(4) In Equation (69), the parameter λ α ( k ) is the fading factor of Equation (17) determined by Equation (19) in Section 2.
At this point, the linearization process for solving the estimated value of the hidden variable and the estimation error covariance matrix with the strong tracking filter (STF) has been completed; the corresponding STF is designed below.
(3) Filter design for real-time estimation of hidden variables:
α ^ ( k + 1 k + 1 ) = α ^ ( k + 1 k ) + K α ( k + 1 ) [ y ( k + 1 ) y ^ α ( k + 1 k ) ]
Calculate the estimated error of the hidden variable as follows:
α ˜ ( k + 1 k + 1 ) = α ( k + 1 ) − α ^ ( k + 1 k + 1 ) = α ( k + 1 ) − α ^ ( k + 1 k ) − K α ( k + 1 ) [ y ( k + 1 ) − y ^ α ( k + 1 k ) ] = α ˜ ( k + 1 k ) − K α ( k + 1 ) [ y ( k + 1 ) − y ^ α ( k + 1 k ) ] = α ˜ ( k + 1 k ) − K α ( k + 1 ) [ H ¯ α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) , α ^ ( k + 1 k ) )                                              × α ˜ ( k + 1 k ) + v α ( k + 1 ) ]
A direct-sum decomposition of the observation vector is as follows:
z ( k + 1 ) = z ^ α ( k + 1 k ) + z ˜ α ( k + 1 k )                = z ^ α ( k + 1 k ) +   H ¯ α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) , α ^ ( k + 1 k ) )                        × α ˜ ( k + 1 k ) + v α ( k + 1 )
Utilizing the orthogonality principle
E { α ˜ ( k + 1 k + 1 ) z T ( k + 1 ) } = 0
we then solve for the gain matrix K α ( k + 1 ) in Equation (78):
K α ( k + 1 ) = P α ( k + 1 k ) ( H ¯ α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) , α ^ ( k + 1 k ) ) ) T       × [ ( H ¯ α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) , α ^ ( k + 1 k ) ) ) P α ( k + 1 k )        × ( H ¯ α ( h ( 1 ) ( x ^ ( k + 1 k ) ) , h ( 2 ) ( θ ^ ( k + 1 k ) ) , α ^ ( k + 1 k ) ) ) T + R α ( k + 1 ) ] − 1
(4) Calculate the real-time estimation error covariance matrix of the hidden variables:
$$P_\alpha(k+1|k+1) = E\{[\alpha(k+1)-\hat{\alpha}(k+1|k+1)][\alpha(k+1)-\hat{\alpha}(k+1|k+1)]^{T}\} = \big[I - K_\alpha(k+1)\,\bar{H}_\alpha\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, \hat{\alpha}(k+1|k)\big)\big]\, P_\alpha(k+1|k)$$
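Numerically, the update above is the standard linearized (EKF-style) measurement update: innovation covariance, gain, estimate correction, covariance correction. A minimal Python sketch of one such step follows; the names are illustrative, and the linearized measurement matrix `H_bar` is assumed to have been evaluated at the current estimates as in the text:

```python
import numpy as np

def linearized_update(a_pred, P_pred, z, z_pred, H_bar, R):
    """One linearized Kalman measurement update:
    innovation covariance, gain, estimate and covariance correction."""
    S = H_bar @ P_pred @ H_bar.T + R           # innovation covariance
    K = P_pred @ H_bar.T @ np.linalg.inv(S)    # gain matrix
    a_upd = a_pred + K @ (z - z_pred)          # corrected estimate
    P_upd = (np.eye(len(a_pred)) - K @ H_bar) @ P_pred
    return a_upd, P_upd
```

The same function shape serves all three stages of the bank, differing only in which Jacobian and which noise covariance are passed in.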

4.2. The Design of Linear Kalman Filter for Solving Parameters θ ( k + 1 )

The pseudo-linearized system composed of Equations (47) and (39) is rewritten as follows:
$\theta(k+1) = S_\theta(k)\,x(k) + G_\theta(k)\,\alpha(k) + L_\theta(k)\,\theta(k) + w_\theta(k)$
$z(k+1) = H_\theta\big(h^{(1)}(x(k+1)),\, h^{(3)}(\alpha(k+1))\big)\, h^{(2)}(\theta(k+1)) + v(k+1)$
and suppose that the initial value of the parameter θ ( 0 ) has the following statistical characteristics:
$E\{\theta(0)\} = \hat{\theta}_0;\quad E\{[\theta(0)-\hat{\theta}_0][\theta(0)-\hat{\theta}_0]^{T}\} = P_{\theta 0}$
The noises $v_\theta(k+1)$ and $w_\theta(k)$ and the initial parameter $\theta(0)$ have the following characteristics:
$E\{w_\theta(k)\}=0;\;\; E\{v_\theta(k+1)\}=0;\;\; E\{w_\theta(k)\,v_\theta^{T}(k+1)\}=0;\;\; E\{w_\theta(k)\,\theta^{T}(0)\}=0;\;\; E\{v_\theta(k+1)\,\theta^{T}(0)\}=0$
The goal of this section is to achieve the following:
$\hat{x}(k|k), P_x(k|k);\;\; \hat{\theta}(k|k), P_\theta(k|k);\;\; \hat{\alpha}(k+1|k+1), P_\alpha(k+1|k+1) \;\xrightarrow{\;\text{formula (4.2.2)},\;\, y(k+1)\;}\; \hat{\theta}(k+1|k+1), P_\theta(k+1|k+1)$
Under the conditions of known $\hat{x}(k|k)$, $\hat{\theta}(k|k)$, $\hat{\alpha}(k+1|k+1)$ and the online observation $y(k+1)$, we use the linearized description of the parameter variables $\theta(k+1)$ in Equation (85) to construct the Kalman filter that yields the following:
$\hat{\theta}(k+1|k+1),\;\; P_\theta(k+1|k+1)$

The Design of Linear Filter for the Estimated Value of Parameter Variable θ ( k + 1 )

A linearized filter is designed to estimate the parameter variables $\theta(k+1)$:
$$\begin{aligned}
\hat{\theta}(k+1|k+1) &= E\{\theta(k+1)\mid \hat{x}(k|k),\, \hat{\alpha}(k|k),\, \hat{\theta}(k|k),\, y(k+1)\}\\
&= \hat{\theta}(k+1|k) + K_\theta(k+1)\,[\,y(k+1) - \hat{y}_\theta(k+1|k)\,]
\end{aligned}$$
$$P_\theta(k+1|k+1) = E\{[\theta(k+1)-\hat{\theta}(k+1|k+1)][\theta(k+1)-\hat{\theta}(k+1|k+1)]^{T}\}$$
where Equations (91)–(107) appear in sequence.
(1) The predicted value of the parameter variable θ ( k + 1 ) and the predicted estimation error covariance matrix:
Based on Equation (84), we can get the predicted estimated value of the parameter variable θ ( k + 1 ) as follows:
$\hat{\theta}(k+1|k) = S_\theta(k)\,\hat{x}(k|k) + G_\theta(k)\,\hat{\alpha}(k|k) + L_\theta(k)\,\hat{\theta}(k|k)$
and forecast estimated error:
$$\begin{aligned}
\tilde{\theta}(k+1|k) &= \theta(k+1) - \hat{\theta}(k+1|k)\\
&= S_\theta(k)\,x(k) + G_\theta(k)\,\alpha(k) + L_\theta(k)\,\theta(k) + w_\theta(k) - S_\theta(k)\,\hat{x}(k|k) - G_\theta(k)\,\hat{\alpha}(k|k) - L_\theta(k)\,\hat{\theta}(k|k)\\
&= S_\theta(k)\,\tilde{x}(k|k) + G_\theta(k)\,\tilde{\alpha}(k|k) + L_\theta(k)\,\tilde{\theta}(k|k) + w_\theta(k)
\end{aligned}$$
and predictive estimate error covariance matrix:
$$P_\theta(k+1|k) = E\{\tilde{\theta}(k+1|k)\,\tilde{\theta}^{T}(k+1|k)\} = \lambda_\theta(k)\,\big[S_\theta(k)\,P_x(k|k)\,S_\theta^{T}(k) + G_\theta(k)\,P_\alpha(k|k)\,G_\theta^{T}(k) + L_\theta(k)\,P_\theta(k|k)\,L_\theta^{T}(k)\big] + Q_\theta(k)$$
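The fading factor $\lambda_\theta(k)$ enters the prediction covariance multiplicatively, inflating the propagated uncertainty so the filter remains responsive to abrupt parameter changes. A minimal sketch of this computation (names are illustrative; all matrices are assumed given):

```python
import numpy as np

def predict_theta_cov(lam, S_t, P_x, G_t, P_a, L_t, P_t, Q_t):
    """Predicted parameter covariance with fading factor lam:
    lam * (S P_x S' + G P_a G' + L P_t L') + Q."""
    prop = S_t @ P_x @ S_t.T + G_t @ P_a @ G_t.T + L_t @ P_t @ L_t.T
    return lam * prop + Q_t
```

With `lam = 1` this reduces to the ordinary Kalman prediction; `lam > 1` discounts old information.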
(2) Prediction estimates and prediction estimation errors of measured values of parameter variables θ ( k + 1 ) :
Based on Equation (85), for the measured value of the parameter variable, we have the predicted estimated value:
$\hat{z}_\theta(k+1|k) = H_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, h^{(2)}(\hat{\theta}(k+1|k))$
and forecast estimation error vector:
$$\begin{aligned}
\tilde{z}_\theta(k+1|k) &= z(k+1) - \hat{z}_\theta(k+1|k)\\
&= H_\theta\big(h^{(1)}(x(k+1)),\, h^{(3)}(\alpha(k+1))\big)\, h^{(2)}(\theta(k+1)) + v(k+1) - H_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, h^{(2)}(\hat{\theta}(k+1|k))\\
&\approx H_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\,\big[h^{(2)}(\theta(k+1)) - h^{(2)}(\hat{\theta}(k+1|k))\big] + v(k+1)\\
&\approx H_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, H_\theta^{(2)}(\hat{\theta}(k+1|k))\,\big[\theta(k+1) - \hat{\theta}(k+1|k)\big] + v(k+1)\\
&= H_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, H_\theta^{(2)}(\hat{\theta}(k+1|k))\,\tilde{\theta}(k+1|k) + v_\theta(k+1)\\
&= \bar{H}_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big)\,\tilde{\theta}(k+1|k) + v_\theta(k+1)
\end{aligned}$$
and forecast estimation error covariance matrix:
$$P_{\theta,zz}(k+1|k) = E\{\tilde{z}_\theta(k+1|k)\,\tilde{z}_\theta^{T}(k+1|k)\} = \bar{H}_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big)\, P_\theta(k+1|k)\, \bar{H}_\theta^{T}\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big) + R_\theta(k+1)$$
where
$$\bar{H}_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big) = H_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, H_\theta^{(2)}(\hat{\theta}(k+1|k))$$
Remark 6.
 In the derivation process of (94), it is explained as follows:
(1) The first "≈" follows from the approximation
$H_\theta\big(h^{(1)}(x(k+1)),\, h^{(3)}(\alpha(k+1))\big) \approx H_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)$
(2) The second "≈" follows from the first-order Taylor expansion
$h^{(2)}(\theta(k+1)) - h^{(2)}(\hat{\theta}(k+1|k)) \approx H_\theta^{(2)}(\hat{\theta}(k+1|k))\,\tilde{\theta}(k+1|k)$
where
$H_\theta^{(2)}(\hat{\theta}(k+1|k)) = \left.\dfrac{\partial h^{(2)}(\theta(k+1))}{\partial \theta(k+1)}\right|_{\theta(k+1)=\hat{\theta}(k+1|k)}$
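When an analytic Jacobian such as $H_\theta^{(2)}$ is inconvenient to derive by hand, it can be approximated numerically. A hedged sketch using forward differences (the function `h` stands in for any of the basis functions $h^{(1)}, h^{(2)}, h^{(3)}$; this is a generic numerical aid, not part of the authors' derivation):

```python
import numpy as np

def numerical_jacobian(h, theta, eps=1e-6):
    """Forward-difference approximation of dh/dtheta evaluated at theta."""
    theta = np.asarray(theta, dtype=float)
    h0 = np.atleast_1d(np.asarray(h(theta), dtype=float))
    J = np.zeros((h0.size, theta.size))
    for j in range(theta.size):
        t = theta.copy()
        t[j] += eps                      # perturb one coordinate
        J[:, j] = (np.atleast_1d(np.asarray(h(t), dtype=float)) - h0) / eps
    return J
```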
(3) In the derivation of (94), the noise term changes from $v(k+1)$ to $v_\theta(k+1)$; this change absorbs the new modeling errors introduced at the two "≈" steps.
(4) The parameter λ θ ( k ) in Equation (93) is the fading factor of Equation (17) determined by Equation (19) in Section 2.
So far, we have completed the linearization needed for the strong tracking filter (STF) to obtain the estimate and the estimation error covariance matrix of the parameter $\theta(k+1)$; the corresponding STF is designed below.
(3) The filter design for real-time estimation of parameter variables:
$\hat{\theta}(k+1|k+1) = \hat{\theta}(k+1|k) + K_\theta(k+1)\,[\,y(k+1) - \hat{y}_\theta(k+1|k)\,]$
Calculate the real-time estimation error of the parameter variables:
$$\begin{aligned}
\tilde{\theta}(k+1|k+1) &= \theta(k+1) - \hat{\theta}(k+1|k+1)\\
&= \theta(k+1) - \hat{\theta}(k+1|k) - K_\theta(k+1)\,[\,y(k+1) - \hat{y}_\theta(k+1|k)\,]\\
&= \tilde{\theta}(k+1|k) - K_\theta(k+1)\,[\,y(k+1) - \hat{y}_\theta(k+1|k)\,]\\
&= \tilde{\theta}(k+1|k) - K_\theta(k+1)\,\big[\bar{H}_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big)\,\tilde{\theta}(k+1|k) + v_\theta(k+1)\big]
\end{aligned}$$
a direct sum decomposition of observations:
$$z(k+1) = \hat{z}_\theta(k+1|k) + \tilde{z}_\theta(k+1|k) = \hat{z}_\theta(k+1|k) + \bar{H}_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big)\,\tilde{\theta}(k+1|k) + v_\theta(k+1)$$
utilize the Orthogonal Principle:
$E\{\tilde{\theta}(k+1|k+1)\, z^{T}(k+1)\} = 0$
solve the gain matrix, K θ ( k + 1 ) , in Equation (102):
$$K_\theta(k+1) = P_\theta(k+1|k)\,\bar{H}_\theta^{T}\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big)\,\big[\bar{H}_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big)\, P_\theta(k+1|k)\,\bar{H}_\theta^{T}\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big) + R_\theta(k+1)\big]^{-1}$$
(4) Calculate the real-time estimated error covariance matrix of parameter variables.
According to the definition (90), we have the following:
$$P_\theta(k+1|k+1) = \big[I - K_\theta(k+1)\,\bar{H}_\theta\big(h^{(1)}(\hat{x}(k+1|k)),\, h^{(3)}(\hat{\alpha}(k+1|k+1)),\, \hat{\theta}(k+1|k)\big)\big]\, P_\theta(k+1|k)$$

4.3. The Linear Kalman Filter Design for Solving State Variables x ( k + 1 )

The pseudo-linearized system composed of Equations (42) and (43) is rewritten as follows:
$x(k+1) = A_x(\theta(k),\, \alpha(k))\, x(k) + w(k)$
$z(k+1) = H_x\big(h^{(2)}(\theta(k+1)),\, h^{(3)}(\alpha(k+1))\big)\, h^{(1)}(x(k+1)) + v(k+1)$
and suppose that the initial value of the state x ( 0 ) has the following statistical characteristics:
$E\{x(0)\} = \hat{x}_0;\quad E\{[x(0)-\hat{x}_0][x(0)-\hat{x}_0]^{T}\} = P_{x0}$
The noises $v_x(k+1)$ and $w_x(k)$ and the initial state $x(0)$ have the following characteristics:
$E\{w_x(k)\}=0;\;\; E\{v_x(k+1)\}=0;\;\; E\{w_x(k)\,v_x^{T}(k+1)\}=0;\;\; E\{w_x(k)\,x^{T}(0)\}=0;\;\; E\{v_x(k+1)\,x^{T}(0)\}=0$
The goal of this section is to achieve the following:
$\hat{x}(k|k), P_x(k|k);\;\; \hat{\theta}(k+1|k+1), P_\theta(k+1|k+1);\;\; \hat{\alpha}(k+1|k+1), P_\alpha(k+1|k+1) \;\xrightarrow{\;\text{formula (4.3.2)},\;\, y(k+1)\;}\; \hat{x}(k+1|k+1), P_x(k+1|k+1)$
Under the condition of known $\hat{x}(k|k)$, $\hat{\theta}(k+1|k+1)$, $\hat{\alpha}(k+1|k+1)$ and the online observation $y(k+1)$, we use the linearized description of the state variables in Equation (109) to construct the Kalman filter that yields $\hat{x}(k+1|k+1)$ and $P_x(k+1|k+1)$.

The Linear Filter Design of State Variable Estimates x ( k + 1 )

$$\begin{aligned}
\hat{x}(k+1|k+1) &= E\{x(k+1)\mid \hat{x}(k|k),\, y(k+1),\, \hat{\alpha}(k+1|k+1),\, \hat{\theta}(k+1|k+1)\}\\
&= \hat{x}(k+1|k) + K_x(k+1)\,[\,y(k+1) - \hat{y}_x(k+1|k)\,]
\end{aligned}$$
$$P_x(k+1|k+1) = E\{[x(k+1)-\hat{x}(k+1|k+1)][x(k+1)-\hat{x}(k+1|k+1)]^{T}\}$$
where Equations (117)–(133) appear in sequence.
(1) The predictive estimate of the state variable x ( k + 1 ) and the predictive estimate error covariance matrix
Based on Equation (108), we get the predicted estimated value of the state variable x ( k + 1 )
$\hat{x}(k+1|k) = A_x(\hat{\theta}(k|k),\, \hat{\alpha}(k|k))\,\hat{x}(k|k)$
and forecast estimation error:
$$\begin{aligned}
\tilde{x}(k+1|k) &= x(k+1) - \hat{x}(k+1|k)\\
&= A_x(\theta(k),\, \alpha(k))\, x(k) + w(k) - A_x(\hat{\theta}(k|k),\, \hat{\alpha}(k|k))\,\hat{x}(k|k)\\
&\approx A_x(\hat{\theta}(k|k),\, \hat{\alpha}(k|k))\,[x(k) - \hat{x}(k|k)] + w(k)\\
&= A_x(\hat{\theta}(k|k),\, \hat{\alpha}(k|k))\,\tilde{x}(k|k) + w_x(k)
\end{aligned}$$
and forecast estimation error covariance matrix:
$$P_x(k+1|k) = E\{\tilde{x}(k+1|k)\,\tilde{x}^{T}(k+1|k)\} = A_x(\hat{\theta}(k|k),\, \hat{\alpha}(k|k))\, P_x(k|k)\, A_x^{T}(\hat{\theta}(k|k),\, \hat{\alpha}(k|k)) + Q_x(k)$$
Remark 7.
 In the derivation of the prediction error above, note the following:
(1) The "≈" follows from the approximation
$A_x(\theta(k),\, \alpha(k)) \approx A_x(\hat{\theta}(k|k),\, \hat{\alpha}(k|k))$
(2) The noise term changes from $w(k)$ to $w_x(k)$; this change absorbs the new modeling error introduced at the "≈" step.
(2) Prediction estimates and prediction estimation errors of measured values of state variables:
Based on Equation (109), for the measured variable x ( k + 1 ) , we have a predictive estimate:
$\hat{z}_x(k+1|k) = H_x\big(h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, h^{(1)}(\hat{x}(k+1|k))$
and forecast estimation error vector:
$$\begin{aligned}
\tilde{z}_x(k+1|k) &= z(k+1) - \hat{z}_x(k+1|k)\\
&= H_x\big(h^{(2)}(\theta(k+1)),\, h^{(3)}(\alpha(k+1))\big)\, h^{(1)}(x(k+1)) + v(k+1) - H_x\big(h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, h^{(1)}(\hat{x}(k+1|k))\\
&\approx H_x\big(h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\,\big[h^{(1)}(x(k+1)) - h^{(1)}(\hat{x}(k+1|k))\big] + v(k+1)\\
&\approx H_x\big(h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, H_x^{(1)}(\hat{x}(k+1|k))\,\tilde{x}(k+1|k) + v(k+1)\\
&= \bar{H}_x\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\,\tilde{x}(k+1|k) + v_x(k+1)
\end{aligned}$$
and forecast estimation error covariance matrix:
$$P_{x,zz}(k+1|k) = E\{\tilde{z}_x(k+1|k)\,\tilde{z}_x^{T}(k+1|k)\} = \bar{H}_x\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, P_x(k+1|k)\, \bar{H}_x^{T}\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big) + R_x(k+1)$$
where
$$\bar{H}_x\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big) := H_x\big(h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, H_x^{(1)}(\hat{x}(k+1|k))$$
Remark 8.
 In the derivation process of (123), it is explained as follows:
(1) The first "≈" follows from the approximation
$H_x\big(h^{(2)}(\theta(k+1)),\, h^{(3)}(\alpha(k+1))\big) \approx H_x\big(h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)$
(2) The noise term changes from $v(k+1)$ to $v_x(k+1)$; this change absorbs the new modeling errors introduced at the two "≈" steps.
(3) The filter design for real-time estimation of state variables x ( k + 1 ) :
$\hat{x}(k+1|k+1) = \hat{x}(k+1|k) + K_x(k+1)\,[\,y(k+1) - \hat{y}_x(k+1|k)\,]$
calculate the estimated error of the state variable x ( k + 1 ) :
$$\begin{aligned}
\tilde{x}(k+1|k+1) &= x(k+1) - \hat{x}(k+1|k+1)\\
&= x(k+1) - \hat{x}(k+1|k) - K_x(k+1)\,[\,y(k+1) - \hat{y}_x(k+1|k)\,]\\
&= \tilde{x}(k+1|k) - K_x(k+1)\,[\,y(k+1) - \hat{y}_x(k+1|k)\,]\\
&= \tilde{x}(k+1|k) - K_x(k+1)\,\big[\bar{H}_x\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\,\tilde{x}(k+1|k) + v_x(k+1)\big]
\end{aligned}$$
a direct sum decomposition of observations:
$$z(k+1) = \hat{z}_x(k+1|k) + \tilde{z}_x(k+1|k) = \hat{z}_x(k+1|k) + \bar{H}_x\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\,\tilde{x}(k+1|k) + v_x(k+1)$$
utilize the Orthogonal Principle as follows:
$E\{\tilde{x}(k+1|k+1)\, z^{T}(k+1)\} = 0$
solve the gain matrix, K x ( k + 1 ) , in Equation (128):
$$K_x(k+1) = P_x(k+1|k)\,\bar{H}_x^{T}\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\,\big[\bar{H}_x\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\, P_x(k+1|k)\,\bar{H}_x^{T}\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big) + R_x(k+1)\big]^{-1}$$
(4) Calculate the real-time estimation error covariance matrix of the state variables $x(k+1)$:
$$P_x(k+1|k+1) = E\{[x(k+1)-\hat{x}(k+1|k+1)][x(k+1)-\hat{x}(k+1|k+1)]^{T}\} = \big[I - K_x(k+1)\,\bar{H}_x\big(\hat{x}(k+1|k),\, h^{(2)}(\hat{\theta}(k+1|k+1)),\, h^{(3)}(\hat{\alpha}(k+1|k+1))\big)\big]\, P_x(k+1|k)$$
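Structurally, the filter bank of Sections 4.1–4.3 runs its three sub-filters in sequence at every time step, each stage consuming the freshest estimates from the stage before it. A skeletal sketch of this sequencing follows; the three update callables are placeholders for the filters derived above, not the authors' code:

```python
def three_stage_step(est, y, update_alpha, update_theta, update_x):
    """One step of the three-stage bank: hidden variables first,
    then parameters, then the state, each stage reusing the newest
    estimates from the stage before it."""
    alpha_new = update_alpha(est['x'], est['theta'], est['alpha'], y)
    theta_new = update_theta(est['x'], alpha_new, est['theta'], y)  # fresh alpha
    x_new = update_x(est['x'], theta_new, alpha_new, y)             # fresh theta, alpha
    return {'x': x_new, 'theta': theta_new, 'alpha': alpha_new}
```

Passing the sub-filters in as arguments keeps the sequencing logic separate from the individual linearizations, which mirrors the gradual-linearization idea of the paper.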

5. Simulation

Simulation 1.
Consider the following one-dimensional nonlinear system:
$$x(k+1) = x(k)\,\alpha(k)\,\sin(x(k)) + w(k)$$
$$z(k+1) = x(k+1)\,\alpha^{3}(k+1) + v(k+1)$$
where $Q = [0.001]$, $R_1 = [0.000001]$ and $R_2 = [0.0002]$; the variation rule of the time-varying parameter is unknown, but its nominal value $\alpha_0 = 0.58$ is known. The data and curves below are the statistical results of 100 Monte Carlo simulations, with the following initial values:
$x(0) = 0,\;\; \hat{x}(0|0) = 0,\;\; \hat{\alpha}(0|0) = 0.58,\;\; P(0|0) = \mathrm{diag}[0.3,\; 0.3]$
The parameter $\alpha(k)$ evolves with random noise $w_2(k) \sim N[0,\, 0.0001]$ as follows:
$$\alpha(k+1) = \begin{cases}
\alpha(k) + w_2(k), & 0 \le k \le 28\\
\alpha(k) + \alpha_0/10 + w_2(k), & k = 29\\
\alpha(k) + w_2(k), & 30 \le k \le 58\\
\alpha(k) - \alpha_0/500 + w_2(k), & 59 \le k \le 89\\
\alpha(k) + w_2(k), & k = 90\\
\alpha(k) + \alpha_0/5 + w_2(k), & 91 \le k \le 100
\end{cases}$$
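For reference, the data-generating process of Simulation 1 can be reproduced in a few lines. The sketch below uses the stated noise variances but, for brevity, drives $\alpha(k)$ by the pure random-walk rule only (the jump and drift segments are omitted); all names are illustrative:

```python
import numpy as np

def simulate_sim1(n=100, alpha0=0.58, q=0.001, r=0.0002, q2=0.0001, seed=0):
    """Trajectories of the 1-D system of Simulation 1:
    x(k+1) = x(k) * alpha(k) * sin(x(k)) + w(k),
    z(k+1) = x(k+1) * alpha(k+1)**3 + v(k+1),
    with alpha following a pure random walk here."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n + 1)
    a = np.zeros(n + 1)
    z = np.zeros(n + 1)
    a[0] = alpha0
    for k in range(n):
        x[k + 1] = x[k] * a[k] * np.sin(x[k]) + rng.normal(0.0, np.sqrt(q))
        a[k + 1] = a[k] + rng.normal(0.0, np.sqrt(q2))
        z[k + 1] = x[k + 1] * a[k + 1] ** 3 + rng.normal(0.0, np.sqrt(r))
    return x, a, z
```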
For a one-dimensional nonlinear system containing time-varying parameters, Figure 1 and Table 2 show the filtering effects of several filtering methods. Combining the data in Table 2, it can be seen that the estimation accuracy of the proposed algorithm is significantly improved for both state and parameter estimation.
Simulation 2.
Consider the following two-dimensional nonlinear system model:
$$\begin{aligned}
x_1(k+1) &= 0.8\,\alpha_1(k)\,x_1(k)\,\sin(\beta_1 x_2(k)) + w_1(k)\\
x_2(k+1) &= 0.8\,\alpha_2(k)\,x_2(k)\,\cos(\beta_2 x_1(k)) + w_2(k)\\
z_1(k+1) &= \gamma\, x_1^{2}(k+1) + x_2(k+1) + v_1(k+1)\\
z_2(k+1) &= x_1(k+1) + v_2(k+1)
\end{aligned}$$
where $k$ is the discrete time index. The parameter $\beta$ indicates the degree of nonlinearity, and the parameter $\gamma$ indicates the proportion of nonlinearity. The model of the parameter $\alpha$ is as follows:
$$\alpha_1(k+1) = \alpha_1(k)\,\sin(x_1(k)),\qquad \alpha_2(k+1) = \alpha_2(k)\,\cos(x_2(k))$$
Set the parameters $\beta$ and $\gamma$ as follows when $40 \le k \le 60$:
$\beta_1 = \mathrm{rand}(1),\;\; \beta_2 = \mathrm{rand}(1),\;\; \gamma = \mathrm{rand}(1)$
otherwise $\beta_1 = \beta_2 = \gamma = 0.8$. The process and observation noises satisfy $w(k) \sim N[0,\, Q(k)]$ and $v(k+1) \sim N[0,\, R(k+1)]$. The data and curves in Figures 2 and 3 and Tables 3 and 4 are the statistical results of 100 Monte Carlo simulations, with $x(0) = [0.8,\; 0.6]^{T}$, $\hat{x}(0|0) = [0.8,\; 0.6]^{T}$ and $P(0|0) = \mathrm{diag}[1,\; 1]$.
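The performance figures in Tables 2–4 are mean-square errors averaged over 100 Monte Carlo runs, with the "VS" columns giving the relative improvement over each baseline. The statistics themselves are straightforward; for instance, the sketch below recovers the 97.4528% "VS EKF" entry of Table 2 from the tabulated MSEs (function names are illustrative):

```python
import numpy as np

def mse(true_traj, est_traj):
    """Mean-square estimation error over a single run."""
    e = np.asarray(true_traj, dtype=float) - np.asarray(est_traj, dtype=float)
    return float(np.mean(e ** 2))

def improvement(mse_ref, mse_new):
    """Relative improvement of mse_new over mse_ref (the 'VS' columns)."""
    return (mse_ref - mse_new) / mse_ref
```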
The data in Table 3 show that the estimated mean-square errors of the proposed TSKF algorithm are 0.0010 and 8.8605 × 10−4, respectively, a significant improvement over the estimation accuracy of EKF, STF and UKF. The largest improvement is over EKF: 89.0110% for $x_1$ and 74.6843% for $x_2$.
In the two-dimensional system of Simulation 2, time-varying parameters with stronger nonlinearity are introduced to verify the effectiveness of the algorithm in a multidimensional system. Table 4 shows that the estimated mean-square errors of the proposed TSKF algorithm are 0.0011 and 0.0009, respectively, again a significant improvement over the estimation accuracy of EKF, STF and UKF. The largest improvement is over EKF: 65.6250% for $\alpha_1$ and 75.6757% for $\alpha_2$. The experimental data show that the algorithm achieves the expected effect.

6. Conclusions

Aiming at systems composed of linear terms and an accumulation of factorizable nonlinear terms, a nonlinear Kalman filtering method based on stepwise estimation of multiplicative hidden variables, parameters and states was established. Firstly, each decomposed nonlinear factor was defined as a hidden variable and treated as a new system variable, and a dynamic correlation model between the hidden variables and the original states was established, yielding a linear state model and a linear measurement model. Finally, the Three-Stage Kalman Filter (TSKF) was designed to estimate the hidden variables, parameters and states in turn. The Monte Carlo simulation experiments show that, compared with the typical nonlinear filters EKF, UKF and STF, this method achieves better estimation results and significantly improved filtering accuracy. Moreover, the parameter variables can be estimated very well, which provides a new solution to the problem of joint state and parameter estimation.
Although the method in this paper achieves good filtering results, there is still room for improvement: for example, how to introduce hidden variables to solve state and parameter estimation for nonlinear systems with complex parameters, and how to solve parameter identification in nonlinear input–output systems with time-varying or complex parameters. These are the key points and difficulties of future research.
In addition, in follow-up work we can cooperate with other members of our research team to combine this method with other fields and move it toward more practical applications; for example, it can be used to solve the parameters of neural networks [28,29] or be applied to fault diagnosis [30,31]. For parameter solving in neural networks and knowledge transfer, existing methods cannot realize adaptive updating of the parameters; the method in this paper can solve the parameters of more general models and therefore has greater advantages in such problems.

Author Contributions

Conceptualization, C.W. and Z.L.; methodology, C.W.; software, Z.L.; validation, C.W. and Z.L.; formal analysis, C.W.; writing—original draft preparation, C.W.; writing—review and editing, Z.L.; visualization, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China key project (No. 61751304), intelligent diagnosis, prediction and maintenance of abnormal conditions of large petrochemical plants.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, S.; Liu, W.; Cui, H. Multiple resolvable groups tracking using the GLMB filter. Acta Autom. Sin. 2017, 43, 2178–2189.
  2. Hu, Y.; Jin, Z.; Xue, X.; Sun, C. Fault diagnosis for networked systems by asynchronous IMM fusion filtering. Acta Autom. Sin. 2017, 43, 1329–1338.
  3. Thomas, C.; Thach, N.D.; Julien, M.; Wang, Z.; Tarek, R. Zonotopic Kalman Filter-Based Interval Estimation for Discrete-Time Linear Systems With Unknown Inputs. IEEE Control Syst. Lett. 2022, 6, 806–811.
  4. Shen, T.; Xue, A.; Zhou, Z. Multi-sensor Gaussian mixture PHD fusion for multi-target tracking. Acta Autom. Sin. 2017, 43, 1028–1037.
  5. Liu, J.; Sun, L.; Pu, J.; Hu, Z.; Wang, Y. Cooperative Localization in a Team of Two Mobile Robots Based on Rigid Constraints. Acta Electron. Sin. 2020, 48, 1777–1785.
  6. Alfakih, M.; Keche, M.; Benoudnine, H. A Kalman-filter-based fusion method for accurate urban localisation. IET Commun. 2021, 15, 653–663.
  7. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. Trans. ASME Ser. D J. Basic Eng. 1960, 82, 35–45.
  8. Qiao, S.; Han, N.; Zhu, X.; Shu, H. A Dynamic Trajectory Prediction Algorithm Based on Kalman Filter. Acta Electron. Sin. 2018, 46, 418–423.
  9. Wang, X. Power Systems Dynamic State Estimation With the Two-Step Fault Tolerant Extended Kalman Filtering. IEEE Access 2021, 9, 137211–137223.
  10. Li, H.; Zhao, S. The Optimal Distributed Kalman Filtering Fusion With Linear Equality Constraint. IEEE Access 2021, 9, 106283–106292.
  11. Wen, C.; Cheng, X.; Xu, D.; Wen, C. Filter design based on characteristic functions for one class of multi-dimensional nonlinear non-Gaussian systems. Automatica 2017, 82, 171–180.
  12. Wen, C.; Wang, Z.; Hu, J.; Liu, Q.; Fuad, E.A. Recursive filtering for state-saturated systems with randomly occurring nonlinearities and missing measurements. Int. J. Robust Nonlinear Control 2018, 28, 1715–1727.
  13. Kalman, R.E.; Bucy, R.S. New results in linear filtering and prediction theory. J. Basic Eng. 1961, 83, 95–108.
  14. Sunahara, Y. An approximate method of state estimation for nonlinear dynamical systems. Fluids Eng. 1970, 11, 957–972.
  15. Guo, H.; Chen, H.; Xu, F.; Wang, F.; Lu, G. Implementation of EKF for vehicle velocities estimation on FPGA. IEEE Trans. Ind. Electron. 2013, 60, 3823–3835.
  16. Ye, S.; Daniel, B.W. Scaling the Kalman filter for large-scale traffic estimation. IEEE Trans. Control Netw. Syst. 2018, 5, 968–980.
  17. Zhao, J.; Marcos, N.; Lamine, M. A robust iterated extended Kalman filter for power system dynamic state estimation. IEEE Trans. Power Syst. 2017, 32, 3205–3216.
  18. Julier, S.J.; Uhlmann, J.K. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422.
  19. Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269.
  20. Arasaratnam, I.; Haykin, S. Square-root quadrature Kalman filtering. IEEE Trans. Signal Process. 2008, 56, 2589–2593.
  21. Jiang, Q.; Zhang, J. Nonlinear Filtering Based Joint Estimation of Parameters in Polynomial Systems. In Proceedings of the 26th China Control and Decision Conference, Changsha, China, 31 July 2014.
  22. Wen, C.; Chen, Z.; Zhou, D. Joint State and Parameter Estimation for Multisensor Nonlinear Dynamic Systems on the Basis of Strong Tracking Filter. Acta Electron. Sin. 2002, 30, 1715–1717.
  23. Sun, X.; Wen, C.; Wen, T. Maximum Correntropy High-Order Extended Kalman Filter. Chin. J. Electron. 2022, 31, 190–198.
  24. Sun, X.; Wen, C.; Wen, T. A Novel Step-by-Step High-Order Extended Kalman Filter Design for a Class of Complex Systems with Multiple Basic Multipliers. Chin. J. Electron. 2021, 30, 313–321.
  25. Sun, X.; Wen, C.; Wen, T. High-Order Extended Kalman Filter Design for a Class of Complex Dynamic Systems with Polynomial Nonlinearities. Chin. J. Electron. 2021, 30, 508–515.
  26. Wang, Q.; Sun, X.; Wen, C. Design Method for a Higher Order Extended Kalman Filter Based on Maximum Correlation Entropy and a Taylor Network System. Sensors 2021, 21, 5864.
  27. Liu, X.; Wen, C.; Sun, X. Design Method of High-Order Kalman Filter for Strong Nonlinear System Based on Kronecker Product Transform. Sensors 2022, 22, 653.
  28. Wen, T.; Xie, G.; Cao, Y.; Cai, B. A DNN-Based Channel Model for Network Planning in Train Control Systems. IEEE Trans. Intell. Transp. Syst. 2021, 1–8.
  29. Kong, Y.; Ma, X.; Wen, C. A New Method of Deep Convolutional Neural Network Image Classification Based on Knowledge Transfer in Small Label Sample Environment. Sensors 2022, 22, 898.
  30. Ye, L.; Ma, X.; Wen, C. Rotating Machinery Fault Diagnosis Method by Combining Time-Frequency Domain Features and CNN Knowledge Transfer. Sensors 2021, 21, 8168.
  31. Ma, X.; Wen, C.; Wen, T. An Asynchronous and Real-time Update Paradigm of Federated Learning for Fault Diagnosis. IEEE Trans. Ind. Inform. 2021, 17, 8531–8540.
Figure 1. Filtering effect on parameter α and state x: (a) true value and estimated value of α , (b) true value and estimated value of x , (c) estimation error of α and (d) estimation error of x .
Figure 2. Filtering effect on parameter x : (a) true value and estimated value of x 1 , (b) true value and estimated value of x 2 , (c) estimation error of x 1 and (d) estimation error of x 2 .
Figure 3. Filtering effect on parameter α : (a) true value and estimated value of α 1 , (b) true value and estimated value of α 2 ; (c) estimation error of α 1 and (d) estimation error of α 2 .
Table 1. Abbreviation of Kalman filter methods.
Kalman Filter | Extended Kalman Filter | Strong Tracking Filter | Unscented Kalman Filter | Three-Stage Kalman Filter
Abbreviation | EKF | STF | UKF | TSKF
Table 2. Performance comparison of different algorithms.
MSE | EKF | STF | UKF | TSKF | VS EKF | VS STF | VS UKF
state x | 0.0065 | 0.0063 | 3.3783 × 10−4 | 1.6557 × 10−4 | 97.4528% | 97.3719% | 50.9901%
parameter α | 0.0225 | 1.4868 × 10−4 | 1.0504 × 10−4 | 8.7584 × 10−5 | 99.6107% | 41.0923% | 16.6184%
Table 3. Performance comparison of different algorithms.
MSE | EKF | STF | UKF | TSKF | VS EKF | VS STF | VS UKF
state x1 | 0.0091 | 0.0076 | 0.0024 | 0.0010 | 89.0110% | 86.8421% | 58.3333%
state x2 | 0.0035 | 0.0019 | 9.9575 × 10−4 | 8.8605 × 10−4 | 74.6843% | 53.3658% | 11.0168%
Table 4. Performance comparison of different algorithms.
MSE | EKF | STF | UKF | TSKF | VS EKF | VS STF | VS UKF
parameter α1 | 0.0032 | 0.0030 | 0.0019 | 0.0011 | 65.6250% | 63.3333% | 42.1053%
parameter α2 | 0.0037 | 0.0031 | 0.0015 | 0.0009 | 75.6757% | 70.9677% | 40.0000%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Wen, C.; Lin, Z. A Gradually Linearizing Kalman Filter Bank Designing for Product-Type Strong Nonlinear Systems. Electronics 2022, 11, 714. https://doi.org/10.3390/electronics11050714
