3.2. Limited Memory-Based Random-Weighted Estimations of System Noise Statistics
Consider the following dynamic discrete system:
$$\begin{cases} \mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k \\ \mathbf{x}_k = \mathbf{\Phi}_{k,k-1} \mathbf{x}_{k-1} + \mathbf{w}_{k-1} \end{cases} \quad (3)$$
where $\mathbf{x}_k$ is the n-dimensional system state vector at time $k$, $\mathbf{z}_k$ the m-dimensional measurement vector, $\mathbf{\Phi}_{k,k-1}$ the system state transition matrix, $\mathbf{H}_k$ the system measurement matrix, $\mathbf{w}_{k-1}$ the system process noise, and $\mathbf{v}_k$ the measurement noise.
Suppose the statistics of the system process noise $\mathbf{w}_k$ are unknown, i.e.,
$$E(\mathbf{w}_k) = \mathbf{q}_k, \qquad \mathrm{Cov}(\mathbf{w}_k, \mathbf{w}_j) = E\big[(\mathbf{w}_k - \mathbf{q}_k)(\mathbf{w}_j - \mathbf{q}_j)^{\mathrm{T}}\big] = \mathbf{Q}_k \delta_{kj} \quad (4)$$
where $\mathbf{q}_k$ and $\mathbf{Q}_k$ are the unknown mean and covariance of the process noise, and $\delta_{kj}$ is the Kronecker delta function.
Suppose the statistical properties of the measurement noise $\mathbf{v}_k$ are unknown, i.e.,
$$E(\mathbf{v}_k) = \mathbf{r}_k, \qquad \mathrm{Cov}(\mathbf{v}_k, \mathbf{v}_j) = E\big[(\mathbf{v}_k - \mathbf{r}_k)(\mathbf{v}_j - \mathbf{r}_j)^{\mathrm{T}}\big] = \mathbf{R}_k \delta_{kj} \quad (5)$$
where $\mathbf{r}_k$ and $\mathbf{R}_k$ are the unknown mean and covariance of the measurement noise.
The means of the process and measurement noises are called the first-order noise statistics, while their covariances are called the second-order noise statistics.
In a limited memory period of length $N$, the arithmetic mean estimation of the measurement noise mean can be expressed as follows:
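A sketch of the form this limited-memory arithmetic mean typically takes, under the notation assumed above (with $\hat{\mathbf{x}}_{i|i-1}$ denoting the one-step predicted state at epoch $i$), is the sliding-window average of the innovations over the last $N$ epochs:
$$\hat{\mathbf{r}}_k = \frac{1}{N}\sum_{i=k-N+1}^{k}\big(\mathbf{z}_i - \mathbf{H}_i\hat{\mathbf{x}}_{i|i-1}\big)$$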
Applying the random-weighted concept to Equation (6), the random-weighted estimation of the measurement noise mean is as follows:
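A sketch of the random-weighted counterpart, assuming nonnegative random weights $\eta_i$ (a symbol introduced here for illustration) that satisfy $\sum_{i=k-N+1}^{k}\eta_i = 1$ and $E(\eta_i) = 1/N$, as in the standard random weighting construction:
$$\hat{\mathbf{r}}_k = \sum_{i=k-N+1}^{k}\eta_i\big(\mathbf{z}_i - \mathbf{H}_i\hat{\mathbf{x}}_{i|i-1}\big)$$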
In the limited memory period, the arithmetic mean estimation of the measurement noise covariance can be expressed as follows:
Applying the random-weighted concept to Equation (8), the random-weighted estimation of the measurement noise covariance can be obtained as follows:
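Sketches of these two covariance estimators (the arithmetic mean and its random-weighted counterpart), assuming a Sage-Husa-type bias compensation with the one-step prediction error covariance $\mathbf{P}_{i|i-1}$ defined later in (19); the authors' exact correction terms may differ:
$$\hat{\mathbf{R}}_k = \frac{1}{N}\sum_{i=k-N+1}^{k}\Big[\big(\mathbf{z}_i - \mathbf{H}_i\hat{\mathbf{x}}_{i|i-1} - \hat{\mathbf{r}}_k\big)\big(\mathbf{z}_i - \mathbf{H}_i\hat{\mathbf{x}}_{i|i-1} - \hat{\mathbf{r}}_k\big)^{\mathrm{T}} - \mathbf{H}_i\mathbf{P}_{i|i-1}\mathbf{H}_i^{\mathrm{T}}\Big]$$
$$\hat{\mathbf{R}}_k = \sum_{i=k-N+1}^{k}\eta_i\Big[\big(\mathbf{z}_i - \mathbf{H}_i\hat{\mathbf{x}}_{i|i-1} - \hat{\mathbf{r}}_k\big)\big(\mathbf{z}_i - \mathbf{H}_i\hat{\mathbf{x}}_{i|i-1} - \hat{\mathbf{r}}_k\big)^{\mathrm{T}} - \mathbf{H}_i\mathbf{P}_{i|i-1}\mathbf{H}_i^{\mathrm{T}}\Big]$$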
Define the measurement residual as follows:
$$\boldsymbol{\varepsilon}_k = \mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_{k|k-1} \quad (10)$$
According to the first formula in Equation (3), the measurement residual $\boldsymbol{\varepsilon}_k$ can be written as follows:
$$\boldsymbol{\varepsilon}_k = \mathbf{H}_k \tilde{\mathbf{x}}_{k|k-1} + \mathbf{v}_k \quad (11)$$
where $\tilde{\mathbf{x}}_{k|k-1} = \mathbf{x}_k - \hat{\mathbf{x}}_{k|k-1}$ denotes the state estimation error.
Similarly, in the limited memory period of length $N$, the arithmetic mean estimation of the process noise mean can be expressed as follows:
The arithmetic mean estimation of the process noise covariance can be expressed as follows:
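Sketches of these two limited-memory estimators, under the assumed notation (with $\hat{\mathbf{x}}_i$ the filtered state estimate, and a Sage-Husa-type compensation using the error covariances $\mathbf{P}_i$ defined later in (20); the authors' exact correction terms may differ):
$$\hat{\mathbf{q}}_k = \frac{1}{N}\sum_{i=k-N+1}^{k}\big(\hat{\mathbf{x}}_i - \mathbf{\Phi}_{i,i-1}\hat{\mathbf{x}}_{i-1}\big)$$
$$\hat{\mathbf{Q}}_k = \frac{1}{N}\sum_{i=k-N+1}^{k}\Big[\big(\hat{\mathbf{x}}_i - \mathbf{\Phi}_{i,i-1}\hat{\mathbf{x}}_{i-1} - \hat{\mathbf{q}}_k\big)\big(\hat{\mathbf{x}}_i - \mathbf{\Phi}_{i,i-1}\hat{\mathbf{x}}_{i-1} - \hat{\mathbf{q}}_k\big)^{\mathrm{T}} + \mathbf{P}_i - \mathbf{\Phi}_{i,i-1}\mathbf{P}_{i-1}\mathbf{\Phi}_{i,i-1}^{\mathrm{T}}\Big]$$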
Applying the random-weighted concept to Equations (12) and (13), the random-weighted estimations of $\mathbf{q}_k$ and $\mathbf{Q}_k$ can be written as follows:
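A sketch of the random-weighted forms, obtained by replacing the equal weights $1/N$ in the arithmetic means above with the assumed random weights $\eta_i$:
$$\hat{\mathbf{q}}_k = \sum_{i=k-N+1}^{k}\eta_i\big(\hat{\mathbf{x}}_i - \mathbf{\Phi}_{i,i-1}\hat{\mathbf{x}}_{i-1}\big)$$
$$\hat{\mathbf{Q}}_k = \sum_{i=k-N+1}^{k}\eta_i\Big[\big(\hat{\mathbf{x}}_i - \mathbf{\Phi}_{i,i-1}\hat{\mathbf{x}}_{i-1} - \hat{\mathbf{q}}_k\big)\big(\hat{\mathbf{x}}_i - \mathbf{\Phi}_{i,i-1}\hat{\mathbf{x}}_{i-1} - \hat{\mathbf{q}}_k\big)^{\mathrm{T}} + \mathbf{P}_i - \mathbf{\Phi}_{i,i-1}\mathbf{P}_{i-1}\mathbf{\Phi}_{i,i-1}^{\mathrm{T}}\Big]$$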
Define the process residual as follows:
$$\hat{\mathbf{w}}_{k-1} = \hat{\mathbf{x}}_k - \mathbf{\Phi}_{k,k-1}\hat{\mathbf{x}}_{k-1} \quad (16)$$
According to the second formula in Equation (3), the process residual $\hat{\mathbf{w}}_{k-1}$ can be rewritten as follows:
$$\hat{\mathbf{w}}_{k-1} = \mathbf{w}_{k-1} - \tilde{\mathbf{x}}_k + \mathbf{\Phi}_{k,k-1}\tilde{\mathbf{x}}_{k-1} \quad (17)$$
where $\tilde{\mathbf{x}}_k = \mathbf{x}_k - \hat{\mathbf{x}}_k$ denotes the filtered state estimation error.
According to the KF principle, we have the following:
where $E\big[(\hat{\mathbf{w}}_{k-1} - \mathbf{q}_{k-1})(\hat{\mathbf{w}}_{k-1} - \mathbf{q}_{k-1})^{\mathrm{T}}\big]$ is the process residual covariance, and $\mathbf{P}_{k|k-1}$ is the one-step state estimation error variance, which is expressed as follows:
$$\mathbf{P}_{k|k-1} = \mathbf{\Phi}_{k,k-1}\mathbf{P}_{k-1}\mathbf{\Phi}_{k,k-1}^{\mathrm{T}} + \mathbf{Q}_{k-1} \quad (19)$$
$\mathbf{P}_k$ is the state error covariance, which is expressed as follows:
$$\mathbf{P}_k = (\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)\mathbf{P}_{k|k-1} \quad (20)$$
where $\mathbf{K}_k$ is the filter gain matrix, which is expressed as follows:
$$\mathbf{K}_k = \mathbf{P}_{k|k-1}\mathbf{H}_k^{\mathrm{T}}\big(\mathbf{H}_k\mathbf{P}_{k|k-1}\mathbf{H}_k^{\mathrm{T}} + \mathbf{R}_k\big)^{-1} \quad (21)$$
Equations (7), (9), (14), and (15) provide the random-weighted estimations of the process and measurement noise statistics, which allow the random weights to be adjusted adaptively so as to suppress the influence of the process and measurement noises on the state estimation and thereby improve the KF accuracy.
3.3. Unbiasedness of Random-Weighted Estimations of System Noise Statistics
Theorem 1. The random-weighted estimations $\hat{\mathbf{r}}_k$ and $\hat{\mathbf{q}}_k$ of $\mathbf{r}_k$ and $\mathbf{q}_k$, which are given by (7) and (14), are suboptimally unbiased.
Proof of Theorem 1. From Equation (7), we have the following:
where the property $E(\eta_i) = 1/N$ of the random weights is used in the last step of the derivation.
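A sketch of this expectation, using the random-weighted form assumed above, the unbiasedness of the state prediction, and the assumption that the random weights are generated independently of the measurements:
$$E(\hat{\mathbf{r}}_k) = E\Big[\sum_{i=k-N+1}^{k}\eta_i\big(\mathbf{z}_i - \mathbf{H}_i\hat{\mathbf{x}}_{i|i-1}\big)\Big] = \sum_{i=k-N+1}^{k}E(\eta_i)\,\mathbf{r}_i = \frac{1}{N}\sum_{i=k-N+1}^{k}\mathbf{r}_i$$
which reduces to $\mathbf{r}_k$ only when the measurement noise mean is constant over the window.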
It can be seen from Equation (22) that $\hat{\mathbf{r}}_k$ is not the optimal unbiased estimation of $\mathbf{r}_k$. If the measurement noise is constant or involves only small variations within the limited memory, i.e., $\mathbf{r}_{k-N+1} = \mathbf{r}_{k-N+2} = \cdots = \mathbf{r}_k$, Equation (22) can be further written as follows:
It is known from Equations (22) and (23) that the random-weighted estimation of $\mathbf{r}_k$ is suboptimally unbiased.
Similarly, from (14), we have the following:
It can be seen from Equation (24) that $\hat{\mathbf{q}}_k$ is not the optimal unbiased estimation of $\mathbf{q}_k$. If the process noise is constant or involves only small variations within the limited memory, i.e., $\mathbf{q}_{k-N+1} = \mathbf{q}_{k-N+2} = \cdots = \mathbf{q}_k$, Equation (24) can be further written as follows:
It is known from Equations (24) and (25) that the random-weighted estimation of $\mathbf{q}_k$ is suboptimally unbiased.
The proof of Theorem 1 is completed. □
Theorem 2. The random-weighted estimations $\hat{\mathbf{R}}_k$ and $\hat{\mathbf{Q}}_k$ of $\mathbf{R}_k$ and $\mathbf{Q}_k$, which are given by (9) and (15), are suboptimally unbiased.
Proof of Theorem 2. Since the state estimate from KF is unbiased, we have the following:
By Equation (11), we have the following:
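Sketches of the two statements just invoked, under the assumed notation (the unbiasedness of the KF state estimates and its consequence for the residual mean via Equation (11)):
$$E(\tilde{\mathbf{x}}_k) = \mathbf{0}, \qquad E(\tilde{\mathbf{x}}_{k|k-1}) = \mathbf{0}$$
$$E(\boldsymbol{\varepsilon}_k) = \mathbf{H}_k E(\tilde{\mathbf{x}}_{k|k-1}) + E(\mathbf{v}_k) = \mathbf{r}_k$$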
Calculate the measurement residual covariance:
Since $\tilde{\mathbf{x}}_{k|k-1}$ and $\mathbf{v}_k$ are independent of each other, by Equation (26), we have the following:
and the following:
Substituting Equations (29) and (30) into (28) yields the following:
where $\mathbf{P}_{k|k-1}$ is the state error covariance at time $k$.
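A sketch of the resulting identity, which is the standard innovation-covariance relation under the assumed notation:
$$E\big[(\boldsymbol{\varepsilon}_k - \mathbf{r}_k)(\boldsymbol{\varepsilon}_k - \mathbf{r}_k)^{\mathrm{T}}\big] = \mathbf{H}_k \mathbf{P}_{k|k-1} \mathbf{H}_k^{\mathrm{T}} + \mathbf{R}_k$$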
From Equation (31), in the limited memory, the arithmetic mean estimation of $\mathbf{R}_k$ can be calculated as follows:
The random-weighted estimation of $\mathbf{R}_k$ can be written as follows:
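These mirror the covariance sketches given after Equation (9), now written in terms of the residual $\boldsymbol{\varepsilon}_i$; the random-weighted form, for instance, would read:
$$\hat{\mathbf{R}}_k = \sum_{i=k-N+1}^{k}\eta_i\Big[\big(\boldsymbol{\varepsilon}_i - \hat{\mathbf{r}}_k\big)\big(\boldsymbol{\varepsilon}_i - \hat{\mathbf{r}}_k\big)^{\mathrm{T}} - \mathbf{H}_i\mathbf{P}_{i|i-1}\mathbf{H}_i^{\mathrm{T}}\Big]$$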
Taking the mathematical expectation on both sides of Equation (33) generates the following:
It can be seen from Equation (34) that $\hat{\mathbf{R}}_k$ is not the optimal unbiased estimation of $\mathbf{R}_k$. If the measurement noise is constant or involves only small variations within the limited memory period, i.e., $\mathbf{R}_{k-N+1} = \mathbf{R}_{k-N+2} = \cdots = \mathbf{R}_k$, Equation (34) can be further written as follows:
It is known from Equations (34) and (35) that the random-weighted estimation of $\mathbf{R}_k$ is suboptimally unbiased.
Now let us study the unbiasedness of the random-weighted estimation of $\mathbf{Q}_k$.
According to Equation (17), we have the following:
Calculate the process residual covariance:
Since $\mathbf{w}_{k-1}$ and $\tilde{\mathbf{x}}_{k-1}$ are independent of each other, according to Equation (26), we have the following:
and the following:
Substituting Equations (38) and (39) into Equation (37) yields the following:
where $\mathbf{P}_{k-1}$ is the state error covariance at time $k-1$.
The arithmetic mean estimation of $\mathbf{Q}_k$ can be calculated as follows:
By Equation (41), the random-weighted estimation of $\mathbf{Q}_k$ can be written as follows:
Taking the mathematical expectation on both sides of Equation (42) generates the following:
It can be seen from Equation (43) that $\hat{\mathbf{Q}}_k$ is not the optimal unbiased estimation of $\mathbf{Q}_k$. If the process noise is constant or involves only small variations within the limited memory, i.e., $\mathbf{Q}_{k-N+1} = \mathbf{Q}_{k-N+2} = \cdots = \mathbf{Q}_k$, Equation (43) can be further written as follows:
It is known from Equations (43) and (44) that the random-weighted estimation of $\mathbf{Q}_k$ is suboptimally unbiased.
The proof of Theorem 2 is completed. □
Based on the above, an overview diagram of the LM-RWKF is given in Figure 1, and the procedure of the proposed LM-RWKF is as follows (a brief code sketch of the loop is given after the list):
- (i) Initialize the estimated state and its associated error covariance.
- (ii) Calculate the predicted state vector.
- (iii) Calculate the one-step prediction covariance by (19).
- (iv) Estimate the means and covariances of the process and measurement noises by (7), (9), (14), and (15).
- (v) Feed the estimated process and measurement noise statistics back into (19)–(21) to obtain a new filter gain matrix.
- (vi) Calculate the new state estimation vector.
- (vii) Let k = k + 1 and return to step (ii) until all iterations are complete.
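As a concrete illustration of steps (i)-(vii), the following is a minimal NumPy sketch of one way such a loop could be implemented, not the authors' implementation. The random weights are drawn from a flat Dirichlet distribution (so they are nonnegative, sum to one, and have mean $1/N$), and, for brevity, only the measurement noise statistics are adapted here; the model matrices, the window length N, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constant-velocity model (assumed for this sketch, not taken from the paper)
dt = 1.0
Phi = np.array([[1.0, dt], [0.0, 1.0]])   # state transition matrix
H = np.array([[1.0, 0.0]])                # measurement matrix
Q = 0.01 * np.eye(2)                      # process noise covariance guess (not adapted in this sketch)
R = np.array([[1.0]])                     # initial guess of measurement noise covariance
r = np.zeros(1)                           # initial guess of measurement noise mean
N = 20                                    # limited memory length

# Synthetic measurements with a biased, inflated noise the filter does not know about
true_x = np.zeros(2)
measurements = []
for _ in range(200):
    true_x = Phi @ true_x + rng.multivariate_normal(np.zeros(2), 0.01 * np.eye(2))
    measurements.append(H @ true_x + rng.normal(loc=0.5, scale=2.0, size=1))

# (i) initialization
x = np.zeros(2)
P = np.eye(2)
window = []                               # (innovation, H P_pred H^T) pairs for the last N epochs

for z in measurements:
    # (ii)-(iii) state prediction and one-step prediction covariance
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q

    # keep a limited-memory window of innovations
    innov = z - H @ x_pred
    window.append((innov, H @ P_pred @ H.T))
    if len(window) > N:
        window.pop(0)

    # (iv) random-weighted estimates of the measurement noise mean and covariance;
    # flat Dirichlet weights are nonnegative, sum to 1, and have mean 1/n
    n = len(window)
    w = rng.dirichlet(np.ones(n))
    r = sum(wi * e for wi, (e, _) in zip(w, window))
    R_hat = sum(wi * (np.outer(e - r, e - r) - hph) for wi, (e, hph) in zip(w, window))
    R_hat = 0.5 * (R_hat + R_hat.T)
    eigval, eigvec = np.linalg.eigh(R_hat)
    R = eigvec @ np.diag(np.maximum(eigval, 1e-6)) @ eigvec.T   # keep R positive definite

    # (v)-(vi) feed the estimated statistics back into the gain and state update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (innov - r)
    P = (np.eye(2) - K @ H) @ P_pred
    # (vii) the loop itself advances k to k + 1
```

The eigenvalue floor on the estimated covariance is a practical safeguard: subtracting the predicted-innovation term can otherwise make the estimate indefinite in short windows.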