Abstract
This paper proposes novel unbiased minimum-variance receding-horizon fixed-lag (UMVRHF) smoothers in batch and recursive forms for linear discrete time-varying state-space models in order to improve the computational efficiency and estimation performance of receding-horizon fixed-lag (RHF) smoothers. First, a UMVRHF smoother in batch form is proposed by combining independent receding-horizon local estimators for two separated sub-horizons. The local estimates and their error covariance matrices are obtained from an optimal receding-horizon filter and smoother in the unbiased minimum-variance sense; they are then optimally combined using Millman’s theorem. Next, the recursive form of the proposed UMVRHF smoother is derived to improve its computational efficiency and extensibility. Additionally, we introduce a method for extending the proposed recursive smoothing algorithm to a posteriori state estimation and propose the Rauch–Tung–Striebel receding-horizon fixed-lag smoother in recursive form. Furthermore, a computational complexity reduction technique that periodically switches between the two proposed recursive smoothing algorithms is proposed. The performance and effectiveness of the proposed smoothers are demonstrated by comparing their estimation results with those of previous Kalman and receding-horizon fixed-lag smoothing algorithms via numerical experiments.
1. Introduction
Fixed-lag smoothers have been widely investigated because they provide optimal estimates, unlike filters and predictors [1,2,3,4]. In practice, smoothers are often used to improve the performance of filters, provided delays can be tolerated. In smoothing problems, the objective is to estimate past or delayed states using past measurements up to the current time. Three types of smoothers exist: fixed-point, fixed-lag, and fixed-interval smoothers. The fixed-lag smoother is the most useful type because it offers more accurate and general solutions than filters. Thus, smoothers have been considered a suitable choice in many applications where a short time delay is acceptable, such as denoising ECG signals [5], image processing [6], target tracking [7,8,9], system-on-chip testing [10], structural dynamics estimation [11,12], and vibration analysis [13].
The structures of fixed-lag smoothers can be categorized as infinite impulse response (IIR) and finite impulse response (FIR) structures, based on the duration of their impulse responses. The fixed-lag Kalman (FK) smoother, which has an IIR structure, has been widely used owing to its optimality [1,4,14,15]. However, certain potential problems arise from this IIR structure. Because the FK smoother is designed under the assumption of an accurate system model, optimality cannot be guaranteed when there is a model mismatch. Moreover, because it uses all the information from initiation to the current time, undesirable errors can accumulate in the state due to modeling and numerical errors, such that the estimates might oscillate or diverge. To overcome the shortcomings of the IIR structure, receding-horizon fixed-lag (RHF) smoothers have been proposed for linear systems [5,16,17,18,19,20,21] and non-linear systems [22] as alternatives to FK smoothers. Because RHF smoothers estimate the state using only the most recent finite measurements, they exhibit good properties such as bounded-input bounded-output (BIBO) stability, fast tracking ability, and robustness against modeling uncertainties and numerical errors.
However, previous RHF smoothers require vast amounts of computational power and memory, preventing efficient implementation in real-time applications. Moreover, their complicated batch-form derivation is difficult to understand and thus hinders further development. Because these disadvantages are mainly caused by the high-dimensional matrix multiplications involved in batch calculations, one possible approach to this problem is to adjust the horizon length of the RHF smoother. In [17], a guideline for choosing the appropriate horizon length of an RHF smoother is introduced for linear time-invariant systems; however, the proposed smoothing algorithm has a complicated batch form, making it difficult to apply to high-dimensional systems. In fact, adjusting the horizon length is not a fundamental solution to the problem of reducing the computational cost because the horizon length is an important design parameter of RHF smoothers that can significantly affect the estimation performance.
An alternative approach to reducing computational complexity and memory size involves separating the estimation horizon by changing the high-dimensional matrix multiplication into the sum of reduced-dimensional matrix multiplications. If the estimates are obtained from two separated and independent estimators, the computational cost can be reduced by parallel computing. Because RHF smoothers estimate the state at a fixed lag time, finite measurements on the overall horizon can be partitioned into two independent subsets of measurements based on the estimation time. For the separated two subsets of measurements, different receding horizon estimators can be implemented independently, and their estimates can be combined in an optimal manner using Millman’s theorem [1]. A recursive RHF smoothing algorithm can provide a method for overcoming the disadvantages of computational complexity and memory size.
To the best of the authors’ knowledge, there are few results on the recursive RHF smoothing problem. In [19], an unbiased recursive RHF smoother based on receding-horizon filtering was proposed for discrete time-invariant systems. Because this unbiased RHF smoother is designed to ignore the statistical information of the process and measurement noise, it affords a more robust estimate than the FK smoother when the horizon length is selected appropriately. However, it requires a heavy computational load to find the optimal horizon length, in addition to obtaining the initial conditions and smoothed estimates from the filtering results via batch calculations. Moreover, because the inverse of the state transition matrix is required to establish the estimator gain matrix, it cannot be applied to systems with a singular state transition matrix, which makes the estimation problem infeasible.
A different type of recursive RHF smoother, the finite-memory structure (FMS) smoother, was proposed in [21]. In its derivation, smoothed estimates are obtained from a one-step-ahead estimate, which affords fast estimation. Although this recursive estimation structure seems appropriate, the performance might be degraded by the system and measurement noises because the noise information is not considered in the derivation. Owing to its IIR iteration structure, the estimation stability cannot be guaranteed, and the estimation results may diverge due to accumulated estimation and numerical errors. Furthermore, a batch calculation is required during the estimate update, and the method also assumes an invertible state transition matrix. Additionally, although a recursive RHF smoothing algorithm would provide computational efficiency for time-varying systems, the previous recursive RHF smoothing algorithms were proposed solely for time-invariant systems.
Therefore, in this study, we propose a UMVRHF smoother in batch and recursive forms involving separated sub-horizons for time-varying systems. The proposed UMVRHF smoother is designed by combining two independent receding-horizon local estimators for separated sub-horizons. The local estimates are obtained based on optimal receding-horizon filtering and smoothing in the unbiased minimum-variance sense; they are then optimally combined using Millman’s theorem to obtain an overall unbiased optimal estimate. We also propose a recursive algorithm for the proposed UMVRHF smoother based on the one-step predicted finite horizon Kalman filter. The proposed UMVRHF smoothers provide unbiased optimal estimates of the state on the finite estimation horizon. In addition, the proposed smoothers do not require any a priori information about the horizon initial conditions or the inverse of the state transition matrix. Moreover, because the recursive form of the proposed UMVRHF smoother is derived using the Kalman filtering algorithm, it is easy to understand and extend. Thus, as extensions of the proposed recursive UMVRHF (RUMVRHF) smoother, we introduce a method for a posteriori state estimation. We also propose an additional recursive RHF smoother based on the Rauch–Tung–Striebel smoothing algorithm together with a computational complexity reduction strategy.
The main contributions of this study are as follows:
- Novel optimal unbiased RHF smoothers are proposed in batch and recursive forms for linear discrete time-varying state-space models.
- The proposed RHF smoothers are obtained by optimally combining two local optimal receding horizon filters for the separated sub-horizons.
- The Rauch–Tung–Striebel-type RHF smoother and reduced computational complexity RHF smoother are proposed to improve the computational efficiency.
- The proposed RHF smoothers exhibit good properties, such as computational efficiency, optimality, unbiasedness, robustness against temporary modeling uncertainty, bounded-input and bounded-output stability, and fast convergence speed.
- The proposed RHF smoothers do not require any a priori information of the initial state or any assumption of the invertible state transition matrix.
- Through numerical experiments, it is shown that the proposed RHF smoothers have advantages over the existing RHF smoothers and the fixed-lag Kalman smoother and are applicable to practical problems.
The remainder of this paper is organized as follows. In Section 2 and Section 3, UMVRHF smoothers in the batch and recursive forms are proposed, based on two independent optimal estimators for separated sub-horizons and Millman’s theorem. Thereafter, the extension method to estimate a posteriori states is introduced. The Rauch–Tung–Striebel RHF smoother and the computational complexity reduction method based on the smoothing algorithm are presented in Section 4. In Section 5, the performance and effectiveness of the proposed RHF smoothers are demonstrated via numerical experiments. Finally, the conclusions of this study are presented in Section 6.
2. Unbiased Minimum Variance Receding-Horizon Fixed-Lag Smoother in Batch Form
Consider the discrete time-varying state-space model as
where and are the state vector and output vector, respectively. The process noise vector and output noise vector are assumed to be zero-mean white Gaussian noises with covariance matrices denoted as and , respectively. It is assumed that the process and output noises are mutually uncorrelated and are also uncorrelated with the initial state . The pair is also assumed to be observable. For the system model (1) and (2), the conventional RHF smoother for estimating the h-lag state is expressed as a linear function of the finite measurements on the overall horizon as follows [20]:
where is the overall estimate, N is the receding-horizon length, and are the gain matrix of the RHF smoother and its i-th row vector, and is the finite number of measurements expressed as follows:
respectively.
To reduce the computational complexity and memory size, the proposed UMVRHF smoother in this study is designed to be a weighted linear combination of two optimal and independent local RH estimators as
where and are defined as the local unbiased minimum variance estimates on the sub-horizons and , and and are constant weighting matrices for each local estimate, respectively.
Because two local RH estimators are implemented independently for the two separated subsets of measurements, their estimates can be combined in an optimal manner using Millman’s theorem. By using Millman’s theorem, constant weighting matrices and can be determined from the following mean-squared criterion [23]:
with unbiased constraints
Then, the overall state error covariance matrix can be represented as
where and are local error covariance matrices of and , respectively.
By differentiating with respect to as
we obtain the solution of the optimal problem (6) as follows:
where the inverse of always exists, since and are positive definite.
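As a concrete numerical illustration of this fusion step, the following sketch implements Millman's theorem for two independent unbiased estimates (the function name and interface are our own, not from the paper): the weights are proportional to the inverse covariances, and the fused covariance satisfies P = (P1^{-1} + P2^{-1})^{-1}.

```python
import numpy as np

def millman_fuse(x1, P1, x2, P2):
    """Optimally combine two independent unbiased estimates (Millman's theorem).

    The implied weights sum to the identity, preserving unbiasedness, and the
    fused error covariance satisfies P^{-1} = P1^{-1} + P2^{-1}.
    """
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P = np.linalg.inv(P1_inv + P2_inv)       # fused error covariance
    x = P @ (P1_inv @ x1 + P2_inv @ x2)      # fused (weighted) estimate
    return x, P
```

With equal covariances, the fusion reduces to the average of the two estimates with half the covariance, as expected.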
If we assume that unbiased minimum variance estimators for each sub-horizon are expressed as linear functions of the finite measurements, that is
where and are the optimal gain matrices of each estimator, the UMVRHF smoother can be expressed in batch form, as follows:
Because local estimates and are unbiased, i.e., and , the overall estimate is also unbiased as follows:
Now, the final objective is to obtain the optimal estimators in terms of unbiased minimum variance for each sub-horizon.
2.1. Unbiased Minimum Variance Estimation on the Sub-Horizon
In this section, we derive an unbiased minimum variance estimator to estimate the state at time using given finite measurements on the sub-horizon .
Before we derive the optimal estimator, we define the following matrices:
On the sub-horizon , an optimal estimator in terms of unbiased minimum variance with batch form for the state at can be expressed as a linear function of the finite measurements, as follows:
where the optimal gain matrix is determined by following lemma.
Lemma 1.
On the horizon , for the system models (1) and (2) and the following unbiased minimum variance estimation problem:
subject to
where the local estimation error is defined as
The optimal gain matrix of the optimal estimator (28) is given by [24]
Proof (of Lemma 1).
For a proof of Lemma 1, we refer the reader to [24]. □
Because the optimal estimator (28) on the sub-horizon satisfies the unbiased condition (30), we can determine the unbiased constraint as [24]
With the unbiased constraint and the following relations [24]
the estimation error (31) can be expressed as follows:
Thus, the local estimate and its error covariance matrix on the sub-horizon can be obtained as follows:
respectively.
2.2. Unbiased Minimum Variance Estimation on Sub-Horizon
In this section, the optimal estimator in terms of unbiased minimum variance is derived to estimate the state at time on the sub-horizon .
An unbiased minimum variance estimator to estimate the state at time for the finite measurements on the sub-horizon can be expressed as follows:
The finite number of measurements can be expressed in terms of the horizon initial state as follows:
and the following constraint is required to satisfy the unbiased condition, i.e., [20]:
Thereafter, the objective is to determine the optimal gain matrix subject to the unbiased constraint (41), which can be obtained from the following theorem:
Theorem 1.
On the horizon , for the following unbiased minimum variance estimation problem,
subject to
the optimal gain matrix of the optimal estimator (39) is given by
Proof of Theorem 1.
For convenience, the gain matrix in (39) is partitioned as follows:
where is the i-th row vector of the gain matrix .
By denoting as the i-th row vector of the identity matrix I, the i-th unbiased constraint of (41) can be expressed as follows:
for . Moreover, by denoting as the i-th row vector of the estimation error , calculating and taking the expectation, we obtain
Note that the i-th component of the estimation error only depends on . Hence, the following performance criterion is established:
where is the i-th row vector of the Lagrange multiplier, which is associated with the i-th unbiased constraint.
To minimize (49) with respect to and , the following necessary conditions should be satisfied:
Then, we have
can be obtained as
By reconstructing according to (46), the optimal gain matrix can be obtained as follows:
This completes the proof. □
Thus, on the sub-horizon , the local estimate and its error covariance matrix can be obtained as follows:
3. Unbiased Minimum Variance Receding-Horizon Fixed-Lag Smoother in Recursive Form
In this section, we derive the recursive form of optimal estimators for each sub-horizon.
For the optimal estimator on the sub-horizon , we introduce the one-step predicted Kalman filter and its batch form. The dynamic equation of the one-step predicted Kalman filter in recursive form is expressed as [1]
for , where and denote the i-th estimate and its error covariance matrix on the sub-horizon , respectively. The batch form of the one-step predicted Kalman filter (57) and (58) for the finite subhorizon can be represented as follows:
Lemma 2.
[24] For the system model (1) and (2), the finite horizon one-step predicted Kalman filter in batch form can be represented on the horizon as follows:
where and are the horizon initial state and its error covariance matrix, respectively, and the solution of the Riccati equation for the Kalman filter is given as follows:
Proof of Lemma 2.
We refer the reader to [24] for the proof of Lemma 2. □
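For reference, a single recursion of the one-step predicted Kalman filter of (57) and (58) can be sketched as follows. This is a generic textbook form with illustrative variable names; the paper's exact notation is not reproduced here.

```python
import numpy as np

def predicted_kf_step(x_pred, P_pred, y, A, C, Q, R):
    """One recursion of the one-step predicted (a priori) Kalman filter.

    Propagates the predicted estimate x_{i+1|i} directly from x_{i|i-1}:
        K  = A P C^T (C P C^T + R)^{-1}      (predictor gain)
        x+ = A x + K (y - C x)
        P+ = (A - K C) P A^T + Q
    """
    S = C @ P_pred @ C.T + R                     # innovation covariance
    K = A @ P_pred @ C.T @ np.linalg.inv(S)      # predictor gain
    x_next = A @ x_pred + K @ (y - C @ x_pred)
    P_next = (A - K @ C) @ P_pred @ A.T + Q      # Riccati recursion
    return x_next, P_next
```

The covariance recursion is the standard Riccati difference equation written in predictor form.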
The recursive form of the optimal estimator for the sub-horizon can be obtained if the initial conditions (61) and (62) are calculated recursively. Moreover, the local estimate (55) and its error covariance matrix (56) for the sub-horizon have the same equation structure because they are also the initial conditions of the sub-horizon. The following theorem proves that the initial estimate and its error covariance matrix on the finite horizon can be obtained recursively.
Theorem 2.
On the horizon , the optimal estimate of the horizon initial state and its error covariance in batch form are given as follows:
where the initial conditions are obtained from
where the a priori information matrix and the a priori information estimate are calculated from the following recursive equations:
and
respectively, for , with the initial conditions and , where and are the a posteriori information matrix and the a posteriori information estimate, respectively.
Proof of Theorem 2.
With the definition of , , , , and as
for , respectively, and can be rewritten as follows:
respectively.
From (76) and (77), the estimated horizon initial error covariance (64) and state (63) can be represented as follows:
where and are denoted as
respectively.
To obtain the recursive form of and , we must know the recursive calculations of , , , and .
Before deriving the recursive form, we introduce the following useful equality:
This completes the proof. □
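Since the recursions of Theorem 2 are of information-filter type, the key property — that the time update can be written without inverting the state transition matrix — can be illustrated with a standard information-filter sketch. This is a generic sketch under our own naming, not the paper's exact equations; the A-inverse-free time update follows from the matrix inversion lemma.

```python
import numpy as np

def info_measurement_update(z, Y, y, C, R):
    """Information-form measurement update: Y += C' R^{-1} C, z += C' R^{-1} y."""
    R_inv = np.linalg.inv(R)
    return z + C.T @ R_inv @ y, Y + C.T @ R_inv @ C

def info_time_update(z, Y, A, Q):
    """Propagate the information pair (z, Y) through the dynamics.

    Uses the matrix-inversion-lemma form
        Y+ = Q^{-1} - Q^{-1} A (Y + A' Q^{-1} A)^{-1} A' Q^{-1}
        z+ = Q^{-1} A (Y + A' Q^{-1} A)^{-1} z
    which is well defined even for a singular state transition matrix A.
    """
    Q_inv = np.linalg.inv(Q)
    M = np.linalg.inv(Y + A.T @ Q_inv @ A)
    Y_next = Q_inv - Q_inv @ A @ M @ A.T @ Q_inv
    z_next = Q_inv @ A @ M @ z
    return z_next, Y_next
```

For an invertible information matrix, this time update agrees with the covariance-form prediction P+ = A P A^T + Q with P = Y^{-1}, but it never forms A^{-1}.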
Thus, the recursive form of the proposed UMVRHF smoother can be represented and summarized by the following Algorithm 1.
The overview of the proposed RUMVRHF smoother is shown in Figure 1.
| Algorithm 1 Recursion of UMVRHF smoother on the horizon . |
Sub-horizon :
|
Figure 1.
Concept and timing diagram of the RUMVRHF smoother.
4. Extensions of RUMVRHF Smoother
In this section, we introduce a method for extending the proposed RUMVRHF smoother to a posteriori state estimation. In addition, additional RHF smoothers are proposed based on the Rauch–Tung–Striebel smoothing algorithm and computational complexity reduction strategy.
4.1. Extension to Posteriori State Estimation
We first introduce an approach for extending the proposed recursive algorithm to a posteriori state estimation. By denoting the a posteriori estimates for the overall horizon , sub-horizon and separated sub-horizon as , and , respectively, the a posteriori overall estimate and its error covariance at time can be represented as follows [1]:
where and are denoted as the state error covariance matrices of the estimates and , respectively, and and are obtained from
For the sub-horizon , the a posteriori local estimate and its error covariance matrix can be updated from the predicted local estimates and its error covariance matrix as [1]
respectively.
Because the recursive equations in Theorem 2 are equivalent to the conventional information filter formulation in [1], the a posteriori information estimate and the a posteriori information matrix can be obtained as and , respectively. Thus, the a posteriori local estimate and its error covariance matrix can be computed from the predicted and as follows:
where and are calculated from (83) and (84) with boundary conditions and .
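The a priori-to-a posteriori update used above is the standard Kalman measurement update; a minimal sketch follows (illustrative names, with the Joseph-form covariance update for numerical robustness).

```python
import numpy as np

def posterior_update(x_prior, P_prior, y, C, R):
    """Update a predicted (a priori) estimate with the newest measurement.

    Standard Kalman measurement update; the Joseph form keeps the
    a posteriori covariance symmetric and positive semidefinite.
    """
    S = C @ P_prior @ C.T + R                     # innovation covariance
    K = P_prior @ C.T @ np.linalg.inv(S)          # Kalman gain
    x_post = x_prior + K @ (y - C @ x_prior)
    I = np.eye(P_prior.shape[0])
    P_post = (I - K @ C) @ P_prior @ (I - K @ C).T + K @ R @ K.T  # Joseph form
    return x_post, P_post
```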
4.2. Extension to Receding-Horizon Rauch-Tung-Striebel Fixed-Lag Smoother
Because the proposed RUMVRHF smoother is established based on two local optimal estimators, three steps are required to obtain the overall estimate and its error covariance matrix. Using the Rauch–Tung–Striebel (RTS) smoothing algorithm [25,26], the proposed recursive smoothing process can be implemented in two steps: filtering for the current state and smoothing for the fixed-lag state estimates. Although the estimation structure of RTS smoothing differs from that of the proposed smoother, it can provide computational efficiency and reliable results [27]. Thus, in this subsection, we propose a receding-horizon Rauch–Tung–Striebel fixed-lag (RTSRHF) smoother as an extension and application of the proposed recursive method.
First, the backward recursion of the overall posteriori covariance matrix on the time-interval can be represented in terms of posteriori local covariance matrices and as follows:
for , where .
Involving the same derivation as (95), the backward recursion of the overall smoothed estimates can be written as follows:
for .
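One backward step of a standard RTS recursion of the kind used in (95) and (96) can be sketched as follows (a generic textbook form with illustrative names; the paper's recursions are written in terms of the local posteriori covariances).

```python
import numpy as np

def rts_backward_step(x_smooth_next, P_smooth_next, x_filt, P_filt, A, Q):
    """One backward step of the Rauch-Tung-Striebel smoother.

        G   = P_filt A' (A P_filt A' + Q)^{-1}   (smoother gain)
        x_s = x_filt + G (x_smooth_next - A x_filt)
        P_s = P_filt + G (P_smooth_next - (A P_filt A' + Q)) G'
    """
    P_pred = A @ P_filt @ A.T + Q                 # one-step predicted covariance
    G = P_filt @ A.T @ np.linalg.inv(P_pred)      # smoother gain
    x_s = x_filt + G @ (x_smooth_next - A @ x_filt)
    P_s = P_filt + G @ (P_smooth_next - P_pred) @ G.T
    return x_s, P_s
```

If the incoming smoothed quantities coincide with the one-step prediction, the step leaves the filtered estimate unchanged, which is the expected fixed point of the recursion.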
Thus, the RTSRHF smoother can be obtained by combining the receding-horizon optimal filter on the overall horizon with the backward recursions (95) and (96), as in the following algorithm:
The concept of the proposed RTSRHF smoother is depicted in Figure 2.
Figure 2.
Concept of the RTSRHF smoother.
4.3. Extension to Reduced Computational Complexity Receding Horizon Fixed-Lag Smoother
Although the proposed RUMVRHF and RTSRHF smoothers provide greater computational efficiency than RH smoothers in batch form, the computational complexity can be further reduced by combining and sharing their estimates. In this section, we propose the reduced computational complexity RHF (RCCRHF) smoother, which periodically switches between the RTSRHF and RUMVRHF smoothers to reduce the computational complexity of the proposed recursive smoothing algorithms.
The concept of the RCCRHF smoothing algorithm is shown in Figure 3. As can be seen in the figure, the basic idea is to periodically switch between the RTSRHF and RUMVRHF smoothing algorithms and share their estimates.
Figure 3.
The concept of RCCRHF smoothing.
To illustrate the proposed RCCRHF smoothing algorithm, suppose that the RTSRHF smoother is selected at time and estimates the lag state using measurements in the horizon . The RCCRHF smoothing algorithm starts from the RTSRHF smoother. The horizon initial state and its error covariance are estimated through steps 1–4 of Algorithm 2. If we define the mid estimate and its error covariance as and , respectively, these are obtained through steps 5–9 of Algorithm 2. Thereafter, the final estimate and its error covariance are obtained through steps 10–13 of Algorithm 2, with boundary conditions and . Note that this process is exactly the same as the RTSRHF smoothing algorithm except that the overall horizon length is taken as , not N. Moreover, note that the mid estimate and its error covariance are the same as the estimation results of the minimum variance RH (MVRH) filter for the horizon .
Now, suppose that the smoothing mode is changed to the RUMVRHF smoother at time k, which estimates the lag state using measurements on the horizon . In the RUMVRHF smoothing process, the local estimate and its error covariance for the sub-horizon should be obtained through the MVRH filter. This estimation process is equivalent to obtaining the mid estimate and its error covariance of the RTSRHF smoother. This means that the mid estimate and its error covariance of the RTSRHF smoother at time can be used as the local estimate and error covariance of the RUMVRHF smoother at time k for the sub-horizon , i.e., and , respectively. Thus, the overall smoothed estimate and its error covariance of the RUMVRHF smoother are obtained as follows:
where and are the local estimate and its error covariance matrix for the sub-horizon , respectively, and and are obtained from the RTSRHF smoother on the overall horizon , not .
| Algorithm 2 RTSRHF smoothing algorithm on the horizon . |
|
Therefore, the RCCRHF smoother can be designed by periodically switching the roles of the RTSRHF and RUMVRHF smoothers every h time steps. The concept of the periodic switching scheme for the RCCRHF smoother is depicted in Figure 4.
Figure 4.
Concept of the periodic switching scheme for the RCCRHF smoother.
The process for obtaining the local estimate and its error covariance of RUMVRHF smoother can be eliminated by sharing the mid estimate and its error covariance of RTSRHF smoother at the h lag time. Thus, the RCCRHF smoother could have less computation time than the proposed recursive RHF smoothers.
However, if the lag size h is greater than half of the horizon length , the RTSRHF smoother cannot be defined. In this situation, the overall horizon lengths of the RTSRHF and RUMVRHF smoothers are taken as N and , respectively. Although the horizon length of the RTSRHF smoother is increased by h, the RCCRHF smoother has a shorter computation time than the other proposed recursive RHF smoothers because the iterations to obtain the local estimate and its covariance of the RUMVRHF smoother are omitted.
5. Numerical Experiments and Discussion
In this section, numerical experiments demonstrate the performance and effectiveness of the proposed RHF smoothing algorithms.
5.1. F-404 Gas Turbine Aircraft Engine System
We consider the following discrete-time F-404 gas turbine aircraft engine model [21]:
where the initial state , initial error covariance matrix , and the actual process and measurement noise covariance matrices are considered as and , respectively.
To demonstrate the robustness against temporary modeling uncertainty of the proposed smoothers upon the occurrence of a model mismatch, the model uncertain parameter is considered as follows:
Smoothers are designed for the nominal state-space models (101) and (102), considering ; they are then applied to the temporarily uncertain system. We set the process and measurement noise covariance matrices for the smoothers as and , respectively. Because the estimates of the nominal FMS smoother, i.e., with forgetting factor , diverge for this system model, the forgetting factor is taken as .
First, to verify the performance of the proposed algorithms, the proposed RHF smoothers are compared with the Rauch–Tung–Striebel fixed-lag (RTSF) Kalman smoother [1], the previous optimal RHF smoother [20], and the FMS smoother [21]. The second-state estimation errors and the time-averaged root-mean-square estimation (RMSE) errors are shown for various horizon lengths in Figure 5 and Table 1, Table 2 and Table 3, respectively.
Figure 5.
Estimation errors with respect to various horizon lengths ().
Table 1.
RMSE errors for the time interval .
Table 2.
RMSE errors for the time interval .
Table 3.
RMSE errors for the time interval .
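As a point of clarification for the tables above, a time-averaged RMSE of the kind reported can be computed as in the following sketch (our own convention: the error norm over all state components at each step, averaged over the interval; the paper may instead average per state component).

```python
import numpy as np

def time_averaged_rmse(x_true, x_est):
    """Time-averaged root-mean-square estimation error over a run.

    x_true, x_est: arrays of shape (T, n) holding the true and
    estimated state trajectories over T time steps.
    """
    err = x_true - x_est
    rmse_per_step = np.sqrt(np.sum(err**2, axis=1))  # error norm at each step
    return rmse_per_step.mean()                       # average over the interval
```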
As observed in these results, the proposed RHF smoothers successfully estimate the real state and outperform the others. In comparison with the RTSF Kalman smoother, the RMSE errors of the proposed smoothers with small horizon lengths are slightly larger when there are no modeling uncertainties.
However, for large horizon lengths, the RMSE errors of the proposed smoothers are smaller than those of the RTSF Kalman smoother. Moreover, the estimation errors of the proposed RHF smoothers are remarkably smaller than those of the RTSF Kalman smoother in the cases of model mismatch. Furthermore, the estimated states of the proposed RHF smoothers rapidly converge to the real state after the modeling uncertainty disappears, whereas those of the RTSF Kalman smoother take a long time to converge. Hence, the proposed RHF smoothers are more robust against temporary modeling uncertainty than the RTSF Kalman smoother.
Compared with the previous optimal RHF smoother, the RMSE errors of the proposed RHF smoothers are slightly larger but similar in the absence of modeling uncertainties. However, the estimation performance of the proposed RCCRHF and RTSRHF smoothers is better than that of the previous optimal RHF smoother when modeling uncertainties exist. Moreover, the estimation errors of the proposed RCCRHF and RTSRHF smoothers become smaller than those of the previous optimal RHF smoother as the horizon length of the RHF smoothers increases. Furthermore, we observe that the RMSE errors of the RUMVRHF smoothers (a priori and a posteriori) remain slightly larger than those of the previous optimal RHF smoother, even as the horizon length is increased.
By comparing the results for the RTSRHF and RUMVRHF smoothers, it can be stated that the estimation performance is affected, albeit not severely, by the information state, information matrix, and their initial conditions. Moreover, the estimation structure of the RTSRHF smoother is preferable to that of the RUMVRHF smoother.
Second, to highlight the effects of the horizon length and delay size on the proposed RHF smoothers, we compare the RMSE errors of the smoothers with respect to various horizon lengths and lag sizes, as shown in Figure 6.
Figure 6.
RMSE errors with respect to the various horizon lengths and lag sizes.
Comparing all RMSE errors, it can be observed that the proposed RCCRHF and RTSRHF smoothers and the previous optimal RHF smoother can be optimal choices for short lag sizes. In particular, in Figure 6c,e, it can be observed that RHF smoothers with long horizon lengths and short delay sizes perform better than the RTSF Kalman smoother, even in the absence of modeling uncertainties. In addition, RHF smoothers with large horizon lengths yield better estimation results than those with small horizon lengths in the case of an accurate system model.
However, the other proposed smoothers are also acceptable for use. From Figure 6b,d,f, it is observed that the proposed RCCRHF smoother with a short delay size provides better estimation performance than the others in the case of a model mismatch because its averaged horizon length is shorter than those of the other RHF smoothers. However, the RCCRHF smoother with a long delay size (longer than half of the horizon length) demonstrates poor estimation results owing to its increased horizon length. Thus, in the case of a model mismatch, a short lag size should be adopted for the proposed RCCRHF smoother.
As observed in Figure 6a,c,e, the estimation performance of the previous optimal RHF smoother is better than that of the other proposed RHF smoothers. On the contrary, from the results in Figure 6b,d,f, it can easily be noticed that the proposed smoothers demonstrate better estimation performance than the previous optimal RHF smoother, except for small lag-size intervals. In particular, the proposed RTSRHF smoother guarantees its estimation performance for most lag sizes. Thus, we can infer that the proposed smoothers are robust and provide more accurate estimates than the previous optimal RHF smoother. Furthermore, we can observe that there is a best lag size, and it is close to half of the horizon length.
Third, to demonstrate the computational efficiency of the proposed approaches, the time-averaged computation times of RHF smoothers are compared, as presented in Table 4 and Figure 7. For fair comparisons, the matrix gains of the batch RHF smoothers are calculated at every iteration, although the smoothers are designed for time-invariant systems.
Table 4.
Average computation time of RHF smoothers ().
Figure 7.
Time-averaged computation time with respect to the horizon lengths and lag sizes.
As observed from these results, the proposed UMVRHF smoothers in separated batch form are computationally more efficient than the previous optimal RHF smoothers in batch form. Figure 7 shows that the computational efficiency of the proposed UMVRHF smoothers in separated batch form is further improved by selecting a lag size near half of the horizon length. Moreover, the proposed RHF smoothers in recursive form require much less computation time than the RHF smoothers in batch form. It is also evident that the proposed RCCRHF smoothers with short lag sizes (shorter than half of the horizon length) have much smaller averaged computation times than the other smoothers.
Even for a long lag size (longer than half of the horizon length), the computation time of the RCCRHF smoother remains smaller than that of the other proposed RHF smoothers, although its horizon size is longer. Furthermore, the proposed RTSRHF smoothers are computationally more efficient than the proposed RUMVRHF smoothers.
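The batch-versus-recursive gap discussed above can be illustrated with a toy experiment (this is an illustrative sketch with assumed matrices, not the paper's algorithms): a batch-form estimator solves a stacked least-squares problem over the whole horizon at every step, whereas a recursive form performs a fixed number of constant-size Kalman updates.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 2, 1, 30            # state dim, measurement dim, horizon length (assumed values)
A = np.array([[0.95, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(n)
R = np.array([[0.02]])

def batch_estimate(Y):
    """Batch-form sketch: stack C A^i over the horizon and solve one
    least-squares problem for the initial state (process noise ignored
    for simplicity), then propagate to the horizon end."""
    H = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(N)])
    x0_hat, *_ = np.linalg.lstsq(H, Y.ravel(), rcond=None)
    return np.linalg.matrix_power(A, N - 1) @ x0_hat

def recursive_estimate(Y):
    """Recursive-form sketch: N constant-size Kalman updates over the horizon."""
    x = np.zeros(n)
    P = 100.0 * np.eye(n)      # large initial covariance (no prior information)
    for y in Y:
        x, P = A @ x, A @ P @ A.T + Q
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x = x + K @ (y - C @ x)
        P = (np.eye(n) - K @ C) @ P
    return x

Y = rng.normal(size=(N, m))
for f in (batch_estimate, recursive_estimate):
    t0 = time.perf_counter()
    for _ in range(200):
        f(Y)
    print(f.__name__, f"{time.perf_counter() - t0:.4f} s")
```

The batch cost grows with the stacked horizon dimension, while the recursive cost is linear in the horizon length with constant-size matrix operations, which is consistent with the trend in Table 4 and Figure 7.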
5.2. Direct Current Motor System
In this subsection, numerical experiment results are given for a direct current (DC) motor system to verify the applicability of the proposed methods to a more practical application.
The discrete-time DC motor model with model uncertainty is given as follows [28,29]:
where the DC motor is assumed to operate without any payload and the drive voltage is an external unit-step source. The covariances of the system and measurement noises are taken as and , respectively, and the system parameters of the DC motor are listed in Table 5.
Table 5.
Parameters of DC motor.
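A discrete-time DC motor model of the kind used here can be sketched as follows. The parameter values below are assumed for illustration (the experiment uses those in Table 5), and a simple forward-Euler discretization stands in for an exact zero-order-hold one.

```python
import numpy as np

# Illustrative DC motor parameters (assumed values, not those of Table 5)
R_a, L_a = 1.0, 0.5        # armature resistance [Ohm] and inductance [H]
J, b, K = 0.01, 0.1, 0.01  # rotor inertia, viscous friction, torque/back-EMF constant
T = 0.01                   # sampling period [s]

# Continuous-time model, state x = [armature current, angular velocity]
Ac = np.array([[-R_a / L_a, -K / L_a],
               [K / J,      -b / J ]])
Bc = np.array([[1.0 / L_a], [0.0]])
C  = np.array([[0.0, 1.0]])          # the angular velocity is measured

# Forward-Euler discretization: x_{k+1} = (I + T Ac) x_k + T Bc u_k
Ad = np.eye(2) + T * Ac
Bd = T * Bc

# Response to a unit-step drive voltage over 1 s
x = np.zeros((2, 1))
for _ in range(100):
    x = Ad @ x + Bd * 1.0
print(C @ x)                          # measured angular velocity after 1 s
```

In the experiments, zero-mean white system and measurement noises with the stated covariances would be added to the state recursion and the output equation, respectively.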
The uncertain model parameter for the model mismatch is considered as
For the RH smoothers, the horizon length and fixed-lag size are taken as and , respectively. As shown in the previous experiment, the estimation performances of the RUMVRHF smoothers in a priori and batch forms are similar to that of the a posteriori RUMVRHF smoother. Thus, in this section, we compare the estimation results of the RUMVRHF (a posteriori), RCCRHF, RTSRHF, FMS, and RTSF Kalman smoothers.
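The RTS-type receding-horizon fixed-lag estimate compared here can be sketched as a forward Kalman pass over the most recent horizon of measurements followed by a backward RTS pass down to the lagged time. The following is a minimal illustration under assumed matrices and a diffuse in-window prior, not the paper's exact RTSRHF algorithm.

```python
import numpy as np

def rhf_rts_smooth(Y, A, C, Q, R, lag):
    """Forward Kalman filter over the horizon window Y, then an RTS
    backward pass of `lag` steps; returns the smoothed state at k - lag."""
    n, N = A.shape[0], len(Y)
    x, P = np.zeros(n), 100.0 * np.eye(n)      # diffuse prior over the window
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    for y in Y:                                 # forward filter pass
        xp, Pp = A @ x, A @ P @ A.T + Q
        K = Pp @ C.T @ np.linalg.inv(C @ Pp @ C.T + R)
        x, P = xp + K @ (y - C @ xp), (np.eye(n) - K @ C) @ Pp
        xs_f.append(x); Ps_f.append(P); xs_p.append(xp); Ps_p.append(Pp)
    xs, Ps = xs_f[-1], Ps_f[-1]
    for k in range(N - 2, N - 2 - lag, -1):     # backward RTS pass, lag steps
        G = Ps_f[k] @ A.T @ np.linalg.inv(Ps_p[k + 1])
        xs = xs_f[k] + G @ (xs - xs_p[k + 1])
        Ps = Ps_f[k] + G @ (Ps - Ps_p[k + 1]) @ G.T
    return xs

# Toy two-state system (assumed values)
A = np.array([[0.98, 0.1], [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.05]])
rng = np.random.default_rng(1)
Y = rng.normal(size=(20, 1))                    # 20-sample horizon window
x_lagged = rhf_rts_smooth(Y, A, C, Q, R, lag=5)
print(x_lagged)
```

At each time step, the window slides forward by one sample and the procedure is repeated, so only the most recent finite set of measurements affects the estimate.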
Figure 8 and Table 6 present the estimation errors and the time-averaged RMSE values of the smoothers for . In addition, the average computation times of the RHF smoothers are presented in Table 7.
Figure 8.
Estimation errors of smoothers.
Table 6.
RMSE errors of smoothers for the time interval .
Table 7.
Average computation times of RHF smoothers.
As shown in these results, the proposed RHF smoothers estimate the true state well and provide more accurate estimates than the RTSF Kalman smoother under the modeling uncertainty. Moreover, the proposed RHF smoothers converge faster than the RTSF Kalman smoother, as expected. Furthermore, the proposed RCCRHF smoother is computationally more efficient than the other compared RHF smoothers. These observations show that the proposed RHF smoothers work well and are robust against temporary modeling uncertainties in a practical application.
6. Conclusions
In this paper, novel optimal RHF smoothers have been proposed in batch and recursive forms for linear discrete time-varying state space models. The proposed RHF smoothers were obtained by optimally combining local optimal receding-horizon filters and smoothers for the separated sub-horizons. The proposed recursive RHF smoothing algorithm was formulated as finite-horizon Kalman filtering and information smoothing. Additionally, as extensions of the proposed recursive RHF smoothing algorithm, a Rauch–Tung–Striebel-type RHF smoothing algorithm and a reduced-computational-complexity RHF smoother were proposed to improve the computational efficiency. The proposed RHF smoothers provide optimality, unbiasedness, and bounded-input bounded-output stability. Moreover, they require neither a priori information on the initial state nor the assumption of a non-singular state transition matrix. Furthermore, the proposed RHF smoothers significantly improve the computational efficiency, which has been a major disadvantage of RHF smoothers. Finally, it has been demonstrated via numerical experiments that the proposed RHF smoothers can provide better estimation results than fixed-lag Kalman smoothers in the case of a model mismatch, and that they are more computationally efficient than the existing RHF smoothers and applicable to practical problems.
The estimation performance of the RH smoothing algorithms can be further improved by choosing an appropriate fixed-lag size and horizon length. Thus, as future work, we plan to investigate horizon-length adjustment and methods for finding the optimal fixed-lag size. Moreover, it would be meaningful to extend and evaluate the proposed RH smoothing algorithms in more practical applications.
Author Contributions
Conceptualization, B.K.; methodology, B.K.; software, B.K.; validation, B.K. and P.S.K.; formal analysis, B.K.; investigation, B.K. and P.S.K.; resources, B.K. and P.S.K.; data curation, B.K. and P.S.K.; writing—original draft preparation, B.K.; writing—review and editing, B.K.; visualization, B.K. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| RHF | Receding Horizon Fixed-lag |
| FK | Fixed-lag Kalman |
| BIBO | Bounded Input Bounded Output |
| FIR | Finite Impulse Response |
| IIR | Infinite Impulse Response |
| UMVRHF | Unbiased Minimum Variance Receding Horizon Fixed-lag |
| RUMVRHF | Recursive Unbiased Minimum Variance Receding Horizon Fixed-lag |
| RTS | Rauch–Tung–Striebel |
| RTSF | Rauch–Tung–Striebel Fixed-lag |
| RTSRHF | Rauch–Tung–Striebel Receding Horizon Fixed-lag |
| RCCRHF | Reduced Computational Complexity Receding Horizon Fixed-lag |
| FMS | Finite Memory Structure |
References
- Lewis, F.L. Optimal Estimation: With An Introduction to Stochastic Control Theory; John Wiley and Sons: Hoboken, NJ, USA, 1986.
- Martino, L.; Read, J. A joint introduction to Gaussian Processes and Relevance Vector Machines with connections to Kalman filtering and other kernel smoothers. Inf. Fusion 2021, 74, 17–38.
- Eilers, P.H.C.; Marx, B.D. Practical Smoothing: The Joys of P-Splines; Cambridge University Press: Cambridge, UK, 2021.
- Poddar, S.; Crassidis, J.L. Adaptive Lag Smoother for State Estimation. Sensors 2022, 22, 5310.
- Lastre-Domínguez, C.; Ibarra-Manzano, O.; Andrade-Lucio, J.A.; Shmaliy, Y.S. Denoising ECG Signals Using Unbiased FIR Smoother and Harmonic State-Space Model. In Proceedings of the 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, 18–21 January 2021; pp. 1279–1283.
- Si, Y. LPPCNN: A Laplacian Pyramid-based Pulse Coupled Neural Network Method for Medical Image Fusion. J. Appl. Sci. Eng. 2021, 24, 299–305.
- Ullah, I.; Qureshi, M.B.; Khan, U.; Memon, S.A.; Shi, Y.; Peng, D. Multisensor-Based Target-Tracking Algorithm with Out-of-Sequence-Measurements in Cluttered Environments. Sensors 2018, 18, 4043.
- Tao, H.; Yanzhang, Z.; Hongxia, N.; Mingxi, C.; Shan, W. Research on Tracking Foreign Objects in Railway Tracks Based on Hidden Markov Kalman Filter. J. Appl. Sci. Eng. 2022, 25, 893–903.
- Kim, P.S. Finite Memory Structure Filtering and Smoothing for Target Tracking in Wireless Network Environments. Appl. Sci. 2019, 9, 2872.
- Rau, J.C.; Wu, P.H.; Li, W.L. Test Slice Difference Technique for Low-Transition Test Data Compression. J. Appl. Sci. Eng. 2012, 15, 157–166.
- Lagerblad, U.; Wentzel, H.; Kulachenko, A. Dynamic response identification based on state estimation and operational modal analysis. Mech. Syst. Signal Process. 2019, 129, 37–53.
- Maes, K.; Gillijns, S.; Lombaert, G. A smoothing algorithm for joint input–state estimation in structural dynamics. Mech. Syst. Signal Process. 2018, 98, 292–309.
- Lagerblad, U.; Wentzel, H.; Kulachenko, A. Study of a fixed-lag Kalman smoother for input and state estimation in vibrating structures. Inverse Probl. Sci. Eng. 2021, 29, 1260–1281.
- Vauhkonen, P.J.; Vauhkonen, M.; Kaipio, J.P. Fixed-lag smoothing and state estimation in dynamic electrical impedance tomography. Int. J. Numer. Meth. Eng. 2001, 50, 2195–2209.
- Hsieh, C.S. Unbiased minimum-variance input and state estimation for systems with unknown inputs: A system reformation approach. Automatica 2017, 84, 236–240.
- Kwon, B.; Han, S.; Han, S. Improved Receding Horizon Fourier Analysis for Quasiperiodic Signals. J. Electr. Eng. Technol. 2017, 12, 378–384.
- Kim, P.S. A computationally efficient fixed-lag smoother using recent finite measurements. Measurement 2013, 46, 846–850.
- Shmaliy, Y.S.; Morales-Mendoza, L.J. FIR smoothing of discrete-time polynomial signals in state space. IEEE Trans. Signal Process. 2010, 58, 2544–2555.
- Simon, D.; Shmaliy, Y.S. Unified forms for Kalman and finite impulse response filtering and smoothing. Automatica 2013, 49, 1892–1899.
- Kwon, B.K.; Han, S.; Kwon, O.K.; Kwon, W.H. Minimum variance FIR smoother for discrete-time systems. IEEE Signal Process. Lett. 2007, 14, 557–560.
- Kim, P.S. A finite memory structure smoother with recursive form using forgetting factor. Math. Probl. Eng. 2017, 2017, 8192053.
- Xu, Y.; Shmaliy, Y.S.; Ahn, C.K.; Shen, T.; Zhuang, Y. Tightly Coupled Integration of INS and UWB Using Fixed-Lag Extended UFIR Smoothing for Quadrotor Localization. IEEE Internet Things J. 2021, 8, 1716–1727.
- Shin, V.; Lee, Y.; Choi, T. Generalized Millman’s formula and its application for estimation problems. Signal Process. 2006, 86, 257–266.
- Kwon, B.; Kim, S.-I. Recursive Optimal Finite Impulse Response Filter and Its Application to Adaptive Estimation. Appl. Sci. 2022, 12, 2757.
- Rauch, H.E. Solutions to the Linear Smoothing Problem. IEEE Trans. Automat. Contr. 1963, 8, 371–372.
- Rauch, H.E.; Tung, F.; Striebel, C.T. Maximum Likelihood Estimates of Linear Dynamic Systems. AIAA J. 1965, 3, 1445–1450.
- Zhou, Q.; Zhang, H.; Li, Y.; Li, Z. An Adaptive Low-Cost GNSS/MEMS-IMU Tightly-Coupled Integration System with Aiding Measurement in a GNSS Signal Challenged Environment. Sensors 2015, 15, 23953–23982.
- Pal, D. Modeling, analysis and design of a DC motor based on state space approach. Int. J. Eng. Res. Technol. 2016, 5, 293–296.
- Abut, T. Modeling and optimal control for a DC motor. Int. J. Eng. Trends Technol. 2016, 32, 146–150.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).