Article

Adaptive Conditional Bias-Penalized Kalman Filter for Improved Estimation of Extremes and Its Approximation for Reduced Computation

1 Department of Civil Engineering, The University of Texas at Arlington, Arlington, TX 76019, USA
2 Len Technologies, Oak Hill, VA 20171, USA
* Author to whom correspondence should be addressed.
Hydrology 2022, 9(2), 35; https://doi.org/10.3390/hydrology9020035
Submission received: 13 January 2022 / Revised: 7 February 2022 / Accepted: 15 February 2022 / Published: 17 February 2022
(This article belongs to the Special Issue Recent Advances in Hydrological Modeling)

Abstract

The Kalman filter (KF) and its variants and extensions are widely used for hydrologic prediction in environmental science and engineering. In many data assimilation applications of KF, accurate estimation of extreme states is of great importance. When the observations used are uncertain, however, KF suffers from conditional bias (CB), which results in consistent under- and overestimation of extremes in the right and left tails, respectively. Recently, CB-penalized KF, or CBPKF, has been developed to address CB. In this paper, we present an alternative formulation based on variance-inflated KF to reduce computation and algorithmic complexity, and describe an adaptive implementation to improve unconditional performance. For theoretical basis and context, we also provide a complete self-contained description of CB-penalized Fisher-like estimation and CBPKF. The results from one-dimensional synthetic experiments for a linear system with varying degrees of nonstationarity show that adaptive CBPKF reduces the root-mean-square error at the extreme tail ends by 20 to 30% over KF while performing comparably to KF in the unconditional sense. The alternative formulation approximates the original formulation very closely while reducing computing time to 1.5 to 3.5 times that of KF, depending on the dimensionality of the problem. Hence, adaptive CBPKF offers a significant addition to the dynamic filtering methods for general application in data assimilation when the accurate estimation of extremes is of importance.

1. Introduction

Streamflow prediction is subject to uncertainties from multiple sources. These include uncertainties in the input (e.g., mean areal precipitation (MAP) and mean areal potential evapotranspiration (MAPE)), in the hydrologic model, and in the initial conditions. With climate change and rapid urbanization, input and parametric uncertainties will increase; reducing them has thus become increasingly challenging. Recently, data assimilation has been widely used to reduce the uncertainty in initial conditions [1,2].
In many data assimilation applications in hydrologic prediction, the Kalman filter (KF) and its variants and extensions are widely used to fuse observations with model predictions [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. In geophysics and environmental science and engineering, the main objective of data assimilation is often to improve the estimation and prediction of states in their extremes rather than in normal ranges. In hydrologic forecasting, for example, the accurate prediction of floods and droughts is far more important than that of streamflow and soil moisture in normal conditions. Because KF minimizes unconditional error variance, its solution tends to improve the estimation near the median, where the state of the dynamic system resides most of the time, while often leaving significant biases in the extremes. Such conditional biases (CB) [24] generally result in consistent under- and overestimation of the true states in the upper and lower tails of the distribution, respectively. To address CB, CB-penalized Fisher-like estimation and CB-penalized KF (CBPKF) [25] have recently been developed, which jointly minimize the error variance and the expectation of the Type-II CB squared for improved estimation and prediction of extremes. The Type-II CB, defined as E[X̂|X = x] − x, where X, X̂, and x denote the unknown truth, the estimate, and the realization of X, respectively, is associated with failure to detect the event [26]. The original formulation of CBPKF, however, is computationally extremely expensive for high-dimensional problems. Additionally, whereas CBPKF improves performance in the tails, it deteriorates performance in the normal ranges.
In this work, we approximate CBPKF with forecast error covariance-inflated KF, referred to hereafter as the variance-inflated KF (VIKF) formulation, as a computationally less expensive and algorithmically simpler alternative, and implement adaptive CBPKF to improve unconditional performance.
Elements of CB-penalized Fisher-like estimation have been described in the forms of CB-penalized indicator cokriging for fusion of predicted streamflow from multiple models with observed streamflow [27], CB-penalized kriging for spatial estimation [28] and rainfall estimation [29], and CB-penalized cokriging for fusion of radar rainfall and rain gauge data [30]. The original formulation of CBPKF has been described in [25]. Its ensemble extension, CB-penalized ensemble KF, or CEnKF, is described in [31] in the context of ensemble data assimilation for flood forecasting. In this paper, we provide, in the context of data assimilation, a complete self-contained description of CBPKF as theoretical background for the alternative formulation and adaptive implementation. Whereas CBPKF was initially motivated by environmental and geophysical state estimation and prediction, it is broadly applicable to a wide range of applications for which improved performance in the extremes is desired. This paper is organized as follows. Section 2 and Section 3 describe the CB-penalized Fisher-like solution and CBPKF, respectively. Section 4 describes the approximation of CBPKF. Section 5 describes the evaluation experiments and results. Section 6 describes adaptive CBPKF. Section 7 provides the conclusions.

2. Conditional Bias-Penalized Fisher-like Solution

As in Fisher estimation [32], the estimator sought for CB-penalized Fisher-like estimation is X* = WZ, where X* denotes the (m × 1) vector of the estimated states, W denotes the (m × (n + m)) weight matrix, and Z denotes the ((n + m) × 1) augmented observation vector. In the above, n denotes the number of observations, m denotes the number of state variables, and (n + m) represents the dimensionality of the augmented vector of the observations and the model-predicted states to be fused for the estimation of the true state X. The purpose of augmentation is to relate directly to CBPKF in Section 3 without introducing additional notation. Throughout this paper, we use regular and bold letters to differentiate the non-augmented and augmented variables, respectively. The linear observation equation is given by:

Z = HX + V    (1)

where X denotes the (m × 1) vector of the true state with E[X] = M_X and Cov[X, X^T] = Ψ_XX, H denotes the ((n + m) × m) augmented linear observation equation matrix, and V denotes the ((n + m) × 1) augmented zero-mean observation error vector with Cov[V, V^T] = R. Assuming independence between X and V, we write the Bayesian estimator [30] for X, or X*, as:

X* = M_X + W(Z − HM_X)    (2)

The error covariance matrix for X*, E[(X − X*)(X − X*)^T], is given by:

Σ_EV = (I − WH)Ψ_XX(I − WH)^T + WRW^T    (3)
With Equation (2), we may write the Type-II CB as:

X − E[X*|X] = (X − M_X) − W E[(Z − HM_X)|X]    (4)

The observation equation for X in terms of Z is obtained by inverting Equation (1):

X = G^T Z − G^T V    (5)

The (m × (n + m)) matrix G^T in Equation (5) is given by:

G^T = (U^T H)^−1 U^T    (6)
where U^T is some (m × (n + m)) nonzero matrix. Using Equation (5) and the identity Ψ_ZZ = HΨ_XX H^T + R, we may write the Bayesian estimate for E[Z|X] in Equation (4) as:

Ê[Z|X] = HM_X + C(X − M_X)    (7)

where

C = (HΨ_XX H^T + R)G[G^T(HΨ_XX H^T + 2R)G]^−1    (8)

Equations (7) and (8) state that the Bayesian estimate of Z given X is given by HX if the a priori state error covariance Ψ_XX is noninformative or there are no observation errors, but by the average of the a priori mean M_X and the observed true state X if the a priori Ψ_XX is perfectly informative or the observations are information-less.
With Equation (4), we may write the quadratic penalty due to Type-II CB as:

Σ_CB = E[(X − E_X*[X*|X])(X − E_X*[X*|X])^T] = (I − WC)Ψ_XX(I − WC)^T    (9)

where I denotes the (m × m) identity matrix. Combining Σ_EV in Equation (3) and Σ_CB in Equation (9), we have the apparent error covariance, Σ_a, which reflects both the error covariance and Type-II CB:

Σ_a = (I − WH)Ψ_XX(I − WH)^T + WRW^T + α(I − WC)Ψ_XX(I − WC)^T    (10)
where α denotes the scalar weight given to the CB penalty term. Minimizing Equation (10) with respect to W, or by direct analogy with the Bayesian solution [32], we have:

W = Ψ_XX Ĥ^T[ĤΨ_XX Ĥ^T + Λ]^−1    (11)

The modified structure matrix Ĥ^T and observation error covariance matrix Λ in Equation (11) are given by:

Ĥ^T = H^T + αC^T    (12)

Λ = R + α(1 − α)CΨ_XX C^T − αHΨ_XX C^T − αCΨ_XX H^T    (13)

Using Equation (11) and the matrix inversion lemma [33], we have for Σ_a and X* in Equations (10) and (2), respectively:

Σ_a = αΨ_XX + [Ĥ^T Λ^−1 Ĥ + Ψ_XX^−1]^−1    (14)

X* = [Ĥ^T Λ^−1 Ĥ + Ψ_XX^−1]^−1{Ĥ^T Λ^−1 Z + Ψ_XX^−1 M_X} + Δ    (15)

where Δ = αΨ_XX Ĥ^T[ĤΨ_XX Ĥ^T + Λ]^−1 CM_X. To render the above Bayesian solution into a Fisher-like solution, we assume no a priori information on X and let Ψ_XX^−1 in the brackets in Equations (14) and (15) vanish:

Σ_a = B[Ĥ^T Λ^−1 Ĥ]^−1    (16)

X* = [Ĥ^T Λ^−1 Ĥ]^−1 Ĥ^T Λ^−1 Z + Δ    (17)
where the scaling matrix B is given by B = αΨ_XX Ĥ^T Λ^−1 Ĥ + I. To obtain an estimator of the form X* = WZ, we impose the unbiasedness condition E[X*|X] = X, or equivalently, WH = I. The above condition is satisfied by replacing [Ĥ^T Λ^−1 Ĥ]^−1 with [Ĥ^T Λ^−1 H]^−1 and dropping Δ in Equation (17):

Σ_a = B[Ĥ^T Λ^−1 H]^−1    (18)

X* = [Ĥ^T Λ^−1 H]^−1 Ĥ^T Λ^−1 Z    (19)

Finally, we obtain from Equation (3) the error covariance, Σ_EV, associated with X* in Equation (19):

Σ_EV = WRW^T = [Ĥ^T Λ^−1 H]^−1 Ĥ^T Λ^−1 RΛ^−1 Ĥ[H^T Λ^−1 Ĥ]^−1    (20)

Note that, if α = 0, we have Ĥ = H and Λ = R, and hence the CB-penalized Fisher-like solution, Equations (19) and (20), reduces to the Fisher solution [32].
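As a quick numerical sanity check of this limiting case (an illustrative sketch, not from the paper; the dimensions and random matrices below are arbitrary), setting α = 0 reduces the weight matrix of Equation (19) to the Fisher (generalized least squares) form, which satisfies the unbiasedness condition WH = I:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_aug = 2, 5                               # state size, augmented observation size
H = rng.standard_normal((n_aug, m))           # augmented structure matrix
R = np.diag(rng.uniform(0.5, 2.0, n_aug))     # augmented observation error covariance

# With alpha = 0, H_hat = H and Lambda = R, so Equation (19) gives
# W = [H^T R^-1 H]^-1 H^T R^-1, i.e., the Fisher (GLS) weight matrix.
Ri = np.linalg.inv(R)
W = np.linalg.solve(H.T @ Ri @ H, H.T @ Ri)

# The unbiasedness condition WH = I holds exactly.
assert np.allclose(W @ H, np.eye(m))
```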

3. Conditional Bias-Penalized Kalman Filter

CBPKF results directly from decomposing the augmented matrices and vectors in Equations (19) and (20), as KF does from the Fisher solution [32]. The CBPKF solution, however, is not as simple because the modified observation error covariance matrix, Λ, is no longer diagonal. An important consideration in casting the CB-penalized Fisher-like solution into CBPKF is to recognize that CB arises from the errors-in-variables effects associated with uncertain observations [34], and that the a priori state, represented by the dynamical model forecast, is not subject to CB. We therefore apply the CB penalty to the observations only and reduce C in Equation (8) to C^T = (C_1,k^T  C_2,k^T) = (C_1,k^T  0). Separating the observation and dynamical model components in Ĥ^T and Λ via the matrix inversion lemma, we have:

Ĥ^T = (Ĥ_1,k^T  I)    (21)

Λ = [Λ_11,k  Λ_12,k; Λ_21,k  Λ_22,k]    (22)

where

Ĥ_1,k^T = H_k^T + αC_1,k^T    (23)

Λ_11,k = R_k + α(1 − α)C_1,k Ψ_XX C_1,k^T − αH_k Ψ_XX C_1,k^T − αC_1,k Ψ_XX H_k^T    (24)

Λ_12,k = αC_1,k Ψ_XX    (25)

Λ_21,k = Λ_12,k^T    (26)

Λ_22,k = Σ_k|k−1    (27)

In the above, H_k denotes the (n × m) observation matrix, and R_k denotes the (n × n) observation error covariance matrix. To evaluate the (n × m) matrix C_1,k, it is necessary to specify U^T in Equation (6). We use U^T = H^T, which ensures invertibility of U^T H, but other choices are also possible. We then have for C_1,k:

C_1,k = [(H_k Ψ_XX H_k^T + R_k)G_1,k + H_k Ψ_XX G_2,k]L_k^−1    (28)

where

G_2,k^T = (H_k^T H_k + I)^−1    (29)

G_1,k^T = G_2,k^T H_k^T    (30)

L_k = G_2,k^T[H_k^T(H_k Ψ_XX H_k^T + 2R_k)H_k + H_k^T H_k Ψ_XX + Ψ_XX H_k^T H_k + Ψ_XX + 2Σ_k|k−1]G_2,k    (31)
Expanding W in Equation (11) with Λ^−1 = Γ = [Γ_11,k  Γ_12,k; Γ_21,k  Γ_22,k], we have:

W = [Ĥ^T Λ^−1 H]^−1 Ĥ^T Λ^−1 = (ϖ_1,k H_k + ϖ_2,k)^−1(ϖ_1,k  ϖ_2,k)    (32)

In Equation (32), the (m × n) and (m × m) weight matrices for the observations and the model prediction, ϖ_1,k and ϖ_2,k, respectively, are given by:

ϖ_1,k = Ĥ_1,k^T Γ_11,k + Γ_21,k    (33)

ϖ_2,k = Ĥ_1,k^T Γ_12,k + Γ_22,k    (34)

where

Γ_22,k = [Λ_22,k − Λ_21,k Λ_11,k^−1 Λ_12,k]^−1    (35)

Γ_11,k = Λ_11,k^−1 + Λ_11,k^−1 Λ_12,k Γ_22,k Λ_21,k Λ_11,k^−1    (36)

Γ_12,k = −Λ_11,k^−1 Λ_12,k Γ_22,k    (37)

The apparent CBPKF error covariance, which reflects both Σ_EV and Σ_CB, is given by Equation (18) as:

Σ_a,k|k = αΣ_k|k−1 + [ϖ_1,k H_k + ϖ_2,k]^−1    (38)

The CBPKF error covariance, which reflects Σ_EV only, is given by Equation (20) as:

Σ_k|k = [ϖ_1,k H_k + ϖ_2,k]^−1(ϖ_1,k R_k ϖ_1,k^T + ϖ_2,k Σ_k|k−1 ϖ_2,k^T)[ϖ_1,k H_k + ϖ_2,k]^−T    (39)

Because CBPKF minimizes Σ_a,k|k rather than Σ_k|k, it is not guaranteed a priori that Equation (39) satisfies Σ_k|k ≤ Σ_k|k−1. If the above condition is not met, it is necessary to reduce α and repeat the calculations. If α is reduced all the way to zero, CBPKF collapses to KF. The CBPKF estimate may be rewritten in a more familiar form:

X̂_k|k = [ϖ_1,k H_k + ϖ_2,k]^−1[ϖ_1,k Z_k + ϖ_2,k X̂_k|k−1] = X̂_k|k−1 + K_k[Z_k − H_k X̂_k|k−1]    (40)

In Equation (40), Z_k denotes the (n × 1) observation vector, and the (m × n) CB-penalized Kalman gain, K_k, is given by:

K_k = [ϖ_1,k H_k + ϖ_2,k]^−1 ϖ_1,k    (41)
To operate the above as a sequential filter, it is necessary to prescribe Ψ_XX and α. An obvious choice for Ψ_XX, i.e., the a priori error covariance of the state, is Σ_k|k−1. Specifying α requires some care. In general, a larger α improves accuracy over the tails but at the expense of increasing unconditional error. Too small an α may not effect a large enough CB penalty, in which case the CBPKF and KF solutions differ little. Too large an α, on the other hand, may severely violate the Σ_k|k ≤ Σ_k|k−1 condition, in which case the filter may have to be iterated at additional computational expense with successively reduced α. A reasonable strategy for reducing α is α_i = cα_{i−1}, i = 1, 2, 3, …, with 0 < c < 1, where α_i denotes the value of α at the i-th iteration [24,29]. For high-dimensional problems, CBPKF can be computationally very expensive. Whereas KF requires solving an (m × n) linear system only once per updating or fusion cycle, CBPKF additionally requires solving two (m × m) linear systems (for C_1,k and Γ_22,k) and an (n × n) system (for Λ_11,k), assuming that the structure of the observation equation does not change in time (in which case G_2,k^T in Equation (29) need be evaluated only once). To reduce computation, below we approximate CBPKF with KF by inflating the forecast error covariance.
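The iterative α-reduction strategy can be sketched as follows. This is a hypothetical illustration for the 1D case (m = n = 1), in which, as noted in Section 4, the CBPKF update is equivalent to a KF update with the forecast error variance inflated by 1 + 2α; the function names and the default reduction factor c = 0.5 are illustrative assumptions:

```python
import numpy as np

def cbpkf_update_1d(x_prior, s2_prior, z, h, r, alpha):
    # Stand-in 1D CBPKF update: for m = n = 1 it is equivalent to a KF
    # update with the prior variance inflated by (1 + 2*alpha); see the
    # comparison with the VIKF approximation in Section 4.
    s2_inf = (1.0 + 2.0 * alpha) * s2_prior
    k = s2_inf * h / (h * s2_inf * h + r)          # CB-penalized gain
    x = x_prior + k * (z - h * x_prior)
    # Actual (not apparent) filtered error variance, using the
    # un-inflated prior variance, in the spirit of Equation (39).
    s2 = (1.0 - k * h) ** 2 * s2_prior + k * k * r
    return x, s2, k

def cbpkf_update_with_reduction(x_prior, s2_prior, z, h, r, alpha0, c=0.5, tol=1e-8):
    # Iterative reduction alpha_i = c * alpha_{i-1} until the filtered
    # error variance does not exceed the forecast error variance; as
    # alpha -> 0 the update collapses to the plain KF update.
    alpha = alpha0
    while True:
        x, s2, _ = cbpkf_update_1d(x_prior, s2_prior, z, h, r, alpha)
        if s2 <= s2_prior or alpha < tol:
            return x, s2, alpha
        alpha *= c
```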

4. VIKF Approximation of CBPKF

The main idea behind this simplification is that, if the gain for the CB penalty, C, in Equation (10) can be linearly approximated with H, the apparent error covariance Σ_a becomes identical to Σ_EV in Equation (3) but with Ψ_XX inflated by a factor of 1 + α:

Σ_(1+α) = (I − WH)(1 + α)Ψ_XX(I − WH)^T + WR_(1+α)W^T    (42)

where R_(1+α) = [R  0; 0  (1 + α)Ψ_XX]. The KF solution for Equation (42) is identical to the standard KF solution but with Σ_k|k−1 replaced by (1 + α)Σ_k|k−1:

X̂_k|k = [H_k^T R_k^−1 H_k + {(1 + α)Σ_k|k−1}^−1]^−1[H_k^T R_k^−1 Z_k + {(1 + α)Σ_k|k−1}^−1 X̂_k|k−1]    (43)

With WH = I in Equation (43) for the VIKF solution, we have Σ_(1+α) = WR_(1+α)W^T for the apparent filtered error variance of X̂_k|k in Equation (42). The error covariance of X̂_k|k, Σ_k|k, is given by Equation (3) as:

Σ_k|k = WRW^T = [H^T R_(1+α)^−1 H]^−1 H^T R_(1+α)^−1 RR_(1+α)^−1 H[H^T R_(1+α)^−1 H]^−1 = Σ_(1+α),k|k Σ_(1+α)²,k|k^−1 Σ_(1+α),k|k    (44)

In Equation (44), the inflated filtered error covariance, Σ_β,k|k, where β denotes the multiplicative inflation factor, is given by:

Σ_β,k|k = βΣ_k|k−1 − βΣ_k|k−1 H_k^T[H_k βΣ_k|k−1 H_k^T + R_k]^−1 H_k βΣ_k|k−1 = [H_k^T R_k^−1 H_k + (βΣ_k|k−1)^−1]^−1    (45)

Computationally, the evaluation of Equations (43) and (44) requires solving two (m × n) linear systems and an (m × m) linear system. As in the original formulation of CBPKF, iterative reduction of α is necessary to ensure Σ_k|k ≤ Σ_k|k−1.
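The identities above can be checked numerically. The sketch below (with arbitrary illustrative matrices) verifies that the two forms of Equation (45) agree, and that the "sandwich" form of Equation (44) matches WRW^T computed from the explicit VIKF weights:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
H = rng.standard_normal((n, m))               # observation matrix H_k
R = np.diag(rng.uniform(0.5, 2.0, n))         # observation error covariance R_k
A = rng.standard_normal((m, m))
P = A @ A.T + np.eye(m)                       # forecast error covariance Sigma_{k|k-1}
alpha = 0.5
beta = 1.0 + alpha                            # multiplicative inflation factor

def sigma_filtered(b):
    # Information form of Equation (45): [H^T R^-1 H + (b*P)^-1]^-1
    return np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(b * P))

# The covariance and information forms of Equation (45) agree:
S = beta * P - beta * P @ H.T @ np.linalg.inv(H @ (beta * P) @ H.T + R) @ H @ (beta * P)
assert np.allclose(S, sigma_filtered(beta))

# Equation (44): actual error covariance of the VIKF estimate
Sigma_kk = sigma_filtered(beta) @ np.linalg.inv(sigma_filtered(beta**2)) @ sigma_filtered(beta)

# Cross-check against W R W^T with the explicit VIKF weights
w_obs = sigma_filtered(beta) @ H.T @ np.linalg.inv(R)    # weight on observations
w_mod = sigma_filtered(beta) @ np.linalg.inv(beta * P)   # weight on model forecast
assert np.allclose(Sigma_kk, w_obs @ R @ w_obs.T + w_mod @ P @ w_mod.T)
```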
The above approximation assumes that the CB penalty, Σ_CB, is proportional to the error covariance, Σ_EV. To help ascertain how KF, CBPKF, and the VIKF approximation may differ, we compare in Table 1 their analytical solutions for the gain, κ_k, and the filtered error variance, σ²_k|k, for the 1D case of m = n = 1. The table shows that the VIKF approximation and CBPKF are identical for the 1D problem except that the CB penalty for CBPKF is twice as large as that for the VIKF approximation. To visualize the differences, Figure 1 shows κ_k and σ²_k|k for KF, the VIKF approximation, and CBPKF for the three cases of σ²_k|k−1 = 1 and σ²_Z = 1 (left), σ²_k|k−1 = 1 and σ²_Z = 4 (middle), and σ²_k|k−1 = 4 and σ²_Z = 1 (right). For all cases, we set h to unity and varied α from 0 to 1. The figure indicates that, compared to KF, the VIKF approximation and CBPKF prescribe appreciably larger gains, that the increase in gain is larger for larger α, and that the CBPKF gain is larger than that of the VIKF approximation for the same value of α. The figure also indicates that, compared to the KF error variance, the CBPKF error variance is larger, and that the increase in error variance is larger for larger α. Note that the differences between the KF and CBPKF solutions are smallest for σ²_k|k−1 > σ²_Z, a reflection of the diminished impact of CB owing to the comparatively smaller uncertainty in the observations. The above development suggests that one may be able to approximate CBPKF very closely with the VIKF-based formulation by adjusting α in the latter. Below, we evaluate the performance of CBPKF relative to KF and the VIKF-based approximation of CBPKF.

5. Evaluation and Results

For comparative evaluation, we carried out the synthetic experiments of [25]. We assume the following linear dynamical and observation models with perfectly known statistical parameters:

X_k = Φ_k−1 X_k−1 + W_k−1    (46)

Z_k = H_k X_k + V_k    (47)

where X_k and X_k−1 denote the state vectors at time steps k and k − 1, respectively, Φ_k−1 denotes the state transition matrix at time step k − 1, assumed to be Φ_k−1 = φ_k−1 I, W_k−1 denotes the white noise vector, w_j,k−1 ~ N(0, σ²_w,k−1), j = 1, …, m, with Q_k−1 = E[W_k−1 W_k−1^T], and V_k denotes the observation error vector, v_i,k ~ N(0, σ²_v,k), i = 1, …, n. The number of observations, n, is assumed to be time-invariant. The observation errors are assumed to be independent among themselves and of the true state. To assess comparative performance under widely varying conditions, we randomly perturbed φ_k−1, σ_w,k−1, and σ_v,k according to Equations (48) through (50) below, and used only those deviates that satisfy the bounds:

φ^p_k−1 = φ_k−1 + γ_φ ε_φ,    0.5 ≤ φ^p_k−1 ≤ 0.95    (48)

σ^p_w,k−1 = σ_w,k−1 + γ_w ε_w,    σ^p_w,k−1 ≥ 0.01    (49)

σ^p_v,k = σ_v,k + γ_v ε_v,    σ^p_v,k ≥ 0.01    (50)

In the above, the superscript p signifies that the variable is perturbed, ε_φ, ε_w, and ε_v denote the normally distributed white noise for the respective variables, and γ_φ, γ_w, and γ_v denote the standard deviations of the white noise added to φ_k−1, σ_w,k−1, and σ_v,k, respectively. The parameter settings (see Table 1) are chosen to encompass less predictable (small φ_k−1) to more predictable (large φ_k−1) processes, certain (small σ_w,k−1) to uncertain (large σ_w,k−1) model dynamics, and more informative (small σ_v,k) to less informative (large σ_v,k) observations. The bounds for φ^p_k−1 in Equation (48) are based on the range of lag-1 serial correlation representing moderate to high predictability, where CBPKF and KF are likely to differ the most. The bounding of the perturbed values σ^p_w,k−1 and σ^p_v,k in Equations (49) and (50), respectively, is necessary to keep the observational or model prediction uncertainty from becoming unrealistically small. Very small σ^p_w,k−1 and σ^p_v,k render the information content of the model prediction, Σ_k|k−1, and the observation, Z_k, respectively, very large, and hence keep the filters operating in unrealistically favorable conditions for extended periods of time. We then apply KF, CBPKF, and the VIKF approximation to obtain X̂_k|k and Σ_k|k, and verify them against the assumed truth. To evaluate the performance of CBPKF relative to KF, we calculate the percent reduction in root-mean-square error (RMSE) by CBPKF over KF conditional on the true state exceeding a threshold varied between 0 and the largest truth.
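A minimal sketch of this synthetic data generator for m = n = 1 is given below. The nominal parameter values and perturbation standard deviations are illustrative stand-ins (the actual case settings are in the paper's tables), and, for brevity, clipping is used in place of resampling deviates that violate the bounds:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000                               # number of time steps (illustrative)
phi0, sw0, sv0 = 0.8, 1.0, 1.0         # nominal phi, sigma_w, sigma_v (illustrative)
g_phi, g_w, g_v = 0.1, 0.5, 0.5        # std devs of the parameter perturbations

x = np.zeros(T)                        # true state, Eq. (46) with m = 1
z = np.zeros(T)                        # observations, Eq. (47) with h = 1
for k in range(1, T):
    # Perturb the parameters (Eqs. 48-50); clipping approximates the
    # paper's resampling of out-of-bounds deviates.
    phi = np.clip(phi0 + g_phi * rng.standard_normal(), 0.5, 0.95)
    sw = max(sw0 + g_w * rng.standard_normal(), 0.01)
    sv = max(sv0 + g_v * rng.standard_normal(), 0.01)
    x[k] = phi * x[k - 1] + sw * rng.standard_normal()
    z[k] = x[k] + sv * rng.standard_normal()
```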
Figure 2 shows the percent reduction in RMSE by CBPKF over KF for Cases 1 (left), 5 (middle), and 9 (right), representing Groups 1, 2, and 3 in Table 1, respectively. The three groups differ most significantly in the variability of the dynamical model error, γ_w, and may be characterized as nearly stationary (Group 1), nonstationary (Group 2), and highly nonstationary (Group 3). The range of α values used is [0.1, 1.2] with an increment of 0.1. The numbers of state variables, observations, and updating cycles used in Figure 2 are 1, 10, and 100,000 for all cases. The dotted line at 10% reduction in the figure serves as a reference for significant improvement. The figure shows that, at the extreme end of the tail, CBPKF with α of 0.7, 0.6, and 0.5 reduces RMSE by about 15, 25, and 30% for Cases 1, 5, and 9, respectively, but at the expense of increasing unconditional RMSE by about 5%. The general pattern of reduction in RMSE for the other cases in Table 1 is similar within each group and is not shown. We only note here that larger variability in observational uncertainty (i.e., larger γ_v) reduces the relative performance of CBPKF somewhat, and that the magnitude of variability in predictability (i.e., γ_φ) has a relatively small impact on the relative performance.
It was seen in Table 1 that the VIKF approximation is identical to CBPKF for m = n = 1 except for the multiplicative scalar weight for the CB penalty. Numerical experiments indicate that, whereas the above relationship does not hold for other m or n, one may very closely approximate CBPKF with the VIKF-based formulation by adjusting α. For example, the VIKF approximation with α increased by a factor of 1.25 to 1.90 differs from CBPKF by only 1% or less for all 12 cases in Table 2 with m = 1 and n = 10. The above findings indicate that the VIKF approximation may be used as a computationally less expensive alternative to CBPKF. Table 3 compares the CPU time among KF, CBPKF, and the VIKF approximation for six different combinations of m and n using an Intel(R) Xeon(R) Gold 6152 CPU @ 2.10 GHz. The computing time is reported in multiples of that of KF. Note that the original formulation of CBPKF quickly becomes extremely expensive as the dimensionality of the problem increases, whereas the CPU time of the VIKF approximation stays under 3.5 times that of KF for the size of the problems considered.
If the filtered error variance is unbiased, one would expect the mean of the actual error squared associated with the variance to be approximately the same as the variance itself. To verify this, we show in Figure 3 the filtered error variance vs. the actual error squared for KF (left), the VIKF approximation (middle), and CBPKF (right) for all ranges of filtered error variance. For reference, we plot the one-to-one line representing the unbiased error variance conditional on the magnitude of the filtered error variance and overlay the local regression fit through the actual data points using the R package locfit [35]. The figure shows that all three provide conditionally unbiased estimates of filtered error variance as theoretically expected, and that the VIKF approximation and CBPKF results are extremely similar to each other.
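This unbiasedness check can be reproduced for plain KF with a simple Monte Carlo sketch (illustrative parameters; a time-invariant 1D system rather than the perturbed systems above): the time-averaged squared error should closely match the time-averaged filtered error variance.

```python
import numpy as np

rng = np.random.default_rng(3)
phi, sw, sv, T = 0.8, 1.0, 1.0, 200_000   # illustrative AR(1) parameters
x_true, x_est, p = 0.0, 0.0, 1.0
sq_err, var_sum = 0.0, 0.0
for _ in range(T):
    # Truth and observation (Eqs. 46-47, m = n = 1, h = 1)
    x_true = phi * x_true + sw * rng.standard_normal()
    z = x_true + sv * rng.standard_normal()
    # KF forecast and update
    x_est, p = phi * x_est, phi * phi * p + sw * sw
    k = p / (p + sv * sv)
    x_est, p = x_est + k * (z - x_est), (1.0 - k) * p
    sq_err += (x_est - x_true) ** 2
    var_sum += p
# If the filtered error variance is unbiased, the two averages agree closely.
print(sq_err / T, var_sum / T)
```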

6. Adaptive CBPKF

Whereas CBPKF or the VIKF approximation significantly improves the accuracy of the estimates over the tails, it deteriorates performance near the median. Figure 2 suggests that if α can be prescribed adaptively, such that a small/large CB penalty is effected when the system is in a normal/extreme state, the unconditional performance of CBPKF would improve. Because the true state of the system is not known, adaptively specifying α is necessarily an uncertain proposition. There are, however, certain applications in which the normal vs. extreme state of the system may be ascertained with higher accuracy than in others. For example, the soil moisture state of a catchment may be estimated by assimilating precipitation and streamflow data into hydrologic models [36,37,38,39,40,41]. If α is prescribed adaptively based on the best available estimate of the state of the catchment, one may expect improved performance in hydrologic forecasting. In this section, we apply adaptive CBPKF in the synthetic experiment and assess its performance. An obvious strategy for adaptive filtering is to parameterize α in terms of the KF estimate (i.e., the CBPKF estimate with α = 0) as the best guess for the true state. The premise of this strategy is that, though it may be conditionally biased, the KF estimate fuses the information available from both the observations and the dynamical model, and hence best captures the relationship between α and the departure of the state of the system from the median. A similar approach has been used in fusing radar rainfall data and rain gauge observations for multisensor precipitation estimation, in which an ordinary cokriging estimate was used to prescribe α in CB-penalized cokriging [30].
Necessarily, the effectiveness of the above strategy depends on the skill of the KF estimate; if the skill is very low, one may not expect significant improvement. Figure 2 suggests that, qualitatively, α should increase as the state becomes more extreme. To that end, we employed the following model for time-varying α :
α_k = γ‖X̂^KF_k|k‖    (51)

where α_k denotes the multiplicative CB penalty factor for CBPKF at time step k, ‖X̂^KF_k|k‖ denotes some norm of the KF estimate at time step k, and γ denotes the proportionality constant. Figure 4 (left) shows the RMSE reduction by adaptive CBPKF over KF with α_k = γ|X̂^KF_k|k| for the 12 cases in Table 2 with m = 1 and n = 10. The γ values used were 3.0, 1.0, and 0.5 for Groups 1, 2, and 3 in Table 2, respectively. The figure shows that adaptive CBPKF performs comparably to KF in the unconditional sense while substantially improving performance in the tails. The rate of reduction in RMSE with respect to the increasing conditioning truth, however, is now slower than that seen in Figure 2 due to occurrences of incorrectly specified α. To assess the upper bound of the feasible performance of adaptive CBPKF, we also specified α with perfect accuracy under Equation (51) via α_k = γ|X_k|, where X_k denotes the true state. The results are shown in Figure 4 (right), for which the γ values used were 3.0, 1.5, and 1.0 for Groups 1, 2, and 3 in Table 2, respectively. The figure indicates that adaptive CBPKF with perfectly prescribed α greatly improves performance, even outperforming KF in the unconditional sense. Figure 4 suggests that, if α can be prescribed more accurately with additional sources of information, the performance of adaptive CBPKF may be improved beyond the level seen in Figure 4 (left). Finally, we show in Figure 5 example scatter plots of the KF (black) and adaptive CBPKF (red) estimates vs. the truth. They are for Cases 1 and 9 in Table 2, representing Groups 1 and 3, respectively. It is readily seen that CBPKF significantly reduces CB in the tails while keeping its estimates close to the KF estimates in normal ranges.
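A single time step of this adaptive strategy can be sketched as follows for the 1D case. The function name is hypothetical, and the CBPKF update is represented by its 1D variance-inflated equivalent (inflation by 1 + 2α, per the comparison in Section 4):

```python
import numpy as np

def adaptive_cbpkf_step(x_prior, p_prior, z, r, gamma):
    # KF update first: the alpha = 0 estimate serves as the best guess of
    # how far the state departs from the median (Eq. 51). Illustrative sketch.
    k_kf = p_prior / (p_prior + r)
    x_kf = x_prior + k_kf * (z - x_prior)
    alpha = gamma * abs(x_kf)                # alpha_k = gamma * |x_hat_KF|
    # CBPKF update with the adaptively chosen alpha: 1D equivalent is a
    # KF update with the prior variance inflated by (1 + 2*alpha).
    p_inf = (1.0 + 2.0 * alpha) * p_prior
    k = p_inf / (p_inf + r)
    x = x_prior + k * (z - x_prior)
    p = (1.0 - k) ** 2 * p_prior + k * k * r  # actual filtered error variance
    return x, p, alpha
```

Near the median the KF estimate is small, so α stays near zero and the update collapses toward plain KF; for extreme observations α grows, and the larger gain pulls the estimate toward the observation.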

7. Conclusions

Conditional bias-penalized Kalman filter (CBPKF) has recently been developed to improve the estimation and prediction of extremes. The original formulation, however, is computationally very expensive, and deteriorates performance in the normal ranges relative to KF. In this work, we present a computationally less expensive alternative based on the variance-inflated KF (VIKF) approximation, and improve unconditional performance by adaptively prescribing the weight for the CB penalty. For evaluation, we carried out synthetic experiments using linear systems with varying degrees of dynamical model uncertainty, observational uncertainty, and predictability. The results indicate that the VIKF-based approximation of CBPKF provides a computationally much less expensive alternative to the original formulation, and that adaptive CBPKF performs comparably to KF in the unconditional sense while improving the estimation of extremes by about 20 to 30% over KF. It is also shown that additional improvement may be possible by improving adaptive prescription of the weight to the CB penalty using additional sources of information. The findings indicate that adaptive CBPKF offers a significant addition to the dynamic filtering methods for general application in data assimilation and, in particular, when or where the estimation of extremes is of importance. The findings in this work are based on idealized synthetic experiments that satisfy linearity and normality. Additional research is needed to assess performance for non-normal problems and for nonlinear problems using the ensemble extension [31], and to prescribe the weight for the CB penalty more skillfully.

Author Contributions

Conceptualization, D.-J.S.; methodology, D.-J.S., H.S., H.L.; software, H.S.; validation, H.S.; writing—original draft preparation, H.S., D.-J.S.; writing—review and editing, D.-J.S., H.S., H.L.; visualization, H.S.; supervision, D.-J.S.; project administration, D.-J.S.; funding acquisition, D.-J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Science Foundation [CyberSEES-1442735], and the National Oceanic and Atmospheric Administration [NA16OAR4590232, NA17OAR4590174, NA17OAR4590184].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wood, A.W.; Lettenmaier, D.P. An ensemble approach for attribution of hydrologic prediction uncertainty. Geophys. Res. Lett. 2008, 35, L14401.
  2. Gupta, H.V.; Clark, M.P.; Vrugt, J.A.; Abramowitz, G.; Ye, M. Towards a comprehensive assessment of model structural adequacy. Water Resour. Res. 2012, 48, W08301.
  3. Antoniou, C.; Ben-Akiva, M.; Koutsopoulos, H.N. Nonlinear Kalman filtering algorithms for on-line calibration of dynamic traffic assignment models. IEEE Trans. Intell. Transp. Syst. 2007, 8, 661–670.
  4. Bhotto, M.Z.A.; Bajić, I.V. Constant modulus blind adaptive beamforming based on unscented Kalman filtering. IEEE Signal Process. Lett. 2015, 22, 474–478.
  5. Bocher, M.; Fournier, A.; Coltice, N. Ensemble Kalman filter for the reconstruction of the Earth’s mantle circulation. Nonlinear Process. Geophys. 2018, 25, 99–123.
  6. Chen, W.; Shen, H.; Huang, C.; Li, X. Improving soil moisture estimation with a dual ensemble Kalman smoother by jointly assimilating AMSR-E brightness temperature and MODIS LST. Remote Sens. 2017, 9, 273.
  7. Gao, Z.; Shen, W.; Zhang, H.; Ge, M.; Niu, X. Application of Helmert variance component based adaptive Kalman filter in multi-GNSS PPP/INS tightly coupled integration. Remote Sens. 2016, 8, 553.
  8. Houtekamer, P.L.; Zhang, F. Review of the ensemble Kalman filter for atmospheric data assimilation. Mon. Weather Rev. 2016, 144, 4489–4532.
  9. Jain, A.; Krishnamurthy, P.K. Phase noise tracking and compensation in coherent optical systems using Kalman filter. IEEE Commun. Lett. 2016, 20, 1072–1075.
  10. Jiang, Y.; Liao, M.; Zhou, Z.; Shi, X.; Zhang, L.; Balz, T. Landslide deformation analysis by coupling deformation time series from SAR data with hydrological factors through data assimilation. Remote Sens. 2016, 8, 179.
  11. Kurtz, W.; Franssen, H.-J.H.; Vereecken, H. Identification of time-variant river bed properties with the ensemble Kalman filter. Water Resour. Res. 2012, 48, W10534.
  12. Lu, X.; Wang, L.; Wang, H.; Wang, X. Kalman filtering for delayed singular systems with multiplicative noise. IEEE/CAA J. Autom. Sin. 2016, 3, 51–58.
  13. Lv, H.; Qi, F.; Zhang, Y.; Jiao, T.; Liang, F.; Li, Z.; Wang, J. Improved detection of human respiration using data fusion based on a multi-static UWB radar. Remote Sens. 2016, 8, 773.
  14. Ma, R.; Zhang, L.; Tian, X.; Zhang, J.; Yuan, W.; Zheng, Y.; Zhao, X.; Kato, T. Assimilation of remotely-sensed leaf area index into a dynamic vegetation model for gross primary productivity estimation. Remote Sens. 2017, 9, 188.
  15. Muñoz-Sabater, J. Incorporation of passive microwave brightness temperatures in the ECMWF soil moisture analysis. Remote Sens. 2015, 7, 5758–5784.
  16. Nair, A.; Indu, J. Enhancing Noah land surface model prediction skill over Indian subcontinent by assimilating SMOPS blended soil moisture. Remote Sens. 2016, 8, 976.
  17. Reichle, R.H.; McLaughlin, D.B.; Entekhabi, D. Hydrologic data assimilation with the ensemble Kalman filter. Mon. Weather Rev. 2002, 130, 103–114.
  18. Wallace, L.; Lucieer, A.; Watson, C.; Turner, D. Development of a UAV-LiDAR system with application to forest inventory. Remote Sens. 2012, 4, 1519–1543.
  19. de Wit, A.J.W.; van Diepen, C.A. Crop model data assimilation with the ensemble Kalman filter for improving regional crop yield forecasts. Agric. For. Meteorol. 2007, 146, 38–56.
  20. Yan, M.; Tian, X.; Li, Z.; Chen, E.; Wang, X.; Han, Z.; Sun, H. Simulation of forest carbon fluxes using model incorporation and data assimilation. Remote Sens. 2016, 8, 567.
  21. Yu, K.K.C.; Watson, N.R.; Arrillaga, J. An adaptive Kalman filter for dynamic harmonic state estimation and harmonic injection tracking. IEEE Trans. Power Del. 2005, 20, 1577–1584.
  22. Dong, Z.; You, Z. Finite-horizon robust Kalman filtering for uncertain discrete time-varying systems with uncertain-covariance white noises. IEEE Signal Process. Lett. 2006, 13, 493–496.
  23. Zhou, H.; Huang, H.; Zhao, H.; Zhao, X.; Yin, X. Adaptive unscented Kalman filter for target tracking in the presence of nonlinear systems involving model mismatches. Remote Sens. 2017, 9, 657.
  24. Ciach, G.J.; Morrissey, M.L.; Krajewski, W.F. Conditional bias in radar rainfall estimation. J. Appl. Meteorol. 2000, 39, 1941–1946.
  25. Seo, D.-J.; Saifuddin, M.M.; Lee, H. Conditional bias-penalized Kalman filter for improved estimation and prediction of extremes. Stoch. Environ. Res. Risk Assess. 2018, 32, 183–201; Erratum in Stoch. Environ. Res. Risk Assess. 2018, 32, 3561–3562.
  26. Jolliffe, I.T.; Stephenson, D.B. Forecast Verification: A Practitioner’s Guide in Atmospheric Science; John Wiley & Sons: Hoboken, NJ, USA, 2003.
  27. Brown, J.D.; Seo, D.-J. A nonparametric postprocessor for bias correction of hydrometeorological and hydrologic ensemble forecasts. J. Hydrometeorol. 2010, 11, 642–665.
  28. Seo, D.-J. Conditional bias-penalized kriging (CBPK). Stoch. Environ. Res. Risk Assess. 2013, 27, 43–58.
  29. Seo, D.-J.; Siddique, R.; Zhang, Y.; Kim, D. Improving real-time estimation of heavy-to-extreme precipitation using rain gauge data via conditional bias-penalized optimal estimation. J. Hydrol. 2014, 519, 1824–1835.
  30. Kim, B.; Seo, D.-J.; Noh, S.J.; Prat, O.P.; Nelson, B.R. Improving multi-sensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation. J. Hydrol. 2018, 556, 1096–1109.
  31. Lee, H.; Noh, S.J.; Kim, S.; Shen, H.; Seo, D.-J.; Zhang, Y. Improving flood forecasting using conditional bias-penalized ensemble Kalman filter. J. Hydrol. 2019, 575, 596–611.
  32. Schweppe, F.C. Uncertain Dynamic Systems; Prentice-Hall, 1973. Available online: https://openlibrary.org/books/OL5291577M/Uncertain_dynamic_systems (accessed on 12 January 2022).
  33. Woodbury, M.A. Inverting Modified Matrices; Princeton University: Princeton, NJ, USA, 1950.
  34. Hausman, J. Mismeasured variables in econometric analysis: Problems from the right and problems from the left. J. Econ. Perspect. 2001, 15, 57–67.
  35. Loader, C. locfit: Local Regression, Likelihood and Density Estimation. 2013. Available online: https://CRAN.R-project.org/package=locfit (accessed on 12 January 2022).
  36. Lee, H.; Seo, D.-J. Assimilation of hydrologic and hydrometeorological data into distributed hydrologic model: Effect of adjusting mean field bias in radar-based precipitation estimates. Adv. Water Resour. 2014, 74, 196–211.
  37. Lee, H.; Seo, D.-J.; Koren, V. Assimilation of streamflow and in situ soil moisture data into operational distributed hydrologic models: Effects of uncertainties in the data and initial model soil moisture states. Adv. Water Resour. 2011, 34, 1597–1615.
  38. Lee, H.; Seo, D.-J.; Liu, Y.; Koren, V.; McKee, P.; Corby, R. Variational assimilation of streamflow into operational distributed hydrologic models: Effect of spatiotemporal scale of adjustment. Hydrol. Earth Syst. Sci. 2012, 16, 2233–2251.
  39. Lee, H.; Zhang, Y.; Seo, D.-J.; Xie, P. Utilizing satellite precipitation estimates for streamflow forecasting via adjustment of mean field bias in precipitation data and assimilation of streamflow observations. J. Hydrol. 2015, 529, 779–794.
  40. Rafieeinasab, A.; Seo, D.-J.; Lee, H.; Kim, S. Comparative evaluation of maximum likelihood ensemble filter and ensemble Kalman filter for real-time assimilation of streamflow data into operational hydrologic models. J. Hydrol. 2014, 519, 2663–2675.
  41. Seo, D.-J.; Koren, V.; Cajina, N. Real-time variational assimilation of hydrologic and hydrometeorological data into operational hydrologic forecasting. J. Hydrometeor. 2003, 4, 627–641.
Figure 1. Comparison of $\kappa_k$ and $\sigma_{k|k}^2$ for KF, the VIKF approximation, and CBPKF for three different cases: $\sigma_{k|k-1}^2 = 1$ and $\sigma_Z^2 = 1$ (left); $\sigma_{k|k-1}^2 = 1$ and $\sigma_Z^2 = 4$ (middle); $\sigma_{k|k-1}^2 = 4$ and $\sigma_Z^2 = 1$ (right).
Figure 2. Percent reduction in RMSE by CBPKF over KF for a range of values of α for Cases 1 (left), 5 (middle), and 9 (right).
Figure 3. Filtered error variance vs. error squared for KF (left), the VIKF approximation (middle), and CBPKF (right). The one-to-one line is shown in black and the local regression fit is shown in green.
Figure 4. Percent reduction in RMSE by adaptive CBPKF over KF in which α is prescribed using the KF estimate (left) and the truth (right).
Figure 5. Example scatter plots of KF (black) and adaptive CBPKF (red) estimates vs. truth for Cases 1 (left) and 9 (right) in Table 2.
Table 1. Comparison of gain and filtered error variance among KF, the VIKF approximation, and CBPKF.

KF: $\kappa_k = \dfrac{h\,\sigma_{k|k-1}^2}{h^2\sigma_{k|k-1}^2 + \sigma_Z^2}$, $\sigma_{k|k}^2 = \dfrac{\sigma_Z^2\,\sigma_{k|k-1}^2}{h^2\sigma_{k|k-1}^2 + \sigma_Z^2}$

VIKF approx.: $\kappa_k = \dfrac{h(1+\alpha)\sigma_{k|k-1}^2}{h^2(1+\alpha)\sigma_{k|k-1}^2 + \sigma_Z^2}$, $\sigma_{k|k}^2 = \dfrac{\{(1+\alpha)^2 h^2\sigma_{k|k-1}^2 + \sigma_Z^2\}\,\sigma_Z^2\,\sigma_{k|k-1}^2}{\{(1+\alpha)h^2\sigma_{k|k-1}^2 + \sigma_Z^2\}^2}$

CBPKF: $\kappa_k = \dfrac{h(1+2\alpha)\sigma_{k|k-1}^2}{h^2(1+2\alpha)\sigma_{k|k-1}^2 + \sigma_Z^2}$, $\sigma_{k|k}^2 = \dfrac{\{(1+2\alpha)^2 h^2\sigma_{k|k-1}^2 + \sigma_Z^2\}\,\sigma_Z^2\,\sigma_{k|k-1}^2}{\{(1+2\alpha)h^2\sigma_{k|k-1}^2 + \sigma_Z^2\}^2}$
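The three rows of Table 1 differ only in an inflation factor applied to the prior error variance, so they can be checked numerically with a single helper parameterized by $\beta$, where $\beta = 1$ gives KF, $\beta = 1+\alpha$ the VIKF approximation, and $\beta = 1+2\alpha$ CBPKF. The helper name and the sample numbers below are illustrative, not from the paper.

```python
def gain_and_variance(h, s2_prior, s2_z, beta):
    """Gain and filtered error variance from Table 1, where beta = 1 for KF,
    1 + alpha for the VIKF approximation, and 1 + 2*alpha for CBPKF."""
    denom = beta * h ** 2 * s2_prior + s2_z
    gain = h * beta * s2_prior / denom
    var = (beta ** 2 * h ** 2 * s2_prior + s2_z) * s2_z * s2_prior / denom ** 2
    return gain, var

h, s2_prior, s2_z, alpha = 1.0, 1.0, 1.0, 0.5
g_kf, v_kf = gain_and_variance(h, s2_prior, s2_z, 1.0)
g_vikf, v_vikf = gain_and_variance(h, s2_prior, s2_z, 1.0 + alpha)
g_cbp, v_cbp = gain_and_variance(h, s2_prior, s2_z, 1.0 + 2.0 * alpha)

# The CB penalty increases the gain, i.e., the update weighs the
# observation more heavily than KF does
assert g_kf < g_vikf < g_cbp
```

Note that with $\beta = 1$ the general variance expression collapses to the familiar KF form $\sigma_Z^2\sigma_{k|k-1}^2/(h^2\sigma_{k|k-1}^2 + \sigma_Z^2)$, which is a convenient consistency check.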
Table 2. Parameter settings for the 12 cases considered.

Group  Case  σ_{w,k−1}  γ_w   σ_{v,k}  γ_v  φ_{k−1}  γ_φ
1      1     0.1        0.01  1.5      0.4  0.7      0.1
       2     0.1        0.01  1.5      0.4  0.7      0.8
       3     0.1        0.01  1.5      1.2  0.7      0.1
       4     0.1        0.01  1.5      1.2  0.7      0.8
2      5     0.1        0.1   1.5      0.4  0.7      0.1
       6     0.1        0.1   1.5      0.4  0.7      0.8
       7     0.1        0.1   1.5      1.2  0.7      0.1
       8     0.1        0.1   1.5      1.2  0.7      0.8
3      9     0.1        0.2   1.5      0.4  0.7      0.1
       10    0.1        0.2   1.5      0.4  0.7      0.8
       11    0.1        0.2   1.5      1.2  0.7      0.1
       12    0.1        0.2   1.5      1.2  0.7      0.8
Table 3. Comparison of computing time among KF, CBPKF, and the VIKF approximation.

Dimensionality      Normalized computing time
m      n            KF     CBPKF    VIKF approx.
1      10           1      5.23     1.51
1      40           1      8.41     2.74
5      10           1      6.44     1.67
5      40           1      24.03    2.88
10     10           1      14.27    2.03
10     40           1      27.96    3.46