# compute analysis error covariance matrix
P = (1/(N-1)) * (uai - ua.reshape(-1,1)) @ (uai - ua.reshape(-1,1)).T
return uai, P
```
We highlight a few remarks regarding the EnKF below.


#### *8.1. Deterministic Ensemble Kalman Filter*

The use of an ensemble of perturbed observations in the EnKF makes the analysis error covariance match its theoretical value given by the Kalman filter. However, this match holds only in a statistical sense, when the ensemble size is large. Unfortunately, the perturbation introduces sampling error, which renders the filter suboptimal, particularly for small ensembles [31]. Alternative formulations that do not require virtual observations can be found in the literature, including ensemble square root filters [31,32]. We focus here on a simple formulation proposed by Sakov and Oke [30] that maintains the numerical effectiveness and simplicity of the EnKF without the need for virtual observations, denoted as the deterministic ensemble Kalman filter (DEnKF).
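The effect of observation perturbation on the analysis covariance can be illustrated numerically. The following sketch uses a hypothetical scalar setup (not from this text): background variance $B = 1$, observation variance $R = 1$, and identity observation operator, for which the theoretical Kalman analysis variance is $P = (1 - K)B = 0.5$. The stochastic update with perturbed observations reproduces this value on average but with sampling noise, while the same update without perturbation (and without the DEnKF correction) systematically underestimates it.

```python
import numpy as np

# Hypothetical scalar example: B = R = 1, identity observation operator,
# so K = B / (B + R) = 0.5 and the theoretical analysis variance is 0.5.
rng = np.random.default_rng(42)
B, R = 1.0, 1.0
K = B / (B + R)                  # Kalman gain
P_theory = (1.0 - K) * B         # theoretical analysis variance = 0.5

def analysis_var(N, perturb_obs):
    """Sample analysis variance from one stochastic ensemble update."""
    ub = rng.normal(0.0, np.sqrt(B), N)           # background ensemble
    w = 0.0                                        # observed value (arbitrary)
    if perturb_obs:
        obs = w + rng.normal(0.0, np.sqrt(R), N)   # virtual observations
    else:
        obs = np.full(N, w)                        # unperturbed observation
    ua = ub + K * (obs - ub)
    return np.var(ua, ddof=1)

# Average over many trials with a small ensemble (N = 10): the perturbed-obs
# estimate is unbiased but noisy; the unperturbed one shrinks to (1-K)^2 * B.
trials = 2000
p_pert = np.mean([analysis_var(10, True) for _ in range(trials)])
p_nopert = np.mean([analysis_var(10, False) for _ in range(trials)])
print(P_theory, p_pert, p_nopert)
```

Averaged over trials, the unperturbed update yields roughly $(1-K)^2 B = 0.25$ instead of $0.5$, which is the covariance deficiency that the DEnKF correction below is designed to repair.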

Without measurement perturbation, it can be derived that the resulting analysis error covariance matrix is given as follows,

$$\begin{split} \mathbf{P}\_{k+1} &= (\mathbf{I}\_{n} - \mathbf{K}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))) \mathbf{B}\_{k+1} (\mathbf{I}\_{n} - \mathbf{K}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1})))^{T} \\ &= \mathbf{B}\_{k+1} - \mathbf{K}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1})) \mathbf{B}\_{k+1} - \mathbf{B}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))^{T} \mathbf{K}\_{k+1}^{T} \\ &\quad + \mathbf{K}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1})) \mathbf{B}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))^{T} \mathbf{K}\_{k+1}^{T} .\end{split}$$

With the definition of the Kalman gain, it can be seen that $\mathbf{K}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1})) \mathbf{B}\_{k+1} = \mathbf{B}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))^{T} \mathbf{K}\_{k+1}^{T}$. Thus,

$$\mathbf{P}\_{k+1} = \mathbf{B}\_{k+1} - 2\mathbf{K}\_{k+1}\mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))\mathbf{B}\_{k+1} + \mathbf{K}\_{k+1}\mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))\mathbf{B}\_{k+1}\mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))^{T}\mathbf{K}\_{k+1}^{T}.$$

For small values of $\mathbf{K}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))$, this form reduces, up to the quadratic term, to $\mathbf{P}\_{k+1} = \mathbf{B}\_{k+1} - 2\mathbf{K}\_{k+1}\mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))\mathbf{B}\_{k+1}$. By dividing the Kalman gain by two, this asymptotically matches the theoretical value of the analysis covariance matrix in standard Kalman filtering (i.e., $\mathbf{P}\_{k+1} = (\mathbf{I}\_{n} - \mathbf{K}\_{k+1}\mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1})))\mathbf{B}\_{k+1} = \mathbf{B}\_{k+1} - \mathbf{K}\_{k+1}\mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))\mathbf{B}\_{k+1}$). Therefore, it can be argued that the DEnKF linearly recovers the theoretical analysis error covariance matrix. This is achieved by applying the analysis equation separately: to the forecast mean $\mathbf{u}\_{b}(t\_{k+1}) \approx \overline{\mathbf{u}}\_{b}(t\_{k+1})$ with the full Kalman gain matrix, and to the ensemble of anomalies $\xi\_{b}^{(i)}(t\_{k+1}) = \mathbf{u}\_{b}^{(i)}(t\_{k+1}) - \overline{\mathbf{u}}\_{b}(t\_{k+1})$ with half of the standard Kalman gain matrix. These steps are summarized as follows,

**Inputs:** $\mathbf{u}\_{b}(t\_{k})$, $\mathbf{B}\_{k}$, $M(\cdot;\cdot)$, $\mathbf{Q}\_{k+1}$, $\mathbf{w}(t\_{k+1})$, $\mathbf{R}\_{k+1}$, $h(\cdot)$

**Ensemble initialization:**

$$\mathbf{u}\_{b}^{(i)}(t\_{k}) = \mathbf{u}\_{b}(t\_{k}) + \mathbf{e}\_{b}^{(i)}, \qquad \mathbf{e}\_{b}^{(i)} \sim \mathcal{N}(\mathbf{0}, \mathbf{B}\_{k})$$

**Forecast:**

$$\mathbf{u}\_{b}^{(i)}(t\_{k+1}) = M(\mathbf{u}\_{b}^{(i)}(t\_{k}); \theta) + \xi\_{p}^{(i)}(t\_{k+1}), \qquad \mathbf{u}\_{b}(t\_{k+1}) \approx \overline{\mathbf{u}}\_{b}(t\_{k+1}),$$

$$\xi\_{b}^{(i)}(t\_{k+1}) = \mathbf{u}\_{b}^{(i)}(t\_{k+1}) - \overline{\mathbf{u}}\_{b}(t\_{k+1}), \qquad \mathbf{B}\_{k+1} \approx \frac{1}{N-1} \sum\_{i=1}^{N} \xi\_{b}^{(i)}(t\_{k+1}) \, \xi\_{b}^{(i)}(t\_{k+1})^{T}.$$

**Kalman gain:**

$$\mathbf{K}\_{k+1} = \mathbf{B}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))^{T} \left[ \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1})) \mathbf{B}\_{k+1} \mathbf{D}\_{h}(\mathbf{u}\_{b}(t\_{k+1}))^{T} + \mathbf{R}\_{k+1} \right]^{-1}.$$

**Analysis:**

$$\mathbf{u}\_{a}(t\_{k+1}) = \mathbf{u}\_{b}(t\_{k+1}) + \mathbf{K}\_{k+1} \left[ \mathbf{w}(t\_{k+1}) - h(\mathbf{u}\_{b}(t\_{k+1})) \right],$$

$$\xi\_{a}^{(i)}(t\_{k+1}) = \xi\_{b}^{(i)}(t\_{k+1}) - \frac{1}{2} \mathbf{K}\_{k+1} \, h(\xi\_{b}^{(i)}(t\_{k+1})), \qquad \mathbf{u}\_{a}^{(i)}(t\_{k+1}) = \mathbf{u}\_{a}(t\_{k+1}) + \xi\_{a}^{(i)}(t\_{k+1}),$$

$$\mathbf{P}\_{k+1} \approx \frac{1}{N-1} \sum\_{i=1}^{N} \xi\_{a}^{(i)}(t\_{k+1}) \, \xi\_{a}^{(i)}(t\_{k+1})^{T}.$$

Listing 18 provides a Python implementation of the presented DEnKF approach. Note that, in contrast to the EnKF, no ensemble of perturbed observations is created in this case.

**Listing 18.** Implementation of DEnKF without virtual observations.

```
import numpy as np
def DEnKF(ubi,w,ObsOp,JObsOp,R,B):