**1. Introduction**

In order to obtain the optimal control strategy of a generator in a power system, a phasor measurement unit (PMU) is required to observe the generator data (power angle, electric angular velocity, etc.) to determine its state. However, PMU measurements contain random noise and measurement errors and therefore cannot be used directly as the true state values [1–3]. Instead, statistical methods are used to calculate estimates of the generator state variables and to predict the future state; this procedure is called dynamic state estimation (DSE). The prediction of the generator's future state by DSE is therefore helpful for power system regulation and is of great significance to the stability of the power network.

In recent years, the problem of electromechanical transient state estimation of generators has attracted wide attention. A generator is a complex nonlinear system, and its DSE requires a corresponding nonlinear filtering algorithm. In the field of generator DSE algorithms, most studies focus on the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and the cubature Kalman filter (CKF) [4,5]. In [6], the EKF was implemented to estimate the dynamic state of the generator, using a fourth-order generator model with unknown input. However, the linearization of the generator model in the EKF leads to large truncation errors, so the EKF cannot be applied to strongly nonlinear systems. In [7,8], the UKF, a deterministic-sampling filter, was applied to estimate the dynamic state of the generator. Owing to the limited flexibility of the UKF in parameter selection, the CKF method was proposed by S. Haykin [9,10]. In [11], the CKF was used for generator DSE and achieved much better performance than both the EKF and the UKF.

On the other hand, considering system uncertainties, in [12], based on the second-order generator model, adaptive interpolation and *H*∞ theory were introduced into the EKF to estimate the dynamic state of the generator, which improved its robustness. In [13], an online state estimation method for synchronous generators based on the UKF was proposed, and the fourth-order nonlinear generator model was used to verify that the algorithm has not only high filtering speed and accuracy but also strong robustness. In [14], square root filtering was introduced into the CKF to guarantee the non-negative definiteness and numerical stability of the covariance matrix; this avoids the asymmetric or non-positive-semidefinite covariances that can arise in the CKF iterations of DSE and also improves the estimation accuracy of the CKF. To the best of the authors' knowledge, there have been few results on generator DSE using the ensemble Kalman filter (EnKF) [15], which has been widely utilized in meteorology.

In this paper, based on the ensemble square root filter (EnSRF), an improved EnKF that approximates the nonlinear system by random sampling, an adaptive ensemble square root Kalman filter (AEnSRF) algorithm is proposed. The proposed method does not perturb the measured values, which avoids the underestimation of the analysis error covariance in the EnKF and improves the filtering precision. Meanwhile, the Sage–Husa algorithm is introduced to estimate the measurement noise online. When the measurement noise is biased, the method revises the measurement noise covariance dynamically during filtering, which reduces the influence of the noise error on the filter. Extensive simulation results show that the proposed method achieves not only higher filtering precision than the conventional EnKF but also stronger robustness to noise. Hence, the main contributions of this paper are:


The rest of the paper is organized as follows: In Section 2, the second-order generator DSE model is presented. Section 3 describes the main algorithm in detail. The IEEE 9-bus system and a real power grid system are used to test and illustrate the usefulness of the proposed method, and the necessary comparisons with the EnKF and EnSRF are also performed in Section 4. Finally, conclusions are provided in Section 5.

#### **2. Generator Dynamic State Estimation Model**

#### *2.1. Mathematical Model of Dynamic State Estimation*

The state space model of a dynamic system can be written as

$$\begin{cases} X_{k+1} = \Phi X_k + \Gamma U_k + W_k \\ Z_k = C X_k + V_k \end{cases} \tag{1}$$

where *k* indicates the time instant, *Wk* represents the system noise, *Vk* represents the measurement noise, *Uk* is the input, Φ denotes the state transition matrix, Γ is the input matrix, and *C* is the observation matrix.

The statistical characteristics of noise *Wk* and *Vk* are given as follows

$$\begin{cases} E[W_k] = E[V_k] = 0 \\ E[W_k W_j^T] = Q\delta_{kj} \\ E[V_k V_j^T] = R\delta_{kj} \end{cases} \tag{2}$$

where *Q* is the variance matrix of the system process noise, *R* is the variance matrix of the system measurement noise, and $\delta_{kj} = \begin{cases} 1, & k = j \\ 0, & k \neq j \end{cases}$ is the Kronecker delta.

#### *2.2. Equation of Measurement and State Estimation*

In the electromechanical transient process of power system, the system topology and bus voltage cannot be acquired in real time. Thus, the rotor power angle and angular velocity, which do not abruptly change and are constrained by the rotor differential equation, are selected as the state variables of the generator DSE.

The second-order differential equations of synchronous generators can be expressed as follows

$$\begin{cases} \dfrac{d\delta}{dt} = (\omega - 1)\omega_0 \\ \dfrac{d\omega}{dt} = \dfrac{1}{T_J}(T_m - T_e - D\omega) = \dfrac{1}{T_J}\left(\dfrac{P_m}{\omega} - \dfrac{P_e}{\omega} - D\omega\right) \end{cases} \tag{3}$$

where δ indicates the power angle; ω represents the electric angular velocity; *Tm* denotes the mechanical torque; *Te* is the electromagnetic torque; *Pm* is the mechanical power; *Pe* is the electromagnetic power; *TJ* represents the inertia time constant of the generator rotor; and *D* is the damping coefficient.
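As a concrete illustration, the swing equation (3) can be integrated numerically. The sketch below uses forward-Euler integration; all parameter values are hypothetical per-unit numbers chosen for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical per-unit parameters (illustrative only)
omega0 = 2 * np.pi * 50   # synchronous electrical angular velocity, rad/s
T_J, D = 12.8, 2.0        # inertia time constant (s), damping coefficient
P_m, P_e = 0.9, 0.8       # mechanical / electromagnetic power, p.u.

def swing_rhs(delta, omega):
    """Right-hand side of Eq. (3); delta in rad, omega in p.u."""
    d_delta = (omega - 1.0) * omega0
    d_omega = (P_m / omega - P_e / omega - D * omega) / T_J
    return d_delta, d_omega

# Simple forward-Euler integration over 1 s of simulated time
delta, omega, dt = 0.5, 1.0, 1e-3
for _ in range(1000):
    dd, dw = swing_rhs(delta, omega)
    delta += dt * dd
    omega += dt * dw
```

In practice a higher-order integrator (e.g., Runge–Kutta) would be preferred; the Euler step only illustrates how the two differential equations are coupled.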

The differential equation of generator model in (3) can be rewritten as

$$\begin{bmatrix} \dot{\delta} \\ \dot{\omega} \end{bmatrix} = \begin{bmatrix} 0 & \frac{180}{\pi}\omega_0 \\ 0 & -\frac{D}{T_J} \end{bmatrix} \begin{bmatrix} \delta \\ \omega \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{T_J} \end{bmatrix} \begin{bmatrix} -\frac{180}{\pi}\omega_0 \\ (P_m - P_e)/\omega \end{bmatrix} \tag{4}$$

where the unit of δ is degree. For convenience, one has

$$\begin{cases} X = \begin{bmatrix} \delta \\ \omega \end{bmatrix} \\ A = \begin{bmatrix} 0 & \frac{180}{\pi}\omega_0 \\ 0 & -D/T_J \end{bmatrix} \\ B = \begin{bmatrix} 1 & 0 \\ 0 & 1/T_J \end{bmatrix} \\ U = \begin{bmatrix} -\frac{180}{\pi}\omega_0 \\ (P_m - P_e)/\omega \end{bmatrix} \end{cases} \tag{5}$$

In general, the first three terms of Taylor expansion of general state transition matrix can meet the precision requirements

$$
\Phi(T) \approx I + AT + \frac{A^2 T^2}{2} \tag{6}
$$

$$\Gamma(T) = \left[\int_0^T \Phi(\tau)\,d\tau\right] B \tag{7}$$

where *T* represents the sampling period.
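The truncated discretization in (6) and (7) is easy to evaluate directly: integrating the truncation of (6) term by term gives $\Gamma(T) \approx (IT + AT^2/2 + A^2T^3/6)B$. A minimal sketch, with illustrative (not paper-supplied) parameter values:

```python
import numpy as np

# Hypothetical generator parameters (illustrative values only)
omega0 = 2 * np.pi * 50   # rad/s
T_J, D = 12.8, 2.0
T = 0.01                  # sampling period, s

# Continuous-time matrices A and B from Eq. (5)
A = np.array([[0.0, (180.0 / np.pi) * omega0],
              [0.0, -D / T_J]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0 / T_J]])

# Eq. (6): third-order Taylor truncation of the state transition matrix
Phi = np.eye(2) + A * T + (A @ A) * T**2 / 2

# Eq. (7): Gamma = [int_0^T Phi(tau) dtau] B, integrating the same
# truncation term by term: I*T + A*T^2/2 + A^2*T^3/6
Gamma = (np.eye(2) * T + A * T**2 / 2 + (A @ A) * T**3 / 6) @ B
```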

The power angle and angular velocity of generator can be measured directly by PMU, which are usually selected as the measurements

$$\begin{cases} Z = \begin{bmatrix} \delta \\ \omega \end{bmatrix} \\ C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{cases} \tag{8}$$

According to (4) to (7), the discrete form of generator DSE model can be obtained

$$\begin{cases} X_{k+1} = \Phi X_k + \Gamma U_k = f(X_k, U_k) \\ Z_k = C X_k = h(X_k, U_k) \end{cases} \tag{9}$$

#### *2.3. Error Analysis*

Equations (4) to (9) show that imprecision in the model parameters *TJ* and *D* and errors in the measurement of *Pe* and *Pm* lead to imprecision of the model. The value of *TJ* is generally known accurately. The value of *D* is generally very small and reflects the influence of mechanical friction and wind resistance on the operation of the generator. Therefore, the measurement errors of *Pe* and *Pm* are the main components of the generator process noise. In practice, it is difficult to obtain a precise value of *Pm*; it is assumed constant when there is only a governor but no shut-off or fast shut-off valve, and the change of *Pm* caused by governor action is treated as the process noise of the system.

According to [16,17], the measurement error is generally between 1% and 2%. Considering this influence, the process noise variance matrix is set as

$$Q = \operatorname{diag}\left(0,\ 0.0004P_{e0} + 0.0001\right) \tag{10}$$

where $P_{e0}$ is the per-unit value of the output electromagnetic power in steady state.

The measurement errors of δ and ω are mainly caused by PMU measurement errors. Ideally, the measurement error variance of δ is $2^2$ and that of ω is $10^{-6}$, so the measurement noise variance matrix of the system is set as

$$R = \operatorname{diag}(2^2, 10^{-6})\tag{11}$$

#### **3. Adaptive Ensemble Square Root Kalman Filter**

#### *3.1. Ensemble Kalman Filter*

To track the state trajectories of nonlinear systems, the extended Kalman filter and nonlinear filtering methods that combine the unscented transformation with numerical differences, and thus avoid computing the Jacobian matrix, are widely used. However, when the system nonlinearity is strong and the noise is non-Gaussian, the performance of these algorithms degrades significantly, resulting in unreliable state estimation. Therefore, the ensemble Kalman filter, a sub-optimal filtering method, has attracted more and more attention. The EnKF introduces ensemble prediction into Kalman filtering. A sequential Monte Carlo (SMC) simulation method is employed to create the initial set of samples representing the state statistics. The state transition function is applied to each sample in the set to obtain the new state set at the current time, and the mean and covariance of this set are then used to obtain the optimal estimate at the current time [18].

In the traditional Kalman filter, the calculation formulas of state prediction error covariance and analysis error covariance are presented as

$$\begin{cases} P^f = \overline{\left(X^f - X^t\right)\left(X^f - X^t\right)^T} \\ P^a = \overline{\left(X^a - X^t\right)\left(X^a - X^t\right)^T} \end{cases} \tag{12}$$

where *X* denotes the state vector, *P* represents the state covariance, and the superscript *T* denotes matrix transposition. The superscript *f* indicates the prediction state, the superscript *a* the analysis state, and the superscript *t* the real state.

In the EnKF, the posterior distribution of the state is approximated by the ensemble members, and the degree of approximation depends on the number of members in the ensemble. Assume the ensemble has *N* members, $X_E^f$ denotes the set of all predicted states, and $X_E^a$ denotes the set of all analyzed states. As *N* tends to infinity, the true value of the state can be replaced by the mean of the set of state estimates. It follows from [19] that the prediction error covariance and the analysis error covariance can be calculated by

$$\begin{cases} P^f \approx \dfrac{1}{N-1}\left(X_E^f - \overline{X_E^f}\right)\left(X_E^f - \overline{X_E^f}\right)^T \\ P^a \approx \dfrac{1}{N-1}\left(X_E^a - \overline{X_E^a}\right)\left(X_E^a - \overline{X_E^a}\right)^T \end{cases} \tag{13}$$

The EnKF uses the finite ensemble for the estimation of the state error covariance, thus avoiding the explicit propagation of the covariance matrix.

In the EnKF process, first, according to the known prior information of the state, an initial state set of *N* members is obtained by the SMC method; a prediction set $X_E^f$, whose members are denoted $X^{f_i}$, is then obtained by propagating each member of the initial state set. Second, the observation set of the state is generated by the SMC method as well, and the prediction state set is corrected using the covariances derived from $X_E^f$ to obtain the analysis set $X_E^a$, whose members are denoted $X^{a_i}$; the average of the state analysis set $X_E^a$ is the optimal state estimate. The analysis set of states is then used for the next prediction [20,21].

The iteration formula of ensemble Kalman filter algorithm is given as follows

(1) State prediction:

$$X_{k+1}^{f_i} = f\left(X_k^{a_i}, U_k\right) + W_k^i \tag{14}$$

$$\overline{X}_{k+1}^{f} = \frac{1}{N}\sum_{i=1}^{N} X_{k+1}^{f_i} \tag{15}$$

$$E_{X,k+1}^{f} = \left(\ X_{k+1}^{f_1} - \overline{X}_{k+1}^{f} \quad \cdots \quad X_{k+1}^{f_N} - \overline{X}_{k+1}^{f}\ \right) \tag{16}$$

$$E_{Z,k+1}^{f} = \left(\ Z_{k+1}^{f_1} - \overline{Z}_{k+1}^{f} \quad \cdots \quad Z_{k+1}^{f_N} - \overline{Z}_{k+1}^{f}\ \right) \tag{17}$$

$$P_{XZ,k+1}^{f} = P_{k+1}^{f} H_{k+1}^{T} = \frac{1}{N-1} E_{X,k+1}^{f} \left(E_{Z,k+1}^{f}\right)^{T} \tag{18}$$

$$P_{ZZ,k+1}^{f} = H_{k+1} P_{k+1}^{f} H_{k+1}^{T} = \frac{1}{N-1} E_{Z,k+1}^{f} \left(E_{Z,k+1}^{f}\right)^{T} \tag{19}$$

(2) State update:

$$K_{k+1} = P_{XZ,k+1}^{f} \left(P_{ZZ,k+1}^{f}\right)^{-1} \tag{20}$$

$$X_{k+1}^{a_i} = X_{k+1}^{f_i} + K_{k+1}\left(Z_{k+1}^{f_i} - h\left(X_{k+1}^{f_i}\right)\right) \tag{21}$$

$$\overline{X}_{k+1}^{a} = \frac{1}{N}\sum_{i=1}^{N} X_{k+1}^{a_i} \tag{22}$$

where $X_{k+1}^{f_i}$ and $Z_{k+1}^{f_i}$ are the *i*-th members of the prediction state set and the measurement state set, respectively; $X_k^{a_i}$ is the *i*-th member of the analysis state set; $\overline{X}_{k+1}^{f}$, $\overline{X}_{k+1}^{a}$ and $\overline{Z}_{k+1}^{f}$ are the mean values of the predicted, analyzed and measured state sets at time $k+1$, respectively; $E_{X,k+1}^{f}$ and $E_{Z,k+1}^{f}$ are the deviation matrices between the members and the means of the predicted and measured state sets, respectively; $P_{XZ,k+1}^{f}$ is the cross-covariance of the prediction state error and the observation state error at time $k+1$; and $P_{ZZ,k+1}^{f}$ is the covariance of the observation state error at time $k+1$.
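A minimal sketch of one EnKF iteration is given below. It follows the structure of Eqs. (14)–(22) in the common perturbed-observation formulation, where the sample covariance of the predicted measurements is augmented with *R* and each member assimilates its own perturbed observation. The functions `f` and `h`, the ensemble size, and all numeric values in the usage example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_step(Xa, U, f, h, Q, R, z):
    """One perturbed-observation EnKF iteration in the spirit of
    Eqs. (14)-(22). Xa: (n, N) analysis ensemble at time k;
    z: measurement vector at time k+1."""
    n, N = Xa.shape
    m = len(z)
    # Eq. (14): propagate every member and add sampled process noise
    W = rng.multivariate_normal(np.zeros(n), Q, size=N).T
    Xf = np.column_stack([f(Xa[:, i], U) for i in range(N)]) + W
    # Eqs. (15)-(17): ensemble means and deviation matrices
    Ex = Xf - Xf.mean(axis=1, keepdims=True)
    Zf = np.column_stack([h(Xf[:, i]) for i in range(N)])
    Ez = Zf - Zf.mean(axis=1, keepdims=True)
    # Eqs. (18)-(19): sample covariances (R added to account for the
    # observation noise, as in the perturbed-observation variant)
    Pxz = Ex @ Ez.T / (N - 1)
    Pzz = Ez @ Ez.T / (N - 1) + R
    # Eq. (20): Kalman gain
    K = Pxz @ np.linalg.inv(Pzz)
    # Eq. (21): each member assimilates its own perturbed observation
    V = rng.multivariate_normal(np.zeros(m), R, size=N).T
    Xa_new = Xf + K @ (z[:, None] + V - Zf)
    # Eq. (22): the ensemble mean is the state estimate
    return Xa_new, Xa_new.mean(axis=1)
```

For example, on a toy linear system with `f = lambda x, u: Phi @ x` and `h = lambda x: x`, the returned mean is pulled from the forecast toward the measurement.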

The EnKF algorithm not only overcomes the restriction of the Kalman filter to linear systems, but also avoids the large computational cost of propagating the prediction error covariance. Tangent linear and adjoint models are no longer needed. The method also provides, at the same time, the optimal value of the estimate and a confidence interval for it.

The EnKF is a sub-optimal estimation method that realizes the traditional Kalman filter with an ensemble. The prediction ensemble is used to estimate the state prediction error covariance that enters the Kalman gain. If every member of the ensemble is updated with the same observation and the same gain, the ensemble systematically underestimates the state update error covariance, which can even lead to degradation of subsequent analyses and filter divergence. This problem can be alleviated by adding random perturbations with the correct statistics to the measurements. However, the introduction of random perturbations inevitably adds a sampling error related to the measurement error covariance. This additional error reduces the accuracy of the observation error covariance and hence the estimation accuracy.

#### *3.2. Ensemble Square Root Kalman Filter*

In the EnSRF, the calculation of the analysis state is divided into two parts [22]: the update of the mean of the analysis ensemble, and the update of the deviations of the analysis ensemble samples from the mean

$$\overline{X}^{a} = \overline{X}^{f} + K\left(\overline{Z} - H\overline{X}^{f}\right) \tag{23}$$

$$X'^{a} = X'^{f} + \overline{K}\left(Z' - HX'^{f}\right) \tag{24}$$

where $\overline{X}^{a}$ represents the mean of the analysis state set, $X'^{a}$ is the deviation of the analysis-set samples from the mean, $\overline{X}^{f}$ denotes the mean of the prediction state set, $X'^{f}$ is the deviation of the prediction-set samples from the mean, $\overline{Z}$ is the observation obtained by the PMU, $Z'$ is the random observation perturbation obeying the probability distribution of the observation error, $K$ is the gain of the traditional Kalman filter, and $\overline{K}$ is the gain used to update the set deviations.

In the EnKF, when all ensemble samples are updated with the same observation and the same gain [23], i.e., $Z' = 0$ and $\overline{K} = K$, the covariance of the analysis state set can be expressed as (25). In the traditional Kalman filter, the covariance of the state analysis error is given by (26). Comparing (25) and (26), the value of $P^{a}$ in the EnKF lacks the term $KRK^{T}$, so the EnKF suffers from systematic underestimation

$$P^{a} = (I - KH)P^{f}(I - KH)^{T} \tag{25}$$

$$P^{a} = (I - KH)P^{f}(I - KH)^{T} + KRK^{T} = (I - KH)P^{f} \tag{26}$$

EnSRF eliminates the problem of systematic underestimation of EnKF without disturbing the observed values [22–24]. In EnSRF, (24) can be reformulated as

$$X'^a = X'^f - \overline{K} H X'^f = \left( I - \overline{K} H \right) X'^f$$

Substituting $\overline{K}$ into (25), the analysis error covariance of the ensemble must satisfy (26), which yields (27). Furthermore, letting $\overline{K} = \alpha K$ with $\alpha$ a constant, one can further derive

$$\left(I - \overline{K}H\right)P^{f}\left(I - \overline{K}H\right)^{T} = (I - KH)P^{f} \tag{27}$$

$$\frac{HP^{f}H^{T}}{HP^{f}H^{T} + R}\,\overline{K}\,\overline{K}^{T} - K\overline{K}^{T} - \overline{K}K^{T} + KK^{T} = 0 \tag{28}$$

$$\frac{H P^f H^T}{H P^f H^T + R} \alpha^2 - 2\alpha + 1 = 0\tag{29}$$

$$\alpha = \left( 1 + \sqrt{\frac{R}{HP^f H^T + R}} \right)^{-1} \tag{30}$$
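The derivation above can be checked numerically: with hypothetical scalar values for $P^f$, $H$, and $R$, the $\alpha$ of (30) solves the quadratic (29), and the deviation update then reproduces the exact analysis covariance of (27).

```python
import math

# Scalar check of Eqs. (27), (29), (30) with hypothetical numbers:
# P^f = 4, H = 1, R = 1, so HP^fH^T = 4.
Pf, H, R = 4.0, 1.0, 1.0
s = H * Pf * H                                  # HP^fH^T
K = Pf * H / (s + R)                            # traditional Kalman gain
alpha = 1.0 / (1.0 + math.sqrt(R / (s + R)))    # Eq. (30)
Kbar = alpha * K                                # mean-deviation gain

# alpha solves the quadratic (29) ...
residual = (s / (s + R)) * alpha ** 2 - 2 * alpha + 1
# ... so the deviation update reproduces the exact analysis
# covariance, Eq. (27)
lhs = (1 - Kbar * H) * Pf * (1 - Kbar * H)
rhs = (1 - K * H) * Pf
```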

The traditional Kalman filter is a linear unbiased minimum-variance estimator under standard conditions. It can obtain good estimation results when the dynamic system model and noise statistics are known. In practical applications, however, it is difficult to obtain an accurate system model and accurate noise statistics, which affects the accuracy of the Kalman filter estimates.

#### *3.3. Sage–Husa Algorithm*

In 1969, Sage and Husa presented an improved noise-adaptive filtering algorithm. The algorithm can compute both the process noise variance matrix *Q* and the measurement noise variance matrix *R* [25,26] online and in real time when the system noise mean and covariance are unknown. Compared with adaptive filtering algorithms based on the maximum likelihood criterion, this algorithm adds a forgetting factor, which strengthens the influence of newer observations on the filter and weakens the influence of older observations on the results. The algorithm effectively reduces the influence of model error on filtering, restrains filter divergence, and improves estimation accuracy [27].

The steps of Sage–Husa algorithm are given as:

(1) Filtering equation:

$$\hat{X}_{k+1|k} = F_k \hat{X}_{k|k} + \hat{q}_k \tag{31}$$

$$P_{k+1|k} = F_k P_{k|k} F_k^{T} + \hat{Q}_k \tag{32}$$

$$K\_{k+1} = P\_{k+1|k} H\_{k+1}^T \left( H\_{k+1} P\_{k+1|k} H\_{k+1}^T + \hat{R}\_{k+1} \right)^{-1} \tag{33}$$

$$e_{k+1} = Z_{k+1} - H_{k+1}\hat{X}_{k+1|k} - \hat{r}_{k+1} \tag{34}$$

$$\hat{X}_{k+1|k+1} = \hat{X}_{k+1|k} + K_{k+1} e_{k+1} \tag{35}$$

$$P\_{k+1|k+1} = (I - K\_{k+1}H\_{k+1})P\_{k+1|k} \tag{36}$$

(2) Time-varying noise estimator:

$$\hat{r}_{k+1} = (1 - d_{k+1})\hat{r}_k + d_{k+1}\left(Z_{k+1} - H_{k+1}\hat{X}_{k+1|k}\right) \tag{37}$$

$$\hat{R}_{k+1} = (1 - d_{k+1})\hat{R}_k + d_{k+1}\left(e_{k+1}e_{k+1}^{T} - H_{k+1}P_{k+1|k}H_{k+1}^{T}\right) \tag{38}$$

$$\hat{q}_{k+1} = (1 - d_{k+1})\hat{q}_k + d_{k+1}\left(\hat{X}_{k+1|k+1} - F_{k+1}\hat{X}_{k|k}\right) \tag{39}$$

$$\begin{aligned} \hat{Q}_{k+1} = (1 - d_{k+1})\hat{Q}_k + d_{k+1}\big(&K_{k+1}e_{k+1}e_{k+1}^{T}K_{k+1}^{T} \\ &+ P_{k+1|k+1} - F_{k+1}P_{k+1|k+1}F_{k+1}^{T}\big) \end{aligned} \tag{40}$$

where *q*<sup>ˆ</sup>*k*+<sup>1</sup> represents the mean of process noise at *k* + 1; *<sup>r</sup>*<sup>ˆ</sup>*k*+<sup>1</sup> is the mean of *k* + 1, *ek*+1 is the residual at time instant *k* + 1; *dk*+<sup>1</sup> = (1 − *b*)/(1 − *<sup>b</sup>k*+<sup>2</sup>), *b* is the forgetting factor and 0 < *b* < 1.

Generally, the Sage–Husa algorithm is expected to estimate the process noise $\hat{Q}_{k+1}$ and the measurement noise $\hat{R}_{k+1}$ simultaneously. In fact, the algorithm cannot estimate both at the same time. It can be seen from (38) and (40) that the calculations of $\hat{Q}_{k+1}$ and $\hat{R}_{k+1}$ both depend on $e_{k+1}$; a change of $e_{k+1}$ affects both estimates at once, so the accuracy of the results cannot be guaranteed. In addition, there are minus signs in the calculations of $\hat{Q}_{k+1}$ and $\hat{R}_{k+1}$, which may destroy the positive semidefiniteness of $\hat{Q}_{k+1}$ and the positive definiteness of $\hat{R}_{k+1}$, leading to filter divergence.

In the electromechanical transient process of the generator, the influence of the measurement noise $\hat{R}_{k+1}$ on the estimation is more important. Generally speaking, the process noise *Q* is constant. It is assumed that both the process noise and the measurement noise are zero-mean Gaussian white noise, i.e., $r = 0$ and $q = 0$.

The Sage–Husa algorithm can then be simplified as follows.

(1) Filtering equation:

$$\hat{X}_{k+1|k} = F_k \hat{X}_{k|k} \tag{41}$$

$$P_{k+1|k} = F_k P_{k|k} F_k^{T} + Q \tag{42}$$

$$K_{k+1} = P_{k+1|k} H_{k+1}^{T} \left(H_{k+1} P_{k+1|k} H_{k+1}^{T} + \hat{R}_{k+1}\right)^{-1} \tag{43}$$

$$e_{k+1} = Z_{k+1} - H_{k+1}\hat{X}_{k+1|k} \tag{44}$$

$$\hat{X}_{k+1|k+1} = \hat{X}_{k+1|k} + K_{k+1} e_{k+1} \tag{45}$$

$$P_{k+1|k+1} = (I - K_{k+1}H_{k+1})P_{k+1|k} \tag{46}$$

(2) Measurement noise estimator:

$$\hat{R}_{k+1} = (1 - d_{k+1})\hat{R}_k + d_{k+1}\left(e_{k+1}e_{k+1}^{T} - H_{k+1}P_{k+1|k}H_{k+1}^{T}\right) \tag{47}$$

In order to ensure that the measurement noise covariance remains positive definite, (47) is replaced by (48). As can be seen from the formula, a certain amount of filtering precision is sacrificed to ensure the stability of the filter.

$$\hat{R}_{k+1} = (1 - d_{k+1})\hat{R}_k + d_{k+1}\left(e_{k+1}e_{k+1}^{T}\right) \tag{48}$$

#### *3.4. Adaptive Ensemble Square Root Kalman Filter*

The proposed AEnSRF combines the ensemble square root Kalman filter with the Sage–Husa algorithm. Compared with the ensemble Kalman filter, the proposed AEnSRF not only has higher filtering precision but can also be corrected effectively when the measurement noise deviates, so as to suppress filter divergence and improve estimation accuracy.

The specific steps are as follows:

(1) **Ensemble initialization:** according to the known prior statistics of the state, an initial ensemble of *N* state samples of dimension *n* is generated by the SMC method

$$X^0 = \begin{bmatrix} x_{11} & x_{12} & x_{13} & \cdots & x_{1N} \\ x_{21} & x_{22} & x_{23} & \cdots & x_{2N} \\ \vdots & \vdots & \vdots & & \vdots \\ x_{n1} & x_{n2} & x_{n3} & \cdots & x_{nN} \end{bmatrix} \tag{49}$$

#### (3) **State prediction:**

$$X_{k+1}^{f_i} = f\left(X_k^{a_i}, U_k\right) + W_k^i \tag{50}$$

$$\overline{X}_{k+1}^{f} = \frac{1}{N}\sum_{i=1}^{N} X_{k+1}^{f_i} \tag{51}$$

$$E_{X,k+1}^{f} = \left(\ X_{k+1}^{f_1} - \overline{X}_{k+1}^{f} \quad \cdots \quad X_{k+1}^{f_N} - \overline{X}_{k+1}^{f}\ \right) \tag{52}$$

$$E_{Z,k+1}^{f} = \left(\ Z_{k+1}^{f_1} - \overline{Z}_{k+1}^{f} \quad \cdots \quad Z_{k+1}^{f_N} - \overline{Z}_{k+1}^{f}\ \right) \tag{53}$$

$$P_{XZ,k+1}^{f} = P_{k+1}^{f} H_{k+1}^{T} = \frac{1}{N-1} E_{X,k+1}^{f} \left(E_{Z,k+1}^{f}\right)^{T} \tag{54}$$

$$P_{ZZ,k+1}^{f} = H_{k+1} P_{k+1}^{f} H_{k+1}^{T} = \frac{1}{N-1} E_{Z,k+1}^{f} \left(E_{Z,k+1}^{f}\right)^{T} \tag{55}$$

#### (4) **Calculating Kalman gain:**

$$K_{k+1} = P_{XZ,k+1}^{f} \left(P_{ZZ,k+1}^{f}\right)^{-1} \tag{56}$$

#### (5) **Calculate the measurement noise covariance:**

$$e_{k+1} = Z_{k+1} - \overline{Z}_{k+1}^{f} \tag{57}$$

$$d_{k+1} = (1 - b)/(1 - b^{k+2}) \tag{58}$$

$$\hat{R}_{k+1} = (1 - d_{k+1})\hat{R}_k + d_{k+1}\left(e_{k+1}e_{k+1}^{T}\right) \tag{59}$$

#### (6) **Calculate the mean deviation gain of the updated set:**

$$\alpha = \left[1 + \sqrt{\hat{R}_{k+1}\left(P_{ZZ,k+1}^{f}\right)^{-1}}\right]^{-1} \tag{60}$$

$$\overline{K}_{k+1} = \alpha K_{k+1} \tag{61}$$

(7) **State update:**

$$\overline{X}_{k+1}^{a} = \overline{X}_{k+1}^{f} + K_{k+1} e_{k+1} \tag{62}$$

$$X_{k+1}^{\prime a_i} = X_{k+1}^{\prime f_i} - \overline{K}_{k+1} h\left(X_{k+1}^{\prime f_i}\right) \tag{63}$$

$$\overline{X}_{k+1}^{\prime a} = \frac{1}{N}\sum_{i=1}^{N} X_{k+1}^{\prime a_i} \tag{64}$$

$$X_{k+1}^{a} = \overline{X}_{k+1}^{a} + \overline{X}_{k+1}^{\prime a} \tag{65}$$
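The measurement update of steps (3) to (7) can be sketched for a single (scalar) measurement channel as follows. The ensemble sizes, noise values, and the way the predicted measurements `zf` are formed are illustrative assumptions; in particular, the analysis ensemble is reassembled member by member from the updated mean and the updated deviations, which is one common way to realize the square-root update.

```python
import numpy as np

def aensrf_update(Xf, zf, R_hat, z, k, b=0.95):
    """One AEnSRF measurement update for a scalar measurement,
    in the spirit of Eqs. (51)-(65). Xf: (n, N) forecast ensemble;
    zf: length-N predicted measurements; z: scalar measurement."""
    n, N = Xf.shape
    # Eqs. (51)-(53): means and deviation matrices
    xf_bar = Xf.mean(axis=1, keepdims=True)
    Ex = Xf - xf_bar
    zf_bar = zf.mean()
    Ez = (zf - zf_bar)[None, :]
    # Eqs. (54)-(56): sample covariances and Kalman gain
    Pxz = Ex @ Ez.T / (N - 1)               # shape (n, 1)
    Pzz = (Ez @ Ez.T).item() / (N - 1)      # scalar
    K = Pxz / Pzz                           # Eq. (56)
    # Eqs. (57)-(59): innovation and adaptive noise estimate
    e = z - zf_bar
    d = (1 - b) / (1 - b ** (k + 2))        # Eq. (58)
    R_hat = (1 - d) * R_hat + d * e * e     # Eq. (59)
    # Eqs. (60)-(61): mean-deviation gain
    alpha = 1.0 / (1.0 + np.sqrt(R_hat / Pzz))
    Kbar = alpha * K
    # Eqs. (62)-(63): update the mean and the deviations separately
    xa_bar = xf_bar + K * e
    Ea = Ex - Kbar @ Ez
    # Reassemble the analysis ensemble (mean plus deviations)
    Xa = xa_bar + Ea
    return Xa, xa_bar.ravel(), R_hat
```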

For convenience, the proposed AEnSRF method's implementation flow chart is presented in Figure 1.

**Figure 1.** The flowchart of adaptive ensemble square root Kalman filter (AEnSRF) algorithm.

#### **4. Case Study**

#### *4.1. Basic Test Analysis*

The IEEE 9-bus system is first taken as the test system in this paper. Its single-line diagram is given in Figure 2. The classical second-order generator model considering the governor is adopted in the simulation. The inertia time constants of G1, G2, and G3 are 47.28 s, 12.8 s, and 6.02 s, respectively, and the damping coefficient is set to 2. The initial values of the generator state variables and control variables are set to the values of the system's stable operation before the fault occurs. Suppose that in the IEEE 9-bus system a three-phase short-circuit fault occurs at the sending end of branch 4–8 at 0.8 s; after 0.36 s, the circuit breakers on both sides of the branch trip and the fault is cleared. The whole simulation time is 6 s.

**Figure 2.** The single-line diagram of IEEE-9 bus test system.

The real values of the power angle and angular velocity of the generators are obtained with the Bonneville Power Administration (BPA) simulation software, and the measured values are obtained by adding zero-mean normally distributed random noise to the real values. In this paper, the DSE algorithm is implemented in Matlab. The initial values of the estimated generator state variables are set to the stable operating values, and the initial state error covariance matrix is set to the identity matrix.

In order to further compare the difference of filtering results of each algorithm, the root mean square error (*RMSE*) is used

$$RMSE = \sqrt{\frac{1}{n}\sum_{k=1}^{n}\left(\hat{X}_{k|k} - X_k\right)^2} \tag{66}$$

where *n* is the number of sampling instants.
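A direct implementation of (66) for a sampled scalar trajectory:

```python
import numpy as np

def rmse(x_est, x_true):
    """Eq. (66): root mean square error over n sampling instants."""
    x_est, x_true = np.asarray(x_est, float), np.asarray(x_true, float)
    return float(np.sqrt(np.mean((x_est - x_true) ** 2)))
```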

The process noise variance matrix and the measurement noise variance matrix of the generators are selected as the corresponding values in [10,11]. Figure 3 compares the filtering results of G1 obtained with the EnKF and AEnSRF algorithms under normal noise. Table 1 gives the RMSE results of the two algorithms.


**Figure 3.** Comparison of G1 state filtering results under ensemble Kalman filter (EnKF) and AEnSRF: (**a**) Power angle dynamic estimation curve of G1; (**b**) electric angular velocity dynamic estimation curve of G1.


**Table 1.** The RMSE results of EnKF and AEnSRF.

As can be noted from Figure 3 and Table 1, the EnKF and AEnSRF both show good filtering performance when the measurement noise is unbiased. Because the observations in the AEnSRF are not perturbed, the underestimation problem of the EnKF is avoided, and the filtering precision of the AEnSRF is improved compared with the EnKF.

Figure 4 shows how the noise estimator in the AEnSRF tracks the measurement noise when the measurement noise is unbiased.

**Figure 4.** Measurement noise tracking curve of G1 under AEnSRF: (**a**) Power angle measurement noise tracking curve of G1; (**b**) angular velocity measurement noise tracking curve of G1.

From the simulation, it can be seen that when the AEnSRF tracks the power angle measurement noise, the estimated standard deviation stabilizes at 1.9 degrees after 0.14 s, and the angular velocity noise is very small (almost 0). Combining this with Figure 4, it can be remarked that the AEnSRF tracks the measurement noise accurately.

#### *4.2. Noise Measurement and Analysis on the IEEE 9-Bus System*

In this subsection, G2 is taken as an example to test the estimation performance of the AEnSRF on the generator state as well as the tracking of the measurement noise when it is biased. Assuming that the actual standard deviation of the power angle measurement noise is 3 degrees, the value is set to 2 degrees in the filtering program to test the tracking performance of the AEnSRF on the power angle measurement noise.

Figure 5 depicts the comparison results for G2 obtained with the EnKF and AEnSRF algorithms, respectively. It can be noted from Figure 5 that both show good estimation results under the measurement noise deviation condition.

**Figure 5.** Comparisons of G2 state filtering results between EnKF and AEnSRF in the presence of measurement noise deviation: (**a**) Power angle dynamic estimation curve of G2; (**b**) electric angular velocity dynamic estimation curve of G2.

The simulation results of G2 show that under the AEnSRF algorithm, the RMSE of the power angle estimate is 0.0516 and the RMSE of the electric angular velocity estimate is 0.0011; under the EnKF algorithm, the corresponding RMSEs are 0.0536 and 0.0012. Comparing the two algorithms, the AEnSRF has higher filtering precision.

Figure 6 shows the AEnSRF tracking curve of the measurement noise when the measurement noise is biased. As can be noticed from this figure, when the power angle measurement noise is biased, the Sage–Husa noise estimator with online real-time noise estimation introduced in the AEnSRF makes the estimated standard deviation of the power angle measurement noise stabilize around 3 degrees at about 1.1 s.

**Figure 6.** Tracking curve of G2 measurement noise by AEnSRF in the case of measurement noise deviation: (**a**) Power angle measurement noise tracking curve of G2; (**b**) angular velocity measurement noise tracking curve of G2.

#### *4.3. Noise Measurement and Analysis on a Real Power System*

For further verification of the proposed algorithm's effectiveness, simulation tests are also carried out on a real power grid system. One generator outgoing line has a short-circuit fault, and the generator parameters are based on actual data. The initial value of the state variable is the steady-state value, and the initial error covariance matrix is the identity matrix. Assuming that the actual standard deviation of the power angle measurement noise is 2.5 degrees, the value is set to 2 degrees in the filtering program.

Comparing the EnKF with the proposed method, Figures 7 and 8 show the simulation results of generator DSE in the presence of measurement noise deviation. The simulation results illustrate that under the AEnSRF, the RMSE of the power angle estimate is 0.0435 and the RMSE of the electric angular velocity estimate is 0.0010; under the EnKF, the corresponding RMSEs are 0.0438 and 0.0013.

**Figure 7.** Contrast chart of EnKF and AEnSRF for generator state filtering with deviation of measurement noise: (**a**) Power angle dynamic estimation curve of generator; (**b**) dynamic angular velocity estimation curve of generator.


**Figure 8.** Tracking curve of AEnSRF for generator measurement noise with deviation of measurement noise: (**a**) Power angle measurement noise tracking curve of generator; (**b**) angular velocity measurement noise tracking curve of generator.

Compared with the EnKF, the AEnSRF has higher filtering precision. In the AEnSRF, the noise estimator tracks the power angle measurement noise within 0.3 s and stabilizes at about 2.5 degrees. This shows that the AEnSRF has a good tracking effect on the measurement noise.
