*2.3. Online Identification of Parameters for a Second-Order RC Circuit*

From Figure 1, the KVL (Kirchhoff's voltage law) equation of the circuit can be obtained:

$$U\_{\rm OC} = U\_{p1} + U\_{p2} + R\_0I\_L + U\_L \tag{5}$$

According to Equation (1), taking the Laplace transform on both sides:

$$U\_{\rm OC}(s) = \left(\frac{R\_{p1}}{R\_{p1}C\_{p1}s + 1} + \frac{R\_{p2}}{R\_{p2}C\_{p2}s + 1} + R\_0\right)I(s) + U\_L(s) \tag{6}$$

Let the time constants be *τ*<sub>p1</sub> = *R*<sub>p1</sub>*C*<sub>p1</sub> and *τ*<sub>p2</sub> = *R*<sub>p2</sub>*C*<sub>p2</sub>, and let *a* = *τ*<sub>p1</sub>*τ*<sub>p2</sub>, *b* = *τ*<sub>p1</sub> + *τ*<sub>p2</sub>, *c* = *R*<sub>p1</sub> + *R*<sub>p2</sub> + *R*<sub>0</sub>, and *d* = *R*<sub>p1</sub>*τ*<sub>p2</sub> + *R*<sub>p2</sub>*τ*<sub>p1</sub> + *bR*<sub>0</sub>; then, Equation (6) can be simplified as:

$$aU\_{\rm OC}s^2 + bU\_{\rm OC}s + U\_{\rm OC} = aR\_0Is^2 + dIs + cI + aU\_Ls^2 + bU\_Ls + U\_L \tag{7}$$

Letting *U* = *U*<sub>OC</sub> − *U*<sub>L</sub>, Equation (8) can be obtained by discretizing Equation (7) with sampling period *T* as follows:

$$U(k) = \frac{-bT - 2a}{T^2 + bT + a}U(k-1) + \frac{a}{T^2 + bT + a}U(k-2) + \frac{cT^2 + dT + aR\_0}{T^2 + bT + a}I(k) + \frac{-dT - 2aR\_0}{T^2 + bT + a}I(k-1) + \frac{aR\_0}{T^2 + bT + a}I(k-2) \tag{8}$$

After simplification, Equation (8) becomes:

$$U(k) = k\_1U(k-1) + k\_2U(k-2) + k\_3I(k) + k\_4I(k-1) + k\_5I(k-2) \tag{9}$$

By applying FFRLS to Equation (9), the parameter vector *θ*(*k*) = (*k*<sub>1</sub>, *k*<sub>2</sub>, *k*<sub>3</sub>, *k*<sub>4</sub>, *k*<sub>5</sub>) can be calculated, and the circuit model parameters *R*<sub>0</sub>, *R*<sub>p1</sub>, *R*<sub>p2</sub>, *C*<sub>p1</sub>, and *C*<sub>p2</sub> can then be deduced from the identification results as follows:

$$\begin{aligned} \tau\_{p1} &= \frac{T}{2(k\_1 + k\_2 + 1)} [\sqrt{k\_1^2 - 4k\_2} - k\_1 - 2k\_2] \\ \tau\_{p2} &= -\frac{T}{2(k\_1 + k\_2 + 1)} [\sqrt{k\_1^2 - 4k\_2} + k\_1 + 2k\_2] \end{aligned} \tag{10}$$

Combining Equations (4), (8) and (10), Equation (11) can be obtained as follows:

$$\begin{array}{l} R\_0 = \frac{k\_5}{k\_2} \\ R\_{p1} = \frac{\tau\_{p1}\frac{k\_3 + k\_4 + k\_5}{k\_1 + k\_2 + 1} + \tau\_{p2}R\_0 + \frac{T(k\_4 + 2k\_5)}{k\_1 + k\_2 + 1}}{\tau\_{p1} - \tau\_{p2}} \\ R\_{p2} = \frac{k\_3 + k\_4 + k\_5}{k\_1 + k\_2 + 1} - R\_{p1} - R\_0 \\ C\_{p1} = \frac{\tau\_{p1}}{R\_{p1}} \\ C\_{p2} = \frac{\tau\_{p2}}{R\_{p2}} \end{array} \tag{11}$$
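The mapping from the identified coefficients to the circuit parameters in Equations (10) and (11) can be sketched in Python as follows. This is a minimal illustration (the function name is ours, and it assumes *k*<sub>1</sub>–*k*<sub>5</sub> and *T* are already available from FFRLS); note that the two RC branches are distinguished only by their time constants, so they may be recovered in either order:

```python
import math

def recover_parameters(k1, k2, k3, k4, k5, T):
    """Recover R0, Rp1, Rp2, Cp1, Cp2 from the identified coefficients
    k1..k5 of Equation (9), following Equations (10) and (11)."""
    denom = k1 + k2 + 1.0
    disc = math.sqrt(k1 * k1 - 4.0 * k2)            # sqrt(k1^2 - 4*k2) in Eq. (10)
    tau_p1 = T / (2.0 * denom) * (disc - k1 - 2.0 * k2)
    tau_p2 = -T / (2.0 * denom) * (disc + k1 + 2.0 * k2)
    R0 = k5 / k2                                    # Eq. (11)
    c = (k3 + k4 + k5) / denom                      # equals Rp1 + Rp2 + R0
    Rp1 = (tau_p1 * c + tau_p2 * R0 + T * (k4 + 2.0 * k5) / denom) / (tau_p1 - tau_p2)
    Rp2 = c - Rp1 - R0
    Cp1 = tau_p1 / Rp1
    Cp2 = tau_p2 / Rp2
    return R0, Rp1, Rp2, Cp1, Cp2
```

A round trip through Equation (8) (building *k*<sub>1</sub>–*k*<sub>5</sub> from known parameters and recovering them) confirms that Equations (10) and (11) invert the identification exactly, up to the ordering of the two RC branches.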

#### **3. SOC Estimation Method Based on AEKF**

*3.1. Adaptive Extended Kalman Filter State Equation*

The Kalman filter algorithm applies to linear systems. To use it in a nonlinear system, the nonlinear state equation must be locally approximated by a linear one via a Taylor expansion; the resulting method is called the extended Kalman filter (EKF) algorithm. A nonlinear system can be represented by Equation (12):

$$\begin{cases} \mathbf{x}\_{k+1} = f(\mathbf{x}\_k, \boldsymbol{u}\_k) + \boldsymbol{w}\_k \\ y\_k = g(\mathbf{x}\_k, \boldsymbol{u}\_k) + \boldsymbol{v}\_k \end{cases} \tag{12}$$

where *x*<sub>k</sub> is the state variable, *y*<sub>k</sub> is the observation variable, *f*(*x*<sub>k</sub>, *u*<sub>k</sub>) is the nonlinear state function, *g*(*x*<sub>k</sub>, *u*<sub>k</sub>) is the nonlinear observation function, and *w*<sub>k</sub> and *v*<sub>k</sub> are Gaussian white noise with zero mean and covariances *Q*<sub>k</sub> and *R*<sub>k</sub>, respectively. The EKF algorithm exploits the local linearity of nonlinear functions by linearizing both of them. Let *x̂*<sub>k/k</sub> denote the optimal estimate of the state variable at time *k*. Performing a first-order Taylor expansion of the nonlinear state function *f*(*x*<sub>k</sub>, *u*<sub>k</sub>) around *x̂*<sub>k/k</sub>, Equation (13) can be obtained as follows:

$$f(\mathbf{x}\_k, u\_k) = f(\mathbf{\hat{x}}\_{k/k}, u\_k) + \frac{\partial f}{\partial \mathbf{x}\_k}|\_{\mathbf{x}\_k = \mathbf{\hat{x}}\_{k/k}} (\mathbf{x}\_k - \mathbf{\hat{x}}\_{k/k}) + o(\mathbf{x}\_k - \mathbf{\hat{x}}\_{k/k}) \tag{13}$$

In Equation (13), ignoring the higher-order term *o*(*x*<sub>k</sub> − *x̂*<sub>k/k</sub>) and letting *F*<sub>k</sub> denote the Jacobian *∂f*/*∂x*<sub>k</sub> evaluated at *x*<sub>k</sub> = *x̂*<sub>k/k</sub>, the state equation in Equation (12) can be simplified as follows:

$$\mathbf{x}\_{k+1} = f(\mathbf{\hat{x}}\_{k/k}, \boldsymbol{u}\_k) + F\_k(\mathbf{x}\_k - \mathbf{\hat{x}}\_{k/k}) + \boldsymbol{w}\_k \tag{14}$$

Expanding the nonlinear observation function *g*(*x*<sub>k</sub>, *u*<sub>k</sub>) in a first-order Taylor series around the prior state estimate *x̂*<sub>k/k−1</sub> at time *k*, Equation (15) can be obtained as follows:

$$g(\mathbf{x}\_k, \boldsymbol{u}\_k) = g(\mathbf{\hat{x}}\_{k/k-1}, \boldsymbol{u}\_k) + \frac{\partial g}{\partial \mathbf{x}\_k}\Big|\_{\mathbf{x}\_k = \mathbf{\hat{x}}\_{k/k-1}}(\mathbf{x}\_k - \mathbf{\hat{x}}\_{k/k-1}) + o(\mathbf{x}\_k - \mathbf{\hat{x}}\_{k/k-1}) \tag{15}$$

Similarly, ignoring the higher-order term *o*(*x*<sub>k</sub> − *x̂*<sub>k/k−1</sub>) in Equation (15) and letting *G*<sub>k</sub> denote the Jacobian *∂g*/*∂x*<sub>k</sub> evaluated at *x*<sub>k</sub> = *x̂*<sub>k/k−1</sub>, the observation equation in Equation (12) can be simplified as follows:

$$y\_k = g(\mathbf{\hat{x}}\_{k/k-1}, \boldsymbol{u}\_k) + G\_k(\mathbf{x}\_k - \mathbf{\hat{x}}\_{k/k-1}) + \boldsymbol{v}\_k \tag{16}$$

With both the nonlinear state equation and the observation equation linearized in this way, Equation (12) can be rewritten as follows:

$$\begin{cases} \mathbf{x}\_{k+1} = F\_k\mathbf{x}\_k + f(\mathbf{\hat{x}}\_{k/k}, \boldsymbol{u}\_k) - F\_k\mathbf{\hat{x}}\_{k/k} + \boldsymbol{w}\_k \\ y\_k = G\_k\mathbf{x}\_k + g(\mathbf{\hat{x}}\_{k/k-1}, \boldsymbol{u}\_k) - G\_k\mathbf{\hat{x}}\_{k/k-1} + \boldsymbol{v}\_k \end{cases} \tag{17}$$

Here, matrices *F*<sub>k</sub> and *G*<sub>k</sub> can be obtained by calculating the Jacobian matrices of *f* and *g*. If the state variable *x* is *n*-dimensional, i.e., *x* = [*x*<sub>1</sub>, *x*<sub>2</sub>, ..., *x*<sub>n</sub>]<sup>T</sup>, then the solution for matrices *F*<sub>k</sub> and *G*<sub>k</sub> is as follows.

$$F\_k = \frac{\partial f}{\partial x} = \begin{bmatrix} \frac{\partial f\_1}{\partial x\_1} & \frac{\partial f\_1}{\partial x\_2} & \cdots & \frac{\partial f\_1}{\partial x\_n} \\ \frac{\partial f\_2}{\partial x\_1} & \frac{\partial f\_2}{\partial x\_2} & \cdots & \frac{\partial f\_2}{\partial x\_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f\_n}{\partial x\_1} & \frac{\partial f\_n}{\partial x\_2} & \cdots & \frac{\partial f\_n}{\partial x\_n} \end{bmatrix} \tag{18}$$
 
$$G\_k = \frac{\partial g}{\partial x} = \begin{bmatrix} \frac{\partial g\_1}{\partial x\_1} & \frac{\partial g\_1}{\partial x\_2} & \cdots & \frac{\partial g\_1}{\partial x\_n} \\ \frac{\partial g\_2}{\partial x\_1} & \frac{\partial g\_2}{\partial x\_2} & \cdots & \frac{\partial g\_2}{\partial x\_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial g\_n}{\partial x\_1} & \frac{\partial g\_n}{\partial x\_2} & \cdots & \frac{\partial g\_n}{\partial x\_n} \end{bmatrix} \tag{19}$$
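Where the analytic derivatives of Equations (18) and (19) are inconvenient, the Jacobians can also be approximated numerically. The following central-difference sketch is our own illustration and not part of the method in the text, which uses analytic Jacobians:

```python
import numpy as np

def jacobian(func, x, u, eps=1e-6):
    """Approximate the Jacobian of func(x, u) with respect to x by central
    finite differences, as a numerical stand-in for Equations (18)-(19)."""
    fx = np.atleast_1d(func(x, u))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros(x.size)
        dx[j] = eps  # perturb one component of x at a time
        J[:, j] = (np.atleast_1d(func(x + dx, u)) - np.atleast_1d(func(x - dx, u))) / (2 * eps)
    return J
```

The same routine yields *F*<sub>k</sub> when applied to *f* and *G*<sub>k</sub> when applied to *g*.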

The EKF algorithm requires the covariance matrices of the observation noise and process noise to be calibrated in advance, which is often done empirically, and the matrices *Q*<sub>k</sub> and *R*<sub>k</sub> then remain fixed. However, under high-rate conditions of lithium iron phosphate batteries, the noise changes with the internal chemical reactions and the resulting temperature variation and is no longer fixed. To improve the accuracy of SOC estimation, the AEKF algorithm is introduced. Building on the EKF, the AEKF adds the Sage–Husa adaptive filtering algorithm, which adaptively updates the observation noise covariance matrix and the process noise covariance matrix, thus improving the accuracy of the SOC estimation. The steps of the AEKF algorithm are listed as follows:

Step 1. Initialization

Set *x̂*<sub>0</sub> = *x*<sub>0</sub>, *y*<sub>0</sub>, *P*<sub>0</sub>, *Q*<sub>0</sub>, *R*<sub>0</sub> at *k* = 0, where *x̂*<sub>0</sub> is the initial estimate of the state variables, *y*<sub>0</sub> is the initial observation value, *P*<sub>0</sub> is the initial value of the error covariance matrix, and *Q*<sub>0</sub> and *R*<sub>0</sub> are the initial values of the process noise covariance matrix and observation noise covariance matrix, respectively.

Step 2. State prediction

$$\mathbf{\hat{x}}\_{k/k-1} = f\left(\mathbf{\hat{x}}\_{k-1/k-1}, \boldsymbol{u}\_{k-1}\right) \tag{20}$$

Step 3. Prediction of error covariance

$$P\_{k/k-1} = F\_{k-1}P\_{k-1/k-1}F\_{k-1}^T + Q\_{k-1} \tag{21}$$

Step 4. Calculate the Kalman gain

$$K\_k = P\_{k/k-1} G\_k^T \left[ G\_k P\_{k/k-1} G\_k^T + R\_{k-1} \right]^{-1} \tag{22}$$

Step 5. State estimation

$$\mathbf{\hat{x}}\_{k/k} = \mathbf{\hat{x}}\_{k/k-1} + K\_k\left[y\_k - g(\mathbf{\hat{x}}\_{k/k-1}, \boldsymbol{u}\_k)\right] \tag{23}$$

Step 6. Update the noise covariance matrix

$$e\_k = y\_k - g(\mathbf{\hat{x}}\_{k/k-1}, \boldsymbol{u}\_k) \tag{24}$$

$$\begin{cases} E\_k = E\left(e\_k e\_k^T\right) \\ d = e\_k^T E\_k^{-1} e\_k \end{cases} \tag{25}$$

In Equations (24) and (25), *e*<sub>k</sub> is called the innovation, and *d* is the estimated residual value, which, through the adaptive window factor of the windowing function, determines the window length *M*. According to the principle of the windowing function, a small *M* reduces the computational burden of the adaptive algorithm but lowers its accuracy; conversely, a large *M* improves the accuracy significantly but makes the computational burden too high for a recursive estimation algorithm. Therefore, *M* needs to be adjusted according to the convergence time, as shown in Equation (26).

$$\begin{cases} M = 1, & d = 1 \\ M = k, & d = 0 \\ M = k \times \eta^d, & 0 < d < 1 \end{cases} \tag{26}$$

In Equation (26), *η* is the window convergence rate, which has a range of 0 < *η* < 1. After calculating *M*, the noise covariance matrix can be updated as shown in Equations (27) and (28):

$$H\_k = \frac{1}{M}\sum\_{i=k-M+1}^{k} e\_i e\_i^T \tag{27}$$

$$\begin{cases} R\_k = H\_k - G\_kP\_{k/k-1}G\_k^T \\ Q\_k = K\_kH\_kK\_k^T \end{cases} \tag{28}$$

Step 7. Estimation of error covariance.

$$P\_{k/k} = (I - K\_k G\_k) P\_{k/k - 1} \tag{29}$$

Subsequently, Steps 2 to 7 are repeated recursively to obtain the optimal estimate of the state variables.
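The seven steps above can be sketched as a single recursion. The following is a minimal illustration under two stated assumptions that Equations (20)–(29) leave open: the expectation *E*<sub>k</sub> in Equation (25) is approximated by the theoretical innovation covariance, and an eigenvalue floor is applied to keep the updated *R*<sub>k</sub> positive definite:

```python
import numpy as np

def aekf(f, g, F_jac, G_jac, x0, P0, Q0, R0, us, ys, eta=0.95):
    """Sketch of the AEKF recursion (Steps 1-7, Equations (20)-(29)).
    f, g: state and observation functions; F_jac, G_jac: their Jacobians.
    us, ys: input and measurement sequences (measurements as 1-D arrays).
    eta: window convergence rate of Equation (26)."""
    x, P, Q, R = x0.copy(), P0.copy(), Q0.copy(), R0.copy()
    innovations, history = [], []
    for k, (u, y) in enumerate(zip(us, ys), start=1):
        # Step 2: state prediction, Eq. (20)
        F = F_jac(x, u)
        x_pred = f(x, u)
        # Step 3: error covariance prediction, Eq. (21)
        P_pred = F @ P @ F.T + Q
        # Step 4: Kalman gain, Eq. (22)
        G = G_jac(x_pred, u)
        S = G @ P_pred @ G.T + R
        K = P_pred @ G.T @ np.linalg.inv(S)
        # Innovation, Eq. (24)
        e = np.atleast_1d(y - g(x_pred, u))
        innovations.append(e)
        # Step 5: state estimation, Eq. (23)
        x = x_pred + K @ e
        # Eq. (25), with E_k approximated by the theoretical innovation
        # covariance S (an implementation choice, not fixed by the text)
        d = float(e @ np.linalg.solve(S, e))
        # Adaptive window length M, Eq. (26), clipped to the samples seen so far
        if d >= 1.0:
            M = 1
        elif d <= 0.0:
            M = k
        else:
            M = max(1, int(round(k * eta ** d)))
        M = min(M, len(innovations))
        # Windowed innovation covariance, Eq. (27)
        H = sum(np.outer(ei, ei) for ei in innovations[-M:]) / M
        # Sage-Husa noise updates, Eq. (28); the eigenvalue floor on R is a
        # practical safeguard against indefiniteness, not part of Eq. (28)
        R = H - G @ P_pred @ G.T
        w_eig, V = np.linalg.eigh(0.5 * (R + R.T))
        R = V @ np.diag(np.clip(w_eig, 1e-8, None)) @ V.T
        Q = K @ H @ K.T
        # Step 7: error covariance update, Eq. (29)
        P = (np.eye(x.size) - K @ G) @ P_pred
        history.append(x.copy())
    return np.array(history)
```

On a simple linear test system the recursion tracks the true state while adjusting *Q*<sub>k</sub> and *R*<sub>k</sub> online; for the battery model, *f* and *g* are the discretized state and observation functions of the next subsection.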

#### *3.2. Discretization of State Space Equation*

Taking the operating current *I*<sub>L</sub> as the input variable, the battery SOC, the electrochemical polarization voltage *U*<sub>p1</sub>, and the concentration polarization voltage *U*<sub>p2</sub> as the state variables, and the terminal voltage *U*<sub>L</sub> as the observation variable, Equation (30) can be obtained by combining Equations (8) and (9):

$$
\begin{pmatrix}
\frac{d\,{\rm SOC}(t)}{dt} \\
\frac{dU\_{p1}(t)}{dt} \\
\frac{dU\_{p2}(t)}{dt}
\end{pmatrix} = \begin{pmatrix}
0 & 0 & 0 \\
0 & -\frac{1}{R\_{p1}(t)C\_{p1}(t)} & 0 \\
0 & 0 & -\frac{1}{R\_{p2}(t)C\_{p2}(t)}
\end{pmatrix} \begin{pmatrix}
{\rm SOC}(t) \\
U\_{p1}(t) \\
U\_{p2}(t)
\end{pmatrix} + \begin{pmatrix}
-\frac{\eta}{Q\_n} \\
\frac{1}{C\_{p1}(t)} \\
\frac{1}{C\_{p2}(t)}
\end{pmatrix} I(t) \tag{30}
$$

$$U\_L(t) = U\_{\rm OC}[{\rm SOC}(t)] - I(t)R\_0(t) - U\_{p1}(t) - U\_{p2}(t) \tag{31}$$

Equations (30) and (31) are the continuous state equation and continuous observation equation, respectively. Here, *U*<sub>OC</sub>[*SOC*(*t*)] is the open-circuit voltage at time *t* obtained using the OCV-SOC relation function, *η* is the Coulomb efficiency, and *Q*<sub>n</sub> is the nominal battery capacity.

Setting the Coulomb efficiency *η* to be 1 and defining the sampling period as *T*, Equations (30) and (31) are discretized as follows:

$$
\begin{pmatrix}
{\rm SOC}(k+1) \\
U\_{p1}(k+1) \\
U\_{p2}(k+1)
\end{pmatrix} = \begin{pmatrix}
1 & 0 & 0 \\
0 & e^{-\frac{T}{\tau\_{p1}(k)}} & 0 \\
0 & 0 & e^{-\frac{T}{\tau\_{p2}(k)}}
\end{pmatrix} \begin{pmatrix}
{\rm SOC}(k) \\
U\_{p1}(k) \\
U\_{p2}(k)
\end{pmatrix} + \begin{pmatrix}
-\frac{T}{Q\_n} \\
R\_{p1}(k)\left(1 - e^{-\frac{T}{\tau\_{p1}(k)}}\right) \\
R\_{p2}(k)\left(1 - e^{-\frac{T}{\tau\_{p2}(k)}}\right)
\end{pmatrix} I(k) \tag{32}
$$

$$U\_L(k) = U\_{\rm OC}[{\rm SOC}(k)] - U\_{p1}(k) - U\_{p2}(k) - I(k)R\_0(k) \tag{33}$$

The vector composed of *SOC*(*k*), *U*<sub>p1</sub>(*k*), and *U*<sub>p2</sub>(*k*) in the above equations is the state vector *x*<sub>k</sub> of the system at time *k* in Equation (12), while *U*<sub>L</sub>(*k*) is the observation *y*<sub>k</sub> of the system at time *k*. Equation (32) is a linear equation; the nonlinearity of the battery state space equation is reflected in the *U*<sub>OC</sub>[*SOC*(*k*)] term of Equation (33). Therefore, Kalman filtering for SOC estimation is carried out with both the EKF and AEKF algorithms, and the results are compared with the coulomb counting method for SOC estimation.
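As an illustration, the discretized model of Equations (32) and (33) can be written as a state function *f* and an observation function *g* suitable for the (A)EKF recursion. All parameter values and the OCV-SOC polynomial below are hypothetical placeholders for this sketch, not the identified values or the fitted OCV-SOC relation of the text:

```python
import numpy as np

# Hypothetical parameter values, for illustration only; in practice these
# come from the online FFRLS identification at each step k.
T, Qn = 1.0, 2.0 * 3600.0          # sample period [s], nominal capacity [A*s]
R0, Rp1, Cp1, Rp2, Cp2 = 0.01, 0.02, 1000.0, 0.05, 5000.0

def u_oc(soc):
    """Placeholder OCV-SOC relation (a monotonic polynomial), not the fitted curve."""
    return 3.2 + 0.7 * soc + 0.3 * soc ** 2

def f(x, i):
    """State transition of Equation (32); x = [SOC, Up1, Up2], i = current."""
    soc, up1, up2 = x
    a1 = np.exp(-T / (Rp1 * Cp1))   # e^(-T/tau_p1)
    a2 = np.exp(-T / (Rp2 * Cp2))   # e^(-T/tau_p2)
    return np.array([soc - T * i / Qn,
                     a1 * up1 + Rp1 * (1.0 - a1) * i,
                     a2 * up2 + Rp2 * (1.0 - a2) * i])

def g(x, i):
    """Observation of Equation (33): terminal voltage U_L."""
    soc, up1, up2 = x
    return u_oc(soc) - up1 - up2 - i * R0

def G_jac(x, i):
    """Observation Jacobian G_k; only the dU_OC/dSOC entry is nonlinear."""
    soc = x[0]
    return np.array([[0.7 + 0.6 * soc, -1.0, -1.0]])
```

The state transition matrix *F*<sub>k</sub> is simply the constant diagonal matrix of Equation (32), so the nonlinearity handled by the EKF/AEKF enters only through *G*<sub>k</sub>.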
