*3.1. Recursive Least Squares Estimation*

The RLS algorithm used in this paper employs an optimal forgetting factor to give more weight to recent data and to avoid the saturation phenomenon [19]. The forgetting factor exponentially discounts past measurements in the estimation of the parameter vector $\theta\_k$. The recursive algorithm of Equation (10) can be written as follows:
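Concretely, the forgetting factor enters through the exponentially weighted least-squares cost that the recursion minimizes (a standard formulation, stated here for context; the symbols follow the recursion below):

$$J\_k(\theta) = \sum\_{i=1}^{k} \lambda^{k-i} \left( y\_i - \theta^T \phi\_i \right)^2$$

The error at sample $i$ is weighted by $\lambda^{k-i}$, so older data are discounted geometrically; $\lambda$ close to 1 gives a long memory, while smaller $\lambda$ tracks time-varying parameters more quickly.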

$$K\_k = \frac{P\_{k-1}\phi\_k}{\lambda + \phi\_k^T P\_{k-1}\phi\_k} \tag{13}$$

$$P\_k = \frac{P\_{k-1} - K\_k \phi\_k^T P\_{k-1}}{\lambda} \tag{14}$$

$$\hat{\theta}\_k = \hat{\theta}\_{k-1} + K\_k \Big( y\_k - \hat{\theta}\_{k-1}^T \phi\_k \Big) \tag{15}$$

where $\hat{\theta}\_k$ is the estimate of the parameter vector $\theta\_k$, $K\_k$ is the algorithm gain, $P\_k$ is the covariance matrix, and $\lambda$ is the forgetting factor, which is optimized over the range [0.95, 1]. The initial values $\theta\_0$ and $P\_0$ are chosen as initial guesses. The schematic diagram of the RLS algorithm is shown in Figure 2.
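The recursion of Equations (13)–(15) can be sketched as a single update function. This is a minimal illustration, not the authors' implementation: the initial values ($\theta\_0 = 0$, $P\_0 = 10^3 I$) and the fixed $\lambda = 0.98$ are example choices, whereas the paper optimizes $\lambda$ over [0.95, 1].

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One RLS step with forgetting factor lam, for a scalar output y.

    theta : (n, 1) current parameter estimate (theta_{k-1})
    P     : (n, n) current covariance matrix (P_{k-1})
    phi   : (n,)   regressor vector phi_k
    y     : float  measured output y_k
    """
    phi = phi.reshape(-1, 1)
    # Eq. (13): gain K_k = P_{k-1} phi_k / (lam + phi_k^T P_{k-1} phi_k)
    K = P @ phi / (lam + phi.T @ P @ phi)
    # Eq. (14): covariance P_k = (P_{k-1} - K_k phi_k^T P_{k-1}) / lam
    P = (P - K @ phi.T @ P) / lam
    # Eq. (15): estimate theta_k = theta_{k-1} + K_k (y_k - theta_{k-1}^T phi_k)
    theta = theta + K * (y - float(phi.T @ theta))
    return theta, P

# Example: identify y_k = theta^T phi_k with a hypothetical true theta = [2, -1]
rng = np.random.default_rng(0)
theta_true = np.array([[2.0], [-1.0]])
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)  # initial guesses theta_0, P_0
for _ in range(200):
    phi = rng.normal(size=2)
    theta, P = rls_update(theta, P, phi, float(phi @ theta_true))
```

With noise-free data the estimate converges to the true parameters within a few samples; with measurement noise, $\lambda < 1$ trades steady-state variance for the ability to track parameter drift.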

**Figure 2.** Schematic diagram of the recursive least squares (RLS) algorithm.
