*4.3. On-Line Estimation*

In the previous section, we discussed how to use sufficient statistics to learn *θ* in batch mode. For on-line estimation, a common approach [30] is to update the sufficient statistics each time a new observation arrives:

$$S\_{n+1} = (1 - \rho\_{n+1}) \cdot S\_n + \rho\_{n+1} \cdot \mathbb{E}\_{\theta\_n} \left[ s\_{n+1} \mid y\_{n+1} \right], \tag{35}$$

where $\rho\_n$ is a step-size sequence satisfying $\sum\_{n=1}^{\infty} \rho\_n = \infty$ and $\sum\_{n=1}^{\infty} \rho\_n^2 < \infty$; it is normally set to $\rho\_n = 1/n$. The new parameter $\theta\_{n+1}$ is then available from Equations (29)–(34), and the estimates of $x\_{n+1}$ and $u\_{n+1}$ can be obtained from Equations (5) and (11).
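A minimal sketch of the update in Equation (35), assuming the conditional expectation $\mathbb{E}\_{\theta\_n}[s\_{n+1} \mid y\_{n+1}]$ has already been computed by the model's filtering step (function and argument names are illustrative, not from the paper):

```python
def online_stat_update(S, s_expected, n, rho=None):
    """Stochastic-approximation update of a sufficient statistic, Eq. (35).

    S          -- current running statistic S_n
    s_expected -- E_{theta_n}[s_{n+1} | y_{n+1}], supplied by the filter
    n          -- sample index (used for the default step size rho_n = 1/n)
    rho        -- optional explicit step size; must satisfy the usual
                  Robbins-Monro conditions sum(rho) = inf, sum(rho^2) < inf
    """
    if rho is None:
        rho = 1.0 / n  # the common choice rho_n = 1/n
    return (1.0 - rho) * S + rho * s_expected
```

With `rho_n = 1/n`, early observations move the statistic strongly while later ones make ever smaller corrections, so the running average converges under the stated step-size conditions.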

In this paper, we do not update *θ* at every sampling time. Instead, we set a window length $W\_l$ and first accumulate the latest $W\_l$ observations. We then use Equations (13)–(16) to obtain the smoothed result and compute the statistics $s\_n\big|\_{1}^{W\_l}$ for all $W\_l$ data by Equations (23)–(27). Afterward, the sufficient statistics $S\_n\big|\_{1}^{W\_l}$ and the parameters $\theta\_n\big|\_{1}^{W\_l}$ are updated by Equation (35) and Equations (29)–(34), respectively. Note that in on-line mode the initial probability $\zeta\_k$ is not needed.
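The windowed procedure above can be sketched as the loop below. The smoothing/statistics step (Equations (13)–(16) and (23)–(27)) and the M-step (Equations (29)–(34)) are passed in as callables, since their details depend on the model; all names here are placeholders, not the paper's implementation:

```python
def windowed_online_update(stream, W_l, compute_stats, update_theta,
                           S0, theta0, rho_fn=lambda k: 1.0 / k):
    """Window-based on-line learning sketch.

    stream        -- iterable of observations y
    W_l           -- window length
    compute_stats -- callable(window, theta): smoothed sufficient statistics
                     over the window (stand-in for Eqs. 13-16, 23-27)
    update_theta  -- callable(S): new parameters from the statistics
                     (stand-in for Eqs. 29-34)
    rho_fn        -- step-size schedule over window index k (default 1/k)
    """
    S, theta = S0, theta0
    window, k = [], 0
    for y in stream:
        window.append(y)
        if len(window) == W_l:          # W_l observations accumulated
            k += 1
            s_bar = compute_stats(window, theta)
            rho = rho_fn(k)
            S = (1.0 - rho) * S + rho * s_bar   # Eq. (35)
            theta = update_theta(S)             # Eqs. (29)-(34)
            window = []                 # start the next window
    return S, theta
```

Updating once per window rather than per sample amortizes the cost of smoothing, at the price of parameters that lag by at most $W\_l$ samples.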

After describing batch-mode and on-line parameter learning, a diagram of the training and testing stages is displayed in Figure 3. In the training stage, the model-parameter-learning block is the Baum-Welch algorithm. The trained model is used in both kinds of testing: the model is updated during on-line testing but not during batch-mode testing. In addition, the estimated hidden state in batch mode comes from the smoothed probability, whereas in on-line mode it comes from the filtered probability.

**Figure 3.** Diagram of the training stage, and the testing stage for both batch mode and on-line testing.
