*5.1. Example 1*

Consider the SEIR epidemic model (4) with the following parameters defining the time-varying matrix *A*(*k*) as

$$
\begin{aligned}
a(k) &= a(0) + 0.05\sin(0.5\pi k), & b(k) &= b(0) + 0.02\sin(0.5\pi k),\\
c(k) &= c(0) + 0.1\sin^2(0.25\pi k), & d(k) &= d(0) + 0.1\sin(0.25\pi k),
\end{aligned}
$$

with *a*(0) = 0.4/month, *b*(0) = 0.124/month, *c*(0) = 0.2/month, *d*(0) = 0.45/month, and a degree of seasonality *η* = 0.4.

The uncertain input parameter is ℘(*k*) = 0.5(1 + *η* cos(0.25*k*)), while the bounded disturbance and the output measurement noise are given by $w(k) \in [\underline{w}(k), \overline{w}(k)]$ with $\underline{w}(k) = [0.35\;\, 0\;\, 0\;\, 0]^{T}$ and $\overline{w}(k) = [0.45\;\, 0\;\, 0\;\, 0]^{T}$, and $v(k) = V\cos(0.25\pi k)$ with $V = -0.00001$.
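For reproducibility, the time-varying parameters and exogenous signals above can be coded directly. The following is a minimal sketch; the helper names (`rates`, `p_uncertain`, `v_noise`) are ours and not from the paper.

```python
import numpy as np

ETA = 0.4  # degree of seasonality

# Nominal rates (per month): a(0), b(0), c(0), d(0)
A0, B0, C0, D0 = 0.4, 0.124, 0.2, 0.45

def rates(k):
    """Time-varying rates a(k), b(k), c(k), d(k) of A(k)."""
    a = A0 + 0.05 * np.sin(0.5 * np.pi * k)
    b = B0 + 0.02 * np.sin(0.5 * np.pi * k)
    c = C0 + 0.1 * np.sin(0.25 * np.pi * k) ** 2
    d = D0 + 0.1 * np.sin(0.25 * np.pi * k)
    return a, b, c, d

def p_uncertain(k):
    """Uncertain input parameter p(k) = 0.5 (1 + eta cos(0.25 k))."""
    return 0.5 * (1.0 + ETA * np.cos(0.25 * k))

# Disturbance bounds w_low <= w(k) <= w_up and measurement-noise amplitude
w_low = np.array([0.35, 0.0, 0.0, 0.0])
w_up = np.array([0.45, 0.0, 0.0, 0.0])
V = -0.00001

def v_noise(k):
    """Output measurement noise v(k) = V cos(0.25 pi k)."""
    return V * np.cos(0.25 * np.pi * k)
```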

The two matrices $\Sigma^{A}$ and $\Sigma^{CA}$ are obtained using (12a) and (12b), respectively, and the bounds on the uncertain quantities are calculated from (10), (11), and (13). The observability matrix for $\kappa_j = k + j$, $j = 0, 1, 2$, is computed as follows:

$$
\mathcal{O}(k) =
\begin{bmatrix}
C \\
C\,A(\kappa_0) \\
C\,A(\kappa_1)\,A(\kappa_0) \\
C\,A(\kappa_2)\,A(\kappa_1)\,A(\kappa_0)
\end{bmatrix},
$$

whose entries are polynomials in the time-varying rates evaluated along the window, e.g., $1 - a_{\kappa_0}$, $b_{\kappa_1} d_{\kappa_0}$, and $b_{\kappa_1} c_{\kappa_0} d_{\kappa_2}$.
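The stacking of output rows along the window $\kappa_j = k + j$ can be sketched numerically as below. Since the matrix $A(k)$ of model (4) is defined earlier in the paper and not reproduced here, the usage example employs a placeholder $A(k)$ purely for illustration.

```python
import numpy as np

def observability_matrix(A_of, C, k, n=4):
    """Time-varying observability matrix over the window kappa_j = k + j:
    rows C, C A(k), C A(k+1) A(k), ..., n rows in total."""
    rows = [C]
    Phi = np.eye(C.shape[1])       # accumulated state-transition product
    for j in range(n - 1):
        Phi = A_of(k + j) @ Phi    # Phi = A(k+j) ... A(k+1) A(k)
        rows.append(C @ Phi)
    return np.vstack(rows)

# Illustration only: a placeholder diagonal A(k), not the SEIR matrix of (4)
C = np.array([[1.0, 0.0, 0.0, 0.0]])
A_of = lambda k: np.eye(4) * (1.0 + 0.1 * k)
O = observability_matrix(A_of, C, 0)   # shape (4, 4)
```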

It should be noted that our model is of order four, i.e., *n* = 4. Therefore, the first two state intervals must be known before the given interval estimator (9) can be implemented for *k* ≥ 3. Hence, the interval predictor (24) is used for *k* = 1, 2, provided that the initial bounds $\underline{x}(0) \le x(0) \le \overline{x}(0)$ are satisfied. It is worth mentioning that the given model does not have to be non-negative for the proposed interval estimator to operate.
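The switching between predictor and estimator described above can be sketched as follows. The step functions for (24) and (9) are hypothetical stand-ins, since those equations appear earlier in the paper; only the scheduling logic is shown.

```python
def run_interval_observer(x0_low, x0_up, N, predictor_step, estimator_step):
    """Propagate interval bounds: the predictor (stand-in for (24)) is used
    for k = 1, 2, and the estimator (stand-in for (9)) for k >= 3, starting
    from verified initial bounds x0_low <= x(0) <= x0_up."""
    lows, ups = [x0_low], [x0_up]
    for k in range(1, N + 1):
        if k < 3:  # estimator (9) needs the first two state intervals
            lo, up = predictor_step(k, lows[-1], ups[-1])
        else:
            lo, up = estimator_step(k, lows, ups)  # may use past intervals
        lows.append(lo)
        ups.append(up)
    return lows, ups
```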

Simulation experiments comparing the proposed method with the one in [28] are conducted to show the efficiency of the given approach. Figure 2 depicts the evolution of the actual states *x<sub>s</sub>*, *s* = 1, 2, 3, 4, the bounds estimated by the proposed method (solid pink lines), and the bounds estimated by MST [28] (blue dashed lines). As can be seen, the actual states remain confined within the two boundaries generated by (24) and (9), and the developed approach yields tighter bounds than those calculated using the method described in [28]. Furthermore, from a design perspective, the observer gain matrix in [28] needs to be Schur and non-negative, whereas the proposed interval estimator requires no observer gain at all.

**Figure 2.** Interval estimations by the proposed method vs. the one given in [28] for each state variable *x*<sub>1</sub>, *x*<sub>2</sub>, *x*<sub>3</sub>, *x*<sub>4</sub> corresponding to *S*, *E*, *I*, *R*.

Secondly, the comparison of the interval state estimation errors *e<sub>S</sub>*, *e<sub>E</sub>*, *e<sub>I</sub>*, and *e<sub>R</sub>* in Figures 3 and 4 further clarifies that the bounds estimated by the proposed method are more accurate and precise than those of [28]. Finally, Figure 5 shows the convergence of the interval widths given by (28): after three steps, the interval widths converge to their final values, demonstrating the finite-time convergence of the proposed technique. Thus, it is concluded that the proposed method performs better.
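The error and width quantities compared in Figures 3–5 can be computed from any bound trajectories as below; the function name and the synthetic arrays in the test are ours, introduced only to illustrate the metrics.

```python
import numpy as np

def interval_metrics(x, x_low, x_up):
    """Per-step upper-bound error, lower-bound error, and interval width
    for one state trajectory and its estimated bounds."""
    x, x_low, x_up = map(np.asarray, (x, x_low, x_up))
    e_up = x_up - x       # upper-bound error (non-negative if bounds hold)
    e_low = x - x_low     # lower-bound error (non-negative if bounds hold)
    width = x_up - x_low  # interval width, cf. (28)
    return e_up, e_low, width
```

A width sequence that settles to a constant after a few steps is the signature of the finite-time convergence seen in Figure 5.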

**Figure 3.** Upper-bound error: (a) proposed method; (b) method given in [28].

**Figure 4.** Lower-bound error: (a) proposed method; (b) method given in [28].

**Figure 5.** Finite-time convergence of *S*, *E*, *I* and *R* after 3rd iteration.
