*Article* **Hybrid Model for Time Series of Complex Structure with ARIMA Components**

**Oksana Mandrikova, Nadezhda Fetisova and Yuriy Polozov \***

Institute of Cosmophysical Research and Radio Wave Propagation, Far Eastern Branch of the Russian Academy of Sciences, Mirnaya st, 7, Paratunka, 684034 Kamchatskiy Kray, Russia; oksanam1@ikir.ru or oksanam1@mail.ru (O.M.); nv.glushkova@yandex.ru (N.F.)

**\*** Correspondence: polozov@ikir.ru or up\_agent@mail.ru

**Abstract:** A hybrid model for time series of complex structure (HMTS) is proposed. It is based on combining the expansion of a function in a wavelet series with ARIMA models. The HMTS has regular and anomalous components. The time series components obtained after the expansion have a simpler structure, which makes it possible to identify an ARIMA model when the components are stationary. This allows us to obtain a more accurate ARIMA model for a time series of complicated structure and to extend its area of application. To identify the HMTS anomalous component, threshold functions are applied. This paper describes a technique to identify the HMTS and proposes operations to detect anomalies. Using an ionospheric parameter time series as an example, we show the efficiency of the HMTS and describe the results and their application in detecting ionospheric anomalies. The HMTS was also compared with the nonlinear autoregressive neural network NARX, which confirmed the HMTS efficiency.

**Keywords:** time series model; wavelet transform; ARIMA model; neural network NARX; ionospheric parameters

### **1. Introduction**

Time series modeling and analysis form an important basis for methods of studying processes and phenomena of different natures. They are used in various spheres of human activity (physics, biology, medicine, economics, etc.). Methods of data modeling and analysis aimed at detecting and identifying anomalies are of particular relevance. Examples are the problems of recognizing anomalies in geophysical monitoring data, such as the detection of magnetic and ionospheric storms [1–4], earthquakes [5,6], tsunamis [7,8], geological anomalies [9] and other catastrophic natural phenomena. The need to detect anomalies often arises in the medical field, for example, to detect and identify the clinical conditions of patients [10]. An important property of such methods is their ability to adapt, providing the possibility of detecting and identifying rapid changes in the state of a system or object that indicate anomaly occurrences.

As a rule, time series of empirical data have a complex non-stationary structure and contain local features of various forms. The methods of time series analysis include deterministic [11] and stochastic [12–14] approaches and their various combinations [15–19]. Traditional methods of time series modeling and analysis (AR and ARMA models [20,21], exponential smoothing [22], stochastic approximation [13], etc.) do not allow us to describe time series of complex structure adequately [23]. At present, hybrid approaches [16,17,19,23–28] are widely applied. They make it possible to improve the efficiency of data analysis when the data have a complicated structure. For example, in [19], on the basis of wavelet decomposition, a technique was developed to estimate the coefficients of turbulent diffusion and power exponents from single Lagrangian trajectories of particles. The wavelet transform is a flexible tool and was applied in [29] to study the relationship between vegetation and climate in India. The 2D empirical wavelet filters

**Citation:** Mandrikova, O.; Fetisova, N.; Polozov, Y. Hybrid Model for Time Series of Complex Structure with ARIMA Components. *Mathematics* **2021**, *9*, 1122. https:// doi.org/10.3390/math9101122

Academic Editors: Lucas Jódar and Rafael Company

Received: 31 March 2021 Accepted: 13 May 2021 Published: 15 May 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

developed by the authors of [30] are effective in image processing applications. Currently, neural network methods are also widely used [4,15,23,31]. They allow us to approximate complex nonlinear relationships in data and are easily automated. However, the reliability and accuracy of neural networks depend on the representativeness of the data, and their adaptation is very laborious. For example, the authors of [31] proposed a neural network structure based on the LSTM paradigm, which allowed them to obtain an accurate forecast of web traffic time series on a limited data set. The authors of [23] considered combinations of the wavelet transform with neural networks to analyze hydrological data.

Due to these aspects, and despite the intensive development of machine learning methods and their active application in various fields of artificial intelligence, classical time series models, in particular ARIMA models [4,15,32,33], remain popular. The obvious advantages of ARIMA models are their mathematical validity and a formalized methodology for model identification and verification of its adequacy. However, the ARIMA model construction is based on the assumption that the process has a normal distribution and is stationary (or stationary in differences). If these assumptions are not satisfied, the model accuracy is significantly reduced. In order to improve ARIMA efficiency, a number of papers [16,17,26,27,34,35] suggested a hybrid approach to time series analysis. For example, the paper [17] proposed to apply ARIMA together with the discrete wavelet transform and the neural network LSTM; its authors showed that this combination improved the accuracy of the ARIMA and LSTM models in forecasting a monthly precipitation time series. A combination of the discrete wavelet transform with ARIMA and a neural network was also proposed in [35] to forecast a hydrological time series.

In this paper, we propose a hybrid model for time series of complex structure (HMTS). The model includes regular and anomalous components. The HMTS identification is based on combining function expansion in a wavelet series [36] with ARIMA models [20]. The time series components obtained after the expansion have a simpler structure, allowing us to identify ARIMA models when the components are stationary. This makes it possible to obtain a more accurate ARIMA model for a time series of complex structure and expands its field of application. The HMTS anomalous component describes irregular (sporadic) changes in the time series and is identified on the basis of threshold functions. The large dictionary of wavelet bases allows us to identify models for time series of complex structure [9,36,37], including local features of various forms. The paper describes a method of HMTS identification and suggests algorithms for anomaly detection. The HMTS efficiency is illustrated with an ionospheric parameter time series. The results and their application in detecting ionospheric anomalies of different intensities are presented. The paper also compares the HMTS with the nonlinear autoregressive neural network NARX, which likewise confirmed the HMTS efficiency.

#### **2. Materials and Methods**

#### *2.1. Description of the Method*

The time series of a complex structure may be represented as

$$f(t) = A^{REG}(t) + U(t) + e(t) = \sum_{\mu=\overline{1,T}} \alpha_{\mu}(t) + U(t) + e(t), \tag{1}$$

where $A^{REG}(t) = \sum_{\mu=\overline{1,T}} \alpha_{\mu}(t)$ is a regular component, which is a linear combination of the components $\alpha_{\mu}(t)$; $\mu$ is the component number; $U(t)$ is the anomalous component, including local features of various forms occurring at random times; $e(t)$ is the noise component; $t$ is time.

### *2.2. Wavelet Series Expansion and Determination of the Model Regular Components*

It is assumed that $f \in L^2(\mathbb{R})$ ($L^2(\mathbb{R})$ is the Lebesgue space); then there is a unique representation [36]

$$f(t) = \dots + g_{-1}(t) + g_0(t) + g_1(t) + \dots,$$

where $g_j \in W_j$, $j \in \mathbb{Z}$ ($\mathbb{Z}$ is the set of integers), $g_j(t) = \sum_k d_{j,k} \Psi_{j,k}(t)$, and $\left\{\Psi_{j,k}\right\}_{k \in \mathbb{Z}}$ is the basis of the space $W_j$; the coefficients $d_{j,k} = \left\langle f, \Psi_{j,k} \right\rangle$, with $\Psi_{j,k}(t) = 2^{j/2} \Psi\left(2^j t - k\right)$, are considered as the result of mapping $f$ into the space $W_j$ with resolution $j$. If $\Psi \in L^2(\mathbb{R})$ is an R-function and the sequence $\left\{\Psi_{j,k}\right\}$ is a Riesz basis [37] in $L^2(\mathbb{R})$, the expansion structure of the space $L^2(\mathbb{R})$ generated by the wavelet $\Psi \in L^2(\mathbb{R})$ is

$$L^2(\mathbb{R}) = \sum_{j \in \mathbb{Z}}^{\bullet} W_j := \dots \stackrel{\bullet}{+} W_{-1} \stackrel{\bullet}{+} W_0 \stackrel{\bullet}{+} W_1 \stackrel{\bullet}{+} \dots, \tag{2}$$

where $W_j := \mathrm{clos}_{L^2(\mathbb{R})}\left(\Psi_{j,k};\ k \in \mathbb{Z}\right)$; the dots above the summation sign and above the plus signs denote the direct sum.

Using expansion (2), we obtain a sequence of nested closed subspaces $V_j \subset L^2(\mathbb{R})$, $j \in \mathbb{Z}$, defined as

$$V_j = \dots \stackrel{\bullet}{+} W_{j-2} \stackrel{\bullet}{+} W_{j-1}, \tag{3}$$

where the space $V_j = \mathrm{clos}_{L^2(\mathbb{R})}\left(\varphi\left(2^j t - k\right)\right)$ and $\varphi$ is the scaling function. Based on (2) and (3), we obtain the expansion of the space $L^2(\mathbb{R})$:

$$L^2(\mathbb{R}) = V_j \stackrel{\bullet}{+} W_j \stackrel{\bullet}{+} W_{j+1} \stackrel{\bullet}{+} \dots;$$

in the case of an orthogonal wavelet $\Psi$, we have

$$L^2(\mathbb{R}) = V_j \oplus W_j \oplus W_{j+1} \oplus \dots, \tag{4}$$

where ⊕ is the orthogonal sum.

Considering the space $V_j = \mathrm{clos}_{L^2(\mathbb{R})}\left(\varphi\left(2^j t - k\right)\right)$ with $j = 0$ as the base space of $f$, and applying (4) $m$ times, we obtain the following expansion [36]:

$$V_0 = W_{-1} \oplus W_{-2} \oplus \dots \oplus W_{-m} \oplus V_{-m}.$$

In this case, for *f*<sup>0</sup> we have the following representation:

$$f_0(t) = g_{-1}(t) + g_{-2}(t) + \dots + g_{-m}(t) + f_{-m}(t) = \sum_{j=-1}^{-m} g_j(t) + f_{-m}(t), \tag{5}$$

where $f_{-m} \in V_{-m}$, $g_j \in W_j$; $f_{-m}(t) = \sum_k c_{-m,k} \varphi_{-m,k}(t)$ is the smoothed component, $c_{-m,k} = \left\langle f_0, \varphi_{-m,k} \right\rangle$, and $\varphi_{-m,k}(t) = 2^{-m/2} \varphi\left(2^{-m} t - k\right)$ is the scaling function; $g_j(t) = \sum_k d_{j,k} \Psi_{j,k}(t)$ are the detailing components, $d_{j,k} = \left\langle f_0, \Psi_{j,k} \right\rangle$, and $\Psi_{j,k}(t) = 2^{j/2} \Psi\left(2^j t - k\right)$ is the wavelet.
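Expansion (5) is what the discrete wavelet transform computes level by level. A minimal pure-Python sketch with the Haar wavelet (used here only as the simplest orthogonal stand-in; the paper itself works with the 3rd-order Daubechies wavelet):

```python
# Sketch of expansion (5) with the Haar wavelet (illustrative stand-in).
# One analysis step maps c_j -> (c_{j-1}, d_{j-1}):
#   c_{j-1,k} = (c_{j,2k} + c_{j,2k+1}) / sqrt(2)   (smoothed part)
#   d_{j-1,k} = (c_{j,2k} - c_{j,2k+1}) / sqrt(2)   (detailing part)
import math

def haar_step(c):
    """One level of Haar analysis: returns (approximation, detail)."""
    approx = [(c[2 * k] + c[2 * k + 1]) / math.sqrt(2) for k in range(len(c) // 2)]
    detail = [(c[2 * k] - c[2 * k + 1]) / math.sqrt(2) for k in range(len(c) // 2)]
    return approx, detail

def haar_decompose(f0, m):
    """Expansion (5): f0 -> detail coefficients d_{-1},...,d_{-m} and f_{-m}."""
    details, approx = [], list(f0)
    for _ in range(m):
        approx, d = haar_step(approx)
        details.append(d)
    return details, approx

def haar_reconstruct(details, approx):
    """Inverse transform: rebuilds f0 from the components of (5)."""
    c = list(approx)
    for d in reversed(details):
        nxt = []
        for ck, dk in zip(c, d):
            nxt.append((ck + dk) / math.sqrt(2))
            nxt.append((ck - dk) / math.sqrt(2))
        c = nxt
    return c

f0 = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0]
details, f_m = haar_decompose(f0, m=3)
rec = haar_reconstruct(details, f_m)
assert all(abs(a - b) < 1e-12 for a, b in zip(rec, f0))  # perfect reconstruction
```

Because the Haar basis is orthogonal, the analysis and synthesis steps are exact inverses, which mirrors the direct-sum structure of (4) and (5).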

Note that, when the scaling function $\varphi$ has $L$ zero moments, i.e., $\int_{-\infty}^{+\infty} t^{\vartheta} \varphi(t)\,dt = 0$, $\vartheta = \overline{1,L}$, and $f \in C^L$ ($C^L$ is the space of functions continuously differentiable $L$ times), then for $t$ near $2^m k$ [38]:

$$c_{-m,k} = \left\langle f, \varphi_{-m,k} \right\rangle \cong 2^{-m/2} f\left(2^m k\right). \tag{6}$$

It follows from (6) that the component $f_{-m} \in V_{-m}$ approximates $f$ with resolution $2^m$ (it approximates the trend). The detailing component $g_j$ has resolution $2^{-j}$ and approximates the local features of scale $j$. Figure 1 shows the amplitude–frequency characteristics (AFC) of the scaling function (solid line) and the wavelet (dashed line) for different $m$, obtained for the 3rd-order Daubechies wavelet.

**Figure 1.** AFC of the scaling function and the wavelet for *<sup>m</sup>* = 1, 2, 3, 4 obtained for the 3rd-order Daubechies wavelet.

Thus, we can obtain different representations of $f_0$ in the form (5) for different $m$. Obviously, it is necessary to determine the expansion level $m^r$ for which the component $f_{-m^r}$ is regular. It is natural to assume that the component $f_{-m}$ is regular if it is strictly stationary. In this case, the problem of determining the regular components reduces to obtaining a representation (5) for which the component $f_{-m}$ is strictly stationary. The stationarity of the component $f_{-m}$ allows us to identify an ARIMA model for it. Following the theory of Box and Jenkins [20], a time series is strictly stationary if its autocorrelation function (ACF) decays rapidly at medium and large lags. To determine the model type (AR, MA, ARMA) and order, the ACF and partial ACF (PACF) are studied [20]. Taking into account that the resolution of $f$ decreases as $m$ increases, we define $m^r$ sequentially:
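A rough illustration of such a sequential stationarity check in pure Python (the ACF-decay tolerance and lag window below are hypothetical heuristics and do not reproduce the paper's Algorithm 1):

```python
def sample_acf(x, max_lag):
    """Sample autocorrelation function of a series x for lags 1..max_lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    acf = []
    for lag in range(1, max_lag + 1):
        cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
        acf.append(cov / var)
    return acf

def looks_stationary(x, max_lag=20, tol=0.3):
    """Heuristic stand-in for the Box-Jenkins check: the ACF of a
    stationary series should damp at medium and large lags."""
    acf = sample_acf(x, max_lag)
    tail = acf[max_lag // 2:]  # medium and large lags
    return all(abs(r) < tol for r in tail)

import random
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(200)]  # short-memory series: passes
trend = [float(t) for t in range(200)]            # non-stationary trend: fails
assert looks_stationary(noise)
assert not looks_stationary(trend)
```

In such a scheme, $m$ would be increased until the smoothed component passes the check; in the experiments below, the authors report that level $m^r = 3$ yields stationary components.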

The components *f*−*m<sup>r</sup>* and *gjr* obtained on the basis of Algorithm 1 describe the regular changes of the time series. Then from (1) and (5), we have the representation:

$$f_0(t) = \sum_{\mu=\overline{1,T}} \alpha_{\mu}(t) + U(t) + e(t) = f_{-m^r}(t) + \sum_{j^r} g_{j^r}(t) + \sum_{j \in P_j} g_j(t), \tag{7}$$

where $A^{REG}(t) = \sum_{\mu=\overline{1,T}} \alpha_{\mu}(t) = f_{-m^r}(t) + \sum_{j^r} g_{j^r}(t)$, and we assume that $f_{-m^r}(t) = \alpha_1(t)$ and $g_{j^r}(t) = \alpha_{\mu}(t)$, $\mu = \overline{2,T}$, where $T$ is the number of regular components; $P_j = \left\{ j = \overline{-1, -(m^r - 1)} : j \neq j^r \right\}$.

#### **Algorithm 1:**


#### *2.3. Estimation of the Parameters for the Model Regular Component*

The components $f_{-m^r}$ and $g_{j^r}$ are strictly stationary; thus, we can estimate ARIMA models of order $(p, \nu, h)$ for them [20]. Then, for the component $f_{-m^r}(t) = \sum_k c_{-m^r,k} \varphi_{-m^r,k}(t)$ (for brevity, we omit the index $r$), we obtain

$$\omega_{-m,k} = \gamma_1^m \omega_{-m,k-1} + \dots + \gamma_p^m \omega_{-m,k-p} - \theta_1^m a_{-m,k-1} - \dots - \theta_h^m a_{-m,k-h}, \tag{8}$$

where $\omega_{-m,k} = \nabla^{\nu} c_{-m,k}$ and $\nabla^{\nu}$ is the difference operator of order $\nu$; $p$ and $\gamma_1^m, \dots, \gamma_p^m$ are the order and parameters of the autoregression, respectively; $h$ and $\theta_1^m, \dots, \theta_h^m$ are the order and parameters of the moving average, respectively; $a_{-m,k}$ are the residual errors.

In a similar way, for the component $g_{j^r}(t) = \sum_k d_{j^r,k} \Psi_{j^r,k}(t)$, we omit the index $r$ and obtain

$$\omega_{j,k} = \gamma_1^j \omega_{j,k-1} + \dots + \gamma_z^j \omega_{j,k-z} - \theta_1^j a_{j,k-1} - \dots - \theta_u^j a_{j,k-u}, \tag{9}$$

where $\omega_{j,k} = \nabla^{\nu_j} d_{j,k}$ and $\nabla^{\nu_j}$ is the difference operator of order $\nu_j$; $z$ and $\gamma_1^j, \dots, \gamma_z^j$ are the order and parameters of the autoregression, respectively; $u$ and $\theta_1^j, \dots, \theta_u^j$ are the order and parameters of the moving average, respectively; $a_{j,k}$ are the residual errors.

From (7) to (9) we obtain the representation:

$$A^{REG}(t) = \sum_{\mu=\overline{1,T}} \sum_{k=\overline{1,N^{\mu}}} s_{j,k}^{\mu} b_{j,k}^{\mu}(t), \tag{10}$$

where $s_{j,k}^{\mu} = \sum_{l=1}^{p^{\mu}} \gamma_l^{\mu} \omega_{j,k-l}^{\mu} - \sum_{n=1}^{h^{\mu}} \theta_n^{\mu} a_{j,k-n}^{\mu}$ is the estimated value of the $\mu$-th regular component; $p^{\mu}$, $\gamma_l^{\mu}$ are the order and parameters of the autoregression of the $\mu$-th component; $h^{\mu}$, $\theta_n^{\mu}$ are the order and parameters of the moving average of the $\mu$-th component; $\omega_{j,k}^{\mu} = \nabla^{\nu_{\mu}} \delta_{j,k}^{\mu}$, where $\nu_{\mu}$ is the difference order of the $\mu$-th component, $\delta_{j,k}^{1} = c_{-m,k}$ and $\delta_{j,k}^{\mu} = d_{j,k}$, $\mu = \overline{2,T}$; $T$ is the number of modeled components; $a_{j,k}^{\mu}$ are the residual errors of the $\mu$-th component model; $N^{\mu}$ is the $\mu$-th component length; $b_{j,k}^{1} = \varphi_{-m,k}$, where $\varphi$ is the scaling function; and $b_{j,k}^{\mu} = \Psi_{j,k}$, $\mu = \overline{2,T}$, where $\Psi$ is the wavelet.
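Relation (10) is a plain ARMA-type recursion on the differenced coefficient series of each component. A minimal sketch of computing one value $s_{j,k}^{\mu}$ (the orders and parameter values below are hypothetical):

```python
def predict_s(gamma, theta, omega_hist, a_hist):
    """One estimated value s_{j,k} from (10):
    sum_l gamma_l * omega_{j,k-l}  -  sum_n theta_n * a_{j,k-n}.
    omega_hist[0] is omega_{j,k-1}, omega_hist[1] is omega_{j,k-2}, ...;
    a_hist is ordered the same way."""
    ar = sum(g * w for g, w in zip(gamma, omega_hist))
    ma = sum(th * a for th, a in zip(theta, a_hist))
    return ar - ma

# hypothetical AR(2)/MA(1) parameters for one component
gamma = [-0.6, -0.6]
theta = [0.2]
omega_hist = [0.5, -0.3]  # omega_{j,k-1}, omega_{j,k-2}
a_hist = [0.1]            # a_{j,k-1}
s = predict_s(gamma, theta, omega_hist, a_hist)
# s = (-0.6 * 0.5) + (-0.6 * -0.3) - 0.2 * 0.1 = -0.14
assert abs(s - (-0.14)) < 1e-12
```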

The identification of the ARIMA model for the $\mu$-th component requires the determination of the difference order $\nu_{\mu}$ and the identification of the resulting ARMA process (model order and parameter estimation). The ARIMA model identification is described in detail in [20] and is not presented in this paper.

The diagnostic verification of the models of the components $f_{-m^r}$ and $g_{j^r}$ can be based on the analysis of the model residual errors. Commonly used tests of this kind are the cumulative fitting criterion [20] and the cumulative periodogram test [20].

#### *2.4. Anomalous Component of the Model*

The anomalous component $U(t)$ of model (1) includes local features of various shapes occurring at random times. Therefore, a parametric approach to its identification is ineffective.

#### 2.4.1. Application of Threshold Functions


In the case of a nonparametric approach, following the results of [37], the function *U* can be approximated by threshold functions:

$$U(t) = \sum_{j,k} P_j(d_{j,k}) \Psi_{j,k}(t), \tag{11}$$

$$P_j(d_{j,k}) = \begin{cases} 0, & \text{if } \left| d_{j,k} \right| \le T_j, \\ d_{j,k}, & \text{if } \left| d_{j,k} \right| > T_j, \end{cases}$$

where $T_j$ are the thresholds.
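Operation (11) is a hard-threshold mapping of the wavelet coefficients. A minimal sketch (the coefficient values and the threshold below are illustrative):

```python
def hard_threshold(d, T):
    """P_j from (11): keep a wavelet coefficient only if its
    magnitude exceeds the threshold T_j; zero it otherwise."""
    return d if abs(d) > T else 0.0

coeffs = [0.1, -2.5, 0.4, 3.1, -0.2]  # detail coefficients d_{j,k}
T_j = 1.0                              # illustrative threshold
anomalous = [hard_threshold(d, T_j) for d in coeffs]
assert anomalous == [0.0, -2.5, 0.0, 3.1, 0.0]
```

The surviving coefficients carry the large-amplitude local features and, via (11), form the anomalous component $U$.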

In this case, from (7) and (10), we obtain the *hybrid model of time series (HMTS)*

$$f_0(t) = A^{REG}(t) + U(t) + e(t) = \sum_{\mu=\overline{1,T}} \sum_{k=\overline{1,N^{\mu}}} s_{j,k}^{\mu} b_{j,k}^{\mu}(t) + \sum_{j,k} P_j(d_{j,k}) \Psi_{j,k}(t) + e(t). \tag{12}$$

It was shown in [37] that mappings of the form (11) allow us to obtain approximations close to optimal (in the sense of minimizing the minimax risk) for a function of complex structure. Moreover, the equivalence of discrete and continuous wavelet expansions [36,38] provides the opportunity to analyze a function at any resolution. In turn, the increase in the amplitudes of the wavelet coefficients $d_{j,k}$ in the vicinity of local features of a function (Jaffard's theorem [39]) ensures, based on (11), their mapping into the component $U$ of model (12).

Obviously, by applying different orthogonal wavelets Ψ we can obtain different representations (12).

We should note that, due to the random nature of $U$, the application of any thresholds $T_j$ (see (11)) is inevitably associated with erroneous decisions. In this case, the thresholds can be chosen by minimizing the a posteriori risk [40].

The threshold divides the value space $F$ of the function under analysis into two non-intersecting domains $F_1$ and $F_2$, determining the anomalous and non-anomalous states, respectively. For a specific state $h_b$, the average loss can be estimated as [40]

$$R_b(f) = \sum_{z=1}^{2} \Pi_{bz} P\{f \in F_z \mid h_b\}, \tag{13}$$

where $\Pi_{bz}$ is the loss function, $P\{f \in F_z \mid h_b\}$ is the conditional probability of falling within the domain $F_z$ when the state $h_b$ actually exists, and $b, z = 1, 2$ are the state indices ("$\mid$" denotes conditioning).

Averaging the conditional risk function over all the states $h_b$, we obtain the average risk

$$R = \sum_{b=1}^{2} p_b R_b, \tag{14}$$

where $p_b$ is the a priori probability of the state $h_b$.

If the a priori probabilities $p_b$ of the states are unknown, then, given statistical (a priori) data, we can determine the a posteriori probabilities $P\{h_b \mid f\}$, $b = 1, 2$. Then, applying the simple loss function

$$\Pi_{bz} = \begin{cases} 1, & b \neq z, \\ 0, & b = z, \end{cases}$$

from (13) and (14), the a posteriori risk equals

$$R = \sum\_{b \neq z} P\{h\_b | f \in F\_z\}.\tag{15}$$

2.4.2. Analysis of the Model's Regular Component Errors and Detection of Anomalies

Obviously, during anomalous periods, the residual errors of the model regular component $A^{REG}$ (see (10)) increase. Then, *anomaly detection* can be based on the condition

$$\varepsilon_j^{\mu} = \sum_{q=1}^{Q_{\mu}} \left| a_{j,k+q}^{\mu} \right| > H_{\mu},$$

where $q \ge 1$ is the data lead step, $a_{j,k+q}^{\mu}$ are the residual errors of the $\mu$-th component model, and $Q_{\mu}$ is the data lead length.
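The detection condition compares the accumulated absolute residual errors over the lead interval with the threshold $H_{\mu}$. A minimal sketch (the residual values and the threshold below are illustrative):

```python
def detect_anomaly(residuals, k, Q, H):
    """Condition from Section 2.4.2: flag an anomaly when the sum of
    absolute residual errors a_{j,k+q}, q = 1..Q, exceeds H."""
    eps = sum(abs(residuals[k + q]) for q in range(1, Q + 1))
    return eps > H

a = [0.1, 0.2, 0.1, 1.5, 1.8, 0.2, 0.1]  # residual errors of one component
H = 2.0                                   # hypothetical threshold H_mu
assert detect_anomaly(a, k=2, Q=2, H=H)      # |1.5| + |1.8| = 3.3 > 2.0
assert not detect_anomaly(a, k=4, Q=2, H=H)  # |0.2| + |0.1| = 0.3 <= 2.0
```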

Since we can estimate the confidence interval of the predicted data [20], it is logical to define the thresholds $H_{\mu}$ as

$$H\_{\mu}\left(Q\_{\mu}\right) = \left\{ 1 + \sum\_{q=1}^{Q\_{\mu}-1} \left(\psi\_{q}^{\mu}\right)^{2} \right\}^{1/2} \sigma\_{a^{\mu}}$$

where $\sigma_{a^{\mu}}^2$ is the variance of the residual errors of the $\mu$-th component model, and $\psi_q^{\mu}$ are the weighting coefficients of the $\mu$-th component model, determined from the equation [20]

$$\left(1 - \phi_1^{\mu} B - \phi_2^{\mu} B^2 - \dots - \phi_{p^{\mu}+\nu_{\mu}}^{\mu} B^{p^{\mu}+\nu_{\mu}}\right)\left(1 + \psi_1^{\mu} B + \psi_2^{\mu} B^2 + \dots\right) = 1 - \theta_1^{\mu} B - \theta_2^{\mu} B^2 - \dots - \theta_{h^{\mu}}^{\mu} B^{h^{\mu}},$$

where *ϕ<sup>μ</sup> <sup>j</sup>* <sup>=</sup> *<sup>γ</sup>μ*(*B*)(<sup>1</sup> <sup>−</sup> *<sup>B</sup>*) *νμ* is the generalized autoregressive operator, *B* is the back shift operator: *B<sup>l</sup> ωμ <sup>j</sup>*,*<sup>k</sup>* <sup>=</sup> *<sup>ω</sup><sup>μ</sup> <sup>j</sup>*,*k*−*<sup>l</sup>* .

It is also possible to use the following probability limits:

$$H_{\mu}(Q_{\mu}) = u_{\varepsilon/2} \left\{ 1 + \sum_{q=1}^{Q_{\mu}-1} \left(\psi_q^{\mu}\right)^2 \right\}^{1/2} \sigma_{a^{\mu}},$$

where $u_{\varepsilon/2}$ is the quantile of level $(1 - \varepsilon/2)$ of the standard normal distribution.
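The $\psi$-weights solve the polynomial identity above (they are the coefficients of the power series $\theta(B)/\phi(B)$); together with $\sigma_{a^{\mu}}$ they give the thresholds $H_{\mu}$. A sketch with hypothetical first-order operators:

```python
def psi_weights(phi, theta, Q):
    """Coefficients psi_1..psi_{Q-1} of theta(B)/phi(B), where
    phi(B) = 1 - sum phi_i B^i and theta(B) = 1 - sum theta_i B^i,
    obtained by matching powers of B in phi(B) * psi(B) = theta(B)."""
    psi = [1.0]  # psi_0 = 1
    for q in range(1, Q):
        val = sum(phi[i - 1] * psi[q - i] for i in range(1, min(q, len(phi)) + 1))
        if q <= len(theta):
            val -= theta[q - 1]
        psi.append(val)
    return psi[1:]

def threshold_H(phi, theta, Q, sigma_a):
    """H_mu(Q_mu) = {1 + sum_{q=1}^{Q-1} psi_q^2}^{1/2} * sigma_a."""
    s = 1.0 + sum(p * p for p in psi_weights(phi, theta, Q))
    return s ** 0.5 * sigma_a

# hypothetical generalized AR operator (1 - 0.5B) and MA operator (1 - 0.2B)
H = threshold_H(phi=[0.5], theta=[0.2], Q=3, sigma_a=1.0)
# psi_1 = 0.5 - 0.2 = 0.3, psi_2 = 0.5 * 0.3 = 0.15
assert abs(H - (1 + 0.3 ** 2 + 0.15 ** 2) ** 0.5) < 1e-9
```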

#### **3. Results of the Model Application**

*3.1. Modeling of Ionospheric Parameter Time Series*

The ionosphere is the upper region of the Earth's atmosphere. It is located at heights from 70 to 1000 km and higher and affects radio wave propagation [41]. Ionospheric anomalies occur during extreme solar events (solar flares and particle ejections) and magnetic storms. They cause serious malfunctions in the operation of modern ground and space technical equipment [42]. An important parameter characterizing the state of the ionosphere is the critical frequency of the ionospheric F2 layer (foF2). The foF2 time series have a complex structure and contain seasonal and diurnal components, as well as local features of various shapes and durations occurring during ionospheric anomalies. Intense ionospheric anomalies can cause failures in the operation of technical systems. Therefore, their timely detection is an important applied problem.

In the experiments, we used hourly (1969–2019) and 15-min (2015–2019) foF2 data obtained by vertical radio sounding of the ionosphere at Paratunka station (53.0° N, 158.7° E, Kamchatka, Russia, IKIR FEB RAS). The proposed HMTS was identified separately for the foF2 hourly and 15-min data.

To identify the HMTS regular components, we used the foF2 data recorded during periods with no ionospheric anomalies. The application of Algorithm 1 showed that the components $f_{-3}$ and $g_{-3}$ are stationary (having a damping ACF); thus, ARIMA models can be identified for them. Figures 2 and 3 show the ACF and PACF of the initial foF2 time series, as well as of the components $f_{-3}$ and $g_{-3}$. The results confirm the stationarity of the components $f_{-3}$ and $g_{-3}$. An analysis of the PACF shows the possibility of identifying AR models of orders 2 and 3 for the first differences of these components. The results in Figures 2 and 3 also illustrate that the initial foF2 time series is non-stationary and, therefore, cannot be approximated by an ARIMA model without the wavelet decomposition operation.

**Figure 2.** The analyzed period is from 4 January 2014 to 29 January 2014 (high solar activity): (**a**) ACF of the original signal; (**b**) ACF of the component *f*−3; (**c**) ACF of the component *g*−3; (**d**) PACF of the 1st difference of the original signal; (**e**) PACF of the 1st difference of the component *f*−3; (**f**) PACF of the 1st difference of the component *g*−3.

**Figure 3.** The analyzed period is from 9 February 2008 to 27 February 2008 (low solar activity): (**a**) ACF of the original signal; (**b**) ACF of the component *f*−3; (**c**) ACF of the component *g*−3; (**d**) PACF of the 1st difference of the original signal; (**e**) PACF of the 1st difference of the component *f*−3; (**f**) PACF of the 1st difference of the component *g*−3.

According to relation (10) and based on the PACF of the first differences of the components $f_{-3}$ and $g_{-3}$ (Figure 3e,f), we obtain the HMTS regular component

$$A^{REG}(t) = f_{-3}(t) + g_{-3}(t) = \sum_{k} c_{-3,k} \varphi_{-3,k}(t) + \sum_{k} d_{-3,k} \Psi_{-3,k}(t) = \sum_{\mu=1,2} \sum_{k=\overline{1,N^{\mu}}} s_{-3,k}^{\mu} b_{-3,k}^{\mu}(t),$$

where $s_{-3,k}^{\mu} = \sum_{l=1}^{p^{\mu}} \gamma_l^{\mu} \omega_{-3,k-l}^{\mu}$, $\mu = 1, 2$; $\omega_{-3,k}^{1} = \nabla c_{-3,k}$, $\omega_{-3,k}^{2} = \nabla d_{-3,k}$; $b_{-3,k}^{1} = \varphi_{-3,k}$, $b_{-3,k}^{2} = \Psi_{-3,k}$. The estimated parameters for $s_{-3,k}^{1}$ and $s_{-3,k}^{2}$ are presented in Table 1. The parameters were estimated separately for different seasons and different levels of solar activity.

**Table 1.** HMTS regular component parameters.


Based on the data from Table 1 we obtain

(1) for wintertime:

$$s_{-3,k}^{1} = -0.6\omega_{-3,k-1}^{1} - 0.6\omega_{-3,k-2}^{1} + 0.4\omega_{-3,k-3}^{1} + a_{-3,k}^{1}, \quad s_{-3,k}^{2} = -0.9\omega_{-3,k-1}^{2} - 0.9\omega_{-3,k-2}^{2} + a_{-3,k}^{2};$$

(2) for summertime and high solar activity:

$$s_{-3,k}^{1} = -0.5\omega_{-3,k-1}^{1} - 0.6\omega_{-3,k-2}^{1} + a_{-3,k}^{1}, \quad s_{-3,k}^{2} = -0.9\omega_{-3,k-1}^{2} - 0.8\omega_{-3,k-2}^{2} + a_{-3,k}^{2};$$

(3) for summertime and low solar activity:

$$s_{-3,k}^{1} = -0.8\omega_{-3,k-1}^{1} - 0.7\omega_{-3,k-2}^{1} + a_{-3,k}^{1}, \quad s_{-3,k}^{2} = -0.9\omega_{-3,k-1}^{2} - 0.9\omega_{-3,k-2}^{2} + a_{-3,k}^{2}.$$

Figure 4 shows the modeling results for the HMTS regular components ($f_{-3}$ and $g_{-3}$) during the absence of ionospheric anomalies. The model errors do not exceed the confidence interval, which indicates the adequacy of the models.

**Figure 4.** Modeling of the components $f_{-3}$ and $g_{-3}$: (**a**) foF2 data (8 February 2011–12 February 2011); (**b**) component $f_{-3}$ (black) and its model values $s_{-3,k}^{1}$ (blue dashed line); (**c**) component $g_{-3}$ (black) and its model values $s_{-3,k}^{2}$ (blue dashed line); (**d**) errors of $s_{-3,k}^{1}$; (**e**) errors of $s_{-3,k}^{2}$. On graphs (**d**,**e**), the dashed lines show 70% confidence intervals.

Tables 2 and 3, and Figure 5 show the results of validation tests for the obtained models. The tests were carried out for the foF2 data that were not used at the stage of model identification. In order to verify the models, we used the cumulative fitting criterion (Tables 2 and 3), analysis of model residual error ACF (Figure 5a,b) and normalized cumulative periodogram (Figure 5c,d).

**Table 2.** Cumulative fitting criterion for the winter season. (Columns: periods; $Y$ for $s_{-3}^{1}$; table value $\chi_{0.1}^{2}/\chi_{0.05}^{2}$; $Y$ for $s_{-3}^{2}$; table value $\chi_{0.1}^{2}/\chi_{0.05}^{2}$.)

**Table 3.** Cumulative fitting criterion for the summer season. (Columns as in Table 2.)

**Figure 5.** Results of model verification. ACF of the residual errors: (**a**) $a_{-3,k}^{1}$; (**b**) $a_{-3,k}^{2}$; cumulative periodogram of the residual errors: (**c**) $a_{-3,k}^{1}$; (**d**) $a_{-3,k}^{2}$.

Based on the cumulative fitting criterion [20], the fitted model is satisfactory if

$$\mathcal{Y} = n \sum\_{z=1}^{Z} y\_z^2(a)$$

is distributed approximately as $\chi^2(Z - p - h)$, where $Z$ is the number of first autocorrelations of the model errors considered, $p$ is the AR model order, $h$ is the MA model order, $y_z(a)$ are the autocorrelations of the model error series, $n = N - \nu$, $N$ is the series length, and $\nu$ is the model difference order.
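The $Y$ statistic above is a portmanteau quantity of the Box-Pierce type. A minimal sketch (with simulated stand-in residuals, not the paper's data):

```python
# Sketch of the cumulative fitting criterion: Y = n * sum of squared
# residual autocorrelations, to be compared with a chi-squared table value.
import random

def fit_criterion_Y(residuals, Z):
    """Y = n * sum_{z=1}^{Z} y_z(a)^2, where y_z(a) are the sample
    autocorrelations of the model residual errors."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((a - mean) ** 2 for a in residuals)
    total = 0.0
    for z in range(1, Z + 1):
        y_z = sum((residuals[t] - mean) * (residuals[t + z] - mean)
                  for t in range(n - z)) / var
        total += y_z ** 2
    return n * total

random.seed(1)
resid = [random.gauss(0, 1) for _ in range(300)]  # stand-in residual errors
Y = fit_criterion_Y(resid, Z=20)
# For an adequate model, Y should stay below the chi-squared(Z - p - h) table value.
```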

According to the criterion, if the model is inadequate, the average *Y* grows. Consequently, the model adequacy can be verified by comparing *Y* with the table of the *χ*<sup>2</sup> distribution. The results in Tables 2 and 3 show that the *Y* values of the estimated models, at a significance level *α* = 0.05, do not exceed the table *χ*<sup>2</sup> values. The model adequacy is also confirmed by the analysis of the residual error ACF (Figure 5a,b) and the normalized cumulative periodogram (Figure 5c,d).
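As a concrete illustration, the portmanteau statistic *Y* defined above can be computed directly from a residual series. The following is a minimal sketch in Python; the function name and the SciPy dependency are our own choices for illustration, not part of the paper:

```python
import numpy as np
from scipy import stats

def portmanteau_Y(residuals, Z, p, h, nu=0):
    """Cumulative fitting criterion Y = n * sum_{z=1}^{Z} y_z(a)^2.

    residuals -- model error series a_k
    Z         -- number of first autocorrelations considered
    p, h      -- AR and MA orders of the model
    nu        -- difference order of the model (n = N - nu)
    Returns (Y, chi-square table value at the 0.05 significance level).
    """
    a = np.asarray(residuals, dtype=float)
    n = len(a) - nu
    a = a - a.mean()
    denom = np.sum(a ** 2)
    # sample autocorrelations y_z(a) for lags z = 1..Z
    y = np.array([np.sum(a[z:] * a[:-z]) / denom for z in range(1, Z + 1)])
    Y = n * np.sum(y ** 2)
    chi2_05 = stats.chi2.ppf(0.95, df=Z - p - h)
    return Y, chi2_05

# strongly autocorrelated residuals fail the test: Y far exceeds the table value
Y, chi2_05 = portmanteau_Y(np.tile([1.0, -1.0], 250), Z=20, p=1, h=1)
```

For adequate models the residuals behave like white noise, so the sample autocorrelations are small and *Y* stays below the *χ*<sup>2</sup> table value.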

Figure 6a,b shows the results of modeling the hourly foF2 data during the magnetic storm on 18 and 19 December 2019. Figure 6c shows the geomagnetic activity index K (K-index), which characterizes geomagnetic disturbance intensity. The K-index takes values from 0 to 9, estimated over three-hour intervals. It is known that during increased geomagnetic activity (K > 3), anomalous changes are observed in ionospheric parameters [43]. The analysis of the results in Figure 6 shows an increase in the model errors during the rise of the K-index and the occurrence of the magnetic storm (Figure 6b). This indicates ionospheric anomaly occurrences. The results show that the HMTS allows us to detect ionospheric anomalies successfully.

**Figure 6.** Modeling of foF2 data for the period from 17 December 2019 to 24 December 2019: (**a**) foF2 data (black), HMTS (blue); (**b**) errors of *s*<sup>1</sup><sub>−3,*k*</sub> (black) and *s*<sup>2</sup><sub>−3,*k*</sub> (green), dashed lines show 70% confidence intervals; (**c**) K-index values.

Figure 7 shows the results of the application of operation (11) to 15-min foF2 data during the same magnetic storm. Based on operation (11), ionospheric anomaly occurrences are determined by the threshold function *P<sub>j</sub>*(*d<sub>j,k</sub>*) with the thresholds *T<sub>j</sub>*.

In this paper, we used the thresholds

$$T_j = V \cdot \sqrt{\frac{1}{\Phi - 1} \sum_{k=1}^{\Phi} \left( d_{j,k} - \overline{d_{j,k}} \right)^2}$$

where the coefficient *V* = 2.3 was estimated by minimizing the a posteriori risk (ratio (15)), and the mean of *d<sub>j,k</sub>* is calculated in a moving time window of length Φ = 480, which corresponds to an interval of 5 days.
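The moving-window threshold can be sketched as follows. This is a minimal illustration: the function name and window alignment are our own choices; the 1/(Φ − 1) normalization follows the formula above, but the exact windowing convention used in the paper may differ:

```python
import numpy as np

def moving_threshold(d, Phi=480, V=2.3):
    """Thresholds T_j = V * sqrt(1/(Phi-1) * sum_k (d_{j,k} - mean)^2)
    over a moving window of length Phi (480 samples of 15-min data,
    i.e. 5 days).  Returns (T, window means), one value per window
    position."""
    d = np.asarray(d, dtype=float)
    m = len(d) - Phi + 1
    T = np.empty(m)
    mu = np.empty(m)
    for i in range(m):
        w = d[i : i + Phi]
        mu[i] = w.mean()
        T[i] = V * w.std(ddof=1)  # ddof=1 gives the 1/(Phi-1) normalization
    return T, mu
```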

**Figure 7.** Modeling of foF2 data for the period from 17 December 2019 to 22 December 2019: (**a**) 15-min foF2 data; (**b**) positive (red) and negative (blue) ionospheric anomalies; (**c**) ionospheric anomaly intensity; (**d**) K-index values.

Positive (*P<sub>j</sub>*(*d<sub>j,k</sub>*) > 0) and negative (*P<sub>j</sub>*(*d<sub>j,k</sub>*) < 0) anomalies were considered separately. Positive anomalies (shown in red in Figure 7b) characterize an anomalous increase in foF2 values. Negative anomalies (shown in blue in Figure 7b) characterize an anomalous decrease in foF2 values. To evaluate the intensity of ionospheric anomalies, we used the value

$$I_k = \sum_{j} P_j\left( d_{j,k} \right).$$

Assessment of the intensity of positive *I*<sup>+</sup><sub>*k*</sub> (*P<sub>j</sub>*(*d<sub>j,k</sub>*) > 0) and negative *I*<sup>−</sup><sub>*k*</sub> (*P<sub>j</sub>*(*d<sub>j,k</sub>*) < 0) ionospheric anomalies is shown in Figure 7c: positive anomalies are shown in red, negative ones in blue. Figure 7d shows the K-index values. The results show the occurrence of a negative ionospheric anomaly during the initial and main phases of the magnetic storm (18 December 2019), and a positive ionospheric anomaly during the recovery phase of the storm (19 December 2019). The observed dynamics of the ionospheric parameters are characteristic of magnetic storm periods [43]. The results show the efficiency of HMTS for detecting ionospheric anomalies of different intensities.
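The thresholding and intensity computation can be sketched together. Note that the exact form of the threshold function in operation (11) is defined in the paper; keeping the signed deviation above the threshold and zeroing the rest is an assumption of this sketch, as are the function names:

```python
import numpy as np

def threshold_P(d, T, d_mean):
    """Threshold function P_j (assumed form): keep the signed deviation
    of d_{j,k} from the window mean where it exceeds T_j, zero otherwise."""
    dev = np.asarray(d, dtype=float) - d_mean
    return np.where(np.abs(dev) > T, dev, 0.0)

def intensity(P_levels):
    """Anomaly intensity I_k = sum_j P_j(d_{j,k}); the positive and
    negative parts give I_k^+ and I_k^-."""
    I = np.sum(P_levels, axis=0)
    return np.where(I > 0, I, 0.0), np.where(I < 0, I, 0.0)
```

Summing the thresholded coefficients over the decomposition levels *j* accumulates evidence for an anomaly at time *k*, with the sign separating positive and negative foF2 deviations.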

#### *3.2. Comparison of HMTS with NARX Neural Network*

To evaluate the HMTS efficiency, we compared it with the NARX neural network [44]. The NARX network is a non-linear autoregressive neural network, and it is often used to forecast time series [44–47]. The architectural structure of recurrent neural networks can take different forms. There are NARX with a Series-Parallel Architecture (NARX SPA) and NARX with a Parallel Architecture (NARX PA) [44,45].

The dynamics of the NARX SPA model is described by the equation

$$y(k+1) = F\left[x(k), x(k-1), \ldots, x(k-l_x), y(k), y(k-1), \ldots, y(k-l_y)\right],$$

where *F*(·) is the neural network mapping function, *y*(*k* + 1) is the neural network output, *x*(*k*), *x*(*k* − 1), ..., *x*(*k* − *l<sub>x</sub>*) are the neural network inputs, and *y*(*k*), *y*(*k* − 1), ..., *y*(*k* − *l<sub>y</sub>*) are past values of the time series.

In NARX PA, the network input takes the network outputs *ŷ<sub>i</sub>* = *ŷ*(*i*) instead of the past values of the time series *y<sub>i</sub>* = *y*(*i*), *i* = *k*, ..., *k* − *l<sub>y</sub>*.
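The one-step NARX SPA equation above can be sketched as a small feedforward network over lagged inputs. This is an illustrative sketch only: the class name, hidden-layer size, and tanh activation are our assumptions, and the weights are random placeholders where a real network would be trained:

```python
import numpy as np

rng = np.random.default_rng(1)

class NarxSPA:
    """Minimal one-step NARX series-parallel predictor:
    y(k+1) = F[x(k)..x(k-lx), y(k)..y(k-ly)] with one tanh hidden layer.
    Weights are random placeholders standing in for a trained network."""

    def __init__(self, lx, ly, hidden=8):
        n_in = (lx + 1) + (ly + 1)  # x(k)..x(k-lx) and y(k)..y(k-ly)
        self.W1 = 0.1 * rng.standard_normal((hidden, n_in))
        self.b1 = np.zeros(hidden)
        self.W2 = 0.1 * rng.standard_normal(hidden)
        self.b2 = 0.0

    def predict(self, x_lags, y_lags):
        # y_lags are PAST MEASURED values (series-parallel); a parallel
        # (PA) network would feed back its own predictions here instead.
        z = np.concatenate([x_lags, y_lags])
        return float(self.W2 @ np.tanh(self.W1 @ z + self.b1) + self.b2)

net = NarxSPA(lx=2, ly=2)
y_next = net.predict(np.ones(3), np.ones(3))
```

The `predict` signature makes the SPA/PA distinction concrete: swapping the measured `y_lags` for the network's own previous outputs turns this series-parallel sketch into a parallel one.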

The neural networks were trained separately for different seasons and different levels of solar activity. During the training, we used the data for periods without ionospheric anomalies. We obtained networks with delays *l<sub>x</sub>* = *l<sub>y</sub>* = 2 and *l<sub>x</sub>* = *l<sub>y</sub>* = 5 for each season. The results of the networks are shown in Figure 8. Table 4 shows the standard deviations of the network errors (SD), which were determined as

$$SD = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2},$$

where *y<sub>i</sub>* are the recorded data and *ŷ<sub>i</sub>* are the network outputs.

**Figure 8.** Network errors: (**a**,**e**) foF2 data (blue), NARX PA output (black); (**b**,**f**) NARX PA errors; (**c**,**g**) foF2 data (blue), NARX SPA output (black); (**d**,**h**) NARX SPA errors.


**Table 4.** Standard deviations of neural network errors.

The analysis of the results (Figure 8, Table 4) shows that the NARX SPA predicts the data with smaller errors than the NARX PA. Sending the past time series values (rather than the network outputs) to the NARX SPA network input made it possible to obtain a more accurate data prediction. The comparison results of the NARX SPA with the HMTS are presented below.

Figure 9 shows the results of ionospheric data modeling based on HMTS and NARX SPA during the periods of absence of ionospheric anomalies. The results show that the model errors have similar values for the winter and summer seasons, and vary within the interval of [−1,1], both for HMTS and NARX SPA.

**Figure 9.** Errors of HMTS and NARX SPA for the summer (from 6 June 2019 to 16 June 2019) and winter (from 15 February 2019 to 23 February 2019) seasons: (**a**,**e**) foF2 data; (**b**,**f**) HMTS errors; (**c**,**g**) NARX SPA errors (network delays *l<sub>x</sub>* = *l<sub>y</sub>* = 2); (**d**,**h**) NARX SPA errors (network delays *l<sub>x</sub>* = *l<sub>y</sub>* = 5).

Figure 10 shows the results of the application of HMTS and NARX SPA to hourly foF2 data during the magnetic storms that occurred on 21–22 November 2017 and 5–6 August 2019. NARX SPA errors were calculated in a 3-h moving time window: *ε<sub>i</sub>* = ∑<sub>*m*=*i*−1</sub><sup>*i*+1</sup> |*y<sub>m</sub>* − *ŷ<sub>m</sub>*|. Figure 10e,j shows the geomagnetic activity Dst-index, which characterizes geomagnetic disturbance intensity during magnetic storms; the Dst-index takes negative values during magnetic storms. The increases in HMTS and NARX SPA errors during the analyzed magnetic storms (Figure 10b–d,g–i) indicate ionospheric anomaly occurrences. The results show that both HMTS and NARX SPA detect ionospheric anomalies successfully. However, an increase in NARX SPA errors is also observed in wintertime on the eve of and after the magnetic storm (Figure 10c,d), which indicates the presence of false alarms.
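The 3-h moving-window error can be computed in a few lines. A minimal sketch, assuming hourly samples and summing over the interior points only (the function name is our own):

```python
import numpy as np

def window_error(y, y_hat):
    """NARX SPA error in a 3-h moving window of hourly data:
    eps_i = sum_{m=i-1}^{i+1} |y_m - yhat_m|, for interior points i."""
    e = np.abs(np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float))
    return np.array([e[i - 1] + e[i] + e[i + 1] for i in range(1, len(e) - 1)])
```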

**Figure 10.** Modeling of hourly foF2 data: (**a**,**f**) recorded foF2 data (black), foF2 median (blue); (**b**,**g**) errors of *s*<sup>1</sup><sub>−3,*k*</sub> (black) and *s*<sup>2</sup><sub>−3,*k*</sub> (green), dashed lines show 70% confidence intervals; (**c**,**h**) NARX SPA errors (network delays *l<sub>x</sub>* = *l<sub>y</sub>* = 2); (**d**,**i**) NARX SPA errors (network delays *l<sub>x</sub>* = *l<sub>y</sub>* = 5); (**e**,**j**) Dst-index of geomagnetic activity.

The results of detecting ionospheric anomalies based on HMTS and NARX SPA are shown in Tables 5–8. The estimates were based on statistical modeling. The HMTS results are shown for the 90% confidence interval. The analysis of the results shows that the NARX SPA efficiency exceeds that of HMTS during high solar activity. However, the frequency of false alarms for HMTS is significantly lower than that for NARX SPA.


**Table 5.** Results for wintertime and high solar activity.

**Table 6.** Results for wintertime and low solar activity.


**Table 7.** Results for summertime and high solar activity.


**Table 8.** Results for summertime and low solar activity.


#### **4. Conclusions**

The paper proposes a hybrid model of time series of complex structure. The model is based on the combination of function expansions in a wavelet series with ARIMA models. Ionospheric critical frequency data were used to estimate the HMTS efficiency. The estimates showed:


Comparison of HMTS with the NARX network with Series-Parallel Architecture confirmed the HMTS efficiency in detecting anomalies in the ionospheric critical frequency data. The experiments showed that the efficiency of the NARX neural network slightly exceeds that of HMTS (by about 2–3%) during high solar activity. However, the frequency of false alarms for NARX is significantly higher (by about 15%). During periods of low solar activity, the efficiency of HMTS exceeds that of NARX.

The HMTS can be used for modeling and analysis of time series of complex structure, including seasonal components and local features of various forms.

**Author Contributions:** Conceptualization, O.M.; methodology, O.M. and N.F.; software, N.F. and Y.P.; formal analysis, O.M., Y.P. and N.F.; project administration, O.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was carried out according to the Subject AAAA-A21-121011290003-0 "Physical processes in the system of near space and geospheres under solar and lithospheric influences" IKIR FEB RAS.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The work was carried out using the equipment of the Common Use Center "North-Eastern Heliogeophysical Center" (CKP\_558279, USU 351757). The authors are grateful to the institutes that support the ionospheric station data used in this work. We would like to thank the anonymous reviewers for their greatly appreciated efforts to improve the paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

