**1. Introduction**

The mutual influence of migratory processes in regional systems is a problem of growing significance in the modern world. The socioeconomic statuses of different regions become increasingly heterogeneous in response to rising political and military tension. These factors cause an abrupt redistribution of migration flows and variations in regional populations, thereby increasing the cost of regional population maintenance [1–4]. It is therefore important to develop tools (mathematical models, algorithms, and software) for forecasting the distribution of migration flows, with adaptation to their dynamics and consideration of the available resources.

The authors of [5] suggested a dynamic entropy model for the migratory interaction of regional systems. In comparison with biological reproduction, migration mobility is a rather fast process [1,6]. Thus, the short-term dynamics of regional population size are described by the locally stationary state of a migratory process [7]. The latter can be simulated under the hypothesis that all migrants have a random and independent spatial distribution over interacting regional systems with given prior probabilities. The mathematical model of a locally stationary state is given by a corresponding entropy operator that maps the space of admissible resources into the space of migratory processes [8].

Mathematical modeling and analysis of interregional migration is considered in numerous publications. First, it seems appropriate to mention the monographs [9,10], which are dedicated to a wide range of interregional migration problems, including mathematical modeling of migration flows. Note that the problem of migration touches upon many socioeconomic, psychological, and political aspects of the space of migratory movements. Thus, the structural analysis of inter- and intraregional migration flows [4] and of the motivations that generate them [2,11] plays a crucial role. The results of structural and motivational analysis of migratory processes are used for computer simulation. There exist three directions of research in this field, each relying on some system of hypotheses. One direction involves the stochastic hypothesis about the origin of migratory motivations [12], which is simulated using agent technologies [13,14]. This direction is adjoined by investigations based on the thermodynamic model of migration flows [3,8]. Of course, the short list above does not exhaust the whole variety of migration studies, merely outlining some topics of research.

This paper studies a stochastic version of the model in [5], in which random parameters and measurement noises are characterized by probability density functions (PDFs). These functions are estimated from retrospective information on the real dynamics of regional population size using "soft" randomized machine learning [15]. The learned model was implemented in the form of computer simulations, i.e., the generation of an ensemble of random trajectories with the entropy-optimal PDFs of the model parameters and measurement noises. The resulting ensemble was used for testing the model and for short-term forecasting.

The method developed below is illustrated by an example of the randomized modeling and forecasting of the migratory interaction among three EU countries (Germany, France, and Italy—the system GFI) and two countries as sources of immigration (Syria and Libya—the system SL).

## **2. Randomized Model of Migratory Interaction**

Consider the dynamic discrete-time model of migratory interaction with shared resource constraints that is presented in [5]. The first sub-model represents migration flows within the system GFI and is described by the dynamic regression equation

$$\mathbf{K}[(s+1)h] = (A - E)\mathbf{K}[sh] + \mathbf{F}(z[sh]), \qquad (\mathbf{K}, \mathbf{F}) \in R^N, \; s = \overline{0, K-1}, \tag{1}$$

where

$$A = h \begin{pmatrix} 1 & a_2 a_{21} & \cdots & a_N a_{N1} \\ a_1 a_{12} & 1 & \cdots & a_N a_{N2} \\ \vdots & \vdots & \ddots & \vdots \\ a_1 a_{1N} & a_2 a_{2N} & \cdots & 1 \end{pmatrix} \tag{2}$$

$$E = h \operatorname{diag}\left[a_n, \; n = \overline{1, N}\right]. \tag{3}$$

In these equations, **K**[*sh*] denotes the population distribution in the regional system GFI at a time *sh*.

At a time *sh*, the distribution of immigration flows from the regional system SL to the regional system GFI is modeled, in terms of an entropy operator, by the second sub-model, which can be described by a vector function **F**(*z*[*sh*]) with the components

$$f_n[sh] = h \sum_{k=1}^{M} b_{kn}(z[sh])^{c_{kn}}, \qquad n = \overline{1, N}, \; s = \overline{0, K-1}. \tag{4}$$

The variable *z*, which is the exponential Lagrange multiplier in the entropy-optimal distribution problem of immigration flows, satisfies the equation

$$\sum\_{k=1}^{M} \sum\_{n=1}^{N} c\_{kn} b\_{kn} (z[sh])^{c\_{kn}} = T[sh],\tag{5}$$

where *T*[*sh*] is the amount of a shared resource used by all regions from the system GFI to maintain immigrants.

In this model, the input data are the amounts *T*[0], *T*[*h*], ... , *T*[(*K* − 1)*h*], and the output data are the regional population distributions **K**[0], **K**[*h*], ... , **K**[(*K* − 1)*h*].
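One pass of the model in Equations (1)–(5) can be sketched numerically: since the left-hand side of Equation (5) is strictly increasing in *z* > 0 for positive coefficients, *z*[*sh*] is the unique root of a scalar equation, after which the flows of Equation (4) and one step of Equation (1) follow directly. A minimal Python sketch; all parameter values below are hypothetical placeholders, not the paper's data:

```python
import numpy as np

h = 0.1
N, M = 3, 2
rng = np.random.default_rng(0)

# Hypothetical structural parameters (illustrative, not the paper's data).
a = rng.uniform(0.1, 0.3, size=N)            # mobility coefficients a_n
alpha = rng.uniform(0.1, 0.9, size=(N, N))   # exchange shares a_ni
b = rng.uniform(0.5, 1.5, size=(M, N))       # immigration weights b_kn
c = rng.uniform(0.2, 0.8, size=(M, N))       # exponents c_kn in (0, 1)

# Matrix (2): entry (i, j) is a_j * a_ji off the diagonal, 1 on it, all times h.
A = a[None, :] * alpha.T
np.fill_diagonal(A, 1.0)
A = h * A
E = h * np.diag(a)                           # matrix (3)

# Equation (5): left-hand side strictly increasing in z > 0,
# so the root is found by bisection on a wide bracket.
T = 4.0
lo, hi = 1e-12, 1e12
for _ in range(300):
    mid = np.sqrt(lo * hi)                   # geometric midpoint for the wide bracket
    lo, hi = (mid, hi) if np.sum(c * b * mid**c) < T else (lo, mid)
z = 0.5 * (lo + hi)

# Equation (4): immigration flows, then one step of Equation (1).
F = h * np.sum(b * z**c, axis=0)
K = np.array([80.0, 67.0, 60.0])             # current populations K[sh]
K_next = (A - E) @ K + F                     # K[(s+1)h]
```

The geometric midpoint is used because the bracket spans many orders of magnitude; any monotone one-dimensional root finder would serve equally well.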

The dynamic model in Equations (1)–(5) contains the following parameters:


Normalization means that 0 < *ckn* < 1, *k* = 1, ... , *M*, *n* = 1, ... , *N*.

The parameters form three groups: mobility, migratory movements within the system GFI, and immigratory movements from the system SL to the system GFI. All these characteristics are specified for the regions of both systems. The dimensionality of the parametric space is reduced using the same approach as in [5]: a relative regional differentiation is assigned to all parameters except the weights *b*1 (mobility) and *b*2 (internal migration) of these groups, which are treated as model variables.

This approach leads to the parametric transformation

$$a_n = b_1 m_n, \quad a_{in} = b_2 h_{in}, \qquad (i, n) = \overline{1, N}; \; k = \overline{1, M}, \tag{6}$$

where *mn* and *hin* are given parameters that characterize the relative regional differentiation.

Then, the dynamic model of migratory interaction in Equations (1)–(5) takes the form

$$\mathbf{K}[(s+1)h] = (b\_1 b\_2 \mathbf{\tilde{A}} - b\_1 \mathbf{\tilde{E}}) \mathbf{K}[sh] + \mathbf{\tilde{F}}(z[sh]),\tag{7}$$

with the matrix

$$\tilde{A} = h \begin{pmatrix} 1 & m_2 h_{21} & \cdots & m_N h_{N1} \\ m_1 h_{12} & 1 & \cdots & m_N h_{N2} \\ \vdots & \vdots & \ddots & \vdots \\ m_1 h_{1N} & m_2 h_{2N} & \cdots & 1 \end{pmatrix} \tag{8}$$

and the diagonal matrix

$$\tilde{E} = h \operatorname{diag}\left[m_n, \; n = \overline{1, N}\right]. \tag{9}$$

The vector **F̃**(*z*[*sh*]) consists of the components

$$\tilde{f}\_n(z[sh]) = h \sum\_{k=1}^{M} q\_{kn}(z[sh])^{c\_{kn}}, \quad n = \overline{1, N}, \ s = \overline{0, K-1}. \tag{10}$$

For each time *sh*, the variable *z* satisfies the equation

$$\sum\_{k=1}^{M} \sum\_{n=1}^{N} c\_{kn} q\_{kn} (z[sh])^{c\_{kn}} = T[sh], \qquad s = \overline{0, K-1}, \tag{11}$$

i.e., there exist *K* values *z* = *z*∗[*sh*], *s* = 0, ... , *K* − 1.

The randomized version of this model is described by Equations (7)–(11), but some parameters (variables) have a random character. These are the two randomized parameters *b*1 and *b*2 as well as the variable *z* = *b*3, all of the interval type. More specifically, the parameters *b*1 and *b*2 belong to the intervals

$$\mathcal{B}\_1 = [b\_1^-, b\_1^+], \mathcal{B}\_2 = [b\_2^-, b\_2^+]. \tag{12}$$

The interval B3 of the variable *b*3 is given by Equation (11).

**Theorem 1.** *Let the parameters bkn and ckn in Equation (11) be positive and ckn* ∈ [0, 1]. *Then, the solution b*∗3 *of this equation belongs to the interval*

$$\mathcal{B}_3 = [b_3^-, b_3^+], \tag{13}$$

*where*

$$\begin{aligned} b_3^- &= \left(\frac{T[sh]}{MN c_{\max} b_{\max}}\right)^{1/c_{\max}}; & b_3^+ &= \left(\frac{T[sh]}{MN c_{\min} b_{\min}}\right)^{1/c_{\min}};\\ c_{\min} &= \min_{k,n} c_{kn}, \quad c_{\max} = \max_{k,n} c_{kn}; & b_{\min} &= \min_{k,n} b_{kn}, \quad b_{\max} = \max_{k,n} b_{kn}. \end{aligned} \tag{14}$$

The proof is postponed to Appendix A.
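The interval of Theorem 1 can also be checked numerically: solve the balance equation by bisection and verify that the root falls into the limits of Equation (14). A sketch with hypothetical coefficients (the arrays `b`, `c` and the resource level `T` are illustrative choices, not taken from the paper):

```python
import numpy as np

# Hypothetical coefficients (M = N = 2), chosen only to illustrate the bounds.
b = np.ones((2, 2))                          # b_kn > 0
c = np.array([[0.3, 0.5],
              [0.5, 0.7]])                   # c_kn in (0, 1]
T = 3.0
M, N = b.shape

# Solve the balance equation by bisection: the left-hand side
# sum_kn c_kn * b_kn * z**c_kn grows monotonically in z > 0.
lo, hi = 1e-12, 1e12
for _ in range(300):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if np.sum(c * b * mid**c) < T else (lo, mid)
b3_star = 0.5 * (lo + hi)

# Interval limits from Equation (14).
b3_minus = (T / (M * N * c.max() * b.max())) ** (1.0 / c.max())
b3_plus = (T / (M * N * c.min() * b.min())) ** (1.0 / c.min())
assert b3_minus <= b3_star <= b3_plus        # the root indeed lies in B3
```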

Therefore, the randomized dynamic model in Equations (7)–(11) includes three random parameters **b** = {*b*1, *b*2, *b*3} of the interval type that are defined over the three-dimensional cube with faces (Equations (12) and (13)), i.e.,

$$\mathcal{B} = \bigotimes\_{j=1}^{3} \mathcal{B}\_{j}.\tag{15}$$

The probabilistic properties of the randomized parameters are described by a continuously differentiable PDF *W*(**b**).

By assumption, the real distributions of regional population sizes contain errors, which are simulated by a random vector ξ̄[*sh*] ∈ *RN* with the interval components

$$\bar{\xi}[sh] \in \Xi_s = \left[\bar{\xi}^-[sh], \bar{\xi}^+[sh]\right]. \tag{16}$$

The probabilistic properties of this vector are described by a continuously differentiable PDF *Q*(ξ̄). The measured output of the randomized model has an additive noise,

$$\mathbf{v}[sh] = \mathbf{K}[sh] + \bar{\xi}[sh]. \tag{17}$$

## **3. Characterization of Empirical Risk and Measurement Noises**

Construct a synthetic functional *J*[*W*(**b**), *Q*(ξ̄)] that depends on the PDFs of the model parameters and measurement noises for assessing, in quantitative terms, the empirical risk (the difference between the regional population distribution generated by the model in Equations (7)–(11) and its real counterpart) and the guaranteed power of these noises. The functional must have components characterizing the intrinsic uncertainty of randomized machine learning (RML) procedures, the approximation quality of empirical balances (the empirical risk), and the worst-case properties of the corresponding random interval-type noises.

*1. Uncertainty.* In accordance with the general concept of RML, the first component among the listed ones is *an entropy functional* that describes the level of uncertainty:

$$\mathcal{H}[W(\mathbf{b}), Q(\bar{\xi})] = -\int_{\mathcal{B}} W(\mathbf{b}) \ln W(\mathbf{b})\, d\mathbf{b} - \int_{\Xi} Q(\bar{\xi}) \ln Q(\bar{\xi})\, d\bar{\xi}. \tag{18}$$

The two other functional components are constructed using Hölder's vector and matrix norms (the vector norm has the form ‖**a**‖∞ = max*n* |*an*|; the matrix norm, the form ‖*A*‖∞ = max*i*,*j* |*aij*|) [16].

*2. Approximate empirical balances.* First, consider a characterization of *the empirical risk*. For the model in Equations (7)–(11), the deviation between the output and real data vectors is given by

$$\bar{\varepsilon}[sh] = \left(b_1 b_2 \tilde{A} - b_1 \tilde{E}\right)\mathbf{Y}[sh] + \tilde{\mathbf{F}}(b_3[sh]) - \mathbf{Y}[(s+1)h], \qquad s = \overline{0, K-1}. \tag{19}$$

Using well-known inequalities for the matrix and vector norms, it is possible to write

$$\begin{aligned} \left\|\bar{\varepsilon}[sh]\right\|_{\infty} &\leq \left\|b_1 b_2 \tilde{A} - b_1 \tilde{E}\right\|_{\infty} \left\|\mathbf{Y}[sh]\right\|_{\infty} + \left\|\tilde{\mathbf{F}}(b_3[sh])\right\|_{\infty} + \left\|\mathbf{Y}[(s+1)h]\right\|_{\infty}\\ &= \varrho(b_1, b_2, b_3, s), \qquad s = \overline{0, K-1}. \end{aligned} \tag{20}$$

Introducing the average matrix and vector norms over the observation interval, we obtain

$$\varrho(b_1, b_2, b_3) \leq h\left(\frac{1}{K}\sum_{s=0}^{K-1}\max_n y_n[sh]\right)\left(b_1 \max_n m_n + b_1 b_2 \max_{i,j} h_{ij}\right) + \frac{1}{K}\sum_{s=0}^{K-1}\max_n y_n[(s+1)h] + MNh\, c_{\max} b_{\max}(b_3)^{c_{\max}}. \tag{21}$$

The parameters *b*1 and *b*2 take values within the intervals B1 and B2 (Equation (12)), while the parameter *b*3 takes values within the interval

$$\mathcal{B}_3 = \left[\left(\frac{T_{\max}}{MN c_{\max} q_{\max}}\right)^{1/c_{\max}}, \left(\frac{T_{\max}}{MN c_{\min} q_{\min}}\right)^{1/c_{\min}}\right], \tag{22}$$

where

$$T_{\max} = \max_{s} T[sh]. \tag{23}$$

Denote

$$\begin{aligned} U_1 &= h\left(\frac{1}{K}\sum_{s=0}^{K-1}\max_n y_n[sh]\right)\max_n m_n; & U_2 &= h\left(\frac{1}{K}\sum_{s=0}^{K-1}\max_n y_n[sh]\right)\max_{i,j} h_{ij};\\ U_3 &= MNh\, c_{\max} b_{\max}; & U_4 &= \frac{1}{K}\sum_{s=0}^{K-1}\max_n y_n[(s+1)h]. \end{aligned} \tag{24}$$

Then, the function *ϱ*(*b*1, *b*2, *b*3) takes the form

$$\varrho(b_1, b_2, b_3) = b_1 U_1 + b_1 b_2 U_2 + (b_3)^{c_{\max}} U_3 + U_4. \tag{25}$$

Note that the coefficients *U*1, ... , *U*4 are determined by real data on regional population distributions and also by the characteristics of internal migration within the system GFI and immigration flows from the system SL.
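The coefficients *U*1, ... , *U*4 in Equation (24) are plain statistics of the data, so the risk bound of Equation (25) can be assembled directly. A sketch assuming synthetic stand-ins for the population records **Y** and hypothetical structural parameters (none of the values below come from the paper):

```python
import numpy as np

h = 1.0
M, N, K = 2, 3, 5
rng = np.random.default_rng(2)

# Synthetic stand-ins for the real records Y and the structural parameters.
Y = rng.uniform(50.0, 80.0, size=(K + 1, N))  # y_n[sh] for s = 0..K
m = rng.uniform(0.01, 0.05, size=N)           # mobility factors m_n
H = rng.uniform(0.1, 0.9, size=(N, N))        # internal-flow factors h_ij
c_max, b_max = 0.8, 1.2                       # hypothetical bounds on c_kn, b_kn

avg_max_y = np.mean(Y[:K].max(axis=1))        # (1/K) sum_s max_n y_n[sh]
U1 = h * avg_max_y * m.max()
U2 = h * avg_max_y * H.max()
U3 = M * N * h * c_max * b_max
U4 = np.mean(Y[1:K + 1].max(axis=1))          # shifted outputs y_n[(s+1)h]

def rho(b1, b2, b3):
    """Risk bound of Equation (25)."""
    return b1 * U1 + b1 * b2 * U2 + b3**c_max * U3 + U4
```

Note that `rho` is monotonically increasing in each argument, which is what makes the interval limits of Equations (12) and (22) sufficient for bounding the empirical risk.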

The equality in Equation (25) defines a function *ϱ*(*b*1, *b*2, *b*3) of random variables. Let its expectation be the characteristic of the empirical risk, i.e.,

$$r[W(\mathbf{b})] = \int_{\mathcal{B}} W(\mathbf{b})\, \varrho(\mathbf{b})\, d\mathbf{b}, \tag{26}$$

where B = B1 ⊗ B2 ⊗ B3 and the intervals B1 and B2 have given limits. At the same time, the limits of the interval B3 are specified by the equalities in Equation (22).

*3. Power of noises.* The measurement noises are simulated by random vectors ξ̄[*sh*] ∈ *RN*, *s* = 0, ... , *K* − 1. The components of these vectors may have different domains (ranges of values) at different times. For each time, introduce the squared Euclidean norm ‖ξ̄[*sh*]‖2*N* and its expectation

$$n_s[Q(\bar{\xi}[sh])] = \int_{\Xi} Q(\bar{\xi}[sh]) \left\|\bar{\xi}[sh]\right\|_N^2\, d\bar{\xi}[sh]. \tag{27}$$

The average expectation of this norm over the time interval has the form

$$n[Q(\bar{\xi}[sh])] = \frac{1}{K} \sum_{s=0}^{K-1} n_s[Q(\bar{\xi}[sh])]. \tag{28}$$

If the measurement noises are the same on the observation interval, then the noise power functional can be written as

$$n_s[Q(\bar{\xi}[sh])] = n[Q(\bar{\xi})] = \int_{\Xi} Q(\bar{\xi}) \left\|\bar{\xi}\right\|_N^2\, d\bar{\xi}. \tag{29}$$

This formula involves the Euclidean norm for a quantitative characterization of the noise power. However, it is possible to choose other norms depending on problem specifics.
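For a concrete choice of *Q*(ξ̄), the power functional of Equation (29) is an ordinary expectation and can be checked by Monte Carlo. A sketch assuming a uniform noise PDF on the symmetric interval [−*a*, *a*]^*N* (a hypothetical choice; the model only requires interval support):

```python
import numpy as np

N = 3                                        # noise dimension
a = 0.5                                      # hypothetical interval half-width
rng = np.random.default_rng(3)

# Draws from a uniform Q(xi) on [-a, a]^N.
xi = rng.uniform(-a, a, size=(200_000, N))
power_mc = float(np.mean(np.sum(xi**2, axis=1)))   # Monte Carlo value of (29)
power_exact = N * a**2 / 3.0                       # E||xi||^2 for the uniform PDF
assert abs(power_mc - power_exact) < 1e-2
```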

## **4. Soft Randomized Estimation of Model Parameters**

The model characteristics and measurement noises are estimated using a learning data collection: the real costs of immigrant maintenance *T*[0], ... , *T*[(*K* − 1)*h*] (input data) and the real distributions of regional population sizes **Y**[0], ... , **Y**[(*K* − 1)*h*] (output data).

In accordance with the general procedure of soft randomized machine learning [15], the optimal probability density functions *W*(**b**) (model parameters) and *Q*(ξ̄) (measurement noises) are calculated by solving a constrained optimization problem for the synthetic functional *J*[*W*(**b**), *Q*(ξ̄)], which contains the following functionals:

• the entropy

$$\mathcal{H}[W(\mathbf{b}), Q(\bar{\xi})] = -\int_{\mathcal{B}} W(\mathbf{b}) \ln W(\mathbf{b})\, d\mathbf{b} - \int_{\Xi} Q(\bar{\xi}) \ln Q(\bar{\xi})\, d\bar{\xi}; \tag{30}$$

• the average empirical risk over the observation interval

$$r[W(\mathbf{b})] = \int_{\mathcal{B}} W(\mathbf{b})\left(b_1 U_1 + b_1 b_2 U_2 + (b_3)^{c_{\max}} U_3 + U_4\right) d\mathbf{b}; \tag{31}$$

and

• the average error norm

$$n[Q(\bar{\xi})] = \int_{\Xi} Q(\bar{\xi}) \sum_{i=1}^{N} \xi_i^2\, d\bar{\xi}. \tag{32}$$

The soft randomized learning algorithm has the form

$$\begin{aligned} J[W(\mathbf{b}), Q(\bar{\xi})] &= \mathcal{H}[W(\mathbf{b}), Q(\bar{\xi})] - r[W(\mathbf{b})] - n[Q(\bar{\xi})] \Rightarrow \max,\\ \int_{\mathcal{B}} W(\mathbf{b})\, d\mathbf{b} &= 1, \qquad \int_{\Xi} Q(\bar{\xi})\, d\bar{\xi} = 1. \end{aligned} \tag{33}$$
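The exponential form of the optimal PDFs can be obtained by a standard calculus-of-variations step (the derivation below is a sketch with assumed multiplier notation λ, μ; it is not reproduced verbatim from [15]). Form the Lagrangian of problem (33) and set its variational derivative in *W* to zero:

```latex
% Lagrangian of problem (33); \lambda and \mu are multipliers for the
% two normalization constraints (notation assumed).
L[W,Q] = \mathcal{H}[W,Q] - r[W] - n[Q]
         - \lambda\Big(\int_{\mathcal{B}} W(\mathbf{b})\,d\mathbf{b} - 1\Big)
         - \mu\Big(\int_{\Xi} Q(\bar{\xi})\,d\bar{\xi} - 1\Big).

% Stationarity in W, pointwise on \mathcal{B}:
\frac{\delta L}{\delta W(\mathbf{b})}
   = -\ln W(\mathbf{b}) - 1 - \varrho(\mathbf{b}) - \lambda = 0
\quad\Longrightarrow\quad
W^{*}(\mathbf{b})
   = \frac{e^{-\varrho(\mathbf{b})}}
          {\int_{\mathcal{B}} e^{-\varrho(\mathbf{b}')}\,d\mathbf{b}'}.
```

Eliminating λ through the normalization constraint yields the exponential family with the empirical-risk function in the exponent; the same step applied to *Q* gives the Gaussian-type form for the noise PDF.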

The solution of this problem gives the optimal PDFs under maximal uncertainty: for the model parameters,

$$W^*(\mathbf{b}) = \frac{\exp\left(-b_1 U_1 - b_1 b_2 U_2 - (b_3)^{c_{\max}} U_3 - U_4\right)}{\mathcal{P}}, \tag{34}$$

where

$$\mathcal{P} = \int_{\mathcal{B}} \exp\left(-b_1 U_1 - b_1 b_2 U_2 - (b_3)^{c_{\max}} U_3 - U_4\right) d\mathbf{b}, \tag{35}$$

and for the measurement noises of the form

$$Q^*(\bar{\xi}) = \frac{\exp\left(-\sum_{i=1}^N \xi_i^2\right)}{\mathcal{Q}}, \tag{36}$$

where

$$\mathcal{Q} = \int\_{\Xi} \exp\left(-\sum\_{i=1}^{N} \xi\_i^2\right) d\bar{\xi}. \tag{37}$$

In the case of soft randomization, there is no need to solve the empirical balance equations, which have high complexity and computational intensiveness due to the presence of integral components. Here, computational resources are required only for the normalization of the resulting PDFs. On the other hand, the morphology of the optimal PDFs depends on the specific choice of the approximate data-balancing criterion and on the numerical characterization of the measurement noises.
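In practice, sampling from the entropy-optimal PDF of Equation (34) needs only its unnormalized form, so the constant P never has to be evaluated explicitly. A minimal rejection-sampling sketch over the cube B; the coefficient values and interval limits are hypothetical, and the constant *U*4 is dropped because it cancels between the density and its normalizer:

```python
import numpy as np

# Hypothetical risk coefficients and parameter cube B = B1 x B2 x B3.
U1, U2, U3, c_max = 0.8, 0.5, 0.3, 0.7
lo = np.array([0.1, 0.1, 0.5])               # lower limits (b1-, b2-, b3-)
hi = np.array([1.0, 1.0, 2.0])               # upper limits (b1+, b2+, b3+)
rng = np.random.default_rng(4)

def g(b):
    """Negative log of the unnormalized W*(b) from Equation (34), without U4."""
    return U1 * b[..., 0] + U2 * b[..., 0] * b[..., 1] + U3 * b[..., 2]**c_max

g_min = g(lo)        # g increases in every coordinate on the positive cube

samples = []
while len(samples) < 10_000:
    cand = rng.uniform(lo, hi, size=(20_000, 3))
    keep = rng.random(20_000) < np.exp(-(g(cand) - g_min))
    samples.extend(cand[keep])
samples = np.asarray(samples[:10_000])
```

Rejection sampling is adequate here because the cube is low-dimensional and the exponent varies moderately over it; for sharper densities an importance or MCMC sampler would be the natural replacement.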

## **5. Randomized Forecasting of Dynamic Migratory Interaction**

Consider randomized forecasting of dynamic migratory interaction using the principle of soft randomization. Let T*pr* = [*s*0*h*, *spr h*] be the forecasting interval, and assume the initial state (the regional population distribution at the initial time *s*0*h*) coincides with the real distribution, i.e., **K**[*s*0*h*] = **Y**[*s*0*h*]. The shared cost of the system GFI to maintain immigrants is distributed in accordance with a given scenario. For each scenario, the value *Tmax* and also the interval B3 in Equations (12), (22), and (23) are determined.

The forecasted trajectories are constructed using the randomized model in Equations (7), (10), and (11)

$$\begin{aligned} \mathbf{K}[(s+1)h] &= \left(b_1 b_2 \tilde{A} - b_1 \tilde{E}\right) \mathbf{K}[sh] + \tilde{\mathbf{F}}[sh \,|\, b_3],\\ \tilde{\mathbf{F}}[sh \,|\, b_3] &= \left\{h \sum_{k=1}^{M} q_{kn} (b_3)^{c_{kn}}, \quad n = \overline{1, N}\right\},\\ s &= \overline{s_0, s_{pr}}, \qquad \mathbf{K}[s_0 h] = \mathbf{Y}[s_0 h]. \end{aligned} \tag{38}$$

The randomized parameters *b*1, *b*2, and *b*3 take values within the corresponding intervals with the probability density function *W*∗(**b**) (Equation (34)).

An ensemble of the forecasted trajectories for the model's output is obtained taking into account a random vector ξ̄ ∈ Ξ with the PDF *Q*∗(ξ̄) (Equation (36)):

$$\mathbf{v}[sh] = \mathbf{K}[sh] + \bar{\xi}, \qquad s = \overline{s_0, s_{pr}}. \tag{39}$$

For each scenario *T*[*s*0*h*], ... , *T*[*spr h*], an ensemble K of random forecasting trajectories is generated via sampling (the transformation of a PDF into a corresponding sequence of random vectors of length *I*) of the optimal PDFs of the model parameters and measurement noises for each time *sh*. The resulting ensemble allows deriving empirical estimates of different numerical characteristics as follows:

• the average trajectory

$$\bar{\mathbf{K}}[sh] = \frac{1}{I} \sum_{i=1}^{I} \mathbf{K}^{(i)}[sh], \qquad s = \overline{s_0, s_{pr}}; \tag{40}$$

• the variance trajectory

$$\bar{\sigma}^2[sh] = \frac{1}{I-1} \sum_{i=1}^{I} \left\|\mathbf{K}^{(i)}[sh] - \bar{\mathbf{K}}[sh]\right\|^2, \qquad s = \overline{s_0, s_{pr}}; \tag{41}$$

• the variance pipe, i.e., the set of random trajectories that almost surely (since an ensemble consists of a finite number of trajectories, the matter concerns not probability but its empirical estimate) belong to the domain

$$\mathcal{D} = \left\{\mathbf{K}[sh] : \bar{\mathbf{K}}[sh] - \bar{\sigma}^2[sh] \le \mathbf{K}[sh] \le \bar{\mathbf{K}}[sh] + \bar{\sigma}^2[sh], \quad s = \overline{s_0, s_{pr}}\right\}; \tag{42}$$

• the empirical probability distribution and its dynamics on the forecasting interval

$$\mathbb{P}\left(\mathbf{K}[sh] \le \Delta, \; s = \overline{s_0, s_{pr}}\right) = \frac{I_{\Delta}}{I}, \tag{43}$$

where *I*Δ denotes the number of vectors **K**[*sh*] whose components are smaller than Δ; and

• the median trajectory **K**ˆ[*sh*], *s* = *s*0, ... , *spr*, which satisfies the equation

$$\mathbb{P}\left(\mathbf{K}[sh] \le \hat{\mathbf{K}}[sh]\right) = 0.5, \qquad s = \overline{s_0, s_{pr}}. \tag{44}$$

The ensemble K can be used to calculate other characteristics, e.g., *α*-quantiles, confidence probabilities, etc.
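The forecasting loop of Equations (38)–(41) reduces to repeated sampling and propagation. A compact sketch; all numbers are hypothetical placeholders (uniform draws stand in for the learned PDFs *W*∗(**b**) and *Q*∗(ξ̄), and the inflow term is frozen over the horizon rather than recomputed per scenario):

```python
import numpy as np

rng = np.random.default_rng(5)
N, I, steps = 3, 500, 6                      # regions, ensemble size I, horizon

# Hypothetical learned quantities (placeholders, not the paper's estimates).
A_t = np.array([[1.00, 0.02, 0.01],
                [0.03, 1.00, 0.02],
                [0.01, 0.04, 1.00]])         # matrix A-tilde (h absorbed)
E_t = np.diag([0.05, 0.04, 0.06])            # matrix E-tilde (h absorbed)
F_t = np.array([0.5, 0.8, 0.3])              # inflow F-tilde, frozen over the horizon
K0 = np.array([80.0, 67.0, 60.0])            # initial state K[s0 h] = Y[s0 h]

ensemble = np.empty((I, steps + 1, N))
for i in range(I):
    b1 = rng.uniform(0.95, 1.00)             # stand-ins for draws from W*(b)
    b2 = rng.uniform(0.98, 1.02)
    K = K0.copy()
    ensemble[i, 0] = K
    for s in range(steps):
        K = (b1 * b2 * A_t - b1 * E_t) @ K + F_t          # Equation (38)
        ensemble[i, s + 1] = K + rng.normal(0.0, 0.1, N)  # noisy output (39)

K_avg = ensemble.mean(axis=0)                             # Equation (40)
var_pipe = ((ensemble - K_avg)**2).sum(axis=2).sum(axis=0) / (I - 1)  # Eq. (41)
K_med = np.median(ensemble, axis=0)                       # median trajectory
```

From the same `ensemble` array, the empirical probability of Equation (43) is a component-wise threshold count, and quantiles follow from `np.quantile` along the ensemble axis.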
