#### **1. Introduction**

This paper can be regarded as a set of variations on the theme of the 'multiplication theorem' 3.3.1 in the famous book by V. M. Zolotarev [1]. Here, multivariate probability distributions are considered that are representable as scale mixtures of multivariate stable distributions. Some properties of these distributions are discussed. Attention is paid to the representations of the corresponding random vectors as products of independent random variables and vectors. In these products, the relations of the distributions of the involved terms to popular probability distributions are traced.

As examples of distributions of the class of scale mixtures of multivariate stable distributions, multivariate generalized Linnik distributions and multivariate generalized Mittag–Leffler distributions are considered in detail. Limit theorems are proved presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. As particular cases, conditions are obtained for the convergence of the distributions of random sums of random vectors with finite covariance matrices to the multivariate generalized Linnik distribution.

Along with general multiplicative properties of the class of scale mixtures of multivariate stable distributions, some important and popular special cases are considered in detail. Multivariate analogs of the Mittag–Leffler distribution are proposed. We study the multivariate (generalized) Linnik and related (generalized) Mittag–Leffler distributions, their interrelation and their relations with multivariate 'ordinary' Linnik distributions, multivariate normal, stable and Laplace laws as well as with univariate 'ordinary' Mittag–Leffler distributions. Namely, we consider mixture representations for the multivariate generalized Mittag–Leffler and multivariate generalized Linnik distributions. We continue the research we started in [2–5]. In most papers (see, e.g., [6–18]), the properties of the (multivariate) generalized Mittag–Leffler and Linnik distributions were deduced by analytical methods from the properties of the corresponding probability densities and/or characteristic functions. Instead, here we use the approach which can be regarded as arithmetical in the space of random variables or vectors. Within this approach, instead of the operation of scale mixing in the space of distributions, we consider the operation of multiplication in the space of random vectors/variables provided the multipliers are independent. This approach considerably simplifies the reasoning and makes it possible to notice some general features of the distributions under consideration. We prove mixture representations for general scale mixtures of multivariate stable distributions and their particular cases in terms of normal, Laplace, generalized gamma (including exponential, gamma and Weibull) and stable laws and establish the relationship between the mixing distributions in these representations. 
In particular, we prove that the multivariate generalized Linnik distribution is a multivariate normal scale mixture with the univariate generalized Mittag–Leffler mixing distribution and, moreover, show that this representation can be used as the definition of the multivariate generalized Linnik distribution. Based on these representations, we prove some limit theorems for random sums of independent random vectors with finite covariance matrices. As a particular case, we prove some theorems in which the multivariate generalized Linnik distribution plays the role of the limit law. By doing so, we demonstrate that the scheme of geometric (or, in general, negative binomial) summation is not the only asymptotic setting (even for sums of independent random variables) in which the multivariate generalized Linnik law appears as the limit distribution.

In [2], we showed that along with the traditional and well-known representation of the univariate Linnik distribution as the scale mixture of a strictly stable law with exponential mixing distribution, there exists another representation of the Linnik law as the normal scale mixture with the Mittag–Leffler mixing distribution. The former representation makes it possible to treat the Linnik law as the limit distribution for geometric random sums of independent identically distributed random variables in which summands have infinite variances. The latter normal scale mixture representation opens the way to treating the Linnik distribution as the limit distribution in the central limit theorem for random sums of independent random variables in which summands have *finite* variances. Moreover, being scale mixtures of normal laws, the Linnik distributions can serve as the one-dimensional distributions of a special subordinated Wiener process. Subordinated Wiener processes with various types of subordinators are often used as models of the evolution of stock prices and financial indexes, see, e.g., [19]. Strange as it may seem, the results concerning the possibility of representing the Linnik distribution as a scale mixture of normals were never explicitly presented in the literature in full detail before [2], although the property of the Linnik distribution of being a normal scale mixture is almost obvious. Perhaps the paper [10] was the closest to this conclusion; it exposed the representability of the Linnik law as a scale mixture of Laplace distributions with the mixing distribution written out explicitly. These results became the basis of our efforts to extend them from the Linnik distribution to the multivariate generalized Linnik law and to more general scale mixtures of multivariate stable distributions. Methodically, the present paper is very close to the work of L. Devroye [20], where many examples of mixture representations of popular probability distributions were discussed from the simulation point of view. The presented material substantially relies on the results of [2,5,15].

In many situations related to experimental data analysis, one often comes across the following phenomenon: although conventional reasoning based on the central limit theorem of probability theory concludes that the expected distribution of observations should be normal, the statistical procedures instead expose noticeable non-normality of real distributions. Moreover, as a rule, the observed non-normal distributions are more leptokurtic than the normal law, having sharper vertices and heavier tails. Such situations are typical in financial data analysis (see, e.g., Chapter 4 in [19] or Chapter 8 in [21] and the references therein), in experimental physics (see, e.g., [22]) and in other fields dealing with statistical analysis of experimental data. Many attempts have been undertaken to explain this heavy-tailedness. The most significant theoretical breakthrough is usually associated with the results of B. Mandelbrot and others [23–25] who proposed, instead of the standard central limit theorem, to use reasoning based on limit theorems for sums of random summands with infinite variances (also see [26,27]), resulting in non-normal stable laws as heavy-tailed models of the distributions of experimental data. However, in most cases, the key assumption within this approach, the infiniteness of the variances of elementary summands, can hardly be believed to hold in practice. To overcome this contradiction, in [28] we considered an extended limit setting in which the intensity of the flow of informative events may be assumed random, so that the number of jumps up to a certain time in a random-walk-type model, or the sample size, is random. We showed that in this extended setting heavy-tailed scale mixtures of stable laws can also be limit distributions for sums of a random number of random vectors with *finite* covariance matrices.

The paper is organized as follows. Section 2 contains basic notation and definitions. Some properties of univariate stable distributions are recalled in Section 3. In Section 4, we introduce multivariate stable distributions and prove a multivariate analog of the univariate 'multiplication theorem' (see Theorem 3.3.1 in [1]). In Section 5, we discuss some properties of scale-mixed multivariate elliptically contoured stable laws. In particular, we prove that these mixtures are identifiable. Section 6 contains the description of the properties of uni- and multi-variate generalized Mittag–Leffler distributions. In Section 7, we consider the multivariate generalized Linnik distribution. Here, we discuss different approaches to the definition of this distribution and prove some new mixture representations for it. General properties of scale-mixed multivariate stable distributions are discussed in Section 8. In Section 9, we first prove a general transfer theorem presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. As particular cases, conditions are obtained for the convergence of the distributions of scalar normalized random sums of random vectors with finite covariance matrices to scale mixtures of multivariate stable distributions and their special cases: 'pure' multivariate stable distributions and the multivariate generalized Linnik distributions. The results of this section extend and refine those proved in [29].

#### **2. Basic Notation and Definitions**

Let $r \in \mathbb{N}$. We will consider random elements taking values in the $r$-dimensional Euclidean space $\mathbb{R}^r$. The Euclidean norm of a vector $x \in \mathbb{R}^r$ will be denoted $|x|$. Assume that all the random variables and random vectors are defined on one and the same probability space $(\Omega, \mathfrak{A}, \mathbb{P})$. The distribution of a random variable $Y$ or an $r$-variate random vector $Y$ with respect to the measure $\mathbb{P}$ will be denoted $\mathcal{L}(Y)$. The weak convergence, the coincidence of distributions and the convergence in probability with respect to a specified probability measure will be denoted by the symbols $\Longrightarrow$, $\stackrel{d}{=}$ and $\stackrel{P}{\longrightarrow}$, respectively. The product of *independent* random elements will be denoted by the symbol $\circ$. The vector with all zero coordinates will be denoted $\mathbf{0}$.

A univariate random variable with the standard normal distribution function $\Phi(x)$ will be denoted $X$,

$$\mathbb{P}(X < \mathbf{x}) = \Phi(\mathbf{x}) = \frac{1}{\sqrt{2\pi}} \int\_{-\infty}^{\mathbf{x}} e^{-z^2/2} dz, \quad \mathbf{x} \in \mathbb{R}.$$

Let $\Sigma$ be a positive definite $(r \times r)$-matrix. The normal distribution in $\mathbb{R}^r$ with zero vector of expectations and covariance matrix $\Sigma$ will be denoted $\mathfrak{N}\_{\Sigma}$. This distribution is defined by its density

$$\phi(\mathbf{x}) = \frac{\exp\left\{-\frac{1}{2}\mathbf{x}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{x}\right\}}{(2\pi)^{r/2}|\boldsymbol{\Sigma}|^{1/2}}, \quad \mathbf{x} \in \mathbb{R}^r.$$

The characteristic function $\mathfrak{f}^{(X)}(t)$ of a random vector $X$ such that $\mathcal{L}(X) = \mathfrak{N}\_{\Sigma}$ has the form

$$f^{(X)}(t) = \mathbb{E}\exp\{it^\top X\} = \exp\left\{-\frac{1}{2}t^\top \Sigma t\right\}, \quad t \in \mathbb{R}^r.$$

A random variable having the gamma distribution with shape parameter $r > 0$ and scale parameter $\lambda > 0$ will be denoted $G\_{r,\lambda}$,

$$\mathbb{P}(G\_{r,\lambda} < x) = \int\_0^x g(z; r, \lambda)\, dz, \quad\text{with}\quad g(x; r, \lambda) = \frac{\lambda^r}{\Gamma(r)}\, x^{r-1} e^{-\lambda x}, \quad x > 0,$$

where $\Gamma(r)$ is Euler's gamma-function,

$$
\Gamma(r) = \int\_0^\infty x^{r-1} e^{-x}\, dx, \quad r > 0.
$$

In this notation, obviously, $G\_{1,1}$ is a random variable with the standard exponential distribution: $\mathbb{P}(G\_{1,1} < x) = \left[1 - e^{-x}\right]\mathbf{1}(x \geq 0)$ (here and in what follows $\mathbf{1}(A)$ is the indicator function of a set $A$).

The gamma distribution is a particular representative of the class of generalized gamma distributions (GG distributions), that was first described in [30] as a special family of lifetime distributions containing both gamma and Weibull distributions. A *generalized gamma* (*GG*) *distribution* is the absolutely continuous distribution defined by the density

$$\bar{g}(x; r, \alpha, \lambda) = \frac{|\alpha|\lambda^r}{\Gamma(r)}\, x^{\alpha r - 1} e^{-\lambda x^{\alpha}}, \quad x > 0,$$

with $\alpha \in \mathbb{R}$, $\alpha \neq 0$, $\lambda > 0$, $r > 0$. A random variable with the density $\bar{g}(x; r, \alpha, \lambda)$ will be denoted $\overline{G}\_{r,\alpha,\lambda}$. It is easy to see that

$$
\overline{G}\_{r,\alpha,\mu} \stackrel{d}{=} G\_{r,\mu}^{1/\alpha} \stackrel{d}{=} \mu^{-1/\alpha} G\_{r,1}^{1/\alpha} \stackrel{d}{=} \mu^{-1/\alpha} \overline{G}\_{r,\alpha,1}.\tag{1}
$$

Let $\gamma > 0$. The distribution of the random variable $W\_{\gamma}$:

$$\mathbb{P}(W\_{\gamma} < x) = \left[1 - e^{-x^{\gamma}}\right]\mathbf{1}(x \geq 0),$$

is called the *Weibull distribution* with shape parameter $\gamma$. It is obvious that $W\_1$ is the random variable with the standard exponential distribution: $\mathbb{P}(W\_1 < x) = \left[1 - e^{-x}\right]\mathbf{1}(x \geq 0)$. The Weibull distribution is a particular case of GG distributions corresponding to the density $\bar{g}(x; 1, \gamma, 1)$. It is easy to see that $W\_1^{1/\gamma} \stackrel{d}{=} W\_{\gamma}$. Moreover, if $\gamma > 0$ and $\gamma' > 0$, then $\mathbb{P}\big(W\_{\gamma'}^{1/\gamma} \geq x\big) = \mathbb{P}\big(W\_{\gamma'} \geq x^{\gamma}\big) = e^{-x^{\gamma\gamma'}} = \mathbb{P}\big(W\_{\gamma\gamma'} \geq x\big)$, $x \geq 0$, that is, for any $\gamma > 0$ and $\gamma' > 0$

$$W\_{\gamma\gamma'} \stackrel{d}{=} W\_{\gamma'}^{1/\gamma}.$$
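
This power relation is easy to check numerically. The minimal sketch below (illustrative parameter values and a fixed seed are assumptions of the sketch) samples both sides by inverse-transform sampling and compares their empirical distribution functions at a fixed point with the exact value $1 - e^{-x^{\gamma\gamma'}}$:

```python
import math
import random

random.seed(1)

def sample_weibull(gamma):
    # inverse-transform sampling from P(W_gamma < x) = 1 - exp(-x**gamma)
    return (-math.log(1.0 - random.random())) ** (1.0 / gamma)

g, gp = 2.0, 0.7      # illustrative gamma and gamma'
n, x0 = 200_000, 1.2  # sample size and evaluation point

# empirical d.f. of W_{gamma*gamma'} at x0, sampled directly ...
lhs = sum(sample_weibull(g * gp) < x0 for _ in range(n)) / n
# ... and of W_{gamma'} ** (1/gamma)
rhs = sum(sample_weibull(gp) ** (1.0 / g) < x0 for _ in range(n)) / n

exact = 1.0 - math.exp(-x0 ** (g * gp))  # both estimates should be close to this
```

With the above sample size the Monte Carlo error is of order $10^{-3}$, so both empirical values agree with the exact distribution function to two decimal places.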

In the paper [31], it was shown that any gamma distribution with shape parameter no greater than one is mixed exponential. Namely, the density $g(x; r, \mu)$ of a gamma distribution with $0 < r < 1$ can be represented as

$$g(x; r, \mu) = \int\_0^\infty z e^{-zx} p(z; r, \mu)\, dz,$$

where

$$p(z; r, \mu) = \frac{\mu^r}{\Gamma(1-r)\Gamma(r)} \cdot \frac{\mathbf{1}(z > \mu)}{(z-\mu)^r z}.\tag{2}$$

Moreover, a gamma distribution with shape parameter *r* > 1 cannot be represented as a mixed exponential distribution.
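
The mixed-exponential representation with density (2) can be verified numerically. In the sketch below (parameter values are illustrative) the product $z\,p(z; r, \mu)$ simplifies, since the factor $z$ cancels, and the integral is evaluated by the midpoint rule after the substitution $z = \mu + s^2$, which removes the endpoint singularity:

```python
import math

def gamma_pdf(x, r, mu):
    # gamma density g(x; r, mu)
    return mu**r / math.gamma(r) * x**(r - 1) * math.exp(-mu * x)

def gamma_pdf_via_mixture(x, r, mu, n=100_000, s_max=10.0):
    """Right-hand side of g(x; r, mu) = int z e^{-zx} p(z; r, mu) dz, 0 < r < 1.
    Since z p(z; r, mu) = C e^{-zx} (z - mu)^{-r} with C = mu^r / (Gamma(1-r)Gamma(r)),
    the substitution z = mu + s^2 gives the integrand 2 C e^{-(mu+s^2)x} s^{1-2r},
    integrated here by the midpoint rule."""
    c = mu**r / (math.gamma(1 - r) * math.gamma(r))
    h = s_max / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += 2.0 * c * math.exp(-(mu + s * s) * x) * s ** (1.0 - 2.0 * r)
    return h * total
```

For shape parameters $r < 1$ the two evaluations agree to within the quadrature error (about $10^{-6}$ here), illustrating that the gamma density is indeed mixed exponential.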

In [32] it was proved that if $r \in (0, 1)$, $\mu > 0$, and $G\_{r,1}$ and $G\_{1-r,1}$ are independent gamma-distributed random variables, then the density $p(z; r, \mu)$ defined by (2) corresponds to the random variable

$$Z\_{r, \mu} = \frac{\mu(G\_{r, 1} + G\_{1-r, 1})}{G\_{r, 1}} \stackrel{d}{=} \mu Z\_{r, 1} \stackrel{d}{=} \mu \left(1 + \frac{1 - r}{r} V\_{1 - r, r} \right), \tag{3}$$

where $V\_{1-r,r}$ is the random variable with the Snedecor–Fisher distribution defined by the probability density

$$q(x; 1 - r, r) = \frac{(1 - r)^{1 - r} r^r}{\Gamma(1 - r)\Gamma(r)} \cdot \frac{1}{x^r \left[r + (1 - r)x\right]}, \quad x > 0.$$

In other words, if $r \in (0, 1)$, then

$$G\_{r,\mu} \stackrel{d}{=} W\_1 \circ Z\_{r,\mu}^{-1}.\tag{4}$$
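
Representation (4) together with (3) suggests a simple simulation scheme: draw $Z\_{r,\mu}$ from two independent gamma variables and divide a standard exponential variable by it. The following Monte Carlo sketch (illustrative parameters, fixed seed) compares the sample mean of such products with the exact gamma mean $r/\mu$:

```python
import random
import statistics

random.seed(42)

def sample_Z(r, mu):
    # Z_{r,mu} from (3): mu * (G_{r,1} + G_{1-r,1}) / G_{r,1}
    g1 = random.gammavariate(r, 1.0)
    g2 = random.gammavariate(1.0 - r, 1.0)
    return mu * (g1 + g2) / g1

def sample_gamma_via_4(r, mu):
    # representation (4): G_{r,mu} =d W_1 ∘ Z_{r,mu}^{-1}
    return random.expovariate(1.0) / sample_Z(r, mu)

r, mu, n = 0.5, 2.0, 200_000
mixed_mean = statistics.fmean(sample_gamma_via_4(r, mu) for _ in range(n))
# the exact mean of G_{r,mu} is r / mu = 0.25
```

The sample mean of the mixed-exponential construction matches the gamma mean to Monte Carlo accuracy; a full distributional comparison (e.g., by empirical distribution functions) works the same way.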

#### **3. Univariate Stable Distributions**

Let $r \in \mathbb{N}$. Recall that the distribution of an $r$-variate random vector $S$ is called *stable*, if for any $a, b \in \mathbb{R}$ there exist $c \in \mathbb{R}$ and $d \in \mathbb{R}^r$ such that $aS\_1 + bS\_2 \stackrel{d}{=} cS + d$, where $S\_1$ and $S\_2$ are independent and $S\_1 \stackrel{d}{=} S\_2 \stackrel{d}{=} S$. In what follows we will concentrate our attention on a special sub-class of stable distributions called *strictly stable*, characterized by the property that in the definition given above $d = \mathbf{0}$.

In the univariate case, the characteristic function $\mathfrak{f}(t)$ of a strictly stable random variable can be represented in several equivalent forms (see, e.g., [1]). For our further constructions the most convenient form is

$$\mathfrak{f}\_{\alpha, \theta}(t) = \exp\{-|t|^{\alpha} + i\theta w(t, \alpha)\}, \quad t \in \mathbb{R},\tag{5}$$

where

$$w(t,\alpha) = \begin{cases} \tan\frac{\pi \alpha}{2} \cdot |t|^{\alpha} \operatorname{sign} t, & \alpha \neq 1, \\\\ -\frac{2}{\pi} \cdot t \log|t|, & \alpha = 1. \end{cases} \tag{6}$$

Here $\alpha \in (0, 2]$ is the *characteristic exponent*, $\theta \in [-1, 1]$ is the *skewness* parameter (for simplicity we consider the "standard" case with unit scale coefficient at $t$). Any random variable with characteristic function (5) will be denoted $S(\alpha, \theta)$, and the characteristic function (5) itself will be written as $\mathfrak{f}\_{\alpha,\theta}(t)$. For definiteness, $S(1, 1) = 1$.

From (5) it follows that the characteristic function of a symmetric (*θ* = 0) strictly stable distribution has the form

$$\mathfrak{f}\_{\alpha,0}(t) = e^{-|t|^{\alpha}}, \quad t \in \mathbb{R}.\tag{7}$$

From (7) it is easy to see that $S(2, 0) \stackrel{d}{=} \sqrt{2}\,X$.

Univariate stable distributions are popular examples of heavy-tailed distributions. Their moments of orders $\delta \geq \alpha$ do not exist (the only exception is the normal law corresponding to $\alpha = 2$), and if $0 < \delta < \alpha$, then

$$\mathbb{E}|S(\alpha,0)|^{\delta} = \frac{2^{\delta}}{\sqrt{\pi}} \cdot \frac{\Gamma\big(\frac{\delta+1}{2}\big)\,\Gamma\big(1-\frac{\delta}{\alpha}\big)}{\Gamma\big(1-\frac{\delta}{2}\big)}\tag{8}$$

(see, e.g., [33]). Stable laws, and only they, can be limit distributions for sums of a non-random number of independent identically distributed random variables with infinite variances under linear normalization.
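
The moment formula (8) can be cross-checked against two classical special cases: for $\alpha = 2$ the factor $\Gamma(1 - \delta/\alpha)$ cancels the denominator and the moments reduce to those of $\sqrt{2}\,X$ (so that $\mathbb{E}|S(2,0)| = 2/\sqrt{\pi}$), while for $\alpha = 1$ (the Cauchy law) the value must coincide with the known expression $1/\cos(\pi\delta/2)$. A minimal sketch:

```python
import math

def abs_moment_sym_stable(delta, alpha):
    # E|S(alpha, 0)|^delta from (8), valid for 0 < delta < alpha
    return (2.0**delta / math.sqrt(math.pi)
            * math.gamma((delta + 1.0) / 2.0)
            * math.gamma(1.0 - delta / alpha)
            / math.gamma(1.0 - delta / 2.0))

# alpha = 2: S(2,0) =d sqrt(2) X, so E|S(2,0)| = sqrt(2) E|X| * sqrt(2) = 2/sqrt(pi)
normal_moment = abs_moment_sym_stable(1.0, 2.0)

# alpha = 1 (Cauchy): E|S(1,0)|^delta = 1/cos(pi*delta/2), 0 < delta < 1
cauchy_ratios = [abs_moment_sym_stable(d, 1.0) * math.cos(math.pi * d / 2.0)
                 for d in (0.1, 0.3, 0.5, 0.7, 0.9)]
# each ratio should be close to 1
```

Both checks are exact identities of the gamma function (the Cauchy case follows from the duplication and reflection formulas), so they hold to machine precision.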

Let $0 < \alpha \leq 1$. By $S(\alpha, 1)$ we will denote a positive random variable with the one-sided stable distribution corresponding to the characteristic function $\mathfrak{f}\_{\alpha,1}(t)$, $t \in \mathbb{R}$. The Laplace–Stieltjes transform $\psi^{(S)}\_{\alpha,1}(s)$ of the random variable $S(\alpha, 1)$ has the form

$$\psi\_{\alpha,1}^{(S)}(s) = \mathbb{E}\exp\{-sS(\alpha,1)\} = e^{-s^{\alpha}}, \quad s > 0.$$

The moments of orders $\delta \geq \alpha$ of the random variable $S(\alpha, 1)$ are infinite, and for $0 < \delta < \alpha$ we have

$$\mathbb{E}S^{\delta}(\alpha,1) = \frac{\Gamma\big(1-\frac{\delta}{\alpha}\big)}{\Gamma(1-\delta)}$$

(see, e.g., [33]). For more details see [27] or [1].
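
The moment identity $\mathbb{E}S^{\delta}(\alpha,1) = \Gamma(1-\delta/\alpha)/\Gamma(1-\delta)$ admits a direct Monte Carlo check at $\alpha = 1/2$, where the one-sided stable law is the Lévy law and, as is well known, $S(1/2, 1) \stackrel{d}{=} 1/(2X^2)$ with $X$ standard normal. This identity, the fixed seed and the parameter values are assumptions of the sketch:

```python
import math
import random

random.seed(7)

def stable_moment(delta, alpha):
    # E S^delta(alpha, 1) = Gamma(1 - delta/alpha) / Gamma(1 - delta), 0 < delta < alpha
    return math.gamma(1.0 - delta / alpha) / math.gamma(1.0 - delta)

# Monte Carlo for alpha = 1/2 via the Levy representation S(1/2,1) =d 1/(2 X^2)
delta, n = 0.1, 200_000
mc_moment = sum((1.0 / (2.0 * random.gauss(0.0, 1.0) ** 2)) ** delta
                for _ in range(n)) / n
# degenerate check: S(1, 1) = 1, so every moment of order delta < 1 equals 1 at alpha = 1
```

The small order $\delta = 0.1$ keeps the second moment of the simulated power finite, so the sample mean is stable; orders close to $\alpha$ would converge much more slowly.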

The following product representations hold for strictly stable random variables. Let $\alpha \in (0, 2]$, $|\theta| \leq \min\big\{1, \frac{2}{\alpha} - 1\big\}$, $\alpha' \in (0, 1]$. Then

$$S(\alpha\alpha',\theta) \stackrel{d}{=} S^{1/\alpha}(\alpha',1) \circ S(\alpha,\theta),\tag{9}$$

see Theorem 3.3.1 in [1]. In particular,

$$S(\alpha,0) \stackrel{d}{=} \sqrt{2S(\alpha/2,1)} \circ X. \tag{10}$$

Another particular case of (9) concerns one-sided strictly stable random variables: if $0 < \alpha \leq 1$ and $0 < \alpha' \leq 1$, then

$$S(\alpha\alpha',1) \stackrel{d}{=} S^{1/\alpha}(\alpha',1) \circ S(\alpha,1),\tag{11}$$

see Corollary 1 to Theorem 3.3.1 in [1]. Finally, if $0 < \alpha \leq 1$, then

$$S(\alpha, \theta) \stackrel{d}{=} S(\alpha, 1) \circ S(1, \theta), \tag{12}$$

see Corollary to Theorem 3.3.2 (relation (3.3.10)) in [1].
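
Relation (10) can be illustrated numerically at $\alpha = 1$. Combined with the well-known Lévy-law identity $S(1/2, 1) \stackrel{d}{=} 1/(2X'^2)$ ($X'$ standard normal; an assumption of this sketch, not stated above), it says that $X/|X'|$ has the symmetric stable law with $\alpha = 1$, whose characteristic function is $e^{-|t|}$ by (7). A Monte Carlo comparison of the empirical characteristic function (the cosine part suffices, by symmetry):

```python
import math
import random

random.seed(3)

# (10) at alpha = 1: S(1,0) =d sqrt(2 S(1/2,1)) ∘ X =d sqrt(1/X'^2) ∘ X = X / |X'|
n = 200_000
samples = [random.gauss(0.0, 1.0) / abs(random.gauss(0.0, 1.0))
           for _ in range(n)]

emp_cf = {t: sum(math.cos(t * s) for s in samples) / n
          for t in (0.5, 1.0, 2.0)}
# by (7), each value should be close to exp(-t)
```

The ratio of two independent standard normals is of course the standard Cauchy variable, so this check also recovers the classical fact that the symmetric stable law with $\alpha = 1$ is the Cauchy law.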

#### **4. Multivariate Stable Distributions**

Now turn to the multivariate case. By $\mathbb{Q}\_r$ we denote the unit sphere: $\mathbb{Q}\_r = \{u \in \mathbb{R}^r : |u| = 1\}$. Let $\mu$ be a finite ('spectral') measure on $\mathbb{Q}\_r$. It is known that the characteristic function of a strictly stable random vector $S$ has the form

$$\mathbb{E}\exp\{i t^{\top} S\} = \exp\left\{ -\int\_{\mathbb{Q}\_r} \left( |t^{\top}s|^{\alpha} + iw(t^{\top}s,\alpha) \right) \mu(ds) \right\}, \quad t \in \mathbb{R}^{r},\tag{13}$$

with $w(\cdot\,, \alpha)$ defined in (6), see [34–37]. An $r$-variate random vector with the characteristic function (13) will be denoted $S(\alpha, \mu)$. We will sometimes use the notation $\mathfrak{S}\_{\alpha,\mu}$ for $\mathcal{L}\big(S(\alpha, \mu)\big)$.

As is known, a random vector $S$ has a strictly stable distribution with some characteristic exponent $\alpha$ if and only if for any $u \in \mathbb{R}^r$ the random variable $u^{\top}S$ (the projection of $S$) has the univariate strictly stable distribution with the same characteristic exponent $\alpha$ and some skewness parameter $\theta(u)$, up to a scale coefficient $\gamma(u)$:

$$
u^{\top} S(\alpha, \mu) \stackrel{d}{=} \gamma(u)\, S\big(\alpha, \theta(u)\big),
\tag{14}
$$

see [38]. Moreover, the projection parameter functions are related to the spectral measure $\mu$ as

$$\left(\gamma(u)\right)^{\alpha} = \int\_{\mathbb{Q}\_r} |u^{\top}s|^{\alpha}\, \mu(ds),\tag{15}$$

$$\theta(u) \left( \gamma(u) \right)^{\alpha} = \int\_{\mathbb{Q}\_r} |u^{\top} s|^{\alpha} \operatorname{sign}(u^{\top} s)\, \mu(ds), \quad u \in \mathbb{R}^{r}, \tag{16}$$

see [36–38]. Conversely, the spectral measure $\mu$ is uniquely determined by the projection parameter functions $\gamma(u)$ and $\theta(u)$. However, there is no simple formula for this [37].

An $r$-variate analog of a one-sided univariate strictly stable random variable $S(\alpha, 1)$ is the random vector $S(\alpha, \mu\_+)$, where $0 < \alpha \leq 1$ and $\mu\_+$ is a finite measure concentrated on the set $\mathbb{Q}\_r^{+} = \{u = (u\_1,\ldots, u\_r)^{\top} : u\_i \geq 0,\ i = 1, \ldots, r\}$.

Consider multivariate analogs of product representations (9) and (11).

**Theorem 1.** *Let $0 < \alpha \leq 2$, $0 < \alpha' \leq 1$, $\mu$ be a finite measure on $\mathbb{Q}\_r$, and $S(\alpha, \mu)$ be an r-variate random vector having the strictly stable distribution with characteristic exponent $\alpha$ and spectral measure $\mu$. Then*

$$S(\alpha\alpha',\mu) \stackrel{d}{=} S^{1/\alpha}(\alpha',1) \circ S(\alpha,\mu). \tag{17}$$

*If, in addition, $0 < \alpha \leq 1$ and $\mu\_+$ is a finite measure on $\mathbb{Q}\_r^{+}$, then*

$$S(\alpha\alpha',\mu\_+) \stackrel{d}{=} S^{1/\alpha}(\alpha',1) \circ S(\alpha,\mu\_+).\tag{18}$$

**Proof.** Let $\gamma(u)$ and $\theta(u)$, $u \in \mathbb{R}^r$, be the projection parameter functions corresponding to the measure $\mu$ (see (15) and (16)). Then, in accordance with (9) and (14), for any $u \in \mathbb{R}^r$ we have

$$u^{\top}\big(S^{1/\alpha}(\alpha',1) \circ S(\alpha,\mu)\big) = S^{1/\alpha}(\alpha',1) \circ u^{\top}S(\alpha,\mu) \stackrel{d}{=} S^{1/\alpha}(\alpha',1) \circ \big(\gamma(u)\,S(\alpha,\theta(u))\big) \stackrel{d}{=}$$

$$\stackrel{d}{=} \gamma(u) \cdot S^{1/\alpha}(\alpha',1) \circ S(\alpha,\theta(u)) \stackrel{d}{=} \gamma(u)\,S(\alpha\alpha',\theta(u)).$$

The remark that $\gamma(u)$ and $\theta(u)$ uniquely determine $\mu$ concludes the proof of (17). Representation (18) is a particular case of (17).

**Remark 1.** *Actually, the essence of Theorem 1 is that all multivariate strictly stable distributions with $\alpha < 2$ are scale mixtures of multivariate stable laws with no smaller characteristic exponent, the mixing distribution being a univariate one-sided strictly stable law. The case $\alpha = 2$ is not an exception: in this case the mixing distribution is degenerate, concentrated at the point 1. This degenerate law formally satisfies the definition of a stable distribution, being the only stable law that is not absolutely continuous.*

Let $\Sigma$ be a symmetric positive definite $(r \times r)$-matrix, $\alpha \in (0, 2]$. If the characteristic function $\mathfrak{f}\_{\alpha,\mu}(t)$ of a strictly stable random vector $S(\alpha, \mu)$ has the form

$$\mathfrak{f}\_{\alpha,\mu}(t) = \mathbb{E}\exp\{it^{\top}S(\alpha,\mu)\} = \exp\{- (t^{\top}\Sigma t)^{\alpha/2}\}, \quad t \in \mathbb{R}^{r},\tag{19}$$

then the random vector $S(\alpha, \mu)$ is said to have the (centered) *elliptically contoured* stable distribution with characteristic exponent $\alpha$. In this case, for better vividness, we will use the special notation $S(\alpha, \mu) = S(\alpha, \Sigma)$. The corresponding characteristic function (19) will be denoted $\mathfrak{f}\_{\alpha,\Sigma}(t)$, and the elliptically contoured stable distribution with characteristic function (19) will be denoted $\mathfrak{S}\_{\alpha,\Sigma}$. It is easy to see that $\mathfrak{S}\_{2,\Sigma} = \mathfrak{N}\_{2\Sigma}$.

Let $\alpha \in (0, 2]$. If $X$ is a random vector such that $\mathcal{L}(X) = \mathfrak{N}\_{\Sigma}$, independent of the random variable $S(\alpha/2, 1)$, then from (17) it follows that

$$S(\alpha, \Sigma) \stackrel{d}{=} S^{1/2}(\alpha/2, 1) \circ S(2, \Sigma) \stackrel{d}{=} \sqrt{2S(\alpha/2, 1)} \circ X \tag{20}$$

(also see Proposition 2.5.2 in [27]). More generally, if $0 < \alpha \leq 2$ and $0 < \alpha' \leq 1$, then

$$S(\alpha\alpha',\Sigma) \stackrel{d}{=} S^{1/\alpha}(\alpha',1) \circ S(\alpha,\Sigma). \tag{21}$$

If $\alpha = 2$, then (21) turns into (20).

#### **5. Scale Mixtures of Multivariate Elliptically Contoured Stable Distributions**

Let $U$ be a nonnegative random variable. The symbol $\mathbb{E}\mathfrak{N}\_{U\Sigma}(\cdot)$ will denote the distribution which for each Borel set $A$ in $\mathbb{R}^r$ is defined as

$$\mathbb{E}\mathfrak{N}\_{U\Sigma}(A) = \int\_0^\infty \mathfrak{N}\_{u\Sigma}(A)\, d\mathbb{P}(U < u).$$

It is easy to see that if $X$ is a random vector such that $\mathcal{L}(X) = \mathfrak{N}\_{\Sigma}$, then $\mathbb{E}\mathfrak{N}\_{U\Sigma} = \mathcal{L}\big(\sqrt{U} \circ X\big)$. In this notation, relation (20) can be written as

$$\mathfrak{S}\_{\alpha,\Sigma} = \mathbb{E}\mathfrak{N}\_{2S(\alpha/2,1)\Sigma}. \tag{22}$$

By analogy, the symbol $\mathbb{E}\mathfrak{S}\_{\alpha,U^{2/\alpha}\Sigma}$ will denote the distribution that for each Borel set $A$ in $\mathbb{R}^r$ is defined as

$$\mathbb{E}\mathfrak{S}\_{\alpha,U^{2/\alpha}\Sigma}(A) = \int\_0^\infty \mathfrak{S}\_{\alpha,u^{2/\alpha}\Sigma}(A)\, d\mathbb{P}(U < u).$$

The characteristic function corresponding to the distribution $\mathbb{E}\mathfrak{S}\_{\alpha,U^{2/\alpha}\Sigma}$ has the form

$$\int\_0^\infty \exp\left\{-\left(t^{\top} (u^{2/\alpha}\Sigma)t\right)^{\alpha/2}\right\} d\mathbb{P}(U < u) = \int\_0^\infty \exp\left\{-\left(\left(u^{1/\alpha}t\right)^{\top} \Sigma \left(u^{1/\alpha}t\right)\right)^{\alpha/2}\right\} d\mathbb{P}(U < u) =$$

$$= \mathbb{E}\exp\left\{it^{\top}\left(U^{1/\alpha} \circ S(\alpha, \Sigma)\right)\right\}, \quad t \in \mathbb{R}^{r}, \tag{23}$$

where the random variable $U$ is independent of the random vector $S(\alpha, \Sigma)$; that is, the distribution $\mathbb{E}\mathfrak{S}\_{\alpha,U^{2/\alpha}\Sigma}$ corresponds to the product $U^{1/\alpha} \circ S(\alpha, \Sigma)$.

Let $\mathcal{U}$ be the set of all nonnegative random variables. Now consider an auxiliary statement dealing with the identifiability of the family of distributions $\{\mathbb{E}\mathfrak{S}\_{\alpha,U^{2/\alpha}\Sigma} : U \in \mathcal{U}\}$.

**Lemma 1.** *Whatever a nonsingular positive definite matrix $\Sigma$ is, the family $\{\mathbb{E}\mathfrak{S}\_{\alpha,U^{2/\alpha}\Sigma} : U \in \mathcal{U}\}$ is identifiable in the sense that if $U\_1 \in \mathcal{U}$, $U\_2 \in \mathcal{U}$ and*

$$\mathbb{E}\mathfrak{S}\_{\alpha,U\_1^{2/\alpha}\Sigma}(A) = \mathbb{E}\mathfrak{S}\_{\alpha,U\_2^{2/\alpha}\Sigma}(A) \tag{24}$$

*for any set $A \in \mathfrak{B}(\mathbb{R}^r)$, then $U\_1 \stackrel{d}{=} U\_2$.*

**Proof.** The proof of this lemma is very simple. If $U \in \mathcal{U}$, then it follows from (13) that the characteristic function $\mathfrak{v}^{(U)}\_{\alpha,\Sigma}(t)$ corresponding to the distribution $\mathbb{E}\mathfrak{S}\_{\alpha,U^{2/\alpha}\Sigma}$ has the form

$$\mathfrak{v}\_{\alpha,\Sigma}^{(U)}(t) = \int\_0^\infty \exp\left\{-\left(t^{\top} (u^{2/\alpha}\Sigma)t\right)^{\alpha/2}\right\} d\mathbb{P}(U < u) = \int\_0^\infty e^{-us}\, d\mathbb{P}(U < u), \quad s = \left(t^{\top} \Sigma t\right)^{\alpha/2},\ t \in \mathbb{R}^r.\tag{25}$$

But on the right-hand side of (25) there is the Laplace–Stieltjes transform of the random variable $U$. From (24) it follows that $\mathfrak{v}^{(U\_1)}\_{\alpha,\Sigma}(t) = \mathfrak{v}^{(U\_2)}\_{\alpha,\Sigma}(t)$, whence by virtue of (25) the Laplace–Stieltjes transforms of the random variables $U\_1$ and $U\_2$ coincide, whence, in turn, it follows that $U\_1 \stackrel{d}{=} U\_2$. The lemma is proved.

**Remark 2.** *When proving Lemma 1 we established a simple but useful by-product result: if $\psi^{(U)}(s)$ is the Laplace–Stieltjes transform of the random variable $U$, then the characteristic function $\mathfrak{v}^{(U)}\_{\alpha,\Sigma}(t)$ corresponding to the distribution $\mathbb{E}\mathfrak{S}\_{\alpha,U^{2/\alpha}\Sigma}$ has the form*

$$\mathfrak{v}\_{\alpha,\Sigma}^{(U)}(t) = \psi^{(U)}\left((t^{\top}\Sigma t)^{\alpha/2}\right), \quad t \in \mathbb{R}^r. \tag{26}$$

Let $X$ be a random vector such that $\mathcal{L}(X) = \mathfrak{N}\_{\Sigma}$ with some positive definite $(r \times r)$-matrix $\Sigma$. Define the multivariate Laplace distribution as $\mathcal{L}\big(\sqrt{2W\_1} \circ X\big) = \mathbb{E}\mathfrak{N}\_{2W\_1\Sigma}$. The random vector with this multivariate Laplace distribution will be denoted $\Lambda\_{\Sigma}$. It is well known that the Laplace–Stieltjes transform $\psi^{(W\_1)}(s)$ of the random variable $W\_1$ with the exponential distribution has the form

$$
\psi^{(W\_1)}(s) = (1+s)^{-1}, \quad s > 0. \tag{27}
$$

Hence, in accordance with (27) and Remark 2, the characteristic function $\mathfrak{f}^{(\Lambda)}\_{\Sigma}(t)$ of the random vector $\Lambda\_{\Sigma}$ has the form

$$\mathfrak{f}\_{\Sigma}^{(\Lambda)}(t) = \psi^{(W\_1)}(t^{\top}\Sigma t) = \left(1 + t^{\top}\Sigma t\right)^{-1}, \quad t \in \mathbb{R}^r.$$
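
This characteristic function is easy to test by simulation: sampling $\Lambda\_{\Sigma} = \sqrt{2W\_1} \circ X$ with $X$ bivariate normal, the empirical characteristic function should approach $(1 + t^{\top}\Sigma t)^{-1}$; the imaginary part vanishes by symmetry, so the cosine average suffices. The $2\times 2$ matrix $\Sigma$, its hand-computed Cholesky factor, the evaluation point $t$ and the seed are illustrative assumptions of the sketch:

```python
import math
import random

random.seed(5)

# illustrative 2x2 covariance matrix and its Cholesky factor (hand-computed)
sigma = [[1.0, 0.5], [0.5, 1.0]]
chol = [[1.0, 0.0], [0.5, math.sqrt(0.75)]]

def sample_laplace_vector():
    # Lambda_Sigma = sqrt(2 W_1) ∘ X with L(X) = N_Sigma
    z0, z1 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    x = (chol[0][0] * z0, chol[1][0] * z0 + chol[1][1] * z1)
    scale = math.sqrt(2.0 * random.expovariate(1.0))  # sqrt(2 W_1)
    return (scale * x[0], scale * x[1])

t = (0.5, 0.3)
t_sigma_t = sum(t[i] * sigma[i][j] * t[j] for i in range(2) for j in range(2))

n = 200_000
emp_cf = sum(math.cos(t[0] * v[0] + t[1] * v[1])
             for v in (sample_laplace_vector() for _ in range(n))) / n
exact_cf = 1.0 / (1.0 + t_sigma_t)  # psi^{(W_1)}(t' Sigma t)
```

The empirical average matches $(1 + t^{\top}\Sigma t)^{-1}$ to Monte Carlo accuracy, illustrating Remark 2 in the special case $U = 2W\_1$, $\alpha = 2$.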

#### **6. Generalized Mittag–Leffler Distributions**

We begin with the univariate case. The probability distribution of a nonnegative random variable $M\_{\delta}$ whose Laplace transform is

$$
\psi\_{\delta}^{(M)}(s) = \mathbb{E}e^{-sM\_{\delta}} = \left(1 + \lambda s^{\delta}\right)^{-1}, \quad s \geq 0,\tag{28}
$$

where $\lambda > 0$, $0 < \delta \leq 1$, is called *the Mittag–Leffler distribution*. For simplicity, in what follows we will consider the standard scale case and assume that $\lambda = 1$.

The origin of the term *Mittag–Leffler distribution* is due to that the probability density corresponding to Laplace transform (28) has the form

$$f\_{\delta}^{(M)}(x) = \frac{1}{x^{1-\delta}} \sum\_{n=0}^{\infty} \frac{(-1)^n x^{\delta n}}{\Gamma(\delta n + \delta)} = -\frac{d}{dx} E\_{\delta}(-x^{\delta}), \quad x > 0,$$

where $E\_{\delta}(z)$ is the Mittag–Leffler function with index $\delta$ that is defined as the power series

$$E\_{\delta}(z) = \sum\_{n=0}^{\infty} \frac{z^n}{\Gamma(\delta n + 1)}, \quad \delta > 0, \ z \in \mathbb{C}.$$

With *δ* = 1, the Mittag–Leffler distribution turns into the standard exponential distribution, that is, *F*<sub>*M*<sub>1</sub></sub>(*x*) = [1 − *e*<sup>−*x*</sup>]**1**(*x* ≥ 0), *x* ∈ R. But with *δ* < 1 the Mittag–Leffler density has a heavy power-type tail: from the well-known asymptotic properties of the Mittag–Leffler function it can be deduced that if 0 < *δ* < 1, then

$$f\_{\delta}^{(M)}(x) \sim \frac{\sin(\delta \pi) \Gamma(\delta + 1)}{\pi x^{\delta + 1}}$$

as *x* → <sup>∞</sup>, see, e.g., [39].

It is well known that the Mittag–Leffler distribution is geometrically stable. This means that if *X*<sub>1</sub>, *X*<sub>2</sub>, ... are independent random variables whose distributions belong to the domain of attraction of a one-sided *α*-strictly stable law L(*S*(*α*, 1)) and *N*<sub>1,*p*</sub> is a random variable independent of *X*<sub>1</sub>, *X*<sub>2</sub>, ... and having the geometric distribution

$$\mathbb{P}(\mathsf{N}\_{1,p} = n) = p(1-p)^{n-1}, \ \ n = 1,2,\ldots, \ \ p \in (0,1), \tag{29}$$

then for each *p* ∈ (0, 1) there exists a constant *a<sub>p</sub>* > 0 such that *a<sub>p</sub>*(*X*<sub>1</sub> + ... + *X*<sub>*N*<sub>1,*p*</sub></sub>) =⇒ *M<sub>δ</sub>* as *p* → 0, see, e.g., [40].

The history of the Mittag–Leffler distribution was discussed in [2]. For more details see e.g., [2,3] and the references therein. The Mittag–Leffler distributions are of serious theoretical interest in the problems related to thinned (or rarefied) homogeneous flows of events such as renewal processes or anomalous diffusion or relaxation phenomena, see [41,42] and the references therein.

Let *ν* > 0, *δ* ∈ (0, 1]. It can be easily seen that the Laplace transform *ψ*<sup>(*M*)</sup><sub>*δ*</sub>(*s*) (see (28)) is infinitely divisible. Therefore, any positive power of it is a Laplace transform and, moreover, is infinitely divisible as well. The distribution of a nonnegative random variable *M*<sub>*δ*,*ν*</sub> defined by the Laplace–Stieltjes transform

$$\psi\_{\delta,\nu}^{(M)}(s) = \mathbb{E}e^{-sM\_{\delta,\nu}} = \left(1 + s^{\delta}\right)^{-\nu}, \quad s \geqslant 0,\tag{30}$$

is called the *generalized Mittag–Leffler distribution*, see [43,44] and the references therein. Sometimes this distribution is called the *Pillai distribution* [20], although in the original paper [18] R. Pillai called it *semi-Laplace*. In the present paper we will keep to the first term *generalized Mittag–Leffler distribution*.

The properties of univariate generalized Mittag–Leffler distribution are discussed in [4,43–45]. In particular, if *δ* ∈ (0, 1] and *ν* > 0, then

$$M\_{\delta,\nu} \stackrel{d}{=} S(\delta, 1) \circ \overline{G}\_{\nu,\delta,1} \stackrel{d}{=} S(\delta, 1) \circ G\_{\nu,1}^{1/\delta} \tag{31}$$

(see [43,44]). If *ν* = 1, then (31) turns into

$$M\_{\delta} \stackrel{d}{=} S(\delta, 1) \circ W\_1^{1/\delta}. \tag{32}$$
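Representation (32), together with the Laplace transform (28), can be checked numerically. The sketch below is illustrative only; it assumes NumPy, and the one-sided stable variable is sampled by Kanter's classical representation, which is standard but not discussed in this paper:

```python
import numpy as np

def sample_onesided_stable(alpha, size, rng):
    # Kanter's representation of S(alpha, 1) for 0 < alpha < 1,
    # normalized so that E exp(-s S) = exp(-s**alpha)
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    a = (np.sin(alpha * u) ** alpha
         * np.sin((1.0 - alpha) * u) ** (1.0 - alpha)
         / np.sin(u)) ** (1.0 / (1.0 - alpha))
    return (a / e) ** ((1.0 - alpha) / alpha)

rng = np.random.default_rng(1)
delta, n = 0.7, 300_000

# representation (32): M_delta = S(delta, 1) * W_1**(1/delta)
M = (sample_onesided_stable(delta, n, rng)
     * rng.exponential(1.0, n) ** (1.0 / delta))

s = 1.5
emp = np.exp(-s * M).mean()
theo = 1.0 / (1.0 + s ** delta)   # Laplace transform (28) with lambda = 1
print(emp, theo)
```

The empirical Laplace transform of the simulated *M<sub>δ</sub>* matches (28) to Monte Carlo accuracy.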

If *β* ≥ *δ*, then the moments of order *β* of the random variable *M*<sub>*δ*,*ν*</sub> are infinite, and if 0 < *β* < *δ* < 1, then

$$\mathbb{E}M\_{\delta,\nu}^{\beta} = \frac{\Gamma\left(1-\frac{\beta}{\delta}\right)\Gamma\left(\nu+\frac{\beta}{\delta}\right)}{\Gamma(1-\beta)\Gamma(\nu)},$$

see [4].
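This moment formula can be checked by simulation via representation (31). The sketch below is illustrative only (NumPy assumed; the one-sided stable sampler again uses Kanter's well-known representation, which is not part of this paper):

```python
import numpy as np
from math import gamma

def sample_onesided_stable(alpha, size, rng):
    # Kanter's representation of S(alpha, 1), 0 < alpha < 1
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    a = (np.sin(alpha * u) ** alpha
         * np.sin((1.0 - alpha) * u) ** (1.0 - alpha)
         / np.sin(u)) ** (1.0 / (1.0 - alpha))
    return (a / e) ** ((1.0 - alpha) / alpha)

rng = np.random.default_rng(2)
delta, nu, beta, n = 0.8, 1.5, 0.3, 400_000

# representation (31): M_{delta,nu} = S(delta, 1) * G_{nu,1}**(1/delta)
M = (sample_onesided_stable(delta, n, rng)
     * rng.gamma(nu, 1.0, n) ** (1.0 / delta))

emp = (M ** beta).mean()
theo = (gamma(1 - beta / delta) * gamma(nu + beta / delta)
        / (gamma(1 - beta) * gamma(nu)))
print(emp, theo)
```

The parameter *β* is chosen with 2*β* < *δ* so that the Monte Carlo estimator of the *β*-th moment has finite variance.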

In [4] it was demonstrated that the generalized Mittag–Leffler distribution can be represented as a scale mixture of 'ordinary' Mittag–Leffler distributions: if *ν* ∈ (0, 1] and *δ* ∈ (0, 1], then

$$M\_{\delta,\nu} \stackrel{d}{=} Z\_{\nu,1}^{-1/\delta} \circ M\_{\delta}.\tag{33}$$

In [4] it was also shown that any generalized Mittag–Leffler distribution is a scale mixture of a one-sided stable law with any greater characteristic parameter, the mixing distribution being the generalized Mittag–Leffler law: if *δ* ∈ (0, 1], *δ*′ ∈ (0, 1) and *ν* > 0, then

$$M\_{\delta\delta',\nu} \stackrel{d}{=} S(\delta,1) \circ M\_{\delta',\nu}^{1/\delta}. \tag{34}$$

Now turn to the multivariate case. As the starting point of our consideration we take representation (31). The nearest aim is to obtain its multivariate generalization. Let *S*(*α*, *μ*) be a strictly stable random vector with *α* ≠ 1. Consider the characteristic function h<sub>*α*,*ν*,*μ*</sub>(*t*) of the random vector *G*<sup>1/*α*</sup><sub>*ν*,1</sub> ◦ *S*(*α*, *μ*). From (13) and (6) we have

$$\mathfrak{h}\_{\alpha,\nu,\mu}(\mathbf{t}) = \mathbb{E}\exp\left\{i\mathbf{t}^{\top} \left(G\_{\nu,1}^{1/\alpha} \circ S(\alpha,\mu)\right)\right\} = \frac{1}{\Gamma(\nu)} \int\_{0}^{\infty} \mathfrak{f}\_{\alpha,\mu}(s^{1/\alpha}\mathbf{t})\, s^{\nu-1} e^{-s}\, ds =$$

$$= \frac{1}{\Gamma(\nu)} \int\_{0}^{\infty} \exp\left\{-s\left(1 - \log \mathfrak{f}\_{\alpha,\mu}(\mathbf{t})\right)\right\} s^{\nu-1}\, ds =$$

$$= \frac{1}{\Gamma(\nu)\left(1 - \log \mathfrak{f}\_{\alpha,\mu}(\mathbf{t})\right)^{\nu}} \int\_{0}^{\infty} e^{-s} s^{\nu-1}\, ds = \left(1 - \log \mathfrak{f}\_{\alpha,\mu}(\mathbf{t})\right)^{-\nu}. \tag{35}$$

That is, from (1) and (35) we obtain the following result.

**Lemma 2.** *The characteristic function* h<sub>*α*,*ν*,*μ*</sub>(**t**) *of the product of the random variable G*<sub>*ν*,*α*,1</sub> *with the generalized gamma distribution with parameters ν* > 0*,* 0 < *α* ≤ 2*, α* ≠ 1*, λ* = 1 *and the random vector S*(*α*, *μ*) *with the multivariate strictly stable distribution with characteristic exponent α and spectral measure μ, independent of G*<sub>*ν*,*α*,1</sub>*, has the form* (35)*.*

Now, comparing the right-hand side of (35) with (30), we can conclude that if *G*<sub>*ν*,*α*,1</sub> is the random variable with the generalized gamma distribution with parameters *ν* > 0, 0 < *α* ≤ 2, *α* ≠ 1, *λ* = 1, and *S*(*α*, *μ*<sup>+</sup>) is a random vector with the one-sided strictly stable distribution with characteristic exponent *α* ∈ (0, 1) and spectral measure *μ*<sup>+</sup> concentrated on R<sup>*r*</sup><sub>+</sub>, then we have all grounds to call the distribution of the random vector *G*<sub>*ν*,*α*,1</sub> ◦ *S*(*α*, *μ*<sup>+</sup>) the *multivariate generalized Mittag–Leffler distribution* with parameters *α*, *ν* and *μ*<sup>+</sup>. To make it possible to consider the univariate generalized Mittag–Leffler distribution as a particular case of the more general multivariate definition, here we use the measure *μ*<sup>+</sup> and *α* ∈ (0, 1) characterizing a 'one-sided' stable law, although from the formal viewpoint this is not obligatory. Moreover, as we will see below, in the multivariate case the (generalized) Mittag–Leffler distribution can be regarded as a special case of the (generalized) Linnik law defined in the same way but with *μ* and *α* ∈ (0, 2].

By *M*<sub>*α*,*ν*,*μ*<sup>+</sup></sub> we will denote the random vector with the multivariate generalized Mittag–Leffler distribution, *M*<sub>*α*,*ν*,*μ*<sup>+</sup></sub> <sup>*d*</sup>= *G*<sub>*ν*,*α*,1</sub> ◦ *S*(*α*, *μ*<sup>+</sup>).

Setting *ν* = 1, we obtain the definition of the 'ordinary' multivariate Mittag–Leffler distribution as the distribution of the random vector *M*<sub>*α*,*μ*<sup>+</sup></sub> <sup>*d*</sup>= *W*<sup>1/*α*</sup><sub>1</sub> ◦ *S*(*α*, *μ*<sup>+</sup>) given by the characteristic function h<sub>*α*,*μ*<sup>+</sup></sub>(*t*) = (1 − log f<sub>*α*,*μ*<sup>+</sup></sub>(*t*))<sup>−1</sup>.

Some properties of the multivariate generalized Mittag–Leffler distributions generalizing (33) and (34) to the multivariate case are presented in the following theorem.

**Theorem 2.** *Let δ* ∈ (0, 1]*, δ*′ ∈ (0, 1) *and ν* > 0*. Then*

$$\mathbf{M}\_{\delta\delta',\nu,\mu^{+}} \stackrel{d}{=} M\_{\delta',\nu}^{1/\delta} \circ \mathbf{S}(\delta,\mu^{+}),\tag{36}$$

$$\mathbf{M}\_{\delta,\nu,\mu^{+}} \stackrel{d}{=} Z\_{\nu,1}^{-1/\delta} \circ \mathbf{M}\_{\delta,\mu^{+}} \tag{37}$$

*with the random variable <sup>Z</sup><sup>ν</sup>*,<sup>1</sup> *defined in* (3)*.*

**Proof.** To prove (36) use (1) together with representation (18) in Theorem 1 and obtain

$$\mathbf{M}\_{\delta\delta',\nu,\mu^{+}} \stackrel{d}{=} \overline{\mathbf{G}}\_{\nu,\delta\delta',1} \circ \mathbf{S}(\delta\delta',\mu^{+}) \stackrel{d}{=} \overline{\mathbf{G}}\_{\nu,\delta',1}^{1/\delta} \circ \mathbf{S}^{1/\delta}(\delta',1) \circ \mathbf{S}(\delta,\mu^{+}) \stackrel{d}{=} M\_{\delta',\nu}^{1/\delta} \circ \mathbf{S}(\delta,\mu^{+}) .$$

To prove (37) use (1) together with (4) and obtain

$$\mathbf{M}\_{\delta,\nu,\mu^{+}} \stackrel{d}{=} \overline{\mathbf{G}}\_{\nu,\delta,1} \circ \mathbf{S}(\delta,\mu^{+}) \stackrel{d}{=} Z\_{\nu,1}^{-1/\delta} \circ W\_{1}^{1/\delta} \circ \mathbf{S}(\delta,\mu^{+}) \stackrel{d}{=} Z\_{\nu,1}^{-1/\delta} \circ \mathbf{M}\_{\delta,\mu^{+}}.$$

The theorem is proved.

#### **7. Generalized Linnik Distributions**

In 1953 Yu. V. Linnik [46] introduced a class of symmetric distributions whose characteristic functions have the form

$$f\_a^{(L)}(t) = \left(1 + |t|^a\right)^{-1}, \quad t \in \mathbb{R},\tag{38}$$

where *α* ∈ (0, 2]. The distributions with the characteristic function (38) are traditionally called the *Linnik distributions*. Although sometimes the term *α-Laplace distributions* [18] is used, we will use the first term which has already become conventional. If *α* = 2, then the Linnik distribution turns into the Laplace distribution corresponding to the density

$$f^{(\Lambda)}(x) = \frac{1}{2} e^{-|x|}, \quad x \in \mathbb{R}. \tag{39}$$

A random variable with density (39) will be denoted Λ. A random variable with the Linnik distribution with parameter *α* will be denoted *L<sup>α</sup>*.

Perhaps most often Linnik distributions are recalled as examples of symmetric geometrically stable distributions. This means that if *X*<sub>1</sub>, *X*<sub>2</sub>, ... are independent random variables whose distributions belong to the domain of attraction of a symmetric *α*-strictly stable law and *N*<sub>1,*p*</sub> is a random variable independent of *X*<sub>1</sub>, *X*<sub>2</sub>, ... and having the geometric distribution (29), then for each *p* ∈ (0, 1) there exists a constant *a<sub>p</sub>* > 0 such that *a<sub>p</sub>*(*X*<sub>1</sub> + ... + *X*<sub>*N*<sub>1,*p*</sub></sub>) =⇒ *L<sub>α</sub>* as *p* → 0, see, e.g., [47] or [40].

The properties of the Linnik distributions were studied in many papers. We should mention [7–9,48] and other papers, see the survey in [2].

In [2,7] it was demonstrated that

$$L\_{\alpha} \stackrel{d}{=} W\_1^{1/\alpha} \circ S(\alpha, 0) \stackrel{d}{=} \sqrt{2M\_{\alpha/2}} \circ X, \tag{40}$$

where the random variable *Mα*/2 has the Mittag–Leffler distribution with parameter *α*/2.
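Both representations in (40) can be compared numerically. The sketch below is illustrative only (NumPy assumed): *S*(*α*, 0) is sampled through the normal scale mixture *X*√(2*S*(*α*/2, 1)), i.e., the univariate version of (14), and the one-sided stable variable through Kanter's classical representation, which is standard but not part of this paper:

```python
import numpy as np

def sample_onesided_stable(alpha, size, rng):
    # Kanter's representation of S(alpha, 1), 0 < alpha < 1
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    a = (np.sin(alpha * u) ** alpha
         * np.sin((1.0 - alpha) * u) ** (1.0 - alpha)
         / np.sin(u)) ** (1.0 / (1.0 - alpha))
    return (a / e) ** ((1.0 - alpha) / alpha)

rng = np.random.default_rng(3)
alpha, n = 1.4, 300_000

# S(alpha, 0) as a normal scale mixture (univariate version of (14))
S_sym = rng.standard_normal(n) * np.sqrt(2.0 * sample_onesided_stable(alpha / 2, n, rng))
L1 = rng.exponential(1.0, n) ** (1.0 / alpha) * S_sym      # W_1^{1/alpha} * S(alpha, 0)

# sqrt(2 M_{alpha/2}) * X with M_{alpha/2} = S(alpha/2, 1) * W_1^{2/alpha}, cf. (32)
M = sample_onesided_stable(alpha / 2, n, rng) * rng.exponential(1.0, n) ** (2.0 / alpha)
L2 = np.sqrt(2.0 * M) * rng.standard_normal(n)

t = 0.8
emp1, emp2 = np.cos(t * L1).mean(), np.cos(t * L2).mean()
theo = 1.0 / (1.0 + abs(t) ** alpha)    # characteristic function (38)
print(emp1, emp2, theo)
```

Both empirical characteristic functions agree with (38) to Monte Carlo accuracy.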

The multivariate Linnik distribution was introduced by D. N. Anderson in [49] where it was proved that the function

$$f\_{a,\Sigma}^{(L)}(t) = \left[1 + (t^{\top}\Sigma t)^{a/2}\right]^{-1}, \quad t \in \mathbb{R}^r, \ a \in (0, 2), \tag{41}$$

is the characteristic function of an *r*-variate probability distribution, where Σ is a positive definite (*r* × *r*)-matrix. In [49] the distribution corresponding to the characteristic function (41) was called the *r-variate Linnik distribution*. For the properties of these distributions see [16,49]. To distinguish from the general case, in what follows, the distribution corresponding to characteristic function (41) will be called *multivariate* (*centered*) *elliptically contoured Linnik distribution*.

The *r*-variate elliptically contoured Linnik distribution can also be defined in another way. Let *X* be a random vector such that L(*X*) = NΣ, where Σ is a positive definite (*r* × *r*)-matrix, independent of the random variable *Mα*/2. By analogy with (40) introduce the random vector *<sup>L</sup><sup>α</sup>*,<sup>Σ</sup> as

$$L\_{\mathfrak{a},\Sigma} = \sqrt{2M\_{\mathfrak{a}/2}} \circ X.$$

Then, in accordance with what has been said in Section 5,

$$\mathcal{L}(L\_{\alpha,\Sigma}) = \mathbb{E}\mathfrak{N}\_{2M\_{\alpha/2}\Sigma}.\tag{42}$$

Using Remark 2 we can easily make sure that the two definitions of the multivariate elliptically contoured Linnik distribution coincide. Indeed, taking (28) into account, according to Remark 2, the characteristic function of the random vector *L*<sub>*α*,Σ</sub> defined by (42) has the form

$$\mathbb{E}\exp\{i\mathbf{t}^\top L\_{\alpha,\Sigma}\} = \psi\_{\alpha/2}^{(M)}\left(\mathbf{t}^\top \Sigma \mathbf{t}\right) = \left[1 + \left(\mathbf{t}^\top \Sigma \mathbf{t}\right)^{\alpha/2}\right]^{-1} = \mathfrak{f}\_{\alpha,\Sigma}^{(L)}(\mathbf{t}), \quad \mathbf{t} \in \mathbb{R}^r,$$

that coincides with Anderson's definition (41).

Based on (40), one more equivalent definition of the multivariate elliptically contoured Linnik distribution can be proposed. Namely, let *<sup>L</sup><sup>α</sup>*,<sup>Σ</sup> be an *r*-variate random vector such that

$$L\_{a,\Sigma} = \mathcal{W}\_1^{1/a} \circ \mathcal{S}(a,\Sigma). \tag{43}$$

In accordance with (27) and Remark 2 the characteristic function of the random vector *<sup>L</sup><sup>α</sup>*,<sup>Σ</sup> defined by (43) again has the form

$$\mathbb{E}\exp\{it^\top L\_{a,\Sigma}\} = \psi^{(\mathcal{W}\_1)}((t^\top \Sigma t)^{a/2}) = \left[1 + (t^\top \Sigma t)^{a/2}\right]^{-1} = \mathfrak{f}\_{a,\Sigma}^{(L)}(t), \ t \in \mathbb{R}^r.$$

The definitions (42) and (43) open the way to formulate limit theorems stating that the multivariate elliptically contoured Linnik distribution can not only be limiting for geometric random sums of independent identically distributed random vectors with infinite second moments [50], but can also be limiting for random sums of independent random vectors with finite covariance matrices.

It can be easily seen that the characteristic function f<sup>(*L*)</sup><sub>*α*</sub>(*t*) (see (38)) is infinitely divisible. Therefore, any positive power of it is a characteristic function and, moreover, is infinitely divisible as well. In [17], Pakes showed that the probability distributions known as *generalized Linnik distributions*, which have characteristic functions

$$f\_{u, \nu}^{(L)}(t) = \left(1 + |t|^{\mathfrak{a}}\right)^{-\nu}, \quad t \in \mathbb{R}, \ 0 < \mathfrak{a} \leqslant 2, \ \nu > 0,\tag{44}$$

play an important role in some characterization problems of mathematical statistics. The class of probability distributions corresponding to characteristic function (44) has been found to possess interesting properties and applications, see [6,7,10–12,14,51,52] and related papers. In particular, these distributions are good candidates for modeling financial data that exhibit high kurtosis and heavy tails [53].

Any random variable with the characteristic function (44) will be denoted *L<sup>α</sup>*,*ν*.

Recall some results containing mixture representations for the generalized Linnik distribution. The following well-known result is due to Devroye [7] and Pakes [17] who showed that

$$L\_{\mathfrak{a},\nu} \stackrel{d}{=} S(\mathfrak{a},0) \circ G\_{\nu,1}^{1/a} \stackrel{d}{=} S(\mathfrak{a},0) \circ \overline{G}\_{\nu,\mathfrak{a},1} \tag{45}$$

for any *α* ∈ (0, 2] and *ν* > 0.

It is well known that

$$\mathbb{E}G\_{\nu,1}^{\gamma} = \frac{\Gamma(\nu+\gamma)}{\Gamma(\nu)}$$

for *γ* > −*ν*. Hence, for 0 ≤ *β* < *α*, from (8) and (45) we obtain

$$\mathbb{E}|L\_{\alpha,\nu}|^{\beta} = \mathbb{E}|S(\alpha,0)|^{\beta} \cdot \mathbb{E}G\_{\nu,1}^{\beta/\alpha} = \frac{2^{\beta}}{\sqrt{\pi}} \cdot \frac{\Gamma\left(\frac{\beta+1}{2}\right)\Gamma\left(1-\frac{\beta}{\alpha}\right)\Gamma\left(\nu+\frac{\beta}{\alpha}\right)}{\Gamma\left(1-\frac{\beta}{2}\right)\Gamma(\nu)}.$$
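This moment identity can also be checked by simulation (illustrative sketch, NumPy assumed; *L<sub>α,ν</sub>* is sampled via (45) with *S*(*α*, 0) generated as a normal scale mixture, and parameters are chosen with 2*β* < *α* so that the Monte Carlo estimator has finite variance):

```python
import numpy as np
from math import gamma, sqrt, pi

def sample_onesided_stable(alpha, size, rng):
    # Kanter's representation of S(alpha, 1), 0 < alpha < 1
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    a = (np.sin(alpha * u) ** alpha
         * np.sin((1.0 - alpha) * u) ** (1.0 - alpha)
         / np.sin(u)) ** (1.0 / (1.0 - alpha))
    return (a / e) ** ((1.0 - alpha) / alpha)

rng = np.random.default_rng(4)
alpha, nu, beta, n = 1.5, 2.0, 0.5, 400_000

# L_{alpha,nu} = S(alpha, 0) * G_{nu,1}^{1/alpha}, cf. (45)
S_sym = rng.standard_normal(n) * np.sqrt(2.0 * sample_onesided_stable(alpha / 2, n, rng))
L = S_sym * rng.gamma(nu, 1.0, n) ** (1.0 / alpha)

emp = (np.abs(L) ** beta).mean()
theo = (2.0 ** beta / sqrt(pi)
        * gamma((beta + 1) / 2) * gamma(1 - beta / alpha) * gamma(nu + beta / alpha)
        / (gamma(1 - beta / 2) * gamma(nu)))
print(emp, theo)
```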

Generalizing and improving some results of [15,17], with the account of (31) in [5] it was demonstrated that for *ν* > 0 and *α* ∈ (0, 2]

$$L\_{\alpha,\nu} \stackrel{d}{=} X \circ \sqrt{2S(\alpha/2,1)} \circ G\_{\nu,1}^{1/\alpha} \stackrel{d}{=} X \circ \sqrt{2S(\alpha/2,1) \circ \overline{G}\_{\nu,\alpha/2,1}} \stackrel{d}{=} X \circ \sqrt{2M\_{\alpha/2,\nu}},\tag{46}$$

that is, the generalized Linnik distribution is a normal scale mixture with the generalized Mittag–Leffler mixing distribution.

It is easy to see that for any *α* > 0 and *α*′ > 0

$$\overline{G}\_{\nu,\alpha\alpha',1} \stackrel{d}{=} G\_{\nu,1}^{1/\alpha\alpha'} \stackrel{d}{=} \left(G\_{\nu,1}^{1/\alpha'}\right)^{1/\alpha} \stackrel{d}{=} \overline{G}\_{\nu,\alpha',1}^{1/\alpha}.$$

Therefore, for *α* ∈ (0, 2], *α*′ ∈ (0, 1) and *ν* > 0, using (45) and the univariate version of (14), we obtain the following chain of relations:

$$L\_{\alpha\alpha',\nu} \stackrel{d}{=} S(\alpha\alpha',0) \circ G\_{\nu,1}^{1/\alpha\alpha'} \stackrel{d}{=} S(\alpha,0) \circ S^{1/\alpha}(\alpha',1) \circ G\_{\nu,1}^{1/\alpha\alpha'} \stackrel{d}{=}$$

$$\stackrel{d}{=} S(\alpha,0) \circ \left(S(\alpha',1) \circ \overline{G}\_{\nu,\alpha',1}\right)^{1/\alpha} \stackrel{d}{=} S(\alpha,0) \circ M\_{\alpha',\nu}^{1/\alpha}.$$

Hence, the following statement, more general than (46), holds, representing the generalized Linnik distribution as a scale mixture of a symmetric stable law with any greater characteristic parameter, the mixing distribution being the generalized Mittag–Leffler law: if *α* ∈ (0, 2], *α*′ ∈ (0, 1) and *ν* > 0, then

$$L\_{\alpha \alpha', \nu} \overset{d}{=} \mathcal{S}(\alpha, 0) \circ M\_{\alpha', \nu}^{1/\alpha}.$$

Now let *ν* ∈ (0, 1]. From (45) and (4) it follows that

$$L\_{\mathfrak{a},\nu} \stackrel{d}{=} \mathcal{S}(\mathfrak{a},0) \circ \mathcal{G}\_{\nu,1}^{1/\mathfrak{a}} \stackrel{d}{=} \mathcal{S}(\mathfrak{a},0) \circ \mathcal{W}\_1^{1/\mathfrak{a}} \circ Z\_{\nu,1}^{-1/\mathfrak{a}} \stackrel{d}{=} L\_{\mathfrak{a}} \circ Z\_{\nu,1}^{-1/\mathfrak{a}}$$

yielding the following relation proved in [5]: if *ν* ∈ (0, 1] and *α* ∈ (0, 2], then

$$L\_{\alpha,\nu} \stackrel{d}{=} L\_{\alpha} \circ Z\_{\nu,1}^{-1/\alpha}.$$

In other words, with *ν* ∈ (0, 1] and *α* ∈ (0, 2], the generalized Linnik distribution is a scale mixture of 'ordinary' Linnik distributions. In the same paper the representation of the generalized Linnik distribution via the Laplace and 'ordinary' Mittag–Leffler distributions was obtained.

For *δ* ∈ (0, 1] denote

$$R\_{\delta} = \frac{S(\delta, 1)}{S'(\delta, 1)},$$

where *S*(*δ*, 1) and *S*′(*δ*, 1) are independent random variables with one and the same one-sided stable distribution with characteristic exponent *δ*. In [2] it was shown that the probability density *f*<sup>(*R*)</sup><sub>*δ*</sub>(*x*) of the ratio *R<sub>δ</sub>* of two independent random variables with one and the same one-sided strictly stable distribution with parameter *δ* has the form

$$f\_{\delta}^{(R)}(x) = \frac{\sin(\pi \delta)\, x^{\delta - 1}}{\pi \left[1 + x^{2\delta} + 2x^{\delta} \cos(\pi \delta)\right]}, \quad x > 0,$$

see also [1], Section 3.3, where this formula appears among other calculations, although it was not stated explicitly. In [5] it was proved that if *ν* ∈ (0, 1] and *α* ∈ (0, 2], then

$$L\_{\mathfrak{a},\nu} \stackrel{d}{=} X \circ Z\_{\nu,1}^{-1/a} \circ \sqrt{2M\_{\mathfrak{a}/2}} \stackrel{d}{=} \Lambda \circ Z\_{\nu,1}^{-1/a} \circ \sqrt{R\_{\mathfrak{a}/2}}.$$

So, the density of the univariate generalized Linnik distribution admits a simple integral representation via known elementary densities (2), (39) and (45).
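The density of *R<sub>δ</sub>* given above can be validated against simulation (illustrative sketch, NumPy assumed; *R<sub>δ</sub>* is sampled as the ratio of two independent one-sided stable variables generated by Kanter's classical representation, and the density is integrated by the trapezoidal rule):

```python
import numpy as np

def sample_onesided_stable(alpha, size, rng):
    # Kanter's representation of S(alpha, 1), 0 < alpha < 1
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    a = (np.sin(alpha * u) ** alpha
         * np.sin((1.0 - alpha) * u) ** (1.0 - alpha)
         / np.sin(u)) ** (1.0 / (1.0 - alpha))
    return (a / e) ** ((1.0 - alpha) / alpha)

rng = np.random.default_rng(5)
delta, n = 0.6, 300_000
R = sample_onesided_stable(delta, n, rng) / sample_onesided_stable(delta, n, rng)

def dens(x):
    # density of R_delta stated above
    return (np.sin(np.pi * delta) * x ** (delta - 1.0)
            / (np.pi * (1.0 + x ** (2.0 * delta) + 2.0 * x ** delta * np.cos(np.pi * delta))))

x0 = 1.0
xs = np.linspace(1e-6, x0, 20_001)
f = dens(xs)
cdf_num = float(np.sum((f[1:] + f[:-1]) * np.diff(xs)) / 2.0)   # trapezoidal rule
cdf_emp = float((R <= x0).mean())
print(cdf_emp, cdf_num)
```

At *x*<sub>0</sub> = 1 both values should also be close to 1/2, since *R<sub>δ</sub>* and 1/*R<sub>δ</sub>* have the same distribution.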

As concerns the property of geometric stability, the following statement holds.

**Lemma 3.** *Any univariate symmetric random variable Yα is geometrically stable if and only if it is representable as*

$$Y\_{\alpha} = W\_1^{1/\alpha} \circ S(\alpha, 0), \quad 0 < \alpha \leqslant 2.$$

*Any univariate positive random variable Yα is geometrically stable if and only if it is representable as*

$$Y\_{\alpha} = W\_1^{1/\alpha} \circ S(\alpha, 1), \quad 0 < \alpha \leqslant 1.$$

**Proof.** These representations immediately follow from the definition of geometrically stable distributions and the transfer theorem for geometric random sums, see, e.g., [54].

**Corollary 1.** *If ν* ≠ 1*, then from the identifiability of scale mixtures of stable laws (see Lemma 1) it follows that the generalized Linnik and generalized Mittag–Leffler distributions are not geometrically stable.*

Let Σ be a positive definite (*r* × *r*)-matrix, *α* ∈ (0, 2], *ν* > 0. As the 'ordinary' multivariate Linnik distribution, the multivariate elliptically contoured generalized Linnik distribution can be defined in at least two equivalent ways. First, it can be defined by its characteristic function. Namely, a multivariate distribution is called (centered) elliptically contoured generalized Linnik law, if the corresponding characteristic function has the form

$$f\_{a,\nu,\Sigma}^{(L)}(\mathbf{t}) = \left[1 + (\mathbf{t}^\top \Sigma \mathbf{t})^{a/2}\right]^{-\nu}, \quad \mathbf{t} \in \mathbb{R}^r. \tag{47}$$

Second, let *X* be a random vector such that L(*X*) = NΣ, independent of the random variable *<sup>M</sup>α*/2,*ν* with the generalized Mittag–Leffler distribution. By analogy with (46), introduce the random vector *<sup>L</sup><sup>α</sup>*,*ν*,<sup>Σ</sup> as

$$L\_{\mathfrak{a},\nu,\Sigma} = \sqrt{2M\_{\mathfrak{a}/2,\nu}} \circ \mathbf{X}.$$

Then, in accordance with what has been said in Section 5,

$$\mathcal{L}(\mathbf{L}\_{\alpha,\nu,\Sigma}) = \mathbb{E}\mathfrak{N}\_{2M\_{\alpha/2,\nu}\Sigma}.\tag{48}$$

The distribution (48) will be called the multivariate (centered) elliptically contoured generalized Linnik distribution.

Using Remark 2, we can easily make sure that the two definitions of the multivariate elliptically contoured generalized Linnik distribution coincide. Indeed, taking (30) into account, according to Remark 2, the characteristic function of the random vector *L*<sub>*α*,*ν*,Σ</sub> defined by (48) has the form

$$\mathbb{E}\exp\{it^\top L\_{a,\nu,\Sigma}\} = \psi\_{a/2,\nu}^{(M)}\left(t^\top \Sigma t\right) = \left[1 + \left(t^\top \Sigma t\right)^{a/2}\right]^{-\nu} = \mathfrak{f}\_{a,\nu,\Sigma}^{(L)}(\mathbf{t}), \ \mathbf{t} \in \mathbb{R}^r,$$

that coincides with (47).

Based on (45), one more equivalent definition of the multivariate elliptically contoured generalized Linnik distribution can be proposed. Namely, let *<sup>L</sup><sup>α</sup>*,*ν*,<sup>Σ</sup> be an *r*-variate random vector such that

$$L\_{\alpha,\nu,\Sigma} = G\_{\nu,1}^{1/\alpha} \circ S(\alpha,\Sigma). \tag{49}$$

If *ν* = 1, then, by definition, we obtain the random vector *W*<sup>1/*α*</sup><sub>1</sub> ◦ *S*(*α*, Σ) = *L*<sub>*α*,Σ</sub> with the 'ordinary' multivariate elliptically contoured Linnik distribution.

It is well known that the Laplace–Stieltjes transform *ψ*<sup>(*G*)</sup><sub>*ν*,1</sub>(*s*) of the random variable *G*<sub>*ν*,1</sub> having the gamma distribution with shape parameter *ν* has the form

$$
\psi\_{\nu,1}^{(G)}(s) = (1+s)^{-\nu}, \quad s > 0.
$$

Then, in accordance with Remark 2, the characteristic function of the random vector *L*<sub>*α*,*ν*,Σ</sub> defined by (49) again has the form

$$\mathbb{E}\exp\{i\mathbf{t}^{\top}L\_{\alpha,\nu,\Sigma}\} = \psi\_{\nu,1}^{(G)}\left(\left(\mathbf{t}^{\top}\Sigma\mathbf{t}\right)^{\alpha/2}\right) = \left[1 + \left(\mathbf{t}^{\top}\Sigma\mathbf{t}\right)^{\alpha/2}\right]^{-\nu} = \mathfrak{f}\_{\alpha,\nu,\Sigma}^{(L)}(\mathbf{t}), \quad \mathbf{t} \in \mathbb{R}^{r}.$$
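Definition (49) and characteristic function (47) can be cross-checked by simulation. The sketch below is illustrative only (NumPy assumed): the elliptically contoured stable vector is sampled as *S*(*α*, Σ) = √(2*S*(*α*/2, 1)) ◦ *X*, in the spirit of the normal mixture representations used above, with the one-sided stable variable generated by Kanter's classical representation:

```python
import numpy as np

def sample_onesided_stable(alpha, size, rng):
    # Kanter's representation of S(alpha, 1), 0 < alpha < 1
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    a = (np.sin(alpha * u) ** alpha
         * np.sin((1.0 - alpha) * u) ** (1.0 - alpha)
         / np.sin(u)) ** (1.0 / (1.0 - alpha))
    return (a / e) ** ((1.0 - alpha) / alpha)

rng = np.random.default_rng(6)
alpha, nu, n = 1.2, 1.7, 300_000
Sigma = np.array([[1.5, 0.3],
                  [0.3, 1.0]])

# S(alpha, Sigma) = sqrt(2 S(alpha/2, 1)) * X with X ~ N(0, Sigma)
X = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
S_vec = np.sqrt(2.0 * sample_onesided_stable(alpha / 2, n, rng))[:, None] * X

# definition (49): L_{alpha,nu,Sigma} = G_{nu,1}^{1/alpha} * S(alpha, Sigma)
L = (rng.gamma(nu, 1.0, n) ** (1.0 / alpha))[:, None] * S_vec

t = np.array([0.5, -0.3])
emp = np.cos(L @ t).mean()
theo = (1.0 + (t @ Sigma @ t) ** (alpha / 2)) ** (-nu)   # characteristic function (47)
print(emp, theo)
```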

Definitions (48) and (49) open the way to formulate limit theorems stating that the multivariate elliptically contoured generalized Linnik distribution can be limiting both for random sums of independent identically distributed random vectors with infinite second moments and for random sums of independent random vectors with finite covariance matrices.

There are several ways to generalize the univariate symmetric Linnik and generalized Linnik laws to the asymmetric case. The traditional (and formal) approach to the asymmetric generalization of the Linnik distribution (see, e.g., [15,55,56]) consists in the consideration of geometric sums of random summands whose distributions are attracted to an asymmetric strictly stable distribution. The variances of such summands are infinite. Since in modeling real phenomena, as a rule, there are no solid reasons to reject the assumption of the finiteness of the variances of elementary summands, in [57], two alternative asymmetric generalizations were proposed based on the representability of the Linnik distribution as a scale mixture of normal laws or a scale mixture of Laplace laws.

Nevertheless, for our purposes it is convenient to deal with the traditional asymmetric generalization of the generalized Linnik distribution. Let *S*(*α*, *θ*) be a random variable with the strictly stable distribution defined by the characteristic exponent *α* ∈ (0, 2] and asymmetry parameter *θ* ∈ [−1, 1], and let *G*<sub>*ν*,*α*,1</sub> be a random variable having the GG distribution with shape parameter *ν* > 0 and power exponent parameter *α*, independent of *S*(*α*, *θ*). Based on representation (45), we define the *asymmetric generalized Linnik distribution* as L(*G*<sub>*ν*,*α*,1</sub> ◦ *S*(*α*, *θ*)). A random variable with this distribution will be denoted *L*<sub>*α*,*ν*,*θ*</sub>.

In the multivariate case, a natural way to construct asymmetric Linnik laws consists in applying Lemma 2 with a not necessarily elliptically contoured strictly stable distribution. Namely, let the random variable *G*<sub>*ν*,*α*,1</sub> have the generalized gamma distribution and be independent of the random vector *S*(*α*, *μ*) with the strictly stable distribution with characteristic exponent *α* ∈ (0, 2] and spectral measure *μ*. Extending the definitions of the multivariate elliptically contoured generalized Linnik distribution given above, we will say that the distribution of the random vector *G*<sub>*ν*,*α*,1</sub> ◦ *S*(*α*, *μ*) is the *multivariate generalized Linnik distribution*. Formally, this definition embraces both the multivariate elliptically contoured generalized Linnik laws and, moreover, the multivariate generalized Mittag–Leffler laws (if *μ* = *μ*<sup>+</sup>). A random vector with the multivariate generalized Linnik distribution will be denoted *L*<sub>*α*,*ν*,*μ*</sub>.

If *ν* = 1, then we have the 'ordinary' multivariate Linnik distribution. By definition, *<sup>L</sup><sup>α</sup>*,1,*<sup>μ</sup>* = *<sup>L</sup><sup>α</sup>*,*μ*.

Mixture representations for the generalized Mittag–Leffler distribution were considered in [5] and discussed in Section 6 together with their extensions to the multivariate case. Here, we will focus on the mixture representations for the multivariate generalized Linnik distribution. Our reasoning is based on the definition of the multivariate generalized Linnik distribution given above and Theorem 1.

For *α* ∈ (0, 2], *α*′ ∈ (0, 1) and *ν* > 0, using (1), (17) and (31), we obtain the following chain of relations:

$$L\_{\alpha\alpha',\nu,\mu} \stackrel{d}{=} \overline{G}\_{\nu,\alpha\alpha',1} \circ S(\alpha\alpha',\mu) \stackrel{d}{=} G\_{\nu,1}^{1/\alpha\alpha'} \circ S(\alpha\alpha',\mu) \stackrel{d}{=} S^{1/\alpha}(\alpha',1) \circ G\_{\nu,1}^{1/\alpha\alpha'} \circ S(\alpha,\mu) \stackrel{d}{=}$$

$$\stackrel{d}{=} \left(S(\alpha',1) \circ \overline{G}\_{\nu,\alpha',1}\right)^{1/\alpha} \circ S(\alpha,\mu) \stackrel{d}{=} M\_{\alpha',\nu}^{1/\alpha} \circ S(\alpha,\mu).$$

Hence, the following statement holds representing the multivariate generalized Linnik distribution as a scale mixture of a multivariate stable law with any greater characteristic parameter, the mixing distribution being the univariate generalized Mittag–Leffler law.

**Theorem 3.** *If α* ∈ (0, 2]*, α*′ ∈ (0, 1) *and ν* > 0*, then*

$$\mathcal{L}\_{aa',\nu,\mu} \stackrel{d}{=} M\_{a',\nu}^{1/a} \circ \mathcal{S}(a,\mu). \tag{50}$$

Now let *ν* ∈ (0, 1]. From (45) and (4) it follows that

$$L\_{\alpha,\nu,\mu} \stackrel{d}{=} G\_{\nu,1}^{1/\alpha} \circ S(\alpha,\mu) \stackrel{d}{=} Z\_{\nu,1}^{-1/\alpha} \circ W\_1^{1/\alpha} \circ S(\alpha,\mu) \stackrel{d}{=} Z\_{\nu,1}^{-1/\alpha} \circ L\_{\alpha,\mu}$$

yielding the following statement.

**Theorem 4.** *If ν* ∈ (0, 1] *and α* ∈ (0, 2]*, then*

$$L\_{\alpha,\nu,\mu} \stackrel{d}{=} Z\_{\nu,1}^{-1/\alpha} \circ L\_{\alpha,\mu}.$$

*In other words, with ν* ∈ (0, 1] *and α* ∈ (0, 2]*, the multivariate generalized Linnik distribution is a scale mixture of 'ordinary' multivariate Linnik distributions.*

Consider projections of a random vector with the multivariate generalized Linnik distribution. For an arbitrary *u* ∈ R*r* we have

$$\begin{split} \mathbf{u}^{\top} L\_{\alpha,\nu,\mu} \stackrel{d}{=} \mathbf{u}^{\top} \left( \overline{G}\_{\nu,\alpha,1} \circ S(\alpha,\mu) \right) &= \overline{G}\_{\nu,\alpha,1} \circ \mathbf{u}^{\top} S(\alpha,\mu) \stackrel{d}{=} \overline{G}\_{\nu,\alpha,1} \circ \left( \gamma(\mathbf{u}) S(\alpha,\theta(\mathbf{u})) \right) = \\ &= \gamma(\mathbf{u}) \cdot \overline{G}\_{\nu,\alpha,1} \circ S(\alpha,\theta(\mathbf{u})) \stackrel{d}{=} \gamma(\mathbf{u}) L\_{\alpha,\nu,\theta(\mathbf{u})}. \end{split}$$

This means that the following statement holds.

**Theorem 5.** *Let the random vector L*<sub>*α*,*ν*,*μ*</sub> *have the multivariate generalized Linnik distribution with α* ∈ (0, 2]*, ν* > 0 *and spectral measure μ. Let γ*(*u*) *and θ*(*u*), *u* ∈ R*<sup>r</sup>*, *be the projection scale parameter function and the projection asymmetry parameter function corresponding to the spectral measure μ. Then any projection of the random vector L*<sub>*α*,*ν*,*μ*</sub> *has the univariate asymmetric generalized Linnik distribution with asymmetry parameter θ*(*u*) *scaled by γ*(*u*)*:*

$$
\mathbf{u}^\top L\_{\alpha,\nu,\mu} \stackrel{d}{=} \gamma(\mathbf{u}) L\_{\alpha,\nu,\theta(\mathbf{u})}.
$$

Now consider the elliptically contoured case. Let *α* ∈ (0, 2] and the random vector ΛΣ have the multivariate Laplace distribution with some positive definite (*r* × *r*)-matrix Σ. In [33] it was shown that if *δ* ∈ (0, 1], then

$$\mathcal{W}\_{\delta} \stackrel{d}{=} \mathcal{W}\_1 \circ S^{-1}(\delta, 1). \tag{51}$$

Hence, it can be easily seen that

$$\mathbf{L}\_{\mathbf{a},\Sigma} \stackrel{d}{=} \mathcal{W}\_1^{1/a} \circ \mathbf{S}(\mathbf{a}, \Sigma) \stackrel{d}{=} \sqrt{2\mathcal{W}\_{\mathbf{a}/2} \circ \mathbf{S}(\mathbf{a}/2, 1)} \circ \mathbf{X} \stackrel{d}{=} \sqrt{2\mathcal{W}\_1 \circ \mathcal{R}\_{\mathbf{a}/2}} \circ \mathbf{X} \stackrel{d}{=} \sqrt{\mathcal{R}\_{\mathbf{a}/2}} \circ \Lambda\_{\Sigma}.\tag{52}$$

So, from Theorem 4 and (52) we obtain the following statement.

**Corollary 2.** *If ν* ∈ (0, 1] *and α* ∈ (0, 2]*, then the multivariate elliptically contoured generalized Linnik distribution is a scale mixture of multivariate Laplace distributions:*

$$L\_{\alpha,\nu,\Sigma} \stackrel{d}{=} Z\_{\nu,1}^{-1/\alpha} \circ \sqrt{R\_{\alpha/2}} \circ \Lambda\_{\Sigma}.$$

From (31) with *ν* = 1 and (51) it can be seen that

$$L\_{\alpha, \Sigma} \stackrel{d}{=} \sqrt{2 M\_{\alpha/2}} \circ X.$$

Therefore we obtain one more corollary of Theorem 4 representing the multivariate generalized Linnik distribution via 'ordinary' Mittag–Leffler distributions.

**Corollary 3.** *If ν* ∈ (0, 1] *and α* ∈ (0, 2]*, then*

$$L\_{\alpha,\nu,\Sigma} \stackrel{d}{=} Z\_{\nu,1}^{-1/\alpha} \circ \sqrt{2 M\_{\alpha/2}} \circ X.$$

#### **8. General Scale-Mixed Stable Distributions**

In the preceding sections we considered special scale-mixed stable distributions in which the mixing distribution was generalized gamma, leading to the popular Mittag–Leffler and Linnik laws. Now we turn to the case where the mixing distribution can be arbitrary.

Let *α* ∈ (0, 2], let *U* be a positive random variable and let *S*(*α*, *μ*) be a random vector with the strictly stable distribution defined by the characteristic exponent *α* and spectral measure *μ*. An *r*-variate random vector *Y<sub>α,μ</sub>* is said to have the *U-scale-mixed stable distribution*, if

$$Y\_{\alpha,\mu} \stackrel{d}{=} U^{1/\alpha} \circ S(\alpha,\mu).$$

Correspondingly, for 0 < *α* ⩽ 1, a univariate positive random variable *Y<sub>α,1</sub>* is said to have the *U*-scale-mixed one-sided stable distribution, if *Y<sub>α,1</sub>* is representable as

$$
Y\_{\alpha,1} \stackrel{d}{=} U^{1/\alpha} \circ S(\alpha,1).
$$

As above, in the elliptically contoured case, where to the spectral measure *μ* there corresponds a positive definite (*r* × *r*)-matrix Σ, instead of *Y<sub>α,μ</sub>* we will write *Y<sub>α,Σ</sub>*.

The following statement generalizes Theorem 2 (Equation (36)) and Theorem 3.

**Theorem 6.** *Let U be a positive random variable, α* ∈ (0, 2]*, α′* ∈ (0, 1]*. Let S*(*α*, *μ*) *be a random vector with the strictly stable distribution defined by the characteristic exponent α and spectral measure μ. Let an r-variate random vector Y<sub>αα′,μ</sub> have the U-scale-mixed stable distribution and a random variable Y<sub>α′,1</sub> have the U-scale-mixed one-sided stable distribution. Assume that S*(*α*, *μ*) *and Y<sub>α′,1</sub> are independent. Then*

$$Y\_{\alpha\alpha',\mu} \stackrel{d}{=} Y\_{\alpha',1}^{1/\alpha} \circ S(\alpha,\mu).$$

**Proof.** From the definition of a *U*-scale-mixed stable distribution and (17) we have

$$\begin{split} Y\_{\alpha\alpha',\mu} \stackrel{d}{=} U^{1/\alpha\alpha'} \circ S(\alpha\alpha',\mu) &\stackrel{d}{=} U^{1/\alpha\alpha'} \circ S^{1/\alpha}(\alpha',1) \circ S(\alpha,\mu) \stackrel{d}{=} \\ &\stackrel{d}{=} \left(U^{1/\alpha'} \circ S(\alpha',1)\right)^{1/\alpha} \circ S(\alpha,\mu) \stackrel{d}{=} Y\_{\alpha',1}^{1/\alpha} \circ S(\alpha,\mu). \end{split}$$

In the elliptically contoured case with *α* = 2, from Theorem 6 we obtain the following statement.

**Corollary 4.** *Let α* ∈ (0, 2)*, U be a positive random variable,* Σ *be a positive definite* (*r* × *r*)*-matrix, and let X be a random vector such that* L(*X*) = N<sub>Σ</sub>*. Then*

$$Y\_{\alpha,\Sigma} \stackrel{d}{=} \sqrt{2 Y\_{\alpha/2,1}} \circ X.$$

*In other words, any multivariate scale-mixed symmetric stable distribution is a scale mixture of multivariate normal laws. On the other hand, since the normal distribution is stable with α* = 2*, any multivariate normal scale mixture is a 'trivial' multivariate scale-mixed stable distribution.*
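In dimension one and with degenerate *U* ≡ 1, the content of Corollary 4 reduces to the classical subordination identity: multiplying a standard normal variable by √(2*S*(*α*/2, 1)) produces a symmetric *α*-stable variable, and mixing the scale of the subordinator further only mixes the resulting stable law. A simulation sketch of this core identity (Kanter's representation of the one-sided stable factor is an assumption imported from the literature; the check compares the empirical characteristic function with e<sup>−|t|^α</sup>):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000
alpha = 1.2          # target stable exponent, 0 < alpha < 2
delta = alpha / 2    # exponent of the one-sided stable factor

# One-sided strictly stable S(alpha/2, 1) via Kanter's representation,
# normalized so that E exp(-s * S) = exp(-s**delta).
theta = rng.uniform(0.0, np.pi, n)
w = rng.exponential(1.0, n)
s = (np.sin((1 - delta) * theta) / w) ** ((1 - delta) / delta) \
    * np.sin(delta * theta) / np.sin(theta) ** (1 / delta)

# Normal scale mixture: Y = sqrt(2 * S(alpha/2, 1)) o N(0, 1).
y = np.sqrt(2.0 * s) * rng.standard_normal(n)

# Y must be symmetric alpha-stable: E exp(i t Y) = exp(-|t|**alpha),
# since E exp(i t Y | S) = exp(-t**2 * S) and E exp(-t**2 * S) = exp(-|t|**alpha).
for t in (0.5, 1.0, 1.5):
    assert abs(np.cos(t * y).mean() - np.exp(-abs(t) ** alpha)) < 0.01
print("sqrt(2 S(alpha/2, 1)) * N(0, 1) is symmetric alpha-stable")
```

In the multivariate elliptically contoured case the standard normal variable is simply replaced by a draw from N<sub>Σ</sub>; the scalar subordinator is unchanged.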

Particular examples of 'non-trivial' scale-mixed stable distributions will be given below.
Among possible mixing distributions of the random variable *U*, we will distinguish a special class that can play an important role in modeling observed regularities by heavy-tailed distributions. Namely, assume that *V* is a positive random variable and let

$$
U \stackrel{d}{=} V \circ G\_{\nu,1},
$$

that is, the distribution of *U* is a scale mixture of gamma distributions. We will denote the class of these distributions by G(*V*). This class is rather wide and, besides the gamma distribution and its particular cases (exponential, Erlang, chi-square, etc.) with exponentially fast decreasing tails, contains, for example, the Pareto and Snedecor–Fisher laws with power-type decreasing tails. In the last two cases the random variable *V* is assumed to have the corresponding gamma and inverse gamma distributions, respectively.
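As a concrete illustration of how a power-type tail arises within G(*V*): an exponential law (*ν* = 1) whose scale is mixed by an inverse gamma variable is the Lomax (Pareto type II) distribution with survival function (1 + *x*)<sup>−*a*</sup>. A minimal sketch with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
n, a = 300_000, 3.0  # 'a' is a hypothetical tail index

# U = V o G_{1,1}: an exponential whose scale is mixed by an inverse gamma
# variable V (equivalently, an exponential with Gamma(a, 1) random rate).
v = 1.0 / rng.gamma(a, 1.0, n)   # V ~ inverse gamma with shape a
u = v * rng.exponential(1.0, n)  # G_{1,1} is the standard exponential

# The mixture is Lomax (Pareto type II): P(U > x) = (1 + x)**(-a),
# since E exp(-x / V) = E exp(-x * Lambda) with Lambda ~ Gamma(a, 1).
for x in (0.5, 1.0, 3.0):
    assert abs((u > x).mean() - (1.0 + x) ** (-a)) < 0.01
print("inverse-gamma-mixed exponential has a power-law tail")
```

So even though each conditional law is light-tailed, the mixture over the scale belongs to the heavy-tailed part of G(*V*).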

For L(*U*) ∈ G(*V*) we have

$$Y\_{\alpha,1} \stackrel{d}{=} (V \circ G\_{\nu,1})^{1/\alpha} \circ S(\alpha,1) \stackrel{d}{=} V^{1/\alpha} \circ \left( G\_{\nu,1}^{1/\alpha} \circ S(\alpha,1) \right) \stackrel{d}{=} V^{1/\alpha} \circ M\_{\alpha,\nu}$$

and

$$Y\_{\alpha,\mu} \stackrel{d}{=} (V \circ G\_{\nu,1})^{1/\alpha} \circ S(\alpha,\mu) \stackrel{d}{=} V^{1/\alpha} \circ \left( G\_{\nu,1}^{1/\alpha} \circ S(\alpha,\mu) \right) \stackrel{d}{=} V^{1/\alpha} \circ L\_{\alpha,\nu,\mu}.$$

This means that with L(*U*) ∈ G(*V*), the *U*-scale-mixed stable distributions are scale mixtures of the generalized Mittag–Leffler and multivariate generalized Linnik laws.
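The two displayed identities are nothing but regroupings of a product of independent factors, which is exactly the advantage of the 'arithmetic' approach mentioned in the Introduction. A toy sketch (the stable factor is replaced by an arbitrary positive sample, a simplification that is harmless here because only the grouping of the same independent draws is being compared):

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha, nu = 100_000, 1.5, 2.0

v = rng.pareto(2.5, n) + 1.0   # some positive mixing variable V
g = rng.gamma(nu, 1.0, n)      # G_{nu,1}
s = rng.exponential(1.0, n)    # stand-in for a draw of S(alpha, mu)

# (V o G_{nu,1})^{1/alpha} o S  versus  V^{1/alpha} o (G_{nu,1}^{1/alpha} o S):
left = (v * g) ** (1.0 / alpha) * s
right = v ** (1.0 / alpha) * (g ** (1.0 / alpha) * s)

# The two groupings coincide pathwise, not merely in distribution.
assert np.allclose(left, right)
print("regrouping independent multipliers changes nothing")
```

This is why, within the approach of this paper, such mixture representations require no manipulation of densities or characteristic functions at all.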

Therefore, we pay special attention to mixture representations of the generalized Mittag–Leffler and multivariate generalized Linnik distributions. These representations can be easily extended to any *U*-scale-mixed stable distributions with L(*U*) ∈ G(*V*).

#### **9. Convergence of the Distributions of Random Sequences with Independent Indices to Multivariate Scale-Mixed Stable Distributions**

In applied probability it is a convention that a model distribution can be regarded as well-justified or adequate, if it is an *asymptotic approximation*, that is, if there exists a rather simple limit setting (say, schemes of maximum or summation of random variables) and the corresponding limit theorem in which the model under consideration manifests itself as a limit distribution. The existence of such limit setting can provide a better understanding of real mechanisms that generate observed statistical regularities, see e.g., [54].

In this section we will prove some limit theorems presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. As particular cases, conditions will be obtained for the convergence of the distributions of random sums of random vectors with both infinite and finite covariance matrices to the multivariate generalized Linnik distribution.

Consider a sequence {*S<sub>n</sub>*}<sub>*n*⩾1</sub> of random elements taking values in R<sup>*r*</sup>. Let Ξ(R<sup>*r*</sup>) be the set of all nonsingular linear operators acting from R<sup>*r*</sup> to R<sup>*r*</sup>. The identity operator acting from R<sup>*r*</sup> to R<sup>*r*</sup> will be denoted *I<sub>r</sub>*. Assume that there exist sequences {*B<sub>n</sub>*}<sub>*n*⩾1</sub> of operators from Ξ(R<sup>*r*</sup>) and {*a<sub>n</sub>*}<sub>*n*⩾1</sub> of elements from R<sup>*r*</sup> such that

$$Q\_n = B\_n^{-1}(S\_n - a\_n) \Longrightarrow Q \quad (n \to \infty), \tag{53}$$

where *Q* is a random element whose distribution with respect to P will be denoted *H*, *H* = L(*Q*).

Along with {*S<sub>n</sub>*}<sub>*n*⩾1</sub>, consider a sequence of integer-valued positive random variables {*N<sub>n</sub>*}<sub>*n*⩾1</sub> such that for each *n* ⩾ 1 the random variable *N<sub>n</sub>* is independent of the sequence {*S<sub>k</sub>*}<sub>*k*⩾1</sub>. Let *c<sub>n</sub>* ∈ R<sup>*r*</sup>, *D<sub>n</sub>* ∈ Ξ(R<sup>*r*</sup>), *n* ⩾ 1. Now we will formulate sufficient conditions for the weak convergence of the distributions of the random elements *Z<sub>n</sub>* = *D<sub>n</sub>*<sup>−1</sup>(*S*<sub>*N<sub>n</sub>*</sub> − *c<sub>n</sub>*) as *n* → ∞.

For *g* ∈ R<sup>*r*</sup> denote *W<sub>n</sub>*(*g*) = *D<sub>n</sub>*<sup>−1</sup>(*B*<sub>*N<sub>n</sub>*</sub>*g* + *a*<sub>*N<sub>n</sub>*</sub> − *c<sub>n</sub>*). By measurability of a random field we will mean its measurability as a function of two variates, an elementary outcome and a parameter, with respect to the Cartesian product of the *σ*-algebra A and the Borel *σ*-algebra B(R<sup>*r*</sup>) of subsets of R<sup>*r*</sup>.

In [58,59] the following theorem was proved, which establishes sufficient conditions for the weak convergence of multivariate random sequences with independent random indices under operator normalization.

**Theorem 7** ([58,59])**.** *Let D<sub>n</sub>*<sup>−1</sup> → ∞ *as n* → ∞ *and let the sequence of random variables* {‖*D<sub>n</sub>*<sup>−1</sup>*B*<sub>*N<sub>n</sub>*</sub>‖}<sub>*n*⩾1</sub> *be tight. Assume that there exist a random element Q with distribution H and an r-dimensional random field W*(*g*)*, g* ∈ R<sup>*r*</sup>*, such that* (53) *holds and*

$$W\_n(g) \Longrightarrow W(g) \quad (n \to \infty)$$

*for H-almost all g* ∈ R<sup>*r*</sup>*. Then the random field W*(*g*) *is measurable, linearly depends on g and*

$$Z\_n \Longrightarrow W(Q) \quad (n \to \infty),$$

*where the random field <sup>W</sup>*(·) *and the random element Q are independent.*

Now consider a special case of the general limit setting and assume that the normalization is scalar and the limit random vector *Q* in (53) has an elliptically contoured stable distribution. Namely, let {*b<sub>n</sub>*}<sub>*n*⩾1</sub> be an infinitely increasing sequence of positive numbers and, instead of the general condition (53), assume that

$$
\mathcal{L}\left(b\_n^{-1/\alpha} S\_n\right) \Longrightarrow \mathrm{S}\_{\alpha,\Sigma} \tag{54}
$$

as *n* → <sup>∞</sup>, where *α* ∈ (0, 2] and Σ is some positive definite matrix. In other words, let

$$b\_n^{-1/\alpha} S\_n \Longrightarrow S(\alpha, \Sigma) \quad (n \to \infty).$$

Let {*d<sub>n</sub>*}<sub>*n*⩾1</sub> be an infinitely increasing sequence of positive numbers. As *Z<sub>n</sub>* take the scalar normalized random vector

$$Z\_n = d\_n^{-1/\alpha} S\_{N\_n}.$$

The following result can be considered as a multivariate generalization of the main theorem of [29].

**Theorem 8.** *Let N<sub>n</sub>* → ∞ *in probability as n* → ∞*. Assume that the random vectors S*<sub>1</sub>, *S*<sub>2</sub>, ... *satisfy condition* (54) *with α* ∈ (0, 2] *and a positive definite matrix* Σ*. Then a distribution F such that*

$$\mathcal{L}(\mathbf{Z}\_n) \implies F \quad (n \to \infty), \tag{55}$$

*exists if and only if there exists a distribution function <sup>V</sup>*(*x*) *satisfying the conditions*

*(i) <sup>V</sup>*(*x*) = 0 *for x* < 0*;*

*(ii) for any A* ∈ B(R*r*)

$$F(A) = \mathbb{E}\,\mathrm{S}\_{\alpha, U^{2/\alpha}\Sigma}(A) = \int\_0^\infty \mathrm{S}\_{\alpha, u^{2/\alpha}\Sigma}(A)\, dV(u);$$

*(iii)* P(*b*<sub>*N<sub>n</sub>*</sub> < *d<sub>n</sub>x*) =⇒ *V*(*x*)*, n* → ∞*.*

**Proof.** *The 'if' part*. We will essentially exploit Theorem 7. For each *n* ⩾ 1 let *a<sub>n</sub>* = *c<sub>n</sub>* = **0**, *B<sub>n</sub>* = *b<sub>n</sub>*<sup>1/*α*</sup>*I<sub>r</sub>*, *D<sub>n</sub>* = *d<sub>n</sub>*<sup>1/*α*</sup>*I<sub>r</sub>*. Let *U* be a random variable with the distribution function *V*(*x*). Note that the conditions of the theorem guarantee the tightness of the sequence of random variables

$$\left\|D\_n^{-1} B\_{N\_n}\right\| = (b\_{N\_n}/d\_n)^{1/\alpha}, \quad n = 1, 2, \ldots$$

implied by its weak convergence to the random variable *U*<sup>1/*α*</sup>. Further, in the case under consideration we have *W<sub>n</sub>*(*g*) = (*b*<sub>*N<sub>n</sub>*</sub>/*d<sub>n</sub>*)<sup>1/*α*</sup> · *g*, *g* ∈ R<sup>*r*</sup>. Therefore, the condition *b*<sub>*N<sub>n</sub>*</sub>/*d<sub>n</sub>* =⇒ *U* implies *W<sub>n</sub>*(*g*) =⇒ *U*<sup>1/*α*</sup>*g* for all *g* ∈ R<sup>*r*</sup>.

Condition (54) means that in the case under consideration *H* = S<sub>*α*,Σ</sub>. Hence, by Theorem 7, *Z<sub>n</sub>* =⇒ *U*<sup>1/*α*</sup> ◦ *S*(*α*, Σ) (recall that the symbol ◦ stands for the product of *independent* random elements). The distribution of the random element *U*<sup>1/*α*</sup> ◦ *S*(*α*, Σ) coincides with ES<sub>*α*,*U*<sup>2/*α*</sup>Σ</sub>, see Section 5.

*The 'only if' part*. Let condition (55) hold. Make sure that the sequence {‖*D<sub>n</sub>*<sup>−1</sup>*B*<sub>*N<sub>n</sub>*</sub>‖}<sub>*n*⩾1</sub> is tight. Let *Q* <sup>*d*</sup>= *S*(*α*, Σ). There exist *δ* > 0 and *ρ* > 0 such that

$$\mathbb{P}(\|\mathbf{Q}\| > \rho) > \delta. \tag{56}$$

For *ρ* specified above and an arbitrary *x* > 0 we have

$$\begin{split}
\mathbb{P}(\|Z\_n\| > x) &\geqslant \mathbb{P}\left(\left\|d\_n^{-1/\alpha} S\_{N\_n}\right\| > x;\ \left\|b\_{N\_n}^{-1/\alpha} S\_{N\_n}\right\| > \rho\right) = \\
&= \mathbb{P}\left((b\_{N\_n}/d\_n)^{1/\alpha} > x\left\|b\_{N\_n}^{-1/\alpha} S\_{N\_n}\right\|^{-1};\ \left\|b\_{N\_n}^{-1/\alpha} S\_{N\_n}\right\| > \rho\right) \geqslant \\
&\geqslant \mathbb{P}\left((b\_{N\_n}/d\_n)^{1/\alpha} > x/\rho;\ \left\|b\_{N\_n}^{-1/\alpha} S\_{N\_n}\right\| > \rho\right) = \\
&= \sum\_{k=1}^{\infty} \mathbb{P}(N\_n = k)\,\mathbb{P}\left((b\_k/d\_n)^{1/\alpha} > x/\rho;\ \left\|b\_k^{-1/\alpha} S\_k\right\| > \rho\right) = \\
&= \sum\_{k=1}^{\infty} \mathbb{P}(N\_n = k)\,\mathbb{P}\left((b\_k/d\_n)^{1/\alpha} > x/\rho\right)\mathbb{P}\left(\left\|b\_k^{-1/\alpha} S\_k\right\| > \rho\right)
\end{split} \tag{57}$$

(the last equality holds since any constant is independent of any random variable). Since by (54) the convergence *b<sub>k</sub>*<sup>−1/*α*</sup>*S<sub>k</sub>* =⇒ *Q* takes place as *k* → ∞, from (56) it follows that there exists a number *k*<sub>0</sub> = *k*<sub>0</sub>(*ρ*, *δ*) such that

$$\mathbb{P}(||b\_k^{-1/\alpha}\mathbf{S}\_k|| > \rho) > \delta/2$$

for all *k* > *k*0. Therefore, continuing (57) we obtain

$$\begin{split}
\mathbb{P}(\|Z\_n\| > x) &\geqslant \frac{\delta}{2} \sum\_{k=k\_0+1}^{\infty} \mathbb{P}(N\_n = k)\,\mathbb{P}\left((b\_k/d\_n)^{1/\alpha} > x/\rho\right) = \\
&= \frac{\delta}{2}\left[\mathbb{P}\left((b\_{N\_n}/d\_n)^{1/\alpha} > x/\rho\right) - \sum\_{k=1}^{k\_0} \mathbb{P}(N\_n = k)\,\mathbb{P}\left((b\_k/d\_n)^{1/\alpha} > x/\rho\right)\right] \geqslant \\
&\geqslant \frac{\delta}{2}\left[\mathbb{P}\left((b\_{N\_n}/d\_n)^{1/\alpha} > x/\rho\right) - \mathbb{P}(N\_n \leqslant k\_0)\right].
\end{split}$$

Hence,

$$\mathbb{P}\left((b\_{N\_n}/d\_n)^{1/\alpha} > x/\rho\right) \leqslant \frac{2}{\delta}\,\mathbb{P}(\|Z\_n\| > x) + \mathbb{P}(N\_n \leqslant k\_0). \tag{58}$$

From the condition *N<sub>n</sub>* → ∞ in probability as *n* → ∞ it follows that for any *ε* > 0 there exists an *n*<sub>0</sub> = *n*<sub>0</sub>(*ε*) such that P(*N<sub>n</sub>* ⩽ *k*<sub>0</sub>) < *ε* for all *n* ⩾ *n*<sub>0</sub>. Therefore, with the account of the tightness of the sequence {*Z<sub>n</sub>*}<sub>*n*⩾1</sub>, which follows from its weak convergence to the random element *Z* with L(*Z*) = *F* implied by (55), relation (58) implies

$$\lim\_{x \to \infty} \sup\_{n \geqslant n\_0(\varepsilon)} \mathbb{P}\left((b\_{N\_n}/d\_n)^{1/\alpha} > x/\rho\right) \leqslant \varepsilon, \tag{59}$$

whatever *ε* > 0 is. Now assume that the sequence

$$\left\|D\_n^{-1} B\_{N\_n}\right\| = (b\_{N\_n}/d\_n)^{1/\alpha}, \quad n = 1, 2, \ldots$$

is not tight. In that case there exist a *γ* > 0, a sequence N of natural numbers and a sequence {*x<sub>n</sub>*}<sub>*n*∈N</sub> of real numbers satisfying the conditions *x<sub>n</sub>* ↑ ∞ (*n* → ∞, *n* ∈ N) and

$$\mathbb{P}\left((b\_{N\_n}/d\_n)^{1/\alpha} > x\_n\right) > \gamma, \quad n \in \mathcal{N}. \tag{60}$$

But, according to (59), for any *ε* > 0 there exist *M* = *M*(*ε*) and *n*<sub>0</sub> = *n*<sub>0</sub>(*ε*) such that

$$\sup\_{n \geqslant n\_0(\varepsilon)} \mathbb{P}\left((b\_{N\_n}/d\_n)^{1/\alpha} > M(\varepsilon)\right) \leqslant 2\varepsilon. \tag{61}$$

Choose *ε* < *γ*/2 where *γ* is the number from (60). Then for all *n* ∈ N large enough, in accordance with (60), the inequality opposite to (61) must hold. The obtained contradiction proves, by the Prokhorov theorem, the tightness of the sequence {‖*D<sub>n</sub>*<sup>−1</sup>*B*<sub>*N<sub>n</sub>*</sub>‖}<sub>*n*⩾1</sub> or, which in this case is the same, of the sequence {*b*<sub>*N<sub>n</sub>*</sub>/*d<sub>n</sub>*}<sub>*n*⩾1</sub>.

Introduce the set W(*Z*) containing all nonnegative random variables *U* such that P(*Z* ∈ *A*) = ES<sub>*α*,*U*<sup>2/*α*</sup>Σ</sub>(*A*) for any *A* ∈ B(R<sup>*r*</sup>). Let *λ*(·, ·) be any probability metric that metrizes weak convergence in the space of *r*-variate random vectors or, which is the same in this context, in the space of distributions, say, the Lévy–Prokhorov metric. (If *X*<sub>1</sub> and *X*<sub>2</sub> are random variables with the distributions *F*<sub>1</sub> and *F*<sub>2</sub>, respectively, then we identify *λ*(*X*<sub>1</sub>, *X*<sub>2</sub>) and *λ*(*F*<sub>1</sub>, *F*<sub>2</sub>).) Show that there exists a sequence of random variables {*U<sub>n</sub>*}<sub>*n*⩾1</sub>, *U<sub>n</sub>* ∈ W(*Z*), such that

$$
\lambda\left(b\_{N\_n}/d\_n,\; U\_n\right) \longrightarrow 0 \quad (n \to \infty). \tag{62}
$$

Denote

$$\beta\_n = \inf\left\{\lambda\left(b\_{N\_n}/d\_n,\, U\right) : U \in \mathcal{W}(Z)\right\}.$$

Prove that *β<sub>n</sub>* → 0 as *n* → ∞. Assume the contrary. In that case *β<sub>n</sub>* ⩾ *δ* for some *δ* > 0 and all *n* from some subsequence N of natural numbers. Choose a subsequence N<sub>1</sub> ⊆ N so that the sequence {*b*<sub>*N<sub>n</sub>*</sub>/*d<sub>n</sub>*}<sub>*n*∈N<sub>1</sub></sub> weakly converges to a random variable *U* (this is possible due to the tightness of the family {*b*<sub>*N<sub>n</sub>*</sub>/*d<sub>n</sub>*}<sub>*n*⩾1</sub> established above). But then *W<sub>n</sub>*(*g*) =⇒ *U*<sup>1/*α*</sup>*g* as *n* → ∞, *n* ∈ N<sub>1</sub>, for any *g* ∈ R<sup>*r*</sup>. Applying Theorem 7 to *n* ∈ N<sub>1</sub> with condition (54) playing the role of condition (53), we make sure that *U* ∈ W(*Z*), since condition (55) provides the coincidence of the limits of all weakly convergent subsequences. So, we arrive at a contradiction to the assumption that *β<sub>n</sub>* ⩾ *δ* for all *n* ∈ N<sub>1</sub>. Hence, *β<sub>n</sub>* → 0 as *n* → ∞.

For any *n* = 1, 2, . . . choose a random variable *Un* from W(*Z*) satisfying the condition

$$\lambda\left(b\_{N\_n}/d\_n,\; U\_n\right) \leqslant \beta\_n + \frac{1}{n}.$$

This sequence obviously satisfies condition (62). Now consider the structure of the set W(*Z*). This set contains all the random variables defining the family of special mixtures of multivariate centered elliptically contoured stable laws considered in Lemma 1, according to which this family is identifiable. So, whatever a random element *Z* is, the set W(*Z*) contains at most one element. Therefore, actually condition (62) is equivalent to

$$b\_{N\_n}/d\_n \Longrightarrow U \quad (n \to \infty),$$

that is, to condition (iii) of the theorem. The theorem is proved.

**Corollary 5.** *Under the conditions of Theorem 8, non-randomly normalized random sequences with independent random indices d<sub>n</sub>*<sup>−1/*α*</sup>*S*<sub>*N<sub>n</sub>*</sub> *have the limit stable distribution* S<sub>*α*,Σ′</sub> *with some positive definite matrix* Σ′ *if and only if there exists a number c* > 0 *such that*

$$b\_{N\_n}/d\_n \Longrightarrow c \quad (n \to \infty).$$

*Moreover, in this case* Σ′ = *c*<sup>2/*α*</sup>Σ*.*

This statement immediately follows from Theorem 8 with the account of Lemma 1.

Now consider convergence of the distributions of random sums of random vectors to special scale-mixed multivariate elliptically contoured stable laws.

In Section 4 (see (20)) we made sure that all scale-mixed centered elliptically contoured stable distributions are representable as multivariate normal scale mixtures. Together with Theorem 8, this observation allows us to expect at least two principally different limit schemes in which each of these distributions can appear as the limit law for random sums of independent random vectors. We will illustrate these two cases by the example of the multivariate generalized Linnik distribution.

As we have already mentioned, 'ordinary' Linnik distributions are geometrically stable. Geometrically stable distributions are the only possible limits for the distributions of geometric random sums of independent identically distributed random vectors. As this is so, the distributions of the summands belong to the domain of attraction of the multivariate strictly stable law with some characteristic exponent *α* ∈ (0, 2] and hence, for 0 < *α* < 2, the univariate marginals have infinite moments of orders greater than or equal to *α*. As concerns the case *α* = 2, where the variances of marginals are finite, within the framework of the scheme of geometric summation in this case the only possible limit law is the multivariate Laplace distribution.

Correspondingly, as we will demonstrate below, the multivariate generalized Linnik distributions can be limiting for negative binomial sums of independent identically distributed random vectors. Negative binomial random sums turn out to be important and adequate models of characteristics of precipitation (total precipitation volume, etc.) during wet (rainy) periods in meteorology [60–62]. However, in this case the summands (daily rainfall volumes) also must have distributions from the domain of attraction of a strictly stable law with some characteristic exponent *α* ∈ (0, 2] and hence, for *α* ∈ (0, 2), infinite variances, which seems doubtful, since to have an infinite variance, the random variable *must* be allowed to take *arbitrarily large* values with *positive* probabilities. If *α* = 2, then the only possible limit distribution for negative binomial random sums is the so-called variance gamma distribution, which is well known in financial mathematics [54].

However, when the (generalized) Linnik distributions are used as models of statistical regularities observed in real practice and an additive structure model of the type of a (stopped) random walk is used for the observed process, the researcher cannot avoid thinking over the following question: which of the two combinations of conditions can be encountered more often — summands with infinite variances together with a comparatively light-tailed random index, or summands with finite variances together with a heavy-tailed random index?
Since, as a rule, when real processes are modeled, there are no serious reasons to reject the assumption that the variances of jumps are finite, the second combination at least deserves a thorough analysis.

As it was demonstrated in the preceding sections, the scale-mixed multivariate elliptically contoured stable distributions (including multivariate (generalized) Linnik laws) even with *α* < 2 can be represented as multivariate normal scale mixtures. This means that they can be limit distributions in analogs of the central limit theorem for random sums of independent random vectors *with finite covariance matrices*. Such analogs with univariate 'ordinary' Linnik limit distributions were presented in [2] and extended to generalized Linnik distributions in [5]. In what follows we will present some examples of limit settings for random sums of independent random vectors with principally different tail behavior. In particular, it will be demonstrated that the scheme of negative binomial summation is far from the only asymptotic setting (even for sums of independent random variables!) in which the multivariate generalized Linnik law appears as the limit distribution.

**Remark 3.** *Based on the results of [63], by an approach that slightly differs from the one used here in its starting point, in the paper [64] it was demonstrated that if the random vectors* {*S<sub>n</sub>*}<sub>*n*⩾1</sub> *are formed as cumulative sums of independent random vectors:*

$$S\_n = X\_1 + \cdots + X\_n \tag{63}$$

*for n* ∈ N*, where X*<sub>1</sub>, *X*<sub>2</sub>, ... *are independent random vectors with values in* R<sup>*r*</sup>*, then the condition N<sub>n</sub>* → ∞ *in probability in the formulations of Theorem 8 and Corollary 4 can be omitted.*

Throughout this section we assume that the random vectors *Sn* have the form (63).

Let *U* ∈ U (see Section 5), *α* ∈ (0, 2], and let Σ be a positive definite matrix. In Section 8 the *r*-variate random vector *Y<sub>α,Σ</sub>* with the multivariate *U*-scale-mixed elliptically contoured stable distribution was introduced as *Y<sub>α,Σ</sub>* = *U*<sup>1/*α*</sup> ◦ *S*(*α*, Σ). In this section we will consider the conditions under which multivariate *U*-scale-mixed stable distributions can be limiting for sums of independent random vectors.

Consider a sequence of integer-valued positive random variables {*N<sub>n</sub>*}<sub>*n*⩾1</sub> such that for each *n* ⩾ 1 the random variable *N<sub>n</sub>* is independent of the sequence {*S<sub>k</sub>*}<sub>*k*⩾1</sub>. First, let {*b<sub>n</sub>*}<sub>*n*⩾1</sub> be an infinitely increasing sequence of positive numbers such that convergence (54) takes place. Let {*d<sub>n</sub>*}<sub>*n*⩾1</sub> be an infinitely increasing sequence of positive numbers. The following statement presents necessary and sufficient conditions for the convergence

$$d\_n^{-1/\alpha} S\_{N\_n} \Longrightarrow Y\_{\alpha,\Sigma} \quad (n \to \infty). \tag{64}$$

**Theorem 9.** *Under condition* (54)*, convergence* (64) *takes place if and only if*

$$b\_{\mathcal{N}\_n}/d\_n \implies \mathcal{U} \qquad (n \to \infty).$$

**Proof.** This theorem is a direct consequence of Theorem 8 and the definition of *Y<sub>α,Σ</sub>* with the account of Remark 3.

**Corollary 6.** *Assume that ν* > 0*. Under condition* (54)*, the convergence*

$$d\_n^{-1/\alpha} S\_{N\_n} \Longrightarrow L\_{\alpha,\nu,\Sigma} \quad (n \to \infty)$$

*takes place if and only if*

$$b\_{N\_n}/d\_n \implies G\_{\nu,1} \quad (n \to \infty). \tag{65}$$

**Proof.** To prove this statement it suffices to notice that the multivariate generalized Linnik distribution is a *U*-scale-mixed stable distribution with *U* <sup>*d*</sup>= *G*<sub>*ν*,1</sub> (see representation (49)) and refer to Theorem 9 with the account of Remark 3.

Condition (65) holds, for example, if *b<sub>n</sub>* = *d<sub>n</sub>* = *n*, *n* ∈ N, and the random variable *N<sub>n</sub>* has the negative binomial distribution with shape parameter *ν* > 0, that is, *N<sub>n</sub>* <sup>*d*</sup>= *NB*<sub>*ν*,*p<sub>n</sub>*</sub>,

$$\mathbb{P}(NB\_{\nu, p\_n} = k) = \frac{\Gamma(\nu + k - 1)}{(k - 1)!\,\Gamma(\nu)} \cdot p\_n^{\nu} (1 - p\_n)^{k - 1}, \quad k = 1, 2, \ldots,$$

with *p<sub>n</sub>* = *n*<sup>−1</sup> (see, e.g., [65,66]). In this case E*NB*<sub>*ν*,*p<sub>n</sub>*</sub> = *ν*(*n* − 1) + 1 ∼ *nν* as *n* → ∞.
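This example is easy to probe numerically: with the support on *k* = 1, 2, ... as above, *N<sub>n</sub>*/*n* should be close in distribution to *G*<sub>*ν*,1</sub> for large *n*. A sketch (the parameter values are arbitrary; note that numpy's negative binomial counts failures, hence the shift by one):

```python
import numpy as np

rng = np.random.default_rng(5)
nu, n, m = 2.0, 1000, 200_000  # shape, index size, Monte Carlo sample

# N_n =d NB_{nu, p_n} with p_n = 1/n, supported on k = 1, 2, ...
nn = 1 + rng.negative_binomial(nu, 1.0 / n, m)

# With b_n = d_n = n, condition (65) reads N_n / n => G_{nu,1}.
ratio = nn / n

# Gamma(nu, 1) has mean nu and variance nu; for nu = 2 its survival
# function is (1 + x) * exp(-x).
assert abs(ratio.mean() - nu) < 0.05
assert abs(ratio.var() - nu) < 0.1
x = 3.0
assert abs((ratio > x).mean() - (1 + x) * np.exp(-x)) < 0.01
print("N_n / n is approximately Gamma(nu, 1) distributed")
```

The tail comparison uses the closed-form gamma survival function available for integer *ν*; for non-integer *ν* one would compare against the regularized incomplete gamma function instead.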

Now consider the conditions providing the convergence in distribution of scalar normalized random sums of independent random vectors satisfying condition (54) with some *α* ∈ (0, 2] and Σ to a random vector *Y<sub>β,Σ</sub>* with the *U*-scale-mixed stable distribution ES<sub>*β*,*U*<sup>2/*β*</sup>Σ</sub> with some *β* ∈ (0, *α*). For convenience, let *β* = *αα*′ where *α*′ ∈ (0, 1).

Recall that in Section 8, for *α* ∈ (0, 1], the positive random variable *Y<sub>α,1</sub>* with the univariate one-sided *U*-scale-mixed stable distribution was introduced as *Y<sub>α,1</sub>* <sup>*d*</sup>= *U*<sup>1/*α*</sup> ◦ *S*(*α*, 1).

**Theorem 10.** *Let α′* ∈ (0, 1]*. Under condition* (54)*, the convergence*

$$d\_n^{-1/\alpha} S\_{N\_n} \Longrightarrow Y\_{\alpha\alpha',\Sigma} \quad (n \to \infty)$$

*takes place if and only if*

$$b\_{N\_n}/d\_n \Longrightarrow Y\_{\alpha',1} \quad (n \to \infty).$$

**Proof.** This statement directly follows from Theorems 6 and 9 with the account of Remark 3.

**Corollary 7.** *Let α′* ∈ (0, 1]*, ν* > 0*. Under condition* (54)*, the convergence*

$$d\_n^{-1/\alpha} S\_{N\_n} \Longrightarrow L\_{\alpha\alpha',\nu,\Sigma} \quad (n \to \infty)$$

*takes place if and only if*

$$b\_{N\_n}/d\_n \Longrightarrow M\_{\alpha',\nu} \quad (n \to \infty).$$

**Proof.** This statement directly follows from Theorems 3 (see representation (50)) and 9 with the account of Remark 3.

From the case of heavy tails we now turn to the 'light-tails' case where in (54) *α* = 2. In other words, assume that the properties of the summands *X<sub>j</sub>* provide the asymptotic normality of the sums *S<sub>n</sub>*. More precisely, instead of (54), assume that

$$b\_n^{-1/2} \mathbf{S}\_n \Longrightarrow \mathbf{X} \quad (n \to \infty). \tag{66}$$

The following results show that even under condition (66), heavy-tailed *U*-scale-mixed multivariate stable distributions can be limiting for random sums.

**Theorem 11.** *Under condition* (66)*, convergence* (64) *takes place if and only if*

$$b\_{N\_n}/d\_n \Longrightarrow Y\_{\alpha/2,1} \quad (n \to \infty).$$

**Proof.** This theorem is a direct consequence of Theorem 8 and Corollary 4, according to which *Y<sub>α,Σ</sub>* <sup>*d*</sup>= √(2*Y*<sub>*α*/2,1</sub>) ◦ *X*, with the account of Remark 3.

**Corollary 8.** *Assume that N<sub>n</sub>* → ∞ *in probability as n* → ∞*. Under condition* (66)*, non-randomly normalized random sums d<sub>n</sub>*<sup>−1/2</sup>*S*<sub>*N<sub>n</sub>*</sub> *have the limit stable distribution* S<sub>*α*,Σ</sub> *if and only if*

$$b\_{N\_n}/d\_n \Longrightarrow 2S(\alpha/2, 1) \quad (n \to \infty).$$

**Proof.** This statement follows from Theorem 11 with the account of (13) and Remark 3.

**Corollary 9.** *Assume that Nn* → ∞ *in probability as n* → <sup>∞</sup>*, ν* > 0*. Under condition* (66)*, the convergence*

$$d\_n^{-1/2} S\_{N\_n} \Longrightarrow L\_{\alpha,\nu,\Sigma} \quad (n \to \infty)$$

*takes place if and only if*

$$b\_{N\_n}/d\_n \Longrightarrow 2M\_{\alpha/2,\nu} \qquad (n \to \infty).$$

**Proof.** To prove this statement it suffices to notice that the multivariate generalized Linnik distribution is a multivariate normal scale mixture with the generalized Mittag–Leffler mixing distribution (see definition (48)) and refer to Theorem 11 with the account of Remark 3.

Another way to prove Corollary 9 is to deduce it from Corollary 7.

The product representations for the limit distributions in these theorems proved in the preceding sections make it possible to use other forms of the conditions for the convergence of random sums of random vectors to particular scale mixtures of multivariate stable laws.
